Depression remains common among dystonia patients

About one-third of individuals with adult-onset idiopathic dystonia experience major depression or dysthymia, data from a meta-analysis of 54 studies show.

Adult-onset idiopathic dystonia (AOID) is the third-most common movement disorder after essential tremor and Parkinson’s disease, and data show that depression and anxiety are the largest contributors to reduced quality of life in these patients, wrote Alex Medina Escobar, MD, of the University of Calgary (Alta.), and colleagues. However, “the pathogenic mechanisms of depression and anxiety in AOID remain unclear” and might involve a combination of biologic factors, as well as social stigma.

In the meta-analysis, published in Neuroscience and Biobehavioral Reviews, the researchers examined the point prevalence of supraclinical threshold depressive symptoms/depressive disorders in AOID using 54 studies. The resulting study population included 12,635 patients: 6,977 with cervical dystonia, 732 with cranial dystonia, 4,504 with mixed forms, 303 with laryngeal dystonia, and 119 with upper-limb dystonia. The studies were published between 1988 and 2020, and included patients from 21 countries in 52 single-center studies and 2 multicenter studies.

Overall, the pooled prevalence of either supraclinical threshold depressive symptoms or depressive disorders was 31.5% for cervical dystonia, 29.2% for cranial dystonia, and 33.6% for clinical samples with mixed forms of AOID.

Among patients with cervical dystonia, major depressive disorder was more prevalent than dysthymia, whereas among patients with cranial dystonia, dysthymia was more prevalent. Among patients with mixed forms, the prevalence of major depressive disorder was higher than that of dysthymia. Heterogeneity varied among the studies but was higher in those that used rating scales.

Treatment of patients with AOID does not take into account the impact of depression on quality of life, Dr. Escobar and colleagues reported.

“The current model of care for AOID remains primarily centered on the treatment of the movement disorder with local injections of botulinum toxin. Such a model appears to be inefficient to guarantee resources to address these comorbidities within secondary or tertiary care, or through shared care pathways engaging both primary and hospital-based care,” they wrote. They also said the use of antidepressants and cognitive-behavioral therapy to target negative body concept or social stigma among these patients is “underexplored and underutilized.”

The study findings were limited by several factors, including the inclusion only of studies published in English. In addition, most of the studies were conducted at movement disorders clinics, which may have yielded a patient population with more severe AOID. Further limitations included the inability to perform subgroup analysis based on demographic and clinical factors, and the insufficient number of studies for meta-analysis of laryngeal and hand dystonia, Dr. Escobar and colleagues added.

However, the results represent the first pooled estimate of depression prevalence in AOID and confirm a high prevalence across different clinical forms, the researchers said. The heterogeneity across studies highlights the need for standardized screening for depression and improved diagnosis of mood disorders in AOID.

“The meta-analytic estimates provided here will be highly useful for the planning of future mechanistic and interventional studies, as well as for the redefinition of current models of care,” they concluded.

The study received no outside funding. Dr. Escobar and colleagues had no disclosures.

Issue
Neurology Reviews- 29(8)
FROM NEUROSCIENCE AND BIOBEHAVIORAL REVIEWS

Publish date: June 22, 2021

Cortical surface changes tied to risk for movement disorders in schizophrenia

Schizophrenia patients with parkinsonism show distinctive patterns of cortical surface markers, compared with schizophrenia patients without parkinsonism and healthy controls, results of a multimodal magnetic resonance imaging study suggest.

Dr. Robert Christian Wolf

Sensorimotor abnormalities are common in schizophrenia patients; however, “the neurobiological mechanisms underlying parkinsonism in [schizophrenia], which in treated samples represents the unity of interplay between spontaneous and antipsychotic drug-exacerbated movement disorder, are poorly understood,” wrote Robert Christian Wolf, MD, of Heidelberg (Germany) University, and colleagues.

In a study published in Schizophrenia Research (2021 May;231:54-60), the investigators examined brain imaging findings from 20 healthy controls, 38 schizophrenia patients with parkinsonism (SZ-P), and 35 schizophrenia patients without parkinsonism (SZ-nonP). Dr. Wolf and colleagues examined three cortical surface markers: cortical thickness, complexity of cortical folding, and sulcus depth.

Compared with SZ-nonP patients, the SZ-P patients showed significantly increased complexity of cortical folding in the left supplementary motor cortex (SMC) and significantly decreased left postcentral sulcus (PCS) depth. In addition, left SMC activity was higher in both SZ-P and SZ-nonP patient groups, compared with controls.

In a regression analysis, the researchers examined relationships between parkinsonism severity and brain structure. They found that parkinsonism severity was negatively associated with left middle frontal complexity of cortical folding and left anterior cingulate cortex cortical thickness.

“Overall, the data support the notion that cortical features of distinct neurodevelopmental origin, particularly cortical folding indices such as [complexity of cortical folding] and sulcus depth, contribute to the pathogenesis of parkinsonism in SZ,” the researchers wrote.

The study findings were limited by several factors, including the cross-sectional design, the potential limitations of the Simpson-Angus Scale in characterizing parkinsonism, the inability to record lifetime antipsychotic exposure in the patient population, and the inability to identify changes in brain stem nuclei, the researchers noted. However, the results were strengthened by the well-matched study groups and use of multimodal MRI, they said.

Consequently, “these data provide novel insights into different trajectories of cortical development in SZ patients evidencing parkinsonism,” and suggest a link between abnormal neurodevelopmental processes and an increased risk for movement disorders in schizophrenia, they concluded.

The study was funded by the German Research Foundation and the German Federal Ministry of Education and Research. Dr. Wolf and colleagues disclosed no conflicts.


FROM SCHIZOPHRENIA RESEARCH


Can laparoscopic lavage beat resection for acute perforated diverticulitis?


Severe complications at 5 years were no different for patients with perforated purulent diverticulitis who underwent laparoscopic peritoneal lavage or colon resection, according to data from 199 individuals treated at 21 hospitals in Norway and Sweden. But lavage may yet prove appropriate for the right patient.

Acute perforated diverticulitis with peritonitis remains a challenging complication with high morbidity and mortality among patients with diverticular disease, and bowel resection remains the standard of treatment, Najia Azhar, MD, of Skåne University Hospital, Malmö, Sweden, and colleagues wrote.

Short-term data suggest that laparoscopic lavage with drainage and antibiotics might be a viable alternative, but long-term data are lacking, they said.

In the Scandinavian Diverticulitis (SCANDIV) trial, published in JAMA Surgery, researchers randomized 101 patients to laparoscopic peritoneal lavage and 98 to colon resection. With 3 patients lost to follow-up, the final analysis included 73 patients who underwent laparoscopic lavage and 69 who underwent resection. The mean age of the lavage patients was 66.4 years, and 39 were men. The mean age of the resection patients was 63.5 years, and 36 were men. The primary outcome was severe complications – excluding stoma reversals and elective sigmoid resections because of recurrence – at an average of 5 years’ follow-up. Secondary outcomes included stoma prevalence, diverticulitis recurrence, and secondary sigmoid resection.

Severe complications were similar for the lavage and resection groups (36% and 35%, respectively), as were the overall mortality rates (32% and 25%, respectively).

The prevalence of stoma was significantly lower in the lavage group, compared with the resection group (8% vs. 33%, P = .002). However, secondary operations (including reversal of stoma) were similar between the lavage and resection groups, performed in 26 lavage patients (36%) versus 24 resection patients (35%).

Diverticulitis recurrence was significantly more common in the lavage, compared with the resection group (21% vs. 4%, P = .004), the researchers noted.

In the laparoscopic lavage group, 30% (n = 21) underwent a sigmoid resection; all but one of these occurred within a year of the index procedure, the researchers wrote. In addition, overall length of hospital stay was similar for both groups.

No significant differences in quality of life were noted between the groups, based on the EuroQoL-5D questionnaire or Cleveland Global Quality of Life scores.
 

Balance secondary pros and cons

Laparoscopic lavage is not common practice today in the United States, the researchers noted. In clinical practice guidelines issued in 2020, the American Society of Colon and Rectal Surgeons strongly recommends colectomy over laparoscopic lavage for the treatment of left-sided colonic diverticulitis. However, the European Society of Coloproctology’s guidelines state that laparoscopic lavage is feasible for patients with peritonitis at Hinchey stage III.

The findings of the current study were limited primarily by the exclusion of 50% of eligible patients because of challenges associated with conducting randomized trials in emergency settings, the researchers noted. However, the number of excluded patients and their baseline characteristics after exclusion were very similar in the two groups, and the study represents the largest randomized trial to date to examine long-term outcomes in patients with perforated diverticulitis.

“Laparoscopic lavage is faster and cost-effective but leads to a higher reoperation rate and recurrence rate, often requiring secondary sigmoid resection,” the researchers emphasized. Consequently, patients undergoing lavage should also be consented for possible resection surgery.

The similar rates of severe complications and quality-of-life scores support laparoscopic lavage as an option for perforated purulent diverticulitis, but shared decision-making will be essential for optimal patient management, the researchers concluded.

Similar outcomes, but unanswered questions

Even though the primary outcome of disease-related morbidity was similar for both groups, “the issue still remains regarding when and how, if ever, this therapeutic approach should be considered for purulent peritonitis,” Kellie E. Cunningham, MD, and Brian S. Zuckerbraun, MD, both of the University of Pittsburgh, wrote in an accompanying editorial.

Although laparoscopic lavage has the obvious advantages of avoiding a laparotomy and stoma, previous studies have shown a higher rate of early reoperations and recurrent diverticulitis, despite lower stoma prevalence and equal mortality rates, they said. In addition, “patients who are immunosuppressed or would be expected to have a higher mortality rate with failure to achieve definitive source control should likely not be offered this therapy.”

A “philosophical” argument could be made in favor of laparoscopic lavage based on the potential consequences of early treatment failure, they wrote.

“Although one may consider the need for early reoperation a complication, some would argue it affects the minority of patients, thus avoiding the more morbid procedure with creation of a stoma at the index operation in the majority of patients,” they noted. “Additionally, patients who underwent lavage that subsequently proceed to colectomy would have otherwise been offered this therapy initially at the time of the index operation.”

More research is needed to answer questions such as which, if any, operative findings are associated with failure. In addition, an analysis of long-term cost benefits between the two options should be explored, the authors wrote.

Based on current evidence, shared decision-making is necessary, with individualized care and short and long-term trade-offs taken into account, they wrote.
 

Gastroenterologist perspective: Study fills gap in follow-up data

In an interview, David A. Johnson, MD, professor of medicine and chief of gastroenterology at Eastern Virginia School of Medicine, Norfolk, said the study is important because data have been lacking on outcomes of a laparoscopic lavage without a resection.

The findings represent “a major shift” in the growing consensus among surgeons that laparoscopic lavage is a viable option in appropriate patients, he said.

A key issue is the high rate of morbidity in patients who undergo traditional diverticulitis surgery. Complications can include wound infection and poor quality of life associated with stoma, Dr. Johnson said. Consequently, “a nonoperative approach from a patient perspective is certainly refreshing.”

Dr. Johnson said he was surprised by how well the patients fared after lavage given the severity of the diverticulitis in the patient population. However, this may be in part because of the relatively small numbers of patients at highest risk for complications, such as those with diabetes or immunocompromising conditions.

Dr. Johnson also said he was struck by the fact that the adenocarcinomas in the lavage group were diagnosed within the first year after the procedure. “The cancer diagnosis shouldn’t reflect on the lavage group,” but emphasizes the importance of having an earlier colonoscopy, he noted.

Next steps for research might include identifying a standardized endpoint for lavage, and determining how expanded use of the procedure might impact community practice, Dr. Johnson said. In addition, more research is needed to more clearly define patients most likely to benefit from laparoscopic lavage.

The study was supported in part by the department of surgery at Skåne University Hospital, Akershus University Hospital, and a fellowship to one of the study coauthors from the Southeastern Norway Regional Health Authority. Lead author Dr. Azhar disclosed grants from the department of surgery of Skåne University Hospital. Dr. Cunningham and Dr. Zuckerbraun had no financial conflicts to disclose. Dr. Johnson had no relevant financial disclosures.


Based on current evidence, shared decision-making is necessary, with individualized care and short and long-term trade-offs taken into account, they wrote.
 

Gastroenterologist perspective: Study fills gap in follow-up data

In an interview, David A. Johnson, MD, professor of medicine and chief of gastroenterology at Eastern Virginia School of Medicine, Norfolk, said the study is important because data have been lacking on outcomes of a laparoscopic lavage without a resection.

The findings represent “a major shift” in the growing consensus among surgeons that laparoscopic lavage is a viable option in appropriate patients, he said.

A key issue is the high rate of morbidity in patients who undergo traditional diverticulitis surgery. Complications can include wound infection and poor quality of life associated with stoma, Dr. Johnson said. Consequently, “a nonoperative approach from a patient perspective is certainly refreshing.”

Dr. Johnson said he was surprised by how well the patients fared after lavage given the severity of the diverticulitis in the patient population. However, this may be in part because of the relatively small numbers of patients at highest risk for complications, such as those with diabetes or immunocompromising conditions.

Dr. Johnson also said he was struck by the fact that the adenocarcinomas in the lavage group were diagnosed within the first year after the procedure. “The cancer diagnosis shouldn’t reflect on the lavage group,” but emphasizes the importance of having an earlier colonoscopy, he noted.

Next steps for research might include identifying a standardized endpoint for lavage, and determining how expanded use of the procedure might impact community practice, Dr. Johnson said. In addition, more research is needed to more clearly define patients most likely to benefit from laparoscopic lavage.

The study was supported in part by the department of surgery at Skåne University Hospital, Akershus University Hospital, and a fellowship to one of the study coauthors from the Southeastern Norway Regional Health Authority. Lead author Dr. Azhar disclosed grants from the department of surgery of Skåne University Hospital. Dr. Cunningham and Dr. Zuckerbraun had no financial conflicts to disclose. Dr. Johnson had no relevant financial disclosures.

 

Severe complications at 5 years were no different for patients with perforated purulent diverticulitis who underwent laparoscopic peritoneal lavage or colon resection, according to data from 199 individuals treated at 21 hospitals in Norway and Sweden. But laparoscopic lavage may yet prove appropriate for the right patient.

Acute perforated diverticulitis with peritonitis remains a challenging complication with high morbidity and mortality among patients with diverticular disease, and bowel resection remains the standard of treatment, Najia Azhar, MD, of Skåne University Hospital, Malmö, Sweden, and colleagues wrote.

Short-term data suggest that laparoscopic lavage with drainage and antibiotics might be a viable alternative, but long-term data are lacking, they said.

In the Scandinavian Diverticulitis (SCANDIV) trial, published in JAMA Surgery, researchers randomized 101 patients to laparoscopic peritoneal lavage and 98 to colon resection. With 3 patients lost to follow-up, the final analysis included 73 patients who underwent laparoscopic lavage and 69 who underwent resection. The mean age of the lavage patients was 66.4 years, and 39 were men. The mean age of the resection patients was 63.5 years, and 36 were men. The primary outcome was severe complications – excluding stoma reversals and elective sigmoid resections because of recurrence – at an average of 5 years’ follow-up. Secondary outcomes included stoma prevalence, diverticulitis recurrence, and secondary sigmoid resection.

Severe complications were similar for the lavage and resection groups (36% and 35%, respectively), as were the overall mortality rates (32% and 25%, respectively).

The prevalence of stoma was significantly lower in the lavage group, compared with the resection group (8% vs. 33%, P = .002). However, secondary operations (including reversal of stoma) were similar between the lavage and resection groups, performed in 26 lavage patients (36%) versus 24 resection patients (35%).

Diverticulitis recurrence was significantly more common in the lavage, compared with the resection group (21% vs. 4%, P = .004), the researchers noted.

In the laparoscopic lavage group, 30% (n = 21) underwent a sigmoid resection; all but one of these occurred within a year of the index procedure, the researchers wrote. In addition, overall length of hospital stay was similar for both groups.

No significant differences in quality of life were noted between the groups, based on the EuroQoL-5D questionnaire or Cleveland Global Quality of Life scores.
 

Balance secondary pros and cons

Laparoscopic lavage is not common practice today in the United States, the researchers noted. In clinical practice guidelines issued in 2020, the American Society of Colon and Rectal Surgeons strongly recommends colectomy over laparoscopic lavage for the treatment of left-sided colonic diverticulitis. However, the European Society of Coloproctology’s guidelines state that laparoscopic lavage is feasible for patients with peritonitis at Hinchey stage III.

The findings of the current study were limited primarily by the exclusion of 50% of eligible patients because of challenges associated with conducting randomized trials in emergency settings, the researchers noted. However, the number of excluded patients and their baseline characteristics after exclusion were very similar in the two groups, and the study represents the largest randomized trial to date to examine long-term outcomes in patients with perforated diverticulitis.

“Laparoscopic lavage is faster and cost-effective but leads to a higher reoperation rate and recurrence rate, often requiring secondary sigmoid resection,” the researchers emphasized. Consequently, patients undergoing lavage should also consent in advance to possible resection surgery.

The similar rates of severe complications and quality of life scores support laparoscopic lavage as an option for perforated purulent diverticulitis, but shared decision-making will be essential for optimal patient management, the researchers concluded.
 

 

 

Similar outcomes, but unanswered questions

Even though the primary outcome of disease-related morbidity was similar for both groups, “the issue still remains regarding when and how, if ever, this therapeutic approach should be considered for purulent peritonitis,” Kellie E. Cunningham, MD, and Brian S. Zuckerbraun, MD, both of the University of Pittsburgh, wrote in an accompanying editorial.

Although laparoscopic lavage has the obvious advantages of avoiding a laparotomy and stoma, previous studies have shown a higher rate of early reoperations and recurrent diverticulitis, despite lower stoma prevalence and equal mortality rates, they said. In addition, “patients who are immunosuppressed or would be expected to have a higher mortality rate with failure to achieve definitive source control should likely not be offered this therapy.”

A “philosophical” argument could be made in favor of laparoscopic lavage based on the potential consequences of early treatment failure, they wrote.

“Although one may consider the need for early reoperation a complication, some would argue it affects the minority of patients, thus avoiding the more morbid procedure with creation of a stoma at the index operation in the majority of patients,” they noted. “Additionally, patients who underwent lavage that subsequently proceed to colectomy would have otherwise been offered this therapy initially at the time of the index operation.”

More research is needed to answer questions such as which, if any, operative findings are associated with failure. In addition, an analysis of long-term cost benefits between the two options should be explored, the authors wrote.

Based on current evidence, shared decision-making is necessary, with individualized care and both short- and long-term trade-offs taken into account, they wrote.
 

Gastroenterologist perspective: Study fills gap in follow-up data

In an interview, David A. Johnson, MD, professor of medicine and chief of gastroenterology at Eastern Virginia Medical School, Norfolk, said the study is important because data have been lacking on outcomes of laparoscopic lavage without resection.

The findings represent “a major shift” in the growing consensus among surgeons that laparoscopic lavage is a viable option in appropriate patients, he said.

A key issue is the high rate of morbidity in patients who undergo traditional diverticulitis surgery. Complications can include wound infection and the poor quality of life associated with a stoma, Dr. Johnson said. Consequently, “a nonoperative approach from a patient perspective is certainly refreshing.”

Dr. Johnson said he was surprised by how well the patients fared after lavage given the severity of the diverticulitis in the patient population. However, this may be in part because of the relatively small numbers of patients at highest risk for complications, such as those with diabetes or immunocompromising conditions.

Dr. Johnson also said he was struck by the fact that the adenocarcinomas in the lavage group were diagnosed within the first year after the procedure. “The cancer diagnosis shouldn’t reflect on the lavage group,” but emphasizes the importance of having an earlier colonoscopy, he noted.

Next steps for research might include identifying a standardized endpoint for lavage, and determining how expanded use of the procedure might impact community practice, Dr. Johnson said. In addition, more research is needed to more clearly define patients most likely to benefit from laparoscopic lavage.

The study was supported in part by the department of surgery at Skåne University Hospital, Akershus University Hospital, and a fellowship to one of the study coauthors from the Southeastern Norway Regional Health Authority. Lead author Dr. Azhar disclosed grants from the department of surgery of Skåne University Hospital. Dr. Cunningham and Dr. Zuckerbraun had no financial conflicts to disclose. Dr. Johnson had no relevant financial disclosures.

FROM JAMA SURGERY


Neurodegeneration complicates psychiatric care for Parkinson’s patients


 

Managing depression and anxiety in Parkinson’s disease should start with a review of medications and involve multidisciplinary care, according to a recent summary of evidence.

“Depression and anxiety have a complex relationship with the disease and while the exact mechanism for this association is unknown, both disturbances occur with increased prevalence across the disease course and when present earlier in life, increase the risk of PD by about twofold,” wrote Gregory M. Pontone, MD, of Johns Hopkins University, Baltimore, and colleagues.

Randomized trials to guide treatment of anxiety and depression in patients with Parkinson’s disease (PD) are limited, the researchers noted. However, data from a longitudinal study showed that PD patients whose depression remitted spontaneously or responded to treatment were able to attain a level of function similar to that of never-depressed PD patients, Dr. Pontone and colleagues said.

The researchers offered a pair of treatment algorithms to help guide clinicians in managing depression and anxiety in PD. However, a caveat to keep in mind is that “the benefit of antidepressant medications, used for depression or anxiety, can be confounded when motor symptoms are not optimally treated,” the researchers emphasized.

For depression, the researchers advised starting with some lab work; “at a minimum we suggest checking a complete blood count, metabolic panel, TSH, B12, and folate,” they noted. They recommended an antidepressant, cognitive-behavioral therapy, or both as first-line treatment, such as monotherapy with a serotonin-norepinephrine reuptake inhibitor or a selective serotonin reuptake inhibitor. They advised titrating the chosen monotherapy to a minimum effective dose over a 2- to 3-week period to assess response.

“We recommend continuing antidepressant therapy for at least 1 year based on literature in non-PD populations and anecdotal clinical experience. At 1 year, if not in remission, consider continuing treatment or augmenting to improve response,” the researchers said.

Based on the current DSM-5 criteria, up to one-third of PD patients have an unspecified anxiety disorder, the researchers said, and they recommended using anxiety rating scales to diagnose anxiety in PD. “Given the high prevalence of atypical anxiety syndromes in PD and their potential association with both motor and nonmotor symptoms of the disease, working with the neurologist to achieve optimal control of PD is an essential first step to alleviating anxiety,” they emphasized.

The researchers also advised addressing comorbidities, including cardiovascular disease, chronic pain, diabetes, gastrointestinal issues, hyperthyroidism, and lung disease, all of which can be associated with anxiety. Once comorbidities are addressed, they advised caution given the lack of evidence for efficacy of both pharmacologic and nonpharmacologic anxiety treatments for PD patients. However, first-tier treatment for anxiety could include monotherapy with serotonin-norepinephrine reuptake inhibitors or selective serotonin reuptake inhibitors, they said.

PD patients with depression and anxiety also may benefit from nonpharmacologic interventions, including exercise, mindfulness, relaxation therapy, and cognitive-behavioral therapy, the researchers said.

Although the algorithm may not differ significantly from current treatment protocols, it highlights aspects unique to PD patients, the researchers said. In particular, the algorithm shows “that interventions used for motor symptoms, for example, dopamine agonists, may be especially potent for mood in the PD population and that augmentation strategies, such as antipsychotics and lithium, may not be well tolerated given their outsized risk of adverse events in PD,” they said.

“While an article of this kind cannot hope to address the gap in knowledge on comparative efficacy between interventions, it can guide readers on the best strategies for implementation and risk mitigation in PD – essentially focusing more on effectiveness,” they concluded.

The study received no outside funding. Dr. Pontone disclosed serving as a consultant for Acadia Pharmaceuticals and Concert Pharmaceuticals.

FROM THE AMERICAN JOURNAL OF GERIATRIC PSYCHIATRY


Safety-net burden linked with poorer inpatient cirrhosis outcomes


Patients with cirrhosis treated at hospitals with the highest safety-net burden, defined by their proportion of Medicaid or uninsured patients, had 5% greater odds of in-hospital mortality than patients treated at hospitals with the lowest burden, according to a study of over 300,000 hospitalizations.

The study, which was published in the Journal of Clinical Gastroenterology, analyzed inpatient data from the National Inpatient Sample (NIS) database focusing on a 4-year time span between 2012 and 2016. The hospitals were categorized by safety-net burden, which was defined as having either a high, medium, or low number of uninsured patients or patients with Medicaid.

This is the first-known study to evaluate the impact of a hospital’s safety-net burden on hospitalization outcomes in cirrhosis patients, wrote authors Robert J. Wong, MD, MS, of Stanford (Calif.) University and Grishma Hirode, MAS, of the University of Toronto. Previous studies have shown that safety-net hospitals, especially those with a high safety-net burden, have poorer patient outcomes. These hospitals also serve a patient population that is at high risk for chronic liver disease and cirrhosis.

The new analysis included 322,944 individual hospitalizations of patients with cirrhosis. Of these, 57.8% were male, 63.7% were White, 9.9% were Black, and 15.6% were Hispanic. In terms of safety-net burden, 107,446 hospitalizations were at high-burden hospitals, 103,508 were at medium-burden hospitals, and 111,990 hospitalizations were at low-burden hospitals.

Overall, cirrhosis-related hospitalizations at hospitals in the highest-burden tertile had significantly greater odds of in-hospital mortality than those at hospitals in the lowest tertile (odds ratio, 1.05; P = .044). Patients at the highest-burden hospitals were also younger (mean age, 56.7 years vs. 59.8 years at low-burden hospitals) and included higher proportions of men, ethnic minority patients (particularly Hispanic patients), and patients with Medicaid or no insurance.

Compared with White patients, Black patients (OR, 1.26; 95% CI, 1.17-1.35; P < .001) and Hispanic patients (OR, 1.63; 95% CI, 1.50-1.78; P < .001) had significantly greater odds of being admitted to hospitals in the highest safety-net tertile rather than the middle or lowest tertiles. The in-hospital mortality rate across all hospitalizations was 5.95%, and the rate did not differ significantly by hospital burden status.

“Despite adjusting for safety-net burden, our study continued to demonstrate ethnic disparities in in-hospital mortality among cirrhosis-related hospitalizations,” the researchers wrote. Overall, the odds of in-hospital mortality were 27% higher in Black patients as compared with White patients.

However, significantly lower mortality was observed in Hispanic patients as compared with White patients (4.9% vs. 6.0%, P < .001), but why this occurred was not entirely clear. “Hispanic patients may be more likely to have NASH [nonalcoholic steatohepatitis]-related cirrhosis, which generally has a slower disease progression, compared with [hepatitis C virus] or alcoholic cirrhosis. As such, it is likely that NASH-cirrhosis Hispanic patients had less severe disease at presentation,” the researchers wrote.
 

Study design has limitations, but shows concerning trends

The study findings were limited by several factors, including the inability to show causality based on the observational study design and cross-sectional nature of the database, the researchers said. The NIS database records individual hospitalizations, not individual patients, which means it may include repeat hospitalizations from the same patient. In addition, the study was limited by a lack of data on outpatient cirrhosis outcomes and non–liver-related comorbidities.

However, the finding that ethnic minorities with cirrhosis were significantly more likely to be hospitalized in high safety-net hospitals than White patients is concerning, and more research is needed, they said.

“These observations highlight that, while disparities in resources and health care delivery inherent to safety-net health systems may partly explain and provide opportunities to improve cirrhosis hospitalization care, they alone do not explain all of the ethnic disparities in cirrhosis outcomes observed,” they concluded.

The current study was important to conduct at this time because rates of cirrhosis are on the rise, Michael Volk, MD, of Loma Linda (Calif.) University Health, said in an interview. “Millions of patients receive care in safety-net hospitals across the country.”

Dr. Volk said that he was not surprised by the overall outcomes. “Unfortunately, I expected that patient outcomes would be worse at safety-net hospitals than wealthier hospitals. However, I was surprised that Blacks had higher in-hospital mortality than Whites, even after adjusting for the hospital.”

Dr. Volk echoed the study’s stated limitation of the lack of data to address disparities.

“Additional research is needed to determine whether the higher in-hospital mortality among Blacks is related to biological differences such as differential rates of disease progression, or social differences such as access to outpatient care,” he said.

The study received no outside funding. The researchers had no financial conflicts to disclose. Dr. Volk had no relevant financial conflicts to disclose.

Publications
Topics
Sections

Patients with cirrhosis treated at hospitals with the highest safety-net burden, defined by their proportion of Medicaid or uninsured patients, had a 5% higher mortality rate than patients who were treated at hospitals with the lowest burden, according to a study of over 300,000 patients.


The study, which was published in the Journal of Clinical Gastroenterology, analyzed inpatient data from the National Inpatient Sample (NIS) database over a 4-year span between 2012 and 2016. Hospitals were categorized as having a high, medium, or low safety-net burden based on their proportion of uninsured patients or patients with Medicaid.

This is the first known study to evaluate the impact of a hospital’s safety-net burden on hospitalization outcomes in patients with cirrhosis, wrote authors Robert J. Wong, MD, MS, of Stanford (Calif.) University, and Grishma Hirode, MAS, of the University of Toronto. Previous studies have shown that safety-net hospitals, especially those with a high safety-net burden, have poorer patient outcomes. These hospitals also serve a patient population at high risk for chronic liver disease and cirrhosis.

The new analysis included 322,944 individual hospitalizations of patients with cirrhosis. Of these, 57.8% were male, 63.7% were White, 9.9% were Black, and 15.6% were Hispanic. In terms of safety-net burden, 107,446 hospitalizations were at high-burden hospitals, 103,508 were at medium-burden hospitals, and 111,990 hospitalizations were at low-burden hospitals.

Overall, cirrhosis-related hospitalizations at the highest-burden hospitals had significantly greater odds of in-hospital mortality than those at the lowest-tertile hospitals (odds ratio, 1.05; P = .044). Patients at the highest-burden hospitals were also younger (mean age, 56.7 years vs. 59.8 years at low-burden hospitals) and included higher proportions of men, ethnic minority patients, and patients with Medicaid or no insurance.

Black patients (OR, 1.26; 95% CI, 1.17-1.35; P < .001) and Hispanic patients (OR, 1.63; 95% CI, 1.50-1.78; P < .001) were significantly more likely than White patients to be admitted for care at highest-tertile hospitals, compared with the middle and lowest tertiles. The overall in-hospital mortality rate was 5.95%, and it did not differ significantly by hospital burden status.

“Despite adjusting for safety-net burden, our study continued to demonstrate ethnic disparities in in-hospital mortality among cirrhosis-related hospitalizations,” the researchers wrote. Overall, the odds of in-hospital mortality were 27% higher in Black patients as compared with White patients.

However, significantly lower mortality was observed in Hispanic patients as compared with White patients (4.9% vs. 6.0%, P < .001), but why this occurred was not entirely clear. “Hispanic patients may be more likely to have NASH [nonalcoholic steatohepatitis]-related cirrhosis, which generally has a slower disease progression, compared with [hepatitis C virus] or alcoholic cirrhosis. As such, it is likely that NASH-cirrhosis Hispanic patients had less severe disease at presentation,” the researchers wrote.

Study design has limitations, but shows concerning trends

The study findings were limited by several factors, including the inability to show causality given the observational design and cross-sectional nature of the database, the researchers said. The NIS database records individual hospitalizations rather than individual patients, which means it may include repeat hospitalizations of the same patient. In addition, the study lacked data on outpatient cirrhosis outcomes and on non–liver-related comorbidities.

However, the finding that ethnic minority patients with cirrhosis were significantly more likely than White patients to be hospitalized at high-burden safety-net hospitals is concerning, and more research is needed, they said.

“These observations highlight that, while disparities in resources and health care delivery inherent to safety-net health systems may partly explain and provide opportunities to improve cirrhosis hospitalization care, they alone do not explain all of the ethnic disparities in cirrhosis outcomes observed,” they concluded.

The current study was important to conduct at this time because rates of cirrhosis are on the rise, Michael Volk, MD, of Loma Linda (Calif.) University Health, said in an interview. “Millions of patients receive care in safety-net hospitals across the country.”

Dr. Volk said that he was not surprised by the overall outcomes. “Unfortunately, I expected that patient outcomes would be worse at safety-net hospitals than wealthier hospitals. However, I was surprised that Blacks had higher in-hospital mortality than Whites, even after adjusting for the hospital.”

Dr. Volk echoed the study’s stated limitation: the lack of data needed to explain these disparities.

“Additional research is needed to determine whether the higher in-hospital mortality among Blacks is related to biological differences such as differential rates of disease progression, or social differences such as access to outpatient care,” he said.

The study received no outside funding. The researchers had no financial conflicts to disclose. Dr. Volk had no relevant financial conflicts to disclose.



FROM THE JOURNAL OF CLINICAL GASTROENTEROLOGY


Performance matters in adenoma detection


Low adenoma detection rates (ADRs) were associated with a greater risk of death in colorectal cancer (CRC) patients, especially among those with high-risk adenomas, based on a review of more than 250,000 colonoscopies.


“Both performance quality of the endoscopist as well as specific characteristics of resected adenomas at colonoscopy are associated with colorectal cancer mortality,” but the impact of these combined factors on colorectal cancer mortality has not been examined on a large scale, according to Elisabeth A. Waldmann, MD, of the Medical University of Vienna and colleagues.

In a study published in Clinical Gastroenterology & Hepatology, the researchers reviewed 259,885 colonoscopies performed by 361 endoscopists. Over an average follow-up period of 59 months, 165 CRC-related deaths occurred.

Across all risk groups, CRC mortality was higher among patients whose colonoscopies were performed by endoscopists with an ADR of less than 25%, although the difference was not statistically significant in all groups.

The researchers then stratified patients into those with a negative colonoscopy, those with low-risk adenomas (one to two adenomas less than 10 mm), and those with high-risk adenomas (advanced adenomas or at least three adenomas), with the negative colonoscopy group used as the reference group for comparisons. The average age of the patients was 61 years, and approximately half were women.

Endoscopists were classified as having an ADR of less than 25% or 25% and higher.

Among individuals with low-risk adenomas, CRC mortality was similar whether the endoscopist’s ADR was less than 25% or 25% or higher (adjusted hazard ratios, 1.25 and 1.22, respectively). CRC mortality also remained unaffected by ADR in patients with negative colonoscopies (aHR, 1.27).

By contrast, individuals with high-risk adenomas had a significantly increased risk of CRC death if their colonoscopy was performed by an endoscopist with an ADR of less than 25%, compared with those whose endoscopists had ADRs of 25% or higher (aHR, 2.25 and 1.35, respectively).

“Our study demonstrated that adding ADR to the risk stratification model improved risk assessment in all risk groups,” the researchers noted. “Importantly, stratification improved most for individuals with high-risk adenomas, the group demanding most resources in health care systems.”

The study findings were limited by several factors, including the inclusion of only screening and surveillance colonoscopies (not diagnostic colonoscopies) and the inability to adjust for comorbidities and lifestyle factors that might affect CRC mortality, the researchers noted. The 22.4% average ADR in the current study was low compared with other studies and could be a limitation as well, although previous guidelines recommended a target ADR of at least 20%.

“Despite the extensive body of literature supporting the importance of ADR in terms of CRC prevention, its implementation into clinical surveillance is challenging,” as physicians under pressure might try to game their ADRs, the researchers wrote.

The findings support the value of mandatory assessment of performance quality, the researchers added. However, “because of the potential possibility of gaming one’s ADR one conclusion drawn by the study results should be that endoscopists’ quality parameters should be monitored and those not meeting the standards trained to improve rather than requiring minimum ADRs as premise for offering screening colonoscopy.”

Improve performance, but don’t discount patient factors

The study is important at this time because colorectal cancer is the third-leading cause of cancer death in the United States, Atsushi Sakuraba, MD, of the University of Chicago, said in an interview.

“Screening colonoscopy has been shown to decrease CRC mortality, but factors influencing outcomes after screening colonoscopies remain to be determined,” he said.

“It was expected that high-quality colonoscopy performed by an endoscopist with ADR of 25% or greater was associated with a lower risk for CRC death,” Dr. Sakuraba said. “The strength of the study is that the authors demonstrated that high-quality colonoscopy was more important in individuals with high-risk adenomas, such as advanced adenomas or at least three adenomas.”

The study findings have implications for practice in that they show the importance of monitoring performance quality in screening colonoscopy, Dr. Sakuraba said, “especially when patients have high-risk adenomas.” However, “the authors included only age and sex as variables, but the influence of other factors, such as smoking, [body mass index], and race, need to be studied.”

The researchers had no financial conflicts to disclose. Dr. Sakuraba had no financial conflicts to disclose.



FROM CLINICAL GASTROENTEROLOGY & HEPATOLOGY


Lupus images fall short on diverse examples


Lupus images in medical resource materials underrepresent patients with skin of color, based on data from a review of more than 1,400 images published between 2014 and 2019 in materials from a university’s online medical library.

Courtesy Dr. Catalina Matiz
The female teen has pink and violaceous indurated annular plaques on her right nasal sidewall and cheek.

Patients with skin of color who develop lupus tend to present earlier and with more severe cases, and often experience worse outcomes, compared with other populations, wrote Amaad Rana, MD, of Washington University, St. Louis, and colleagues. Medical resources in general have historically underrepresented patients of color, and the researchers reviewed lupus materials for a similar publication bias.

In a study published in Arthritis Care & Research, the investigators identified 1,417 images in rheumatology, dermatology, and internal medicine resources, including 119 medical textbooks, 15 medical journals, 2 online image libraries, and the online image collections of Google and UpToDate. An additional 24 images came from skin of color atlases.

Excluding the skin of color atlases, 56.4% of the images represented light skin, 35.1% showed medium skin, and 8.5% showed dark skin. Overall, publishers were more than twice as likely to portray light skin tones and were significantly less likely to portray dark skin tones (odds ratios, 2.59 and 0.19, respectively), compared with an equal representation of skin tones; however, the difference was not significant for portrayal of medium skin tones (OR, 1.08).

By specialty, dermatology was more inclusive of skin of color images than rheumatology or internal medicine, although the internal medicine sample size was too small for a comparable analysis, the researchers noted. Dermatology textbooks were 2.42 times, and rheumatology textbooks 4.87 times, more likely to depict light skin tones than would be expected with equal representation of light, medium, and dark skin tones.



The researchers rated the skin color in the images using the New Immigrant Survey Skin Color Scale and categorized the images as representing light (NISSCS scores, 1-2), medium (NISSCS scores, 3-5), or dark skin (NISSCS scores, 6-10). Medical journals had the most images of dark skin, excluding skin of color atlases. In a comparison of specialties, dermatology materials included the most images of medium and darker skin tones.

The underrepresentation of skin of color patients can contribute to a limited knowledge of lupus presentation that could lead to disparate health outcomes, the researchers noted.

The study findings were limited by several factors, including the review of only the online textbooks and journals available through the medical library of a single university, the researchers noted. In addition, definitions of light, medium, and dark skin tones were variable among studies, and the researchers did not distinguish among lupus pathologies.

“Further research is needed to quantitatively assess the influence these materials have on healthcare providers’ ability to care for patients with lupus and SOC [skin of color], and new material and strategies will be required to correct this disparity and promote equitable representation,” the researchers emphasized. “Ultimately, this will arm practitioners with the resources to competently treat patients with any skin color and work towards reducing disparities in health outcomes.”

The study received no outside funding. The researchers had no financial conflicts to disclose.

Publications
Topics
Sections

Lupus images in medical resource materials underrepresent patients with skin of color, based on data from a review of more than 1,400 images published between 2014 and 2019 in materials from a university’s online medical library.

Courtesy Dr. Catalina Matiz
The female teen has pink and violaceous indurated annular plaques on her right nasal sidewall and cheek.

Patients with skin of color who develop lupus tend to present earlier and with more severe cases, and often experience worse outcomes, compared with other populations, wrote Amaad Rana, MD, of Washington University, St. Louis, and colleagues. Medical resources in general have historically underrepresented patients of color, and the researchers reviewed lupus materials for a similar publication bias.

In a study published in Arthritis Care & Research, the investigators identified 1,417 images in rheumatology, dermatology, and internal medicine resources, including 119 medical textbooks, 15 medical journals, 2 online image libraries, and the online image collections of Google and UpToDate. An additional 24 images came from skin of color atlases.

Excluding the skin of color atlases, 56.4% of the images represented light skin, 35.1% showed medium skin, and 8.5% showed dark skin. Overall, publishers were more than twice as likely to portray light skin tones and were significantly less likely to portray dark skin tones (odds ratios, 2.59 and 0.19, respectively), compared with an equal representation of skin tones; however, the difference was not significant for portrayal of medium skin tones (OR, 1.08).

By specialty, dermatology was more inclusive of skin of color images than rheumatology or internal medicine, although the internal medicine sample size was too small for comparable analysis, the researchers noted. Dermatology textbooks were 2.42 times more likely and rheumatology textbooks were 4.87 times more likely to depict light skin tones than an equal representation of light, medium, and dark skin tones.



The researchers rated the skin color in the images using the New Immigrant Survey Skin Color Scale and categorized the images as representing light (NISSCS scores, 1-2), medium (NISSCS scores, 3-5), or dark skin (NISSCS scores, 6-10). Medical journals had the most images of dark skin, excluding skin of color atlases. In a comparison of specialties, dermatology materials included the most images of medium and darker skin tones.

The underrepresentation of patients with skin of color can limit knowledge of how lupus presents in these patients, which could lead to disparate health outcomes, the researchers noted.

The study findings were limited by several factors, including the review of only the online textbooks and journals available through the medical library of a single university, the researchers noted. In addition, definitions of light, medium, and dark skin tones were variable among studies, and the researchers did not distinguish among lupus pathologies.

“Further research is needed to quantitatively assess the influence these materials have on healthcare providers’ ability to care for patients with lupus and SOC, and new material and strategies will be required to correct this disparity and promote equitable representation,” the researchers emphasized. “Ultimately, this will arm practitioners with the resources to competently treat patients with any skin color and work towards reducing disparities in health outcomes.”

The study received no outside funding. The researchers had no financial conflicts to disclose.

Lupus images in medical resource materials underrepresent patients with skin of color, based on data from a review of more than 1,400 images published between 2014 and 2019 in materials from a university’s online medical library.

Courtesy Dr. Catalina Matiz
The female teen has pink and violaceous indurated annular plaques on her right nasal sidewall and cheek.



FROM ARTHRITIS CARE & RESEARCH


Prediction rule identifies low infection risk in febrile infants


A clinical prediction rule combining procalcitonin, absolute neutrophil count, and urinalysis effectively identified most febrile infants at low risk for serious bacterial infections, based on data from 702 individuals.

The clinical prediction rule (CPR) described in 2019 in JAMA Pediatrics was developed by the Febrile Infant Working Group of the Pediatric Emergency Care Applied Research Network (PECARN) to identify febrile infants at low risk for serious bacterial infections in order to reduce unnecessary procedures, antibiotics use, and hospitalization, according to April Clawson, MD, of Arkansas Children’s Hospital, Little Rock, and colleagues.

In a poster presented at the Pediatric Academic Societies annual meeting, the researchers conducted an external validation of the rule via a retrospective, observational study of febrile infants aged 60 days and younger who presented to an urban pediatric emergency department between October 2014 and June 2019. The study population included 702 infants with an average age of 36 days; approximately 45% were female, and 60% were White. Fever was defined as a temperature of 38° C or greater. Exclusion criteria were prematurity, receipt of antibiotics in the previous 48 hours, presence of an indwelling medical device, and evidence of focal infection (not including otitis media); infants who were critically ill at presentation or had a preexisting medical condition were excluded as well, the researchers said. A serious bacterial infection (SBI) was defined as a urinary tract infection (UTI), bacteremia, or bacterial meningitis.

Based on the CPR, a patient is considered at low risk for an SBI if all of the following criteria are met: a normal urinalysis (defined as the absence of leukocyte esterase and nitrite, and 5 or fewer white blood cells per high-power field); an absolute neutrophil count of 4,090/μL or less; and a procalcitonin level of 1.71 ng/mL or less.
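The rule as described is a simple conjunction of three thresholds; a minimal sketch (parameter names are ours, with the cutoffs taken from the study as reported above — this is not the PECARN group's code):

```python
def pecarn_low_risk(urinalysis_normal: bool,
                    anc_per_ul: float,
                    procalcitonin_ng_ml: float) -> bool:
    """Return True if a febrile infant meets all three low-risk
    criteria of the clinical prediction rule described in the study."""
    return (urinalysis_normal
            and anc_per_ul <= 4090
            and procalcitonin_ng_ml <= 1.71)

# An infant with a normal urinalysis, ANC 3,500/uL, procalcitonin 0.5 ng/mL
print(pecarn_low_risk(True, 3500, 0.5))  # -> True
```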

Overall, 62 infants (8.8%) were diagnosed with an SBI, similar to the 9.3% seen in the parent study of the CPR, Dr. Clawson said.

Of these, 42 had a UTI only (6%), 10 had bacteremia only (1.4%), and 1 had meningitis only (0.1%). Another five infants had UTI with bacteremia (0.7%), and four had bacteremia and meningitis (0.6%).

According to the CPR, 432 infants met criteria for low risk and 270 were considered high risk. A total of five infants who were classified as low risk had SBIs, including two with UTIs, two with bacteremia, and one with meningitis.

“The CPR derived and validated by Kuppermann et al. had a decreased sensitivity for the patients in our study and missed some SBIs,” Dr. Clawson noted. “However, it had a strong negative predictive value, so it may still be a useful CPR.”

The sensitivity of the CPR in the parent study and the current study was 97.7% and 91.9%, respectively; specificity was 60.0% and 66.7%. The negative predictive values for the parent and current studies were 99.6% and 98.8%, respectively, and the positive predictive values were 20.7% and 21.1%.
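The current study's figures can be reconstructed from the counts reported above (702 infants, 62 SBIs, 432 classified as low risk, 5 of whom had SBIs), treating high-risk infants with an SBI as true positives; a quick check:

```python
total, sbi, low_risk, missed = 702, 62, 432, 5

tp = sbi - missed             # high-risk infants with an SBI: 57
fn = missed                   # low-risk infants with an SBI: 5
tn = low_risk - missed        # low-risk infants without an SBI: 427
fp = (total - low_risk) - tp  # high-risk infants without an SBI: 213

sensitivity = 100 * tp / (tp + fn)
specificity = 100 * tn / (tn + fp)
npv = 100 * tn / (tn + fn)  # negative predictive value
ppv = 100 * tp / (tp + fp)  # positive predictive value
print(round(sensitivity, 1), round(specificity, 1),
      round(npv, 1), round(ppv, 1))  # -> 91.9 66.7 98.8 21.1
```

The reconstructed values match the current-study figures reported in the poster.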

The results support the potential of the CPR, but more external validation is needed, they said.
 

PECARN rule keeps it simple

“It has always been a challenge to identify infants with fever with serious bacterial infections when they are well-appearing,” Yashas Nathani, MD, of Oklahoma University, Oklahoma City, said in an interview. “The clinical prediction rule offers a simple, step-by-step approach for pediatricians and emergency medicine physicians to stratify infants in high or low risk categories for SBIs. However, as with everything, validation of protocols, guidelines and decision-making algorithms is extremely important, especially as more clinicians start to employ this CPR to their daily practice. This study objectively puts the CPR to the test and offers an independent external validation.

“Although this study had a lower sensitivity in identifying infants with SBI using the clinical prediction rule as compared to the original study, the robust validation of negative predictive value is extremely important and not surprising,” said Dr. Nathani. “The goal of this CPR is to identify infants with low-risk for SBI and the stated NPV helps clinicians in doing just that.”

Overall, “the clinical prediction rule is a fantastic resource for physicians to identify potentially sick infants with fever, especially the ones that appear well on initial evaluation,” said Dr. Nathani. However, “it is important to acknowledge that this is merely a guideline, and not an absolute rule. Clinicians also must remain cautious, as this rule does not incorporate the presence of viral pathogens as a factor.

“It is important to continue the scientific quest to refine our approach in identifying infants with serious bacterial infections when fever is the only presentation,” Dr. Nathani noted. “Additional research is needed to continue fine-tuning this CPR and the thresholds for procalcitonin and absolute neutrophil counts to improve the sensitivity and specificity.” Research also is needed to explore whether this CPR can be extended to incorporate viral testing, “as a large number of infants with fever have viral pathogens as the primary etiology,” he concluded.

The study received no outside funding. The researchers had no financial conflicts to disclose. Dr. Nathani had no financial conflicts to disclose.



FROM PAS 2021


Red meat intake tied to higher coronary heart disease risk


Increased intake of red meat was linked to a higher risk of fatal coronary heart disease, and substituting other protein sources such as nuts, dairy products, or poultry for red or processed meat appeared to reduce that risk, in a pooled analysis of cohorts totaling more than a million participants.

“We know that red and processed meat intake has been associated with higher risks of fatal coronary heart disease,” said Laila Al-Shaar, PhD, of Penn State University, Hershey. However, very few studies have evaluated substitution of alternative protein sources for red and processed meat in relation to fatal CHD risk, she said.

In a study presented at the Epidemiology and Prevention/Lifestyle and Cardiometabolic Health meeting, Dr. Al-Shaar and colleagues reviewed individual-level data from the Pooling Project of Prospective Studies of Diet and Cancer, which included 16 prospective cohorts totaling 1,364,211 participants. The average age of the participants was 57 years, and 40% were men. Individuals with a history of cancer or cardiovascular disease were excluded. The participants were followed for 7-32 years. Diet was assessed in each cohort using baseline questionnaires, and cases were identified through medical records.

Total red meat included processed meat and unprocessed red meat; animal protein sources included seafood, poultry, eggs, and low- and high-fat dairy products; and plant protein sources included nuts and beans.

The researchers identified 51,176 fatal CHD cases during the study period. After controlling for dietary and nondietary factors, they found that an increase of 100 g per day of total red meat intake was associated with a 7% increased risk of fatal coronary heart disease (relative risk, 1.07).
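If one assumes the conventional log-linear dose-response model (an illustration on our part; the study reported only the per-100-g estimate), a relative risk of 1.07 per 100 g/day compounds multiplicatively with intake:

```python
def fatal_chd_rr(red_meat_g_per_day: float, rr_per_100g: float = 1.07) -> float:
    """Relative risk of fatal CHD at a given daily red meat intake,
    assuming a log-linear dose-response (RR compounds per 100 g/day).
    This extrapolation is an illustration, not a reported result."""
    return rr_per_100g ** (red_meat_g_per_day / 100)

print(round(fatal_chd_rr(100), 2))  # -> 1.07, the reported estimate
print(round(fatal_chd_rr(200), 2))  # 1.07**2 -> 1.14 under this assumption
```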

However, substituting 200 calories (kcal) per day from nuts, low- and high-fat dairy products, and poultry for 200 calories per day from total red meat was associated with a 6%-14% lower risk of fatal CHD, Dr. Al-Shaar added at the meeting sponsored by the American Heart Association.

These associations were stronger when substituting the alternative protein sources for processed meat, especially among women; risk was reduced by 17%-24%, on the basis of 14,888 cases.

The researchers also found that substituting 200 calories per day from eggs for 200 calories per day from total red meat and unprocessed red meat was associated with an 8% and a 14% higher risk of fatal CHD, respectively; the substitution of eggs for processed meat was not significant (4%).

“When we did the association by gender, the results were even stronger in women,” said Dr. Al-Shaar. However, “these are very preliminary results” that should be interpreted with caution, and more analysis is needed, she said. “We are planning to include other cohorts with other protein sources such as soy protein,” she noted. However, the results provide additional evidence that consumption of red and processed meat contributes to an increased risk of coronary heart disease, and that substituting some red and processed meat with nuts, dairy products, or poultry may reduce this risk, she concluded.
 

Women especially benefit from red meat reduction

The study is important because of the continuing interest in various sources of dietary protein intake, Linda Van Horn, PhD, RD, of Northwestern University, Chicago, said in an interview.

“The investigators studied associations of substituting other animal and plant protein sources for total red meat, unprocessed red meat, and processed meat in relation to risk of fatal CHD,” she said.

The researchers found that swapping as little as 200 calories per day of total red meat for nuts, low- or high-fat dairy products, or poultry was associated with a 6%-14% reduced risk of fatal CHD, said Dr. Van Horn. “Alternatively, if those 200 calories per day from red meat were substituted with eggs, they saw as much as 14% higher risk of fatal CHD,” she noted.

The message for both consumers and clinicians is that the findings from this large study support recommendations for plant-based and lean animal sources of protein instead of red and processed meat or eggs, as these sources “offer significantly lower risk for CHD mortality,” Dr. Van Horn said. “This may be especially true for women, but the total population is likely to benefit from this approach,” she said.

Additional research is needed, Dr. Van Horn emphasized. “Prospective lifetime data, starting in utero and over the life course, are needed to better establish recommended dietary patterns at every age and among all ethnicities and diverse socioeconomic groups,” she said.

Dr. Al-Shaar had no financial conflicts to disclose. Dr. Van Horn had no financial conflicts to disclose.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Increased intake of meat was linked to the risk of coronary heart disease, and substituting plant protein for red or processed meat appeared to reduce that risk, in a study from pooled cohorts totaling more than a million persons.

Fuse/Thinkstock

“We know that red and processed meat intake has been associated with higher risks of fatal coronary heart disease,” said Laila Al-Shaar, PhD, of Penn State University, Hershey. However, very few studies have evaluated substitution of alternative protein sources for red and processed meat in relation to fatal CHD risk, she said.

In a study presented at the Epidemiology and Prevention/Lifestyle and Cardiometabolic Health meeting, Dr. Al-Shaar and colleagues reviewed individual-level data from the Pooling Project of Prospective Studies of Diet and Cancer, which included 16 prospective cohorts totaling 1,364,211 participants. The average age of the participants was 57 years, and 40% were men. Individuals with a history of cancer or cardiovascular disease were excluded. The participants were followed for 7-32 years. Diet was assessed in each cohort using baselines questionnaires, and cases were identified through medical records.

Total red meat included processed meat and unprocessed red meat; animal protein sources included seafood, poultry, eggs, and low- and high-fat dairy products; and plant protein sources included nuts and beans.

The researchers identified 51,176 fatal CHD cases during the study period. After controlling for dietary and nondietary factors, they found that an increase of 100 g per day of total red meat intake was associated with a 7% increased risk of fatal coronary heart disease (relative risk, 1.07).

However, substituting 200 calories (kcal) per day from nuts, low- and high-fat dairy products, and poultry for 200 calories per day from total red meat was associated with a 6%-14% lower risk of fatal CHD, Dr. Al-Shaar added at the meeting sponsored by the American Heart Association.

These associations were stronger when substituting the alternative protein sources for processed meat, especially among women; risk was reduced by 17%-24%, on the basis of 14,888 cases.

The researchers also found that substituting 200 calories per day from eggs for 200 calories per day for total red meat and unprocessed red meat was associated with 8% and 14% higher risk of fatal CHD, respectively; but this substitution of eggs for processed meat was not significant (4%).

“When we did the association by gender, the results were even stronger in women,” said Dr. Al-Shaar. However, “these are very preliminary results” that should be interpreted with caution, and more analysis is needed, she said. “We are planning to include other cohorts with other protein sources such as soy protein,” she noted. However, the results provide additional evidence that consumption of red and processed meat contributes to an increased risk of coronary heart disease, and that substituting some red and processed meat with nuts, dairy products, or poultry may reduce this risk, she concluded.
 

Women especially benefit from red meat reduction

The study is important because of the continuing interest in various sources of dietary protein intake, Linda Van Horn, PhD, RD, of Northwestern University, Chicago, said in an interview.

“The investigators studied associations of substituting other animal and plant protein sources for total red meat, unprocessed red meat, and processed meat in relation to risk of fatal CHD,” she said.

The researchers found that swapping as little as 200 calories per day of total red meat for nuts, low- or high-fat dairy products, or poultry were associated with a 6%-14% reduced risk of fatal CHD, said Dr. Van Horn. “Alternatively, if those 200 calories per day for red meat were substituted with eggs, they saw as much as 14% higher risk of fatal CHD,” she noted.

The message for both consumers and clinicians is that the findings from this large study support recommendations for plant-based and lean animal sources of protein instead of red and processed meat or eggs, as these sources “offer significantly lower risk for CHD mortality,” Dr. Van Horn said. “This may be especially true for women, but the total population is likely to benefit from this approach,” she said.

Additional research is needed, Dr. Van Horn emphasized. “Prospective lifetime data, starting in utero and over the life course, are needed to better establish recommended dietary patterns at every age and among all ethnicities and diverse socioeconomic groups,” she said.

Dr. Al-Shaar had no financial conflicts to disclose. Dr. Van Horn had no financial conflicts to disclose.

Increased intake of meat was linked to the risk of coronary heart disease, and substituting plant protein for red or processed meat appeared to reduce that risk, in a study from pooled cohorts totaling more than a million persons.

Fuse/Thinkstock

“We know that red and processed meat intake has been associated with higher risks of fatal coronary heart disease,” said Laila Al-Shaar, PhD, of Penn State University, Hershey. However, very few studies have evaluated substitution of alternative protein sources for red and processed meat in relation to fatal CHD risk, she said.

In a study presented at the Epidemiology and Prevention/Lifestyle and Cardiometabolic Health meeting, Dr. Al-Shaar and colleagues reviewed individual-level data from the Pooling Project of Prospective Studies of Diet and Cancer, which included 16 prospective cohorts totaling 1,364,211 participants. The average age of the participants was 57 years, and 40% were men. Individuals with a history of cancer or cardiovascular disease were excluded. The participants were followed for 7-32 years. Diet was assessed in each cohort using baselines questionnaires, and cases were identified through medical records.

Total red meat included processed meat and unprocessed red meat; animal protein sources included seafood, poultry, eggs, and low- and high-fat dairy products; and plant protein sources included nuts and beans.

The researchers identified 51,176 fatal CHD cases during the study period. After controlling for dietary and nondietary factors, they found that an increase of 100 g per day of total red meat intake was associated with a 7% increased risk of fatal coronary heart disease (relative risk, 1.07).
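As a rough arithmetic illustration (a sketch, not part of the study's reported methods), a relative risk quoted per fixed increment of intake is conventionally scaled log-linearly, so an RR of 1.07 per 100 g/day implies roughly 1.07² ≈ 1.14 for 200 g/day:

```python
# Hypothetical sketch: scaling a per-increment relative risk, assuming the
# conventional log-linear dose-response model. The study's exact modeling
# choices are not described in this article.

def scale_rr(rr_per_unit: float, n_units: float) -> float:
    """Relative risk for n_units increments, assuming log-linearity."""
    return rr_per_unit ** n_units

rr_per_100g = 1.07  # reported RR per 100 g/day of total red meat
print(round(scale_rr(rr_per_100g, 2), 3))  # 200 g/day -> 1.145
```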

However, substituting 200 calories (kcal) per day from nuts, low- and high-fat dairy products, and poultry for 200 calories per day from total red meat was associated with a 6%-14% lower risk of fatal CHD, Dr. Al-Shaar added at the meeting sponsored by the American Heart Association.

These associations were stronger when substituting the alternative protein sources for processed meat, especially among women; risk was reduced by 17%-24%, on the basis of 14,888 cases.

The researchers also found that substituting 200 calories per day from eggs for 200 calories per day from total red meat or unprocessed red meat was associated with an 8% and 14% higher risk of fatal CHD, respectively; the substitution of eggs for processed meat was not significant (4%).

“When we did the association by gender, the results were even stronger in women,” said Dr. Al-Shaar. However, “these are very preliminary results” that should be interpreted with caution, and more analysis is needed, she said. “We are planning to include other cohorts with other protein sources such as soy protein,” she noted. However, the results provide additional evidence that consumption of red and processed meat contributes to an increased risk of coronary heart disease, and that substituting some red and processed meat with nuts, dairy products, or poultry may reduce this risk, she concluded.
 

Women especially benefit from red meat reduction

The study is important because of the continuing interest in various sources of dietary protein intake, Linda Van Horn, PhD, RD, of Northwestern University, Chicago, said in an interview.

“The investigators studied associations of substituting other animal and plant protein sources for total red meat, unprocessed red meat, and processed meat in relation to risk of fatal CHD,” she said.

The researchers found that swapping as little as 200 calories per day of total red meat for nuts, low- or high-fat dairy products, or poultry was associated with a 6%-14% reduced risk of fatal CHD, said Dr. Van Horn. “Alternatively, if those 200 calories per day for red meat were substituted with eggs, they saw as much as 14% higher risk of fatal CHD,” she noted.

The message for both consumers and clinicians is that the findings from this large study support recommendations for plant-based and lean animal sources of protein instead of red and processed meat or eggs, as these sources “offer significantly lower risk for CHD mortality,” Dr. Van Horn said. “This may be especially true for women, but the total population is likely to benefit from this approach,” she said.

Additional research is needed, Dr. Van Horn emphasized. “Prospective lifetime data, starting in utero and over the life course, are needed to better establish recommended dietary patterns at every age and among all ethnicities and diverse socioeconomic groups,” she said.

Dr. Al-Shaar had no financial conflicts to disclose. Dr. Van Horn had no financial conflicts to disclose.


FROM EPI/LIFESTYLE 2021


Adding daily steps linked to longer life


Taking more steps each day, in short spurts or longer bouts, was associated with a longer life in women older than 60 years, according to data from more than 16,000 participants in the ongoing Women’s Health Study.


The American Heart Association recommends at least 150 minutes per week of moderate physical activity, 75 minutes of vigorous physical activity, or a combination of both as fitness guidelines for adults. Walking is a safe and easy way for many adults to follow these guidelines, according to Christopher C. Moore, MS, a PhD candidate at the University of North Carolina at Chapel Hill.

The popularity of step counts reflects that they are simple and objective, and “focusing on steps can help promote an active lifestyle,” he said. Data on the health impact of sporadic steps accumulated outside of longer bouts of activity are limited; however, advances in fitness apps and wearable devices make it possible for researchers to track and measure the benefits of short periods of activity as well as longer periods.

In a study presented at the Epidemiology and Prevention/Lifestyle and Cardiometabolic Health meeting, sponsored by the AHA, Mr. Moore and colleagues assessed data from women older than 60 years who used wearable step-counting devices to measure their daily steps and walking patterns.

The study population included 16,732 women enrolled in the Women’s Health Study, a longstanding study of heart disease, cancer, and disease prevention among women in the United States. The participants wore waist step counters 4-7 days a week during 2011-2015. The average age of the women was 72 years; 96% were non-Hispanic White, and the average BMI was 26 kg/m².

The researchers divided the total number of steps for each study participant into two groups: “bouted” steps, defined as walking for 10 minutes or longer with few interruptions; and “sporadic” steps, defined as short spurts of walking during regular daily activities such as housework, taking the stairs, or walking to or from a car.

A total of 804 deaths occurred during an average of 6 years of follow-up. Each initial increase of 1,000 steps per day, whether sporadic or bouted, was associated with a 28% lower risk of death, compared with no daily steps (hazard ratio, 0.72).
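The conversion between a hazard ratio and the percentage figures quoted in this article is simple arithmetic; a minimal sketch (an illustration, not the study's own code):

```python
# Sketch: a hazard ratio below 1 implies a proportional reduction in risk.

def pct_risk_reduction(hazard_ratio: float) -> float:
    """Percent reduction in hazard implied by a hazard ratio < 1."""
    return round((1.0 - hazard_ratio) * 100.0, 1)

print(pct_risk_reduction(0.72))  # HR 0.72 -> 28.0 (% lower risk)
```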

Each increasing quartile of sporadic steps was linked with higher total steps per day, Mr. Moore said. “Initial increase in sporadic steps corresponded to the greatest reductions in mortality,” with an HR of 0.69 for additional sporadic steps below 3,200 per day, and the impact on reduced mortality plateaued at about 4,500 sporadic steps per day.

In further analysis, the researchers also found a roughly 32% lower risk of death among participants who took more than 2,000 steps daily in uninterrupted bouts (HR, 0.69).

The study findings were limited by several factors, including the relatively short follow-up period and number of events, the assessment of steps at a single time point, and the mostly homogeneous population, Mr. Moore noted. Additional research is needed to assess whether the results are generalizable to men, younger women, and diverse racial and ethnic groups.

However, the results may have implications for public health messaging, he emphasized. The message is that, to impact longevity, the total volume of steps is more important than the type of activity through which they are accumulated.

“You can accumulate your steps through longer bouts of purposeful activity or through everyday behaviors such as walking to your car, taking the stairs, and doing housework,” Mr. Moore concluded.

Find a friend, both of you benefit

On the basis of this study and other available evidence, more steps daily are recommended for everyone, Nieca Goldberg, MD, a cardiologist at New York University Langone Health, said in an interview.

“You can increase minutes of walking and frequency of walking,” she said.

Dr. Goldberg emphasized that you don’t need a fancy app or wearable device to up your steps. She offered some tips to help overcome barriers to putting one foot in front of the other. “Take the steps instead of the elevator. Park your car farther from your destination so you can walk.” Also, you can help yourself and help a friend to better health. “Get a walking buddy so you can encourage each other to walk,” Dr. Goldberg added.

Mr. Moore and Dr. Goldberg had no financial conflicts to disclose. The Women’s Health Study is funded by Brigham and Women’s Hospital; the National Heart, Lung, and Blood Institute; and the National Cancer Institute. Mr. Moore was funded by a grant from the NHLBI but had no other financial conflicts to disclose.





FROM EPI/LIFESTYLE 2021
