Community genetic testing prompts behavior change in patients

Giving patients and their providers genetic test results for kidney failure risk promotes positive behavioral change that could decrease an individual’s likelihood of developing chronic kidney disease (CKD) and end-stage renal failure (ESRF), a new pilot study suggests.

“Disclosing APOL1 genetic testing results to patients of African ancestry with hypertension and their clinicians was associated with a greater reduction in systolic blood pressure [SBP], increased kidney disease screening, and positive self-reported behavior change in those with high-risk genotypes,” Girish Nadkarni, MD, MPH, of the Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.

“These two measurements – the change in blood pressure and increased kidney function tests – act as hallmarks for detecting beneficial lifestyle change,” Dr. Nadkarni noted in a statement from his institution.

“For many years, researchers have wondered whether reporting APOL1 genetic test results would help improve clinical management. This is the first pragmatic randomized clinical trial to test this out [and] these results suggest we are headed in the right direction,” he added.

The study was published online March 4, 2022, in JAMA Network Open.
 

A quarter of those with high-risk genotype changed medication behavior

High-risk APOL1 genotypes confer a 5- to 10-fold increased risk for CKD and ESRF caused by hypertension and are found in one out of seven individuals of African ancestry. People of African ancestry also have the highest age-adjusted prevalence of high BP and the lowest rates of BP control, Dr. Nadkarni and colleagues wrote.

They studied a total of 2,050 patients of African ancestry with hypertension but without CKD who were randomized to undergo either immediate APOL1 testing (intervention group) or delayed APOL1 testing (control group).

“Patients randomly assigned to the intervention group received APOL1 genetic testing results from trained staff [while] their clinicians received results through clinical decision support in electronic health records,” the investigators explained.

Control patients received results after 12 months of follow-up. The mean age of the cohort was 53 years and almost two-thirds were female. Mean baseline SBP was significantly higher in patients with high-risk APOL1 genotypes, at 137 mm Hg, compared with those with low-risk APOL1 genotypes, at 134 mm Hg (P = .003), and controls, at 133 mm Hg (P = .001), the authors reported. 

At 3 months, “all groups had some decrease in SBP,” Dr. Nadkarni and colleagues observed.

However, patients with high-risk APOL1 genotypes had a significantly greater decrease in SBP, at 6 mm Hg, compared with a mean decrease of 3 mm Hg both for those with low-risk APOL1 genotypes (P = .004) and for controls (P = .01). At 12 months, there was no significant difference in SBP, or in the change in SBP from baseline, among the three groups.

“All three groups showed a significant increase in the rate of urine protein testing over time,” the authors added.

Again, however, the greatest increase in urine protein testing over time was seen in patients with high-risk APOL1 genotypes, with a 12% increase from baseline, compared with a 6% increase for patients with low-risk APOL1 genotypes and a 7% increase among controls. The difference was significant only between patients with high-risk APOL1 genotypes and controls (P = .01).

Significantly more patients with high-risk APOL1 genotypes reported making positive lifestyle changes – better dietary and exercise habits – after receiving their test results than did those with low-risk APOL1 genotypes, at 59% versus 37% (P < .001).

Moreover, 24% of those with high-risk genotypes reported that receiving test results changed how they take their BP medication, compared with only 10% of those with low-risk genotypes.

More high-risk genotype carriers also reported taking their medications more often, at 10%, compared with 5% of low-risk genotype carriers (P = .005).

On the other hand, more patients with high-risk genotypes worried that they would develop kidney problems, at 27%, than did low-risk carriers, at 17% (P < .001). Although investigators offered patients the opportunity to speak with a genetic counselor at no cost, none chose to do so, the authors noted.
 

Small improvements

As the investigators emphasized, the magnitude of BP improvement seen in high-risk APOL1 carriers was small. However, the trial did not provide specific BP targets or BP-lowering strategies; had it done so, BP might have come down to a greater degree.

Health behavior changes were similarly small and may not have been all that clinically meaningful.

Still, “results suggest that the trial clearly influenced those who received positive results and may have had some positive effects on other patients,” Dr. Nadkarni concluded.

Dr. Nadkarni is a cofounder of and has equity in Renalytix, has been a member of the company’s scientific advisory board, and has received personal fees from the company. He is also a cofounder of Pensieve Health.

A version of this article first appeared on Medscape.com.

Pediatric IBD increases cancer risk later in life

Children who are diagnosed with inflammatory bowel disease (IBD) are more than twice as likely to develop cancer, especially gastrointestinal cancer, later in life compared with the general pediatric population, a new meta-analysis suggests.

Although the overall incidence rate of cancer in this population is low, “we found a 2.4-fold increase in the relative rate of cancer among patients with pediatric-onset IBD compared with the general pediatric population, primarily associated with an increased rate of gastrointestinal cancers,” wrote senior author Tine Jess, MD, DMSci, Aalborg University, Copenhagen, and colleagues.

The study was published online March 1, 2022, in JAMA Network Open.

Previous research indicates that IBD is associated with an increased risk for colon, small bowel, and other types of cancer in adults, but the risk among children with IBD is not well understood.

In the current analysis, Dr. Jess and colleagues examined five population-based studies from North America and Europe, which included more than 19,800 participants with pediatric-onset IBD. Of these participants, 715 were later diagnosed with cancer.

Overall, the risk for cancer among individuals with pediatric-onset IBD was 2.4-fold higher than that of their peers without IBD, but those rates varied by IBD subtype. Those with Crohn’s disease, for instance, were about two times more likely to develop cancer, while those with ulcerative colitis were 2.6 times more likely to do so.

Two studies included in the meta-analysis broke down results by sex and found that the risk for cancer was higher among male versus female patients (pooled relative rates [pRR], 3.23 in men and 2.45 in women).

These two studies also calculated the risk for cancer by exposure to thiopurines. Patients receiving these immunosuppressive drugs had an increased relative rate of cancer (pRR, 2.09). Although numerically higher, this rate was not statistically different from that in patients not exposed to the drugs (pRR, 1.82).

When looking at risk by cancer site, the authors consistently observed the highest relative rates for gastrointestinal cancers. Specifically, the investigators calculated a 55-fold increased risk for liver cancer (pRR, 55.4), followed by a 20-fold increased risk for colorectal cancer (pRR, 20.2), and a 16-fold increased risk for small bowel cancer (pRR, 16.2).

Despite such high estimates for gastrointestinal cancers, “this risk corresponds to a mean incidence rate of 0.3 cases of liver cancer, 0.6 cases of colorectal cancer, and 0.1 cases of small bowel cancer per 1,000 person-years in this population,” the authors noted.

In other words, “the overall incidence rate of cancer in this population is low,” at less than 3.3 cases per 1,000 person-years, the authors concluded.

Relative rates of extraintestinal cancers were even lower, with the highest risks for nonmelanoma skin cancer (pRR, 3.62), lymphoid cancer (pRR, 3.10), and melanoma (pRR, 2.05).

The authors suggest that identifying variables that might reduce cancer risk in pediatric patients who develop IBD could better shape management and prevention strategies.

CRC screening guidelines already recommend that children undergo a colonoscopy 6-8 years after being diagnosed with colitis extending beyond the rectum. Annual colonoscopy is also recommended for patients with primary sclerosing cholangitis from the time of diagnosis, and annual skin cancer screening is recommended for all patients with IBD.

The investigators further suggest that because ongoing inflammation is an important risk factor for cancer, early and adequate control of inflammation could be critical in the prevention of long-term complications.

The study was supported by a grant from the Danish National Research Foundation. Dr. Jess and coauthors Rahma Elmahdi, MD, Camilla Lemser, and Kristine Allin, MD, reported receiving grants from the Danish National Research Foundation National Center of Excellence during the conduct of the study. Coauthor Manasi Agrawal, MD, reported receiving grants from the National Institutes of Health/National Institute of Diabetes and Digestive and Kidney Diseases during the conduct of the study.

A version of this article first appeared on Medscape.com.

Most Americans unaware alcohol can cause cancer

Most Americans are not aware that alcohol consumption can cause a variety of cancers, and they are especially unlikely to link wine and beer with cancer, results from a national survey suggest.

“Alcohol is a leading modifiable risk factor for cancer, yet most Americans are unaware that alcohol increases cancer risk,” write lead author Andrew Seidenberg, PhD, MPH, National Cancer Institute, Rockville, Md., and colleagues.

“Increasing awareness of the alcohol-cancer link, such as through multimedia campaigns and patient-provider communication, may be an important new strategy for health advocates working to implement preventive alcohol policies,” they add.

The findings were published in the February issue of the American Journal of Preventive Medicine.

“This is the first study to examine the relationship between alcohol control policy support and awareness of the alcohol-cancer link among a national U.S. sample,” the authors write.

The results show that there is some public support for adding written warnings about the alcohol-cancer link to alcoholic beverages, something a number of cancer organizations have petitioned for.

A petition filed by the American Society of Clinical Oncology, the American Institute for Cancer Research, and Breast Cancer Prevention Partners, all in collaboration with several public health organizations, proposes labeling that would read: “WARNING: According to the Surgeon General, consumption of alcoholic beverages can cause cancer, including breast and colon cancers.”

Such labeling has “the potential to save lives by ensuring that consumers have a more accurate understanding of the link between alcohol and cancer, which will empower them to better protect their health,” the groups said in the petition.
 

Public support

The findings come from an analysis of the 2020 Health Information National Trends Survey 5 Cycle 4. A total of 3,865 adults participated in the survey, approximately half of whom were nondrinkers.

As well as investigating how aware people were of the alcohol-cancer link, the investigators looked at how prevalent public support might be for the following three communication-focused alcohol policies:

  • Banning outdoor alcohol-related advertising
  • Requiring health warnings on alcohol beverage containers
  • Requiring recommended drinking guidelines on alcoholic beverage containers

“Awareness of the alcohol-cancer link was measured separately for wine, beer, and liquor by asking: In your opinion, how much does drinking the following types of alcohol affect the risk of getting cancer?” the authors explain.

“Awareness of the alcohol-cancer link was low,” the investigators comment; only about one-third (31.8%) of participants were aware that alcohol increases the risk of cancer. The figures were even lower for individual beverage type, at 20.3% for wine, 24.9% for beer, and 31.2% for liquor. Furthermore, approximately half of participants responded with “don’t know” to the three awareness items, investigators noted.

On the other hand, more than half of the Americans surveyed supported adding both health warning labels (65.1%) and information on recommended drinking guidelines (63.9%) to alcoholic beverage containers. Support was lower (34.4% of respondents) for banning outdoor alcohol advertising.

Among Americans who were aware that alcohol increased cancer risk, support was also higher for all three policies.

For example, about 75% of respondents who were aware that alcohol increases cancer risk supported adding health warnings and drinking guidelines to beverage containers, compared with about half of Americans who felt that alcohol consumption had either no effect on or decreased cancer risk.

Even among those who were aware of the alcohol-cancer link, public support for banning outdoor alcohol advertising was not high (37.8%), and it was even lower (23.6%) among respondents who felt alcohol had no effect on or decreased the risk of cancer.

“Policy support was highest among nondrinkers, followed by drinkers, and was lowest among heavier drinkers,” the authors report.

For example, almost 43% of nondrinkers supported restrictions on outdoor alcohol advertising, compared with only about 28.6% of drinkers and 22% of heavier drinkers. More respondents supported adding health warning labels on alcoholic beverages – 70% of nondrinkers, 65% of drinkers, and 57% of heavier drinkers, investigators observe.

The study had no specific funding. The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Early flu treatment of hospital CAP patients improves outcomes

Early initiation of the antiviral oseltamivir (Tamiflu) reduces the risk for death in patients hospitalized with community-acquired pneumonia (CAP), but patients have to be tested for influenza first, and that is not happening often enough, a large observational cohort study of adult patients indicates.

“Early testing allows for early treatment, and we found that early treatment was associated with reduced mortality, so testing patients during the flu season is crucial,” senior author Michael Rothberg, MD, MPH, of the Cleveland Clinic, said in an interview.

“Even during the flu season, most patients with CAP in our study went untested for influenza [even though] those who received early oseltamivir exhibited lower 14-day in-hospital case fatality ... suggesting more widespread testing might improve patient outcomes,” the authors added.

The study was published online Feb. 5, 2022, in the journal CHEST.
 

Premier database

Data from the Premier Database – a hospital discharge database with information from over 600 hospitals in the United States – were analyzed for the period between July 2010 and June 2015. Microbiological laboratory data were provided by 179 hospitals. “For each year, we evaluated the total percentage of patients tested for influenza A/B within 3 days of hospitalization,” lead author Abhishek Deshpande, MD, PhD, of the Cleveland Clinic, and colleagues explained.

A total of 166,268 patients with CAP were included in the study, of whom only about one-quarter were tested for influenza. Some 11.5% tested positive for the flu, the authors noted. Testing did increase, from 15.4% in 2010 to 35.6% in 2015, and it was higher during the influenza season, at close to 29%, compared with only about 8% during the summer months.

Patients who were tested for influenza were younger, at a mean age of 66.6 years, than untested patients, who were 70 years of age (P < .001). Tested patients were also less likely to have been admitted from a nursing facility (P < .001), were less likely to have been hospitalized in the preceding 6 months (P < .001), and had fewer comorbidities than those who were not tested (P < .001).

“Both groups had similar illness severities on admission,” the authors observed, “but patients who were tested were less likely to die in the hospital within 14 days” – at 6.7% versus 10.9% for untested patients (P < .001).

More than 80% of patients who tested positive for influenza received an antibacterial on day 1 of their admission, compared with virtually all of those who were either not tested or tested negative (P < .001), the investigators added. The mean duration of antibacterial therapy among patients with a bacterial coinfection was not influenced by influenza test results.

However, among those who tested positive for influenza, almost 60% received oseltamivir on day 1, whereas roughly 30% received treatment on day 2 or later. In fact, almost all patients who received early oseltamivir were tested for influenza on day 1, the investigators pointed out. Patients who received early oseltamivir had 25% lower odds of death within the first 14 days in the hospital (adjusted odds ratio, 0.75; 95% confidence interval, 0.59-0.96).

Early initiation of the antiviral also reduced the odds of requiring subsequent ICU care by 36% (aOR, 0.64), invasive mechanical ventilation by 46% (aOR, 0.54), and vasopressor therapy by 47% (aOR, 0.53). All of these reductions were statistically significant.

Early use of antiviral therapy also reduced both the length of hospital stay and the cost of that stay by 12%.
 

ATS-IDSA guidelines

As Dr. Deshpande noted, the American Thoracic Society and the Infectious Diseases Society of America guidelines recommend testing for and empiric treatment of influenza in patients hospitalized with CAP. “Testing more inpatients, especially during the flu season, can reduce other diagnostic testing and improve antimicrobial stewardship,” he said.

Thus, while the rate of testing for influenza did increase over the 5-year study interval, “there is substantial room for improvement,” he added, as a positive test clearly does trigger the need for intervention. As Dr. Deshpande also noted, the past two influenza seasons have been mild, but influenza activity has lately picked up again in many parts of the United States.

With the COVID-19 pandemic overshadowing influenza over the past few years, “differentiating between the two based on symptoms alone can be challenging,” he acknowledged, “and clinicians will need to test and treat accordingly.” This is particularly important given that this study clearly indicates that early treatment with an antiviral can lower the risk of short-term mortality in hospitalized CAP patients.

One limitation of the study was the lack of data on time of symptom onset, which may be an important confounder of the effect of oseltamivir on outcomes, the authors point out.

Asked to comment on the findings, Barbara Jones, MD, University of Utah Health, Salt Lake City, noted that timely antivirals for patients with influenza are highly effective at mitigating severe disease and are thus strongly recommended by practice guidelines.

“However, it is hard for clinicians to keep influenza on the radar and change testing and treatment approaches according to the season and prevalence [of influenza infections],” she said in an interview. “This is an important study that highlights this challenge.

“We need a better understanding of the solutions that have been effective at improving influenza recognition and treatment, possibly by studying facilities that perform well at this process,” she said.

Dr. Deshpande reported receiving research funding to his institution from the Clorox Company and consultant fees from Merck.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Early initiation of the antiviral oseltamivir (Tamiflu) reduces the risk for death in patients hospitalized with community-acquired pneumonia (CAP) but patients have to be tested for influenza first and that is not happening often enough, a large observational cohort of adult patients indicates.

“Early testing allows for early treatment, and we found that early treatment was associated with reduced mortality so testing patients during the flu season is crucial,” senior author Michael Rothberg, MD, MPH, of the Cleveland Clinic said in an interview.

“Even during the flu season, most patients with CAP in our study went untested for influenza [even though] those who received early oseltamivir exhibited lower 14-day in-hospital case fatality ... suggesting more widespread testing might improve patient outcomes,” the authors added.

The study was published online Feb. 5, 2022, in the journal CHEST.
 

Premier database

Data from the Premier Database – a hospital discharge database with information from over 600 hospitals in the United States – were analyzed between July 2010 and June 2015. Microbiological laboratory data was provided by 179 hospitals. “For each year, we evaluated the total percentage of patients tested for influenza A/B within 3 days of hospitalization,” lead author Abhishek Deshpande, MD, PhD, Cleveland Clinic, and colleagues explained.

A total of 166,268 patients with CAP were included in the study, among which only about one-quarter were tested for influenza. Some 11.5% tested positive for the flu, the authors noted. Testing did increase from 15.4% in 2010 to 35.6% in 2015 and it was higher at close to 29% during the influenza season, compared with only about 8% during the summer months.

Patients who were tested for influenza were younger at age 66.6 years, compared with untested patients, who were 70 years of age (P < .001). Tested patients were also less likely to have been admitted from a nursing facility (P < .001), were less likely to have been hospitalized in the preceding 6 months (P < .001) and have fewer comorbidities than those who were not tested (P < .001).

“Both groups had similar illness severities on admission,” the authors observed, “but patients who were tested were less likely to die in the hospital within 14 days,” the authors reported – at 6.7% versus 10.9% for untested patients (P < .001).

More than 80% of patients who tested positive for influenza received an antibacterial on day 1 of their admission, compared with virtually all those who were either not tested or who tested negative, the investigators added (P < .001). The mean duration of antibacterial therapy among patients with a bacterial coinfection was not influenced by influenza test results.

However, among those who tested positive for influenza, almost 60% received oseltamivir on day 1 whereas roughly 30% received treatment on day 2 or later. In fact, almost all patients who received early oseltamivir were tested for influenza on day 1, the investigators pointed out. Patients who received early oseltamivir had a 25% lower risk of death within the first 14 days in hospital at an adjusted odds ratio of 0.75 (95% confidence interval, 0.59-0.96).

Early initiation of the antiviral also reduced the risk of requiring subsequent ICU care by 36% at an aOR of 0.64; invasive mechanical ventilation by 46% at an aOR of 0.54, and the need for vasopressor therapy by 47% at an aOR of 0.53. All results were within the 95% confidence levels.

Early use of antiviral therapy also reduced both the length of hospital stay and the cost of that stay by 12%.
 

 

 

ATS-IDSA guidelines

As Dr. Deshpande noted, the American Thoracic Society and the Infectious Diseases Society of America guidelines recommend testing and empiric treatment of influenza in patients hospitalized with CAP. “Testing more inpatients especially during the flu season can reduce other diagnostic testing and improve antimicrobial stewardship,” Deshpande noted.

Thus, while the rate of testing for influenza did increase over the 5-year study interval, “there is substantial room for improvement,” he added, as a positive test clearly does trigger the need for intervention. As Dr. Deshpande also noted, the past two influenza seasons have been mild, but influenza activity has again picked up lately again in many parts of the United States.

With the COVID-19 pandemic overwhelming influenza over the past few years, “differentiating between the two based on symptoms alone can be challenging,” he acknowledged, “and clinicians will need to test and treat accordingly.” This is particularly important given that this study clearly indicates that early treatment with an antiviral can lower the risk of short-term mortality in hospitalized CAP patients.

One limitation of the study was the lack of data on time of symptom onset, which may be an important confounder of the effect of oseltamivir on outcomes, the authors point out. Asked to comment on the findings, Barbara Jones, MD, University of Utah Health, Salt Lake City, noted that timely antivirals for patients with influenza are highly effective at mitigating severe disease and are thus strongly recommended by practice guidelines.

“However, it is hard for clinicians to keep influenza on the radar and change testing and treatment approaches according to the season and prevalence [of influenza infections],” she said in an interview. “This is an important study that highlights this challenge.

“We need a better understanding of the solutions that have been effective at improving influenza recognition and treatment, possibly by studying facilities that perform well at this process,” she said.

Dr. Deshpande reported receiving research funding to his institution from the Clorox Company and consultant fees from Merck.

A version of this article first appeared on Medscape.com.


Sepsis common cause of ICU admissions in patients with MS

Article Type
Changed
Mon, 02/28/2022 - 15:25

Sepsis is an alarmingly common cause of ICU admission in patients with multiple sclerosis (MS), a retrospective, population-based cohort study indicates.

Furthermore, sepsis accounts for a disproportionately high share of the short-term mortality among patients with MS admitted to the ICU, the findings also show. Short-term mortality was defined in the study as a composite of in-hospital death or discharge to hospice.

“We found that the risk of short-term mortality in critically ill patients with MS is four times higher among those with sepsis ... so sepsis appears to be comparatively more lethal among patients with MS than in the general population,” Lavi Oud, MD, professor of medicine, Texas Tech University Health Sciences Center at the Permian Basin, Odessa, said in an email.

“[Although] the specific mechanisms underlying the markedly higher risk of sepsis among patients with MS compared to the general population remain to be fully elucidated ... it’s thought that the risk may stem from the dysfunction of the immune system in these patients related to MS itself and to the potentially adverse effect of the immunomodulating therapy we use in these patients,” he added.

The study was published online Jan. 11, 2022, in the Journal of Critical Care.
 

Sepsis rates

The Texas Inpatient Public Use Data File was used to identify adults with a diagnosis of MS admitted to the hospital between 2010 and 2017. Among the 19,837 patients with MS admitted to the ICU during the study interval, almost one-third (31.5%) had sepsis, investigators report. “The rate of sepsis among ICU admissions increased with age, ranging from 20.8% among those aged 18-44 to 39.4% among those aged 65 years or older,” investigators note.

The most common site of infection among MS patients admitted to the ICU was the urinary tract (65.2%), followed by the respiratory tract (36.1%). A smaller proportion of infections (7.6%) involved the skin and soft tissues, researchers note. A full one-quarter of patients developed septic shock in response to their infection, and the length of stay among patients with sepsis (mean, 10.9 days) was substantially longer than among those without sepsis (mean, 5.6 days), they observe.

The mean total hospital cost for each ICU patient with sepsis was $121,797, nearly double the mean total cost of caring for ICU patients without sepsis ($65,179). On adjusted analysis, sepsis was associated with a 42.7% (95% confidence interval, 38.9-46.5; P < .0001) longer hospital stay and a 26.2% (95% CI, 23.1-29.1; P < .0001) higher total hospital cost compared with admissions without sepsis, the authors point out.

Indeed, ICU admissions with sepsis accounted for 47.3% of all hospital days and for 46.1% of the aggregate hospital charges among all MS patients admitted to the ICU.

“The adjusted probability of short-term mortality was 13.4% (95% CI, 13.0-13.7) among ICU admissions with sepsis and 3.3% (95% CI, 3.2-3.4) among ICU admissions without sepsis,” the authors report.

This translated into 44% higher odds of short-term mortality, at an adjusted odds ratio of 1.44 (95% CI, 1.23-1.69; P < .0001), for those with sepsis compared with those without, they add. Among all ICU admissions, sepsis was reported in over two-thirds of documented short-term mortality events. The risk of short-term mortality was also almost threefold higher among patients with sepsis aged 65 years or older compared with those aged 18-44 years.
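
As a quick arithmetic check using only the figures reported here, the adjusted probabilities of 13.4% and 3.3% give a ratio of 13.4 / 3.3 ≈ 4, consistent with Dr. Oud’s earlier statement that short-term mortality is roughly four times higher in critically ill patients with MS who develop sepsis. How that probability ratio relates to the adjusted odds ratio of 1.44 is not spelled out here, so the two figures should not be read as interchangeable.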

As Dr. Oud noted, there is no specific test for sepsis, and it can initially present in an atypical manner, especially in older, frailer, chronically ill patients as well as in patients with immune dysfunction. “Thus, considering sepsis as a possible cause of new deterioration in a patient’s condition is essential, along with the timely start of sepsis-related care,” Dr. Oud observed.

A limitation of the study was that the dataset did not include information on the type of MS a patient had, the duration of their illness, the treatment received, the level of disease activity, or the level of disability.

The study had no specific funding. The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Cancer, infection risk higher in transplant patients than rejection

Article Type
Changed
Wed, 02/02/2022 - 08:15

Cancer, infection, and heart disease are greater risk factors than organ rejection for death in kidney transplant recipients who die with a functioning graft, a retrospective Mayo Clinic cohort study indicates.

“It’s important to have immunosuppression to protect people from rejection, but we wanted to be able to say, ‘What are the other causes of kidney failure that we might be able to identify that help improve longer-term outcomes’,” coauthor Andrew Bentall, MBChB, MD, a Mayo Clinic nephrologist, told this news organization.

“And I think the main thing we found is that we need to differentiate people into two groups,” he said, including younger, nondiabetic patients who develop graft failure due to alloimmunity and older, often diabetic patients “who are less likely to have a rejection episode but who are still at high risk for death from a malignancy or infection so maybe we can modify their immunosuppression, for example, and reduce their mortality risk which could be very helpful.”

The study was published online Jan. 17, 2022, in Transplantation Direct.
 

Cohort study

The cohort was made up of 5,752 consecutive kidney transplant recipients treated at one of three Mayo Clinic sites. The mean age of recipients was 53.8 years and one-quarter were 65 years of age or older. “At the time of transplantation, 69.8% were on dialysis, and 10.3% had received a prior kidney,” of which half were from a deceased donor, the authors note.

Almost all patients received tacrolimus as part of their maintenance immunosuppressive regimen. At a median follow-up of 3.5 years, overall graft loss occurred in 21.6% of patients, comprising death with a functioning graft (DWFG) in 12% and graft failure in 9.6% of patients. The most common causes of DWFG were malignancy at 20.0%, followed closely by infection at 19.7%, investigators note.

Cardiac disease was the cause of DWFG in 12.6% of patients, and the cause was unknown in 37%. Of those patients who died with a functioning graft, 12.3% died within the first year of transplantation. Roughly 45% died between 1 and 5 years later, and 42% died more than 5 years after transplantation.

On multivariable analysis, independent predictors of DWFG included:

  • Older age at transplantation (hazard ratio, 1.75; P < .001)
  • Male sex (HR, 1.34; P < .001)
  • Dialysis prior to transplant (HR, 1.49; P < .001)
  • Diabetes as a cause of end-stage renal disease (ESRD) (HR, 1.88; P < .001)
  • Prednisone use as maintenance therapy (HR, 1.34; P = .008)

Graft failure

Of the graft failures, almost one-quarter occurred within the first year of transplantation, about 42% occurred 1-5 years after transplantation, and a third occurred more than 5 years after transplantation.

The most common cause of graft failure was “alloimmunity” (39%), a term investigators used to cover all types of rejection, with smaller proportions of graft failures caused by glomerular diseases, at 18.6%, and renal tubular injury, at 13.9%.

“In the first year after transplantation, surgical complications and primary nonfunction of the allograft caused 60.3% ... of graft losses,” the authors point out. Beyond the first year, alloimmunity accounted for approximately half of the cases of graft failure, investigators note.

In the multivariable analysis for overall graft failure, risk factors included:

  • Young recipient age (HR, 0.80; P < .001)
  • History of a previous kidney transplant (HR, 1.33; P = .042)
  • Dialysis at time of transplantation (HR, 1.54; P < .001)
  • Black recipient race (HR, 1.40; P = .006)
  • Black donor race (HR, 1.35; P = .038)
  • Diabetes as a cause of ESRD (HR, 1.40; P = .002)
  • HLA mismatch (HR, 1.27; P < .001)
  • Delayed graft function (HR, 2.20; P < .001)

“Over time, DWFG was more common than graft failure,” the authors note.
 

Modifiable risk factors

As Dr. Bentall acknowledged, not all risk factors contributing to DWFG or graft failure are modifiable. However, diabetes – which stood out as a risk factor for both DWFG and graft failure – is potentially modifiable before patients reach ESRD, as he suggested. Diabetes is currently the cause of up to 40% of all ESRD cases in the United States.

“We can’t necessarily always reverse the diabetes, but there are significant new medications that can be used along with weight loss strategies to improve diabetes control,” he noted.

Similarly, it’s well established that patients who come into transplantation with a body mass index in excess of 30 kg/m2 have more scarring and damage to the kidney 5 and 10 years post-transplantation than healthy-weight patients, as Dr. Bentall observed. “Again, this is a key modifiable component, and it fits into diabetes intervention strategies as well,” he emphasized. The use of prednisone as maintenance immunosuppressive therapy similarly emerged as a risk factor for DWFG.

Transplant recipients who receive prednisone may well be a higher risk population to begin with, “but we are also using prednisone in our older patients because we try to use less induction immunosuppression at the time of transplantation. So if we can try and get people off prednisone, that may lessen their risk of infection and subsequent mortality,” Dr. Bentall noted.

A version of this article first appeared on Medscape.com.


Surrogate endpoints acceptable in AML trials, says FDA

Article Type
Changed
Tue, 02/01/2022 - 10:06

The Food and Drug Administration has been harshly criticized for using surrogate endpoints in clinical trials in its approval of new drugs, and especially so in oncology, where critics have argued that the only truly meaningful endpoint is overall survival (OS).

But the FDA is now fighting back, arguing that in certain cases surrogate endpoints do translate into overall survival benefits. A case in point is clinical trials of new treatments being investigated in patients newly diagnosed with acute myeloid leukemia (AML).

FDA investigators led by Kelly Norsworthy, MD, conducted an analysis of eight randomized clinical trials of intensive chemotherapy for the treatment of newly diagnosed AML and found that both complete remission (CR) and event-free survival (EFS) were strongly correlated with OS in all the trials studied.

The results show that these particular surrogate endpoints do have real value in predicting the efficacy of drugs for the treatment of newly diagnosed AML, they concluded.

“To our knowledge, our results represent the first direct examination of the relationship of response rate and EFS to OS using trial-level and patient-level data in patients with newly diagnosed AML treated with intensive induction chemotherapy,” the FDA investigators commented.

“The results support the FDA’s acceptance of EFS as a clinical benefit endpoint supportive of traditional approval for treatments with curative intent,” they added.

The analysis was published online on Dec. 10, 2021, in the Journal of Clinical Oncology.

“The central finding of the article, that both EFS and the CR rate are reliably associated with improved OS, is of immediate relevance and indicates that these parameters represent acceptable and appropriate surrogate endpoints for clinical trials of AML,” Courtney DiNardo, MD, University of Texas MD Anderson Cancer Center, Houston, and Daniel Pollyea, MD, University of Colorado at Denver, Aurora, wrote in an accompanying editorial.

“The establishment of CR and EFS as appropriate surrogate endpoints for patients with newly diagnosed AML receiving intensive chemotherapy will allow earlier evaluation of novel therapies and speed the delivery of safe and effective therapies to our patients,” they added.
 

Analysis of clinical trials submitted for approval

The FDA investigators conducted an analysis of eight trials that had been submitted to the agency for licensing approval between 2007 and 2011. Together, the trials included a total of 4,482 patients with newly diagnosed AML.

Five trials evaluated gemtuzumab ozogamicin (Mylotarg), while two trials evaluated a daunorubicin and cytarabine liposome injection, also known as CPX-351 (Vyxeos), and one trial evaluated midostaurin (RYDAPT).

“All were approved in combination with, or for use as, intensive induction chemotherapy in patients with newly diagnosed AML,” the team wrote. Both trial-level and patient-level associations between responses, EFS, and OS were evaluated.

The association between the hazard ratio for OS and the odds ratio for CR at the trial level was “moderate.” The association between the HR for OS and the OR for lesser response rates – namely, CR with incomplete hematologic recovery (CRi) and CR with incomplete platelet recovery (CRp) – was similarly moderate, as investigators note.

On the other hand, while the associations between the HR for OS and the HR for EFS were again moderate, “on the basis of the harmonized primary definition of EFS across trials, the association became stronger,” the authors reported. The harmonized definition of EFS across the trials included time from random assignment to treatment failure, relapse from CR, or death from any cause, whichever occurred earlier.

The FDA authors cautioned that a significant number of patients who relapsed did not die during the course of the clinical trial, resulting in a considerably longer OS, compared with EFS, in some patients. However, to further explore the relationship between response and survival, a patient-level analysis of response – namely, CR versus CRi or CRp versus no response – was performed. “Patients who achieved a CR had a [27%] better OS, compared with patients whose best response was CRi or CRp,” the authors noted.

Patients who achieved a CRi or a CRp as their best response still had a 54% better OS, compared with patients who achieved no response, irrespective of the treatment received, they added.
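
A note on scale, offered as an assumption because the article does not specify how these percentages were derived: if they reflect hazard ratios for death, a 27% better OS would correspond to a hazard ratio of roughly 1 - 0.27 = 0.73 for CR versus CRi or CRp, and a 54% better OS to a hazard ratio of roughly 0.46 for CRi or CRp versus no response.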

Interestingly enough, a small number of patients who had no response to treatment also experienced prolonged survival, possibly because of successful second-line therapy, the authors speculated.

Effective salvage therapies

Commenting further in their editorial, Dr. DiNardo and Dr. Pollyea wrote that, given that there are now multiple effective salvage therapies for the treatment of AML, an OS endpoint no longer solely reflects the effectiveness of an initial therapy, as survival will also be affected by subsequent lines of AML-directed treatment.

“Accordingly, OS should no longer be considered the sole determinant of the value of a new therapy,” the editorialists emphasized.

Furthermore, as the treatment of AML is increasingly based on biologically defined and differentially targeted subsets, “the required sample sizes and timelines to run a proper randomized, phase 3 study for an OS end point of a rare AML subset become logistically untenable,” they wrote.

That said, the editorialists considered it “highly laudable” that FDA employees performed this meta-analysis at all. “It is, moreover, gratifying to know that the experiences of clinical trial participants can be maximized beyond the original contributions made to the studies in which they originally volunteered,” the editorialists observed.

“In the United States, this example should inspire investigators and industry partners to prioritize similar analyses with their [own] data sets,” they added.

Dr. Norsworthy, Dr. DiNardo, and Dr. Pollyea disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


ICIs for NSCLC: Patients with ILD show greater risk

Article Type
Changed
Mon, 01/31/2022 - 15:03

Immune checkpoint inhibitors (ICIs) are at least as effective in patients with advanced non–small cell lung cancer (NSCLC) and mild preexisting interstitial lung disease (ILD) as in those without ILD. However, the risk of checkpoint inhibitor pneumonitis (CIP) is higher in patients with both diagnoses, and they need careful monitoring when an ICI is introduced, a systematic review and meta-analysis indicates.

“Patients with preexisting ILD, especially symptomatic ILD, are frequently excluded from clinical trials so almost all the patients [we analyzed] were diagnosed with mild preexisting ILD,” said Yuan Cheng, MD, Peking University First Hospital, Beijing, China.

“At this stage, we think that mild ILD is not a contraindication to the use of anti-programmed death-1 (PD-1) and anti-programmed death-ligand 1 (PD-L1) treatment for patients with NSCLC but whether ICIs can be used in patients with moderate to severe ILD needs further study,” she added.

The study was published online Jan. 10, 2022, in the journal CHEST.
 

Ten studies

A total of 179 patients from 10 studies were included in the review and meta-analysis. Of these studies, six were retrospective case-control studies, one was a retrospective noncontrolled study, and three were prospective, noncontrolled clinical trials. “All the included studies were from East Asian countries,” the authors noted.

Preexisting ILD was diagnosed by use of CT or high-resolution CT. The mean age of patients was 71 years (range, 33-85 years), 87% were male, and 96% of the cohort had a history of smoking. Approximately one-quarter of patients with ILD had a usual interstitial pneumonia (UIP) pattern; about the same percentage had possible UIP; one-third had a pattern inconsistent with UIP; 14% had nonspecific interstitial pneumonia (NSIP); and 6% had indeterminate UIP.

Patients received ICIs as first-, second-, or third-line or later therapy, and all were treated with ICI monotherapy with either nivolumab (Opdivo), pembrolizumab (Keytruda), or atezolizumab (Tecentriq). About 10% of patients had a PD-L1 tumor proportion score (TPS) of less than 1%, one-quarter had a PD-L1 TPS of 1%-49%, and approximately two-thirds had a TPS of 50% or greater.
 

Objective response rates

Some 35% of patients with both NSCLC and preexisting ILD achieved an objective response to ICI therapy, and almost two-thirds achieved disease control. However, the objective response rate (ORR) varied considerably across studies, ranging from 5.9% to 70%, the authors cautioned.

On meta-analysis, the pooled ORR was 34% (95% confidence interval, 20%-47%), again with significant heterogeneity (I2 = 75.9%). However, on meta-analysis of eligible studies, patients with NSCLC who had preexisting ILD were 99% more likely to achieve an objective response than those without ILD (odds ratio, 1.99; 95% CI, 1.31-3.00), the investigators pointed out.

The disease control rate (DCR) also varied considerably between studies from a low of 33.3% to a high of 100%, they added. On meta-analysis, the pooled DCR was 66% (95% CI, 56%-75%). “Meanwhile, in patients without preexisting ILD, the crude ORR and pooled ORR were 24.3% and 24% (95% CI, 17%-31%), respectively” – again with significant heterogeneity between studies (I2 = 87.4%).
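
For readers curious how pooled estimates and the I2 statistic quoted above are typically derived, the following is a minimal sketch of DerSimonian-Laird random-effects pooling of study-level response proportions. The study counts are invented for illustration only; this is not the meta-analysis's actual code or data.

```python
# Minimal sketch of a DerSimonian-Laird random-effects pooled proportion
# with an I^2 heterogeneity estimate, as commonly used in meta-analyses of
# objective response rates. The counts below are hypothetical, NOT the
# data from the CHEST meta-analysis.
import math

# (responders, total patients) per hypothetical study
studies = [(7, 20), (5, 30), (12, 25), (3, 18), (9, 15)]

# Logit-transform each study's proportion; the variance of a logit
# proportion is 1/x + 1/(n - x).
effects, variances = [], []
for x, n in studies:
    p = x / n
    effects.append(math.log(p / (1 - p)))
    variances.append(1 / x + 1 / (n - x))

# Fixed-effect (inverse-variance) weights and Cochran's Q
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(studies) - 1

# DerSimonian-Laird between-study variance (tau^2) and I^2
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects pooled estimate with a 95% CI, back-transformed
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
to_prop = lambda logit: 1 / (1 + math.exp(-logit))
print(f"Pooled ORR: {to_prop(pooled):.1%} "
      f"(95% CI, {to_prop(lo):.1%}-{to_prop(hi):.1%}); I2 = {i2:.1f}%")
```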

In contrast to the ORR, there was no difference in the DCR between the two groups, with no evidence of heterogeneity. There were no significant differences between the two groups in either median progression-free survival (PFS) or overall survival (OS). In patients with NSCLC and preexisting ILD, median PFS ranged from 1.4 to 8 months whereas median OS ranged from 15.6 to 27.8 months.

For those without preexisting ILD, the median PFS ranged from 2.3 to 8.1 months while median OS ranged from 17.4 to 25.5 months.
 

 

 

ICI safety

In patients with NSCLC and preexisting ILD, the incidence of immune-related adverse events (irAEs) of any grade was 56.7%, whereas the incidence of grade 3 or higher irAEs was 27.7%. “Among the 179 patients included in the studies, 45 developed any grade of CIP, corresponding to a crude incidence of 25.1%,” the authors noted – very similar to the pooled incidence of 27% on meta-analysis.

The pooled incidence of grade 3 and higher CIP in the same group of patients was 15%. The median time from initiation of ICIs to the development of CIP ranged from 31 to 74 days, but 88% of patients who developed CIP improved with appropriate treatment. In patients with NSCLC who did not have ILD, the pooled incidence of CIP was 10% (95% CI, 6%-13%), again with significant heterogeneity between studies (I2 = 78.8%). “Generally, CIP can be managed through ICI discontinuation with or without steroid administration,” the authors noted.

However, even though most cases of CIP can be readily managed, “the incidence of severe CIP is higher [in NSCLC patients with preexisting ILD] than in other populations,” Dr. Cheng observed. “So patients with preexisting ILD should be closely monitored during ICI therapy,” she added.

Indeed, compared with patients without preexisting ILD, the risk of grade 3 or higher CIP was significantly higher in patients with the dual diagnosis (OR, 3.23; 95% CI, 2.06-5.06), the investigators emphasized.
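
As a point of reference, an odds ratio of this kind and its 95% confidence interval can be computed from a 2×2 table of event counts. The sketch below uses invented cell counts purely to illustrate the arithmetic; it does not reproduce the pooled data behind the OR of 3.23.

```python
# Minimal sketch: odds ratio with a Wald-type 95% CI from a 2x2 table.
# The cell counts are hypothetical and are NOT the pooled study data.
import math

# rows: preexisting ILD yes/no; columns: grade >=3 CIP yes/no
a, b = 20, 100   # with ILD: events, non-events
c, d = 30, 450   # without ILD: events, non-events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```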

Limitations of the review and meta-analysis include the fact that none of the studies analyzed were randomized clinical trials and that most were retrospective and had several other shortcomings.
 

Umbrella diagnosis

Asked to comment on the review, Karthik Suresh, MD, associate professor of medicine, Johns Hopkins University, Baltimore, pointed out that ILD is really an “umbrella” diagnosis that a few hundred diseases fit under, so the first question he and members of his multidisciplinary team ask is: What is the nature of the ILD in this patient? What is the actual underlying etiology?

It could, for example, be that the patient has undergone prior chemotherapy or radiation therapy and has developed ILD as a result, as Dr. Suresh and his coauthor, Jarushka Naidoo, MD, Sidney Kimmel Comprehensive Cancer Center, Baltimore, pointed out in their paper on how to approach patients with preexisting lung disease to avoid ICI toxicities. “We’ll go back to their prior CT scans and can see the ILD has been there for years – it’s stable and the patient’s lung function is not changing,” Dr. Suresh related to this news organization.

“That’s a very different story from [patients in] whom there are new interstitial changes, who are progressing, and who are symptomatic,” he noted. Essentially, what Dr. Suresh and his team members want to know is: What is the specific subdiagnosis of this disease, how severe is it, and is it progressing? Then they need to take the tumor itself into consideration.

“Some tumors have high PD-L1 expression, others have low PD-L1 expression so response to immunotherapy is usually very different based on tumor histology,” Dr. Suresh pointed out. Thus, the next question that needs to be addressed is: What is the expected response of the tumor to ICI therapy? If a tumor is exquisitely sensitive to immunotherapy, “that changes the game,” Dr. Suresh said, “whereas with other tumors, the oncologist might say there may be some benefit but it won’t be dramatic.”

The third risk factor for ICI toxicity that needs to be evaluated is the patient’s general cardiopulmonary status. One patient may have mild, or even moderate, ILD but still walk 3 miles a day, have no heart problems, and be doing fine; another patient with the same severity of disease may have mild heart failure and be relatively debilitated and sedentary. “Performance status also plays a big role in determining treatment,” Dr. Suresh emphasized.

The presence of other pulmonary conditions such as chronic obstructive pulmonary disease – common in patients with NSCLC – has to be taken into account, too. Lastly, clinicians need to ask themselves if there are any alternative therapies that might work just as well if not better than ICI therapy for this particular patient. If the patient has had genomic testing, results might indicate that the tumor has a mutation that may respond well to targeted therapies. “We put all these factors out on the table,” Dr. Suresh said.

“And you obviously have to involve the patient, too, so they understand the risks of ICI therapy and together we decide, ‘Yes, this patient with ILD should get immunotherapy or no, they should not,’” he said.

The study had no specific funding. The study authors and Dr. Suresh have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Pediatric community-acquired pneumonia: 5 days of antibiotics better than 10 days

Article Type
Changed
Thu, 01/20/2022 - 14:00

The evidence is in: Less is more when it comes to treating uncomplicated community-acquired pneumonia (CAP) in young children. Five days of antibiotic therapy resulted in a superior clinical response compared to 10 days of treatment and had the added benefit of a lower risk of inducing antibiotic resistance, according to the randomized, controlled SCOUT-CAP trial.

“Several studies have shown shorter antibiotic courses to be non-inferior to the standard treatment strategy, but in our study, we show that a shortened 5-day course of therapy was superior to standard therapy because the short course achieved similar outcomes with fewer days of antibiotics,” Derek Williams, MD, MPH, Vanderbilt University Medical Center, Nashville, Tenn., said in an email.

“These data are immediately applicable to frontline clinicians, and we hope this study will shift the paradigm towards more judicious treatment approaches for childhood pneumonia, resulting in care that is safer and more effective,” he added.

The study was published online Jan. 18 in JAMA Pediatrics.
 

Uncomplicated CAP

The study enrolled children aged 6 months to 71 months diagnosed with uncomplicated CAP who demonstrated early clinical improvement in response to 5 days of antibiotic treatment. Participants were prescribed either amoxicillin, amoxicillin and clavulanate, or cefdinir according to standard of care and were randomized on day 6 to another 5 days of their initially prescribed antibiotic course or to placebo.

“Those assessed on day 6 were eligible only if they had not yet received a dose of antibiotic therapy on that day,” the authors write. The primary endpoint was the end-of-treatment response adjusted for duration of antibiotic risk (RADAR). As the authors explain, RADAR is a composite endpoint that ranks each child’s clinical response, resolution of symptoms, and antibiotic-associated adverse effects (AEs) in an ordinal desirability of outcome ranking, or DOOR.

“There were no differences between strategies in the DOOR or in its individual components,” Dr. Williams and colleagues point out. A total of 380 children took part in the study. The mean age of participants was 35.7 months, and half were male.

Over 90% of children randomized to active therapy were prescribed amoxicillin. “Fewer than 10% of children in either strategy had an inadequate clinical response,” the authors report.

However, children assigned to the 5-day strategy had a 69% (95% CI, 63%-75%) probability of achieving a more desirable RADAR outcome than those assigned to the standard 10-day course, whether assessed on days 6 to 10 at outcome assessment visit one (OAV1) or on days 19 to 25 at OAV2.
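
The 69% figure is a DOOR-type probability: the chance that a randomly selected child on the 5-day strategy has a more desirable ranked outcome than a randomly selected child on the 10-day strategy, with ties split evenly. Below is a minimal sketch of that calculation on invented ordinal ranks; it illustrates the concept rather than reproducing the trial's analysis code.

```python
# Minimal sketch of a desirability-of-outcome-ranking (DOOR) probability:
# P(a short-course child has a better, i.e. lower, outcome rank than a
# standard-course child), counting ties as 1/2. Ranks are invented.
def door_probability(short_ranks, standard_ranks):
    wins = ties = 0
    for s in short_ranks:
        for t in standard_ranks:
            if s < t:        # lower rank = more desirable outcome
                wins += 1
            elif s == t:
                ties += 1
    n_pairs = len(short_ranks) * len(standard_ranks)
    return (wins + 0.5 * ties) / n_pairs

# Hypothetical ordinal outcome ranks (1 = best) for a handful of children
short = [1, 1, 2, 1, 3, 2, 1]
standard = [2, 3, 1, 2, 3, 2, 2]
print(f"Probability favoring short course: {door_probability(short, standard):.0%}")
```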

There were also no significant differences between the two groups in the percentage of participants with persistent symptoms at either assessment point, they note. At assessment visit one, 40% of children assigned to the short-course strategy and 37% of children assigned to the 10-day strategy reported an antibiotic-related AE, most of which were mild.
 

Resistome analysis

Some 171 children were included in a resistome analysis in which throat swabs were collected between study days 19 and 25 to quantify antibiotic resistance genes in oropharyngeal flora. The total number of resistance genes per prokaryotic cell (RGPC) was significantly lower in children treated with antibiotics for 5 days compared with children who were treated for 10 days.

Specifically, the median number of total RGPC was 1.17 (95% CI, 0.35-2.43) for the short-course strategy and 1.33 (95% CI, 0.46-11.08) for the standard-course strategy (P = .01). Similarly, the median number of β-lactamase RGPC was 0.55 (95% CI, 0.18-1.24) for the short-course strategy and 0.60 (95% CI, 0.21-2.45) for the standard-course strategy (P = .03).
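
The article does not state which test produced the P values above; one common choice for comparing skewed per-child counts such as RGPC is a Wilcoxon rank-sum (Mann-Whitney U) test. The sketch below shows that kind of comparison on simulated values and should be read only as an illustration, not as the SCOUT-CAP analysis.

```python
# Minimal sketch: compare resistance genes per prokaryotic cell (RGPC)
# between a short-course and a standard-course arm with a Wilcoxon
# rank-sum (Mann-Whitney U) test. Values are simulated for illustration;
# the actual SCOUT-CAP analysis may have used a different method.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
short_course = rng.lognormal(mean=0.0, sigma=0.8, size=85)     # hypothetical RGPC
standard_course = rng.lognormal(mean=0.3, sigma=0.8, size=86)  # hypothetical RGPC

stat, p_value = mannwhitneyu(short_course, standard_course, alternative="two-sided")
print(f"Median short course: {np.median(short_course):.2f}, "
      f"median standard course: {np.median(standard_course):.2f}, P = {p_value:.3f}")
```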

“Providing the shortest duration of antibiotics necessary to effectively treat an infection is a central tenet of antimicrobial stewardship and a convenient and cost-effective strategy for caregivers,” the authors observe. For example, reducing treatment from 10 to 5 days for outpatient CAP could reduce the number of days spent on antibiotics by up to 7.5 million days in the U.S. each year.

“If we can safely reduce antibiotic exposure, we can minimize antibiotic side effects while also helping to slow antibiotic resistance,” Dr. Williams pointed out.

Fewer days of having to give their child repeated doses of antibiotics is also more convenient for families, he added.

Asked to comment on the study, David Greenberg, MD, professor of pediatrics and infectious diseases, Ben Gurion University of the Negev, Israel, explained that, with a few exceptions, the length of antibiotic therapy recommended by various guidelines is more or less arbitrary.

“There have been no studies evaluating the recommendation for a 10-day treatment course, and it’s kind of a joke because if you look at the treatment of just about any infection, it’s either for 7 days or 14 days or even 20 days because it’s easy to calculate – it’s not that anybody proved that treatment of whatever infection it is should last this long,” he told this news organization.

Moreover, adherence to a shorter antibiotic course is much better than it is to a longer course. If, for example, physicians tell a mother to give two bottles of antibiotics over a treatment course of 10 days, she’ll finish the first bottle, which is good for 5 days, and, because the child is fine, “she forgets about the second bottle,” Dr. Greenberg said.

In one of the first studies to compare a short versus long course of antibiotic therapy in uncomplicated CAP in young children, Dr. Greenberg and colleagues initially compared a 3-day course of high-dose amoxicillin to a 10-day course of the same treatment, but the 3-day course was associated with an unacceptable failure rate. (At the time, the World Health Organization was recommending a 3-day course of antibiotics for the treatment of uncomplicated CAP in children.)

They stopped the study and then initiated a second study in which they compared a 5-day course of the same antibiotic to a 10-day course and found the 5-day course was comparable to the 10-day course in terms of clinical cure rates. As a result of his study, Dr. Greenberg has long since prescribed a 5-day course of antibiotics for his own patients.

“Five days is good,” he affirmed. “And if patients start a 10-day course of an antibiotic for, say, a urinary tract infection and a subsequent culture comes back negative, they don’t have to finish the antibiotics either.”

Dr. Williams said he has no financial ties to industry. Dr. Greenberg said he has served as a consultant for Pfizer, Merck, Johnson & Johnson, and AstraZeneca. He is also a founder of the company Beyond Air.

A version of this article first appeared on Medscape.com.


U.S. cancer deaths continue to fall, especially lung cancer

Article Type
Changed
Thu, 12/15/2022 - 17:24

In the United States, the risk of death from cancer overall has been continuously dropping since 1991, the American Cancer Society (ACS) noted in its latest report.

There has been an overall decline of 32% in cancer deaths as of 2019, or approximately 3.5 million cancer deaths averted, the report noted.

“This success is largely because of reductions in smoking that resulted in downstream declines in lung and other smoking-related cancers,” lead author Rebecca L. Siegel of the ACS, and colleagues, noted in the latest edition of the society’s annual report on cancer rates and trends.

The paper was published online Jan. 12 in CA: A Cancer Journal for Clinicians.

In particular, there has been a fall in both the incidence of and mortality from lung cancer, largely due to successful efforts to get people to quit smoking, but also from earlier diagnosis at a stage when the disease is far more amenable to treatment, noted the authors.

For example, the incidence of lung cancer declined by almost 3% per year in men between 2009 and 2018 and by 1% a year in women. The historically large gender gap in lung cancer incidence is also narrowing: in 2018, lung cancer rates were 24% higher in men than in women overall, and in some younger age groups rates were actually higher in women than in men.

Moreover, 28% of lung cancers detected in 2018 were found at a localized stage of disease compared with 17% in 2004.

Patients diagnosed with lung cancer are also living longer, with almost one-third of lung cancer patients still alive 3 years after their diagnosis compared with 21% a decade ago.

However, lung cancer is still the biggest contributor to cancer-related mortality overall, at a death toll of 350 per day – more than breast, prostate, and pancreatic cancer combined, the authors wrote.

This is 2.5 times higher than the death rate from colorectal cancer (CRC), the second leading cause of cancer death in the United States, they added.

Nevertheless, the decrease in lung cancer mortality accelerated from 3.1% per year between 2010 and 2014 to 5.4% per year during 2015 to 2019 in men and from 1.8% to 4.3% in women. “Overall, the lung cancer death rate has dropped by 56% from 1990 to 2019 in men and by 32% from 2002 to 2019 in women,” Ms. Siegel and colleagues emphasized.

Overall, the ACS projects there will be over 1.9 million new cancer cases and over 600,000 cancer deaths across the United States in 2022.


 

Patterns are changing

With prostate cancer now accounting for some 27% of all cancer diagnoses in men, recent trends in the incidence of prostate cancer are somewhat worrisome, the authors wrote. While the incidence of local-stage disease remained stable from 2014 through 2018, the incidence of advanced-stage disease has increased by 6% a year since 2011. “Consequently, the proportion of distant-stage diagnoses has more than doubled,” the authors noted, “from a low of 3.9% in 2007 to 8.2% in 2018.”

 

 

The incidence of breast cancer among women has been slowly increasing by 0.5% per year since about the mid-2000s. This increase is due at least in part to declines in fertility and increases in body weight among women, the authors suggested. Declines in breast cancer mortality have slowed in recent years, to 1% per year from 2013 to 2019, down from the 2%-3% per year seen during the 1990s and early 2000s.

As for CRC, incidence patterns are similar by sex but differ by age. For example, incidence rates of CRC declined by about 2% per year between 2014 and 2018 in individuals 50 years and older, but they increased by 1.5% per year in adults under the age of 50. Overall, however, mortality from CRC decreased by about 2% per year between 2010 and 2019, although this trend again masks increasing mortality from CRC among younger adults, where death rates rose by 1.2% per year from 2005 through 2019 in patients under the age of 50.

Pancreatic cancer is the third leading cause of cancer death in men and women combined. Here again, mortality rates slowly increased in men between 2000 and 2013 but have remained relatively stable in women.

Between 2010 and 2019, cancers of the tongue, tonsils, and oropharynx caused by human papilloma virus (HPV) increased by about 2% per year in men and by 1% per year in women.

Death from cervical cancer – despite its being one of the most preventable cancers overall – is still the second leading cause of cancer death in women between 20 and 39 years of age. “Most of these women have never been screened so this is low-hanging fruit easily addressed by increasing access to screening and [HPV] vaccination among underserved women,” Ms. Siegel said in a statement.

On the other hand, mortality from liver cancer – having increased rapidly over the past several decades – appears to have stabilized in more recent years.
 

Survival at 5 years

For all cancers combined, survival at 5 years between the mid-1970s and 2011 through 2017 increased from 50% to 68% for White patients and from 39% to 63% for Black patients. “For all stages combined, survival is highest for prostate cancer (98%), melanoma of the skin (93%) and female breast cancer (90%),” the authors pointed out.

In contrast, survival at 5 years is lowest, at 11% for pancreatic cancer, 20% for cancers of the liver and esophagus, and 22% for lung cancer.

Indeed, survival has improved since the mid-1970s for most of the common cancers, with the exception of uterine and cervical cancer, the latter because there have been few advances in treatment.

Even among the rarer blood and lymphoid malignancies, improvements in treatment strategies, including the use of targeted therapies, have resulted in major survival gains – from around 20% in the mid-1970s for patients with chronic myeloid leukemia (CML) to over 70% for CML patients diagnosed between 2011 and 2017.

Similarly, the advent of immunotherapy has doubled the 5-year survival rate for patients with metastatic melanoma, from 15% in 2004 to 30%. On the other hand, racial disparities in survival persist. For every cancer type except cancers of the pancreas and kidney, survival rates were lower for Black patients than for White patients, the researchers pointed out.

“Black individuals also have lower stage-specific survival for most cancer types,” the report authors noted. Indeed, after adjustment for sex, age, and stage at diagnosis, the risk of death is 33% higher in Black patients than White patients and 51% higher in American Indian/Alaska Natives compared to White patients.

That said, the overall incidence of cancer is still highest among White individuals, in part because of high rates of breast cancer in White women, which may in part reflect overdiagnosis of breast cancer in this patient population, as the authors suggested.

“However, Black women have the highest cancer mortality rates – 12% higher than White women,” they observed. Even more striking, Black women have a 4% lower incidence of breast cancer than White women but a 41% higher mortality risk from it.

As for pediatric and adolescent cancers, incidence rates may be increasing slightly in both age groups, but dramatic reductions in mortality since the mid-1970s – 71% among children and 61% among adolescents – remain a singular success story in the treatment of cancer overall.

All the authors are employed by the ACS.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

In the United States, the risk of death from cancer overall has been continuously dropping since 1991, the American Cancer Society (ACS) noted in its latest report.

There has been an overall decline of 32% in cancer deaths as of 2019, or approximately 3.5 million cancer deaths averted, the report noted.

“This success is largely because of reductions in smoking that resulted in downstream declines in lung and other smoking-related cancers,” lead author Rebecca L. Siegel of the ACS, and colleagues, noted in the latest edition of the society’s annual report on cancer rates and trends.

The paper was published online Jan. 12 in CA: A Cancer Journal for Clinicians.

In particular, there has been a fall in both the incidence of and mortality from lung cancer, largely due to successful efforts to get people to quit smoking, but also from earlier diagnosis at a stage when the disease is far more amenable to treatment, noted the authors.

For example, the incidence of lung cancer declined by almost 3% per year in men between the years 2009 and 2018 and by 1% a year in women. Currently, the historically large gender gap in lung cancer incidence is disappearing such that in 2018, lung cancer rates were 24% higher in men than they were in women, and rates in women were actually higher in some younger age groups than they were in men.

Moreover, 28% of lung cancers detected in 2018 were found at a localized stage of disease compared with 17% in 2004.

Patients diagnosed with lung cancer are also living longer, with almost one-third of lung cancer patients still alive 3 years after their diagnosis compared with 21% a decade ago.

However, lung cancer is still the biggest contributor to cancer-related mortality overall, at a death toll of 350 per day – more than breast, prostate, and pancreatic cancer combined, the authors wrote.

This is 2.5 times higher than the death rate from colorectal cancer (CRC), the second leading cause of cancer death in the United States, they added.

Nevertheless, the decrease in lung cancer mortality accelerated from 3.1% per year between 2010 and 2014 to 5.4% per year during 2015 to 2019 in men and from 1.8% to 4.3% in women. “Overall, the lung cancer death rate has dropped by 56% from 1990 to 2019 in men and by 32% from 2002 to 2019 in women,” Ms. Siegel and colleagues emphasized.

Overall, the ACS projects there will be over 1.9 million new cancer cases and over 600,000 cancer deaths across the United States in 2022.


 

Patterns are changing

With prostate cancer now accounting for some 27% of all cancer diagnoses in men, recent trends in the incidence of prostate cancer are somewhat worrisome, the authors wrote. While the incidence for local-stage disease remained stable from 2014 through to 2018, the incidence of advanced-stage disease has increased by 6% a year since 2011. “Consequently, the proportion of distant-stage diagnoses has more than doubled,” the authors noted, “from a low of 3.9% in 2007 to 8.2% in 2018.”

 

 

The incidence of breast cancer among women has been slowly increasing by 0.5% per year since about the mid-2000s. This increase is due at least in part to declines in fertility and increases in body weight among women, the authors suggested. Declines in breast cancer mortality have slowed in recent years, dropping from 1% per year from 2013 to 2019 from 2%-3% per year seen during the 1990s and the early 2000s.

As for CRC, incidence patterns are similar by sex but differ by age. For example, incidence rates of CRC declined by about 2% per year between 2014 and 2018 in individuals 50 years and older, but they increased by 1.5% per year in adults under the age of 50. Overall, however, mortality from CRC decreased by about 2% per year between 2010 and 2019, although this trend again masks increasing mortality from CRC among younger adults, where death rates rose by 1.2% per year from 2005 through 2019 in patients under the age of 50.

Pancreatic cancer is the third leading cause of cancer death in men and women combined. Its mortality rates slowly increased in men between 2000 and 2013 but have remained relatively stable in women.

Between 2010 and 2019, cancers of the tongue, tonsils, and oropharynx caused by human papilloma virus (HPV) increased by about 2% per year in men and by 1% per year in women.

Death from cervical cancer – despite its being one of the most preventable cancers overall – is still the second leading cause of cancer death in women between 20 and 39 years of age. “Most of these women have never been screened so this is low-hanging fruit easily addressed by increasing access to screening and [HPV] vaccination among underserved women,” Ms. Siegel said in a statement.

On the other hand, mortality from liver cancer – after increasing rapidly over the past several decades – appears to have stabilized in more recent years.
 

Survival at 5 years

For all cancers combined, 5-year survival increased between the mid-1970s and 2011-2017 from 50% to 68% for White patients and from 39% to 63% for Black patients. “For all stages combined, survival is highest for prostate cancer (98%), melanoma of the skin (93%) and female breast cancer (90%),” the authors pointed out.

In contrast, survival at 5 years is lowest, at 11% for pancreatic cancer, 20% for cancers of the liver and esophagus, and 22% for lung cancer.

Indeed, for most of the common cancers, survival has improved since the mid-1970s, with the exception of uterine and cervical cancer, the latter because there have been few advances in treatment.

Even among the rarer blood and lymphoid malignancies, improvements in treatment, including the use of targeted therapies, have resulted in major survival gains: 5-year survival in chronic myeloid leukemia (CML) rose from around 20% in the mid-1970s to over 70% for patients diagnosed between 2011 and 2017.

Similarly, the discovery and use of immunotherapy has doubled the 5-year survival rate for patients with metastatic melanoma, from 15% in 2004 to 30%. On the other hand, racial disparities in survival persist. For every cancer type except cancers of the pancreas and kidney, survival rates were lower for Black patients than for White patients, the researchers pointed out.

“Black individuals also have lower stage-specific survival for most cancer types,” the report authors noted. Indeed, after adjustment for sex, age, and stage at diagnosis, the risk of death is 33% higher in Black patients and 51% higher in American Indian/Alaska Native patients than in White patients.

That said, the overall incidence of cancer is still highest among White individuals, partly because of high rates of breast cancer in White women, which may in part reflect overdiagnosis in this population, the authors suggested.

“However, Black women have the highest cancer mortality rates – 12% higher than White women,” they observed. Even more striking, Black women have a 4% lower incidence of breast cancer than White women but a 41% higher mortality risk from it.

As for pediatric and adolescent cancers, incidence rates may be increasing slightly in both age groups, but the dramatic reductions in death since the mid-1970s – by 71% among children and by 61% among adolescents – remain a singular success story in the treatment of cancer overall.

All the authors are employed by the ACS.

A version of this article first appeared on Medscape.com.

