Extended virus shedding after COVID-19 in some patients with cancer


Patients who are profoundly immunosuppressed after extensive cancer treatment, and who fall ill with COVID-19, can shed viable SARS-CoV-2 virus for at least 2 months after symptom onset and may need extended periods of isolation.

Live-virus shedding was detected in 18 patients who had undergone hematopoietic stem cell transplants or chimeric antigen receptor (CAR) T-cell therapy and in 2 patients with lymphoma.

The finding was reported Dec. 1 in a research letter in the New England Journal of Medicine. 

Individuals who are otherwise healthy when they get COVID-19 are “no longer infectious after the first week of illness,” said lead author Mini Kamboj, MD, chief medical epidemiologist, Memorial Sloan Kettering Cancer Center, New York. 

“We need to keep an open mind about how [much] longer immunocompromised patients could pose an infection risk to others,” she added.

Dr. Kamboj said in an interview that her team’s previous experience with stem cell transplant recipients had suggested that severely immunocompromised patients shed other viruses (such as respiratory syncytial virus, parainfluenza, and influenza) for longer periods of time than do healthy controls.

Based on their latest findings, the investigators suggest that current guidelines for COVID-19 isolation precautions may need to be revised for immunocompromised patients. Even if only a small proportion of patients with cancer who have COVID-19 remain contagious for prolonged periods of time, “it’s a residual risk that we need to address,” Dr. Kamboj said. 

Dr. Kamboj also suggested that physicians follow test-based criteria to determine when a patient undergoing transplant can be released from isolation.

Shedding of viable virus

For this study, the investigators used cell cultures to detect viable virus in serially collected nasopharyngeal and sputum samples from 20 immunocompromised patients diagnosed with COVID-19 between March 10 and April 20.

Patients had lymphoma (n = 8), multiple myeloma (n = 7), acute leukemia/myelodysplastic syndrome (n = 4), and chronic leukemia (n = 1). There were 16 patients who had undergone transplant, 2 who had received CAR T-cell therapy, and 2 who had received other therapy.

Fifteen patients were receiving active treatment or chemotherapy, and 11 developed severe COVID-19.

In total, 78 respiratory samples were collected.

“Viral RNA was detected for up to 78 days after the onset of symptoms,” the researchers reported, “[and] viable virus was detected in 10 of 14 nasopharyngeal samples (71%) that were available from the first day of laboratory testing.”

Five patients were followed up, and from these patients the team grew virus in culture for up to 61 days after symptom onset. Two of these five patients had received allogeneic hematopoietic stem cell transplants, and one had been treated with CAR T-cell therapy within the previous 6 months; that patient remained seronegative for antibodies to the coronavirus.

For 11 patients, the team sequenced viral genomes from serial samples and found that “each patient was infected by a distinct virus and there were no major changes in the consensus sequences of the original serial specimens or cultured isolates.” These findings were consistent with persistent infection, they noted.

The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Black race linked to poorer survival in AML

Article Type
Changed
Tue, 12/15/2020 - 09:09

Black race is the most important risk factor for poor survival in patients with acute myeloid leukemia (AML), according to new findings.

Among patients with AML younger than 60 years, the rate of 3-year overall survival was significantly lower among Black patients than among White patients (34% vs. 43%). The risk for death was 27% higher for Black patients than for White patients.

“Our study demonstrates the delicate interplay between a variety of factors that influence survival disparities, particularly for younger Black AML patients,” said first author Bhavana Bhatnagar, DO, of the Ohio State University’s Comprehensive Cancer Center, Columbus. “We were able to confirm the impact of socioeconomic factors while also demonstrating that being Black is, in and of itself, an independent poor prognostic variable for survival.”

She noted that the persistently poor outcomes of young Black patients, seen despite similar treatment in clinical trials, strongly suggest that additional factors have a bearing on their survival.

The findings of the study were presented during the plenary session of the annual meeting of the American Society of Hematology, which was held online this year. The study was simultaneously published in Cancer Discovery.

Racial disparities in cancer outcomes remain a challenge. The term “health disparities” describes differences in health outcomes among different groups, said Chancellor Donald, MD, of Tulane University, New Orleans, who introduced the article at the meeting. “Racial health disparities usually result from an unequal distribution of power and resources, not genetics.

“The examination of health disparities is certainly a worthwhile endeavor,” he continued. “For generations, differences in key health outcomes have negatively impacted the quality of life and shortened the life span of countless individuals. As scientists, clinicians, and invested members of our shared society, we are obligated to obtain a profound understanding of the mechanisms and impact of this morbid reality.”

Black race a risk factor

For their study, Dr. Bhatnagar and colleagues conducted a nationwide population analysis using data from the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute to identify 11,190 adults aged 18-60 years who were diagnosed with AML between 1986 and 2015.

To characterize molecular features, they conducted targeted sequencing of 81 genes in 1,339 patients with AML who were treated on frontline Cancer and Leukemia Group B/Alliance for Clinical Trials in Oncology (Alliance) protocols based on standard-intensity cytarabine/anthracycline induction followed by consolidation between 1986 and 2016. None of these patients received an allogeneic stem cell transplant when they achieved complete remission.

Although overall survival has improved during the past 3 decades, survival disparities between Black and White patients have widened over time (P < .001). The authors found a statistically nonsignificant difference in survival between 1986 and 1995 (White patients, n = 1,365; Black patients, n = 160; P = .19). However, the difference was significant between 1996 and 2005 (White patients, n = 2,994; Black patients, n = 480; P = .004). “And it became even more noticeable in the most recent decade,” said Dr. Bhatnagar. “Furthermore, younger Black AML patients were found to have worse survival compared with younger White AML patients.”

Results from the second analysis of patients treated on Alliance protocols did not show any significant differences in early death rates (10% vs. 46%; P = .02) or complete remission rates (71% vs. 71%; P = 1.00). “While relapse rates were slightly higher in Black compared to White patients, this difference did not reach statistical significance,” said Dr. Bhatnagar. “There was also no significant difference in the number of cycles of consolidation chemotherapy administered to these patients.”

However, both disease-free and overall survival were significantly worse for Black patients, suggesting that factors other than treatment selection were likely at play in influencing the survival disparity. The median disease-free survival for Black patients was 0.8 years vs. 1.4 years for White patients (P = .02). Overall survival was 1.2 years vs. 1.8 years (P = .02).

Relapse rates were slightly higher in Black patients than in White patients, at 71% vs. 59%, but this difference did not reach statistical significance (P = .14).

Differences in biomarkers

With regard to underlying molecular differences between Black and White patients, the investigators found that the most common mutations were in NPM1, FLT3-ITD, and DNMT3A, each of which was mutated in more than 20% of Black patients. Other commonly mutated genes were IDH2, NRAS, TET2, IDH1, and TP53, which were mutated in more than 10% of patients. “All of these genes are established commonly mutated genes in AML,” said Dr. Bhatnagar.

On univariable and multivariable outcome analyses, which were used to identify clinical or molecular features that had a bearing on outcome, FLT3-ITD and IDH2 mutations were the only mutations associated with a higher risk for death among Black patients.

“This is actually a very important finding, as both FLT3 and IDH2 are now targetable with small-molecule inhibitors,” said Dr. Bhatnagar. “In addition, it is also worth noting that other gene mutations that have known prognostic significance in AML, such as NPM1, as well as RUNX1 and TP53, did not remain in the final statistical model.

“Importantly, our study provides powerful evidence that suggests differences in underlying disease biology between young Black and White AML patients, as evidenced by differences in the frequencies of recurrent gene mutations,” she said.

Understudied disparities

Although the study showed that Black patients had worse outcomes, “surprisingly, the authors found these outcomes hold even when the patients are participating in clinical trials,” noted Elisa Weiss, PhD, senior vice president of education, services, and health research for the Leukemia and Lymphoma Society.

“The study makes clear that the medical and science community need to do more to better understand the social, economic, environmental, and biological causes of these disparities,” she said in an interview. “In fact, the findings suggest that there are myriad complex and understudied causes of the identified disparities, and they are likely to lie at the intersection of all levels of the social ecology that impact an individual’s ability to access timely and unbiased care, maintain their mental and physical health, and receive needed social support and resources.”

She noted that the Leukemia and Lymphoma Society has an Equity in Access research program that aims to “advance study of underlying causes of inequitable access to care and identify policies, strategies, and interventions that have the potential to reduce inequities and increase access to health care, services, and programs for blood cancer patients and survivors.”

The research was supported in part by the National Cancer Institute of the National Institutes of Health, other institutions, and through several scholar awards. Dr. Bhatnagar has received advisory board honoraria from Novartis, Kite Pharma, Celgene, Astellas, and Cell Therapeutics. Dr. Weiss has disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


HCV Special Populations: Effective Treatments, Addressing Unmet Needs, and Navigating COVID-19 with Dr. Hugo Rosen


Has the introduction of direct-acting antiviral (DAA) medications had a significant benefit in special populations of patients with hepatitis C virus (HCV)?

Dr. Rosen: Absolutely. The gaps and challenges for particular special populations have been largely overcome with the development of new HCV agents.

For example, we now know that HIV-HCV coinfection—which was a particularly vexing situation because of the significant side effects of interferon-based therapies and the much lower efficacy—has no impact on HCV treatment outcome if we use DAAs. Remarkably, the management, indications for treatment, and follow-up of HCV infection are now the same for both patient populations. This is particularly relevant because we know that HIV coinfection leads to a more accelerated development of advanced fibrosis, and we know that fibrosis is the single most important predictor of outcome.

Having said that, HIV-HCV coinfected patients require careful evaluation of any potential drug-drug interactions between the HCV drugs and HIV antiretroviral therapy, as well as attention to medications for substance abuse and to co-medications for other comorbidities. If we look at the issue of drug-drug interactions, the good news is that the main culprit in terms of interaction with antiretroviral therapy was ritonavir. That was approved in combination with other antivirals in 2014, but it is not part of a commonly utilized regimen of DAAs. Simeprevir is also prone to drug interactions via cytochrome P450 and was contraindicated with several HIV antiretrovirals, but neither of those drugs is really used to treat HCV now. Very few clinically significant interactions are expected with sofosbuvir or ledipasvir, so we do not expect to see the drug-drug interactions we saw with the first generation of DAAs.

Let us now look at people who inject drugs or use drugs (PWIDs). I think the biggest challenge globally is providing treatment access to these patients. If you consider that 1 person who injects drugs can theoretically infect 20 other subjects, it is imperative to treat this population.

Historically, there has been stigmatization of these patients because of their use of drugs. PWIDs are increasingly being considered eligible for treatment, but it is important to underscore that eligibility does not necessarily translate to access. Adherence and response to DAA therapy in this patient population present some additional challenges. Many of these patients are receiving opioid substitution therapy, which has actually been shown to increase the likelihood of success with DAAs, but there is concern in this population about ongoing drug use and HCV reinfection.

When we treat this patient population, we have to be cognizant of these issues and develop strategies that maximize the likelihood of staying on treatment. On average, about 12% of PWIDs drop out or are lost to follow-up, which is problematic. A smaller percentage of those who continue injecting drugs develop reinfection, for example, with a different genotype. We know that DAA-mediated cure does not confer immunity to different strains, or even to the same strain of virus, so we need to develop processes to maximize prevention of reinfection. That may include opioid substitution therapy or needle exchange programs. There are also a lot of data emerging around the world about the benefit of adjunctive behavioral therapy. Again, it all starts with access to DAA treatment in this patient population, and then retreatment if they get reinfected.

The next population is patients with end-stage renal disease. It has been a remarkable transformation. In the past, these patients would not tolerate interferon-based therapy and would develop severe anemia with ribavirin. Now, with DAAs, those complications do not arise. We know that successful HCV therapy improves clinical outcomes in patients who have end-stage renal disease and has been associated with a very significant survival benefit in patients on dialysis. Among diabetic patients with end-stage renal disease, achieving a sustained virologic response reduces the risk of developing extrahepatic manifestations of the disease, and that happens regardless of cirrhosis.

Drugs that have been very effective in patients who have chronic renal failure include elbasvir and grazoprevir, as well as glecaprevir and pibrentasvir. We know that the concentration of sofosbuvir is a concern; concentrations of the primary sofosbuvir metabolite are higher in persons with renal impairment, but many studies have demonstrated the safety and efficacy of sofosbuvir-based regimens even in patients who have an estimated GFR of less than 30 mL per minute.

Are there any other unmet needs that still exist in treating any of these special patient populations?

Dr. Rosen: Addressing medical, psychological, social, and addiction-related barriers is important for PWIDs. As we touched on earlier, these patients need a deliberate and nuanced approach, a multidisciplinary model of care, in order to help decrease the high dropout rate. The data that are coming out, with real-world efficacy at more than 90%, certainly justify providing access for these patients. We know that these patients can be treated and that they can achieve a high cure rate.

Appropriately treating patients who have renal failure does remain a challenge. The question is, when do you treat someone who has end-stage renal disease and HCV? The answer pivots on access to organs. We can use HCV-positive organs in patients who have either been cured of HCV and are now negative for virus, or in people who have never been exposed to HCV.

You might wonder, what is the rationale for that? The average wait-list time for an HCV-negative kidney is about 6.6 years. If you are willing to take an HCV-positive kidney, it is closer to 4 years. I recently saw a patient who got a kidney transplant, developed kidney allograft failure, and is now being considered for retransplantation. In this situation, if I cure his HCV—which we know we can do very safely with the current therapies—it is potentially going to extend his time waiting for his next kidney.

In this situation, my plan would be to do a FibroScan, which tells me how much scar tissue he has in his liver. If he has fibrosis stage 0 or 1, I will probably not treat him and will put him into the pool to receive an HCV-positive kidney. If, on the other hand, he has fibrosis stage 3 or 4, I probably will cure his hepatitis C, because he is immunosuppressed and has a significant chance of progressing to advanced liver disease and even decompensation.

I think we should treat everybody who has HCV because we know it improves quality of life and reduces both liver-related and all-cause mortality. In the case of patients who have end-organ damage and are waiting for transplant, treatment may prolong their wait time unless the patient is willing to accept an HCV-positive organ. Considering the remarkable success in treating HCV after organ transplant, the use of HCV-positive organs in HCV-negative recipients continues to expand.

What new challenges, if any, has COVID-19 presented in treating these high-risk or special populations?

Dr. Rosen: The good news is we have multiple, highly effective pangenotypic DAA regimens.

We know that all patients with HCV should be treated, especially those who have advanced fibrosis. Recent data show that patients who have advanced fibrosis, whether measured by a noninvasive blood test such as FIB-4 or by a noninvasive FibroScan, have a higher risk of death if they develop COVID-19.

COVID-19 is associated with liver function abnormalities in 20% or more of all patients who contract it, and those patients who develop elevated liver function tests have a higher mortality. Mortality is particularly high in patients who have cirrhosis.

We do not think that patients who have HCV are more prone to develop COVID-19—there are no data to support that. However, patients who have advanced fibrosis or evidence of hepatic involvement, as manifested by elevated liver function tests, have at least a twofold higher chance of mortality.

The other challenge, of course, is what COVID-19 has done to our ability to deliver care. It has made it inconvenient to treat patients. Patients who have advanced fibrosis or cirrhosis and are cured with DAAs still need surveillance for the development of hepatocellular carcinoma. Their risk of developing hepatocellular carcinoma is lower than that of patients who have not been cured, but there is still a residual risk, which often requires imaging. COVID-19-related changes have negatively impacted our ability to do that.

Author and Disclosure Information

Hugo Rosen, MD, is the Norris Chair of Medicine and Professor of medicine, immunology, and molecular microbiology at Keck School of Medicine, University of Southern California. For additional information about the latest HCV treatment guidelines for special patient populations, please visit www.hcvguidelines.org.

Dr. Rosen had nothing to disclose.


Has the introduction of direct-acting antiviral (DAA) medications had a significant benefit in special populations of patients with hepatitis C virus (HCV)?

Dr. Rosen:  Absolutely. The gaps and challenges for particular special populations have been largely overcome with the development of new HCV agents.

 

For example, we now know that HIV-HCV coinfection—which was a particularly vexing situation because of the significant side effects of interferon-based therapies and the much lower efficacy—has no impact on HCV treatment outcome if we use DAAs. Remarkably, the management, indication of treatment, and follow up of HCV infection are now the same for both patient populations. It is particularly relevant because we know that HIV coinfection leads to a more accelerated development of advanced fibrosis, and we know that fibrosis is the single most important predictor of outcome.

 

Having said that, HIV-HCV coinfected patients require careful evaluation of any potential drug-drug interactions between the HCV drugs and HIV antiretroviral therapy, and it also requires attention to medication for substance abuse and other co-medications for other comorbidities. If we look at the issue of drug-drug interactions, the good news is that the main culprit in terms of interaction with antiretroviral therapy, was ritonavir. That was approved in combination with other antivirals in 2014, but it is not part of a commonly utilized regimen of DAAs. Simeprevir is also prone to drug interactions with cytochrome P450. This was contraindicated when used with several HIV antiretrovirals, but neither one of those drugs are really used to treat HCV now. Very few clinically significant interactions are expected with sofosbuvir or ledipasvir, so we do not expect to see drug-drug interactions as we did in the first generation of DAAs.

 

Let us now look at people who inject drugs or use drugs (PWIDs.) I think it clearly is the biggest challenge globally to provide treatment access to these patients. If you consider that 1 person who injects drugs can theoretically infect 20 other subjects, it is imperative to treat this population.

 

Historically, there has been stigmatization of these patients because of the use of drugs. PWIDs are increasingly being considered to be eligible for treatment, but it is important to underscore that eligibility does not necessarily translate to or equate to access. Adherence to and response to DAA therapy among this patient population represents some additional challenges. Many of these patients are receiving opioid substitution therapy and that has actually been shown to increase the likelihood of success with DAAs, but there is a concern in this patient population of ongoing drug use and HCV reinfection.

 

When we treat this patient population, we have to be cognizant of these issues and develop strategies that maximize their likelihood of staying on treatment. On average, about 12% of patients who are PWIDs drop out or are lost to follow up, which is problematic. A smaller percentage of those who continue injecting drugs develop reinfection, for example, with a different genotype. We know that DAA-mediated cure does not confer immunity to different strains or even the same strain of virus, and so we need to develop processes to maximize prevention of reinfection. That may include opioid substitution therapy, or needle exchange programs. There are also a lot of data emerging around the world about the benefit of the adjunctive role of behavioral therapy. Again, it all starts with access to DAA treatment in this patient population, and then retreatment if they get reinfected.

 

The next population is patients with end stage renal disease. It has been a remarkable transformation. These patients in the past would not tolerate interferon-based therapy and would develop severe anemia with ribavirin. Now with DAAs, those complications do not arise. We know that successful HCV therapy improves clinical outcomes in patients who have end stage renal disease. This has been associated with a very significant survival benefit in patients on dialysis. Among diabetic patients who have end stage renal disease, if one achieves sustained virologic response, it reduces the risk of developing extrahepatic manifestations of the disease and that happens regardless of cirrhosis.

 

Drugs that have been very effective in patients who have chronic renal failure include elbasvir and grazoprevir, as well as glecaprevir and pibrentasvir. We know that the concentration of sofosbuvir is a concern; there are higher concentrations of the primary sofosbuvir metabolite in persons who develop renal impairment, but many studies have demonstrated the safety and efficacy of sofosbuvir-based regimens even in those patients who have an estimated GFR less than 30 mL per minute.

 

Are there any other unmet needs that still exist in treating any of these special patient populations?

 

Dr. Rosen: Addressing medical, psychological, social, and addiction-related barriers are all important for PWIDs. As we touched on earlier, there needs to be a deliberate and nuanced approach for these subjects, a multidisciplinary model of care in order to help decrease the high dropout rate. The data that are coming out certainly justifies providing access to these patients, with real-world efficacy at more than 90%. We know that these patients can be treated and that they can achieve a high cure rate.

 

Appropriately treating patients who have renal failure does remain a challenge. The question is, when do you treat someone who has end stage renal disease and HCV? The answer pivots on the accessibility to organs. We can use HCV positive organs in patients who have either been cured of HCV and now are negative for virus, or in people who have never been exposed to HCV.

 

You might wonder, what is the rationale for that? The average wait list time for an HCV negative kidney is about 6.6 years. If you are willing to take an HCV positive kidney, it is closer to 4 years. I recently saw a patient who got a kidney transplant, developed kidney allograft failure, and is now being considered for retransplantation. In this situation, if I cure his HCV—which we know we can do with the current therapies very safely—it is potentially going to extend his time waiting for his next kidney.

In this situation, my plan would be to do a fibroscan, which tells me how much scar tissue he has in his liver. If he has fibrosis 0 or 1, I will probably not treat him and put him into the pool that he receives an HCV positive kidney. If, on the other hand, he has fibrosis 3 or 4, I probably will cure his hepatitis C because he is immunosuppressed and he has a significant chance of further developing advanced liver disease and even decompensation.

 

I think we should treat everybody who has HCV because we know it improves quality of life, it improves liver-related mortality, and it improves all-cause mortality. In the case of patients who have end organ damage and are waiting for transplant, it may prolong their wait time unless the patient is willing to accept that HCV positive organ. Considering the remarkable success in treating HCV post-organ transplant, the use of HCV-positive organs into HCV-negative continues to expand. 

 

What new challenges, if any, has COVID-19 presented in treating these high risk or special populations?

 

Dr. Rosen: The good news is we have multiple, highly effective pangenotypic DAA regimens.

We know that all patients with HCV should be treated, especially those who have advanced fibrosis. Recent data show that patients who have advanced fibrosis, whether measured by a noninvasive blood test, like FIB-4 or a noninvasive fibroscan have a higher risk of death if they develop COVID-19.

 

COVID-19 is associated with liver function abnormalities in 20% or more of all patients who contract it, and those patients who develop elevated liver function tests have a higher mortality. It is particularly high in patients who have cirrhosis.

 

We do not think that patients who have HCV are more prone to develop COVID-19—there are no data to support that. However, if there is evidence that in patients with advanced fibrosis or evidence of hepatic involvement as manifested by elevated liver function tests, those patients have at least a twofold higher chance of mortality.

 

The other challenge, of course, is what COVID-19 has done to our ability to deliver care. It has made it inconvenient to treat patients. Patients who have advanced fibrosis or cirrhosis who get cured with DAAs also still need surveillance for development of hepatocellular carcinoma. Their risk of developing hepatocellular carcinoma is lower than patients who have not been cured, but there is still a residual risk which often requires imaging COVID-19-related changes have negatively impacted our ability to do that.

Has the introduction of direct-acting antiviral (DAA) medications had a significant benefit in special populations of patients with hepatitis C virus (HCV)?

Dr. Rosen:  Absolutely. The gaps and challenges for particular special populations have been largely overcome with the development of new HCV agents.

 

For example, we now know that HIV-HCV coinfection—which was a particularly vexing situation because of the significant side effects of interferon-based therapies and the much lower efficacy—has no impact on HCV treatment outcome if we use DAAs. Remarkably, the management, indication of treatment, and follow up of HCV infection are now the same for both patient populations. It is particularly relevant because we know that HIV coinfection leads to a more accelerated development of advanced fibrosis, and we know that fibrosis is the single most important predictor of outcome.

 

Having said that, HIV-HCV coinfected patients require careful evaluation of any potential drug-drug interactions between the HCV drugs and HIV antiretroviral therapy, and it also requires attention to medication for substance abuse and other co-medications for other comorbidities. If we look at the issue of drug-drug interactions, the good news is that the main culprit in terms of interaction with antiretroviral therapy, was ritonavir. That was approved in combination with other antivirals in 2014, but it is not part of a commonly utilized regimen of DAAs. Simeprevir is also prone to drug interactions with cytochrome P450. This was contraindicated when used with several HIV antiretrovirals, but neither one of those drugs are really used to treat HCV now. Very few clinically significant interactions are expected with sofosbuvir or ledipasvir, so we do not expect to see drug-drug interactions as we did in the first generation of DAAs.

 

Let us now look at people who inject drugs or use drugs (PWIDs.) I think it clearly is the biggest challenge globally to provide treatment access to these patients. If you consider that 1 person who injects drugs can theoretically infect 20 other subjects, it is imperative to treat this population.

 

Historically, there has been stigmatization of these patients because of the use of drugs. PWIDs are increasingly being considered to be eligible for treatment, but it is important to underscore that eligibility does not necessarily translate to or equate to access. Adherence to and response to DAA therapy among this patient population represents some additional challenges. Many of these patients are receiving opioid substitution therapy and that has actually been shown to increase the likelihood of success with DAAs, but there is a concern in this patient population of ongoing drug use and HCV reinfection.

 

When we treat this patient population, we have to be cognizant of these issues and develop strategies that maximize their likelihood of staying on treatment. On average, about 12% of patients who are PWIDs drop out or are lost to follow up, which is problematic. A smaller percentage of those who continue injecting drugs develop reinfection, for example, with a different genotype. We know that DAA-mediated cure does not confer immunity to different strains or even the same strain of virus, and so we need to develop processes to maximize prevention of reinfection. That may include opioid substitution therapy, or needle exchange programs. There are also a lot of data emerging around the world about the benefit of the adjunctive role of behavioral therapy. Again, it all starts with access to DAA treatment in this patient population, and then retreatment if they get reinfected.

 

The next population is patients with end stage renal disease. It has been a remarkable transformation. These patients in the past would not tolerate interferon-based therapy and would develop severe anemia with ribavirin. Now with DAAs, those complications do not arise. We know that successful HCV therapy improves clinical outcomes in patients who have end stage renal disease. This has been associated with a very significant survival benefit in patients on dialysis. Among diabetic patients who have end stage renal disease, if one achieves sustained virologic response, it reduces the risk of developing extrahepatic manifestations of the disease and that happens regardless of cirrhosis.

 

Drugs that have been very effective in patients who have chronic renal failure include elbasvir/grazoprevir and glecaprevir/pibrentasvir. The concentration of sofosbuvir is a concern; higher concentrations of the primary sofosbuvir metabolite accumulate in persons with renal impairment, but many studies have demonstrated the safety and efficacy of sofosbuvir-based regimens even in patients with an estimated GFR below 30 mL per minute.

 

Are there any other unmet needs that still exist in treating any of these special patient populations?

 

Dr. Rosen: Addressing medical, psychological, social, and addiction-related barriers is important for PWIDs. As we touched on earlier, these subjects need a deliberate and nuanced approach—a multidisciplinary model of care—to help decrease the high dropout rate. The data that are coming out certainly justify providing access for these patients, with real-world efficacy at more than 90%. We know that these patients can be treated and that they can achieve a high cure rate.

 

Appropriately treating patients who have renal failure does remain a challenge. The question is, when do you treat someone who has end-stage renal disease and HCV? The answer pivots on access to organs. We can use HCV-positive organs in patients who have been cured of HCV and are now negative for the virus, or in people who have never been exposed to HCV.

 

You might wonder, what is the rationale for that? The average wait-list time for an HCV-negative kidney is about 6.6 years; for an HCV-positive kidney, it is closer to 4 years. I recently saw a patient who received a kidney transplant, developed kidney allograft failure, and is now being considered for retransplantation. In this situation, if I cure his HCV—which we know we can do very safely with the current therapies—it is potentially going to extend his time waiting for his next kidney.

In this situation, my plan would be to do a FibroScan, which tells me how much scar tissue he has in his liver. If he has fibrosis stage 0 or 1, I will probably not treat him and will keep him in the pool to receive an HCV-positive kidney. If, on the other hand, he has fibrosis stage 3 or 4, I probably will cure his hepatitis C, because he is immunosuppressed and has a significant chance of progressing to advanced liver disease and even decompensation.
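
To make that triage concrete, here is a minimal sketch of the decision logic in Python. The staging threshold simply restates Dr. Rosen's description; it is illustrative, not a validated clinical algorithm, and the function name is ours.

    # Illustrative sketch only of the triage logic described above; not a
    # validated clinical algorithm. Staging is METAVIR-style (F0-F4).
    def cure_hcv_before_retransplant(fibrosis_stage: int) -> bool:
        """Decide whether to cure HCV before kidney retransplantation."""
        if fibrosis_stage >= 3:
            # F3-F4: treat now; an immunosuppressed patient risks progressing
            # to advanced liver disease and even decompensation.
            return True
        # F0-F1 (the anecdote does not address F2): defer treatment so the
        # patient remains eligible for an HCV-positive kidney, with a wait
        # closer to 4 years versus about 6.6 years for an HCV-negative organ.
        return False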

 

I think we should treat everybody who has HCV because we know it improves quality of life, liver-related mortality, and all-cause mortality. For patients who have end-organ damage and are waiting for transplant, treatment may prolong their wait time unless they are willing to accept an HCV-positive organ. Considering the remarkable success in treating HCV after organ transplant, the use of HCV-positive organs in HCV-negative recipients continues to expand.

 

What new challenges, if any, has COVID-19 presented in treating these high risk or special populations?

 

Dr. Rosen: The good news is we have multiple, highly effective pangenotypic DAA regimens.

We know that all patients with HCV should be treated, especially those who have advanced fibrosis. Recent data show that patients who have advanced fibrosis—whether identified by a noninvasive blood test such as FIB-4 or by a noninvasive FibroScan—have a higher risk of death if they develop COVID-19.

 

COVID-19 is associated with liver function abnormalities in 20% or more of all patients who contract it, and those who develop elevated liver function tests have higher mortality. The risk is particularly high in patients who have cirrhosis.

 

We do not think that patients who have HCV are more prone to develop COVID-19—there are no data to support that. However, patients with advanced fibrosis, or with evidence of hepatic involvement as manifested by elevated liver function tests, have at least a twofold higher chance of mortality.

 

The other challenge, of course, is what COVID-19 has done to our ability to deliver care. It has made it inconvenient to treat patients. Patients who have advanced fibrosis or cirrhosis and are cured with DAAs still need surveillance for the development of hepatocellular carcinoma. Their risk of developing hepatocellular carcinoma is lower than that of patients who have not been cured, but there is still a residual risk, which often requires imaging. COVID-19-related changes have negatively impacted our ability to do that.


New tool may provide point-of-care differentiation between bacterial, viral infections

Article Type
Changed
Thu, 12/10/2020 - 13:12

The World Health Organization estimates that 14.9 million of 57 million annual deaths worldwide (26%) are related directly to diseases caused by bacterial and/or viral infections.

Ivana Pennisi

The first crucial step in building a successful surveillance system is to accurately identify and diagnose disease, Ivana Pennisi reminded the audience at the annual meeting of the European Society for Paediatric Infectious Diseases, held virtually this year. A problem, particularly in primary care, is differentiating between patients with bacterial infections, who might benefit from antibiotics, and those with viral infections, for whom supportive treatment is generally required. One solution might be a rapid point-of-care tool.

Ms. Pennisi described early experiences of using microchip technology to detect RNA biomarkers in the blood rather than look for the pathogen itself. Early results suggest high diagnostic accuracy at low cost.

It is known that when a bacterium or virus enters the body, it stimulates the immune system in a unique way, leading to the expression of different genes in the host blood. As part of the Personalized Management of Febrile Illnesses study, researchers have demonstrated a number of highly correlated transcripts. Of current interest are two genes that are upregulated in childhood febrile illnesses.

Ms. Pennisi, a PhD student working as part of a multidisciplinary team at the department of infectious disease and Centre for Bioinspired Technology at Imperial College London, developed loop-mediated isothermal amplification (LAMP) assays to detect, for the first time, host RNA signatures on a nucleic acid–based point-of-care handheld system to discriminate bacterial from viral infection. The amplification reaction is combined with microchip technology in the well of a portable point-of-care device named Lacewing, which translates the nucleic acid amplification signal into a quantitative electrochemical signal without the need for a thermal cycler.

The combination of genomic expertise in the section of paediatrics, led by Michael Levin, PhD, and microchip-based technologies in the department of electrical and electronic engineering, under the guidance of Pantelis Georgiou, PhD, enabled the team to overcome many clinical challenges.

Ms. Pennisi presented her team’s early experiences with clinical samples from 455 febrile children. First, reverse-transcription isothermal amplification techniques were employed to confirm bacterial and viral infections. Results were then validated using standard fluorescence-based quantitative polymerase chain reaction (PCR) instruments. To define a decision boundary between bacterial and viral patients, cutoff levels were determined using multivariate logistic regression analysis. Results were then evaluated against microarrays, reverse transcriptase PCR (RT-PCR), and the eLAMP to confirm comparability with the preferred techniques.
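
To illustrate what deriving such a cutoff looks like in practice, here is a minimal sketch: a two-feature logistic regression whose fitted coefficients define a linear decision boundary between bacterial and viral cases. The expression values, labels, and resulting cutoff are hypothetical stand-ins, not the study's transcripts or data.

    # Hypothetical sketch: fit a logistic regression on a two-gene host RNA
    # signature and read off the linear decision boundary. Values are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows: patients; columns: expression of the two signature transcripts.
    X = np.array([[8.2, 2.1], [7.9, 2.5], [8.5, 1.8],   # bacterial cases
                  [3.1, 6.7], [2.8, 7.2], [3.5, 6.1]])  # viral cases
    y = np.array([1, 1, 1, 0, 0, 0])                    # 1 = bacterial

    model = LogisticRegression().fit(X, y)

    # The boundary is w1*gene1 + w2*gene2 + b = 0; predicted probabilities
    # above 0.5 on one side are called bacterial, below are called viral.
    (w1, w2), b = model.coef_[0], model.intercept_[0]
    print(f"cutoff line: {w1:.2f}*gene1 + {w2:.2f}*gene2 + {b:.2f} = 0")

    # Probability that a new febrile child has a bacterial infection.
    print(model.predict_proba([[7.5, 2.9]])[0, 1])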

In conclusion, Ms. Pennisi reported that the two-gene signature combined with the use of eLAMP technology in a point-of-care tool offered the potential of low cost and accurate discrimination between bacterial and viral infection in febrile children. She outlined her vision for the future: “The patient sample and reagent are loaded into a disposable cartridge. This is then placed into a device to monitor in real time the reaction and share all the data via a Bluetooth to a dedicated app on a smart phone. All data and location of the outbreak are then stored in [the] cloud, making it easier for epidemiological studies and tracking of new outbreaks. We hope that by enhancing the capability of our platform, we contribute to better patient care.”

“Distinguishing between bacterial and viral infections remains one of the key questions in the daily pediatric acute care,” commented Lauri Ivaska, MD, from the department of pediatrics and adolescent medicine at Turku (Finland) University Hospital. “One of the most promising laboratory methods to do this is by measuring quantities of two specific host RNA transcripts from a blood sample. It would be of great importance if this could be done reliably by using a fast and cheap method as presented here by Ivana Pennisi.”

Ms. Pennisi had no relevant financial disclosures.

FROM ESPID 2020


To D or not to D? Vitamin D doesn’t reduce falls in older adults

Article Type
Changed
Tue, 12/15/2020 - 09:08

Higher doses of vitamin D supplementation not only show no benefit in the prevention of falls in older adults at increased risk of falling, compared with the lowest doses, but they appear to increase the risk, new research shows.

Zbynek Pospisil/iStock/Getty Images

Based on the findings, supplemental vitamin D above the minimum dose of 200 IU/day likely has little benefit, lead author Lawrence J. Appel, MD, MPH, told this news organization.

“In the absence of any benefit of 1,000 IU/day versus 2,000 IU/day [of vitamin D supplementation] on falls, along with the potential for harm from doses above 1,000 IU/day, it is hard to recommend a dose above 200 IU/day in older-aged persons, unless there is a compelling reason,” asserted Dr. Appel, director of the Welch Center for Prevention, Epidemiology, and Clinical Research at Johns Hopkins Bloomberg School of Public Health in Baltimore.

“More is not always better – and it may even be worse,” when it comes to vitamin D’s role in the prevention of falls, he said.

The research, published in Annals of Internal Medicine, adds important evidence in the ongoing struggle to prevent falls, says Bruce R. Troen, MD, in an accompanying editorial.

“Falls and their deleterious consequences remain a substantial risk for older adults and a huge challenge for health care teams,” writes Dr. Troen, a physician-investigator with the Veterans Affairs Western New York Healthcare System.

However, commenting in an interview, Dr. Troen cautions: “There are many epidemiological studies that are correlative, not causative, that do show a likelihood for benefit [with vitamin D supplementation]. … Therefore, there’s no reason for clinicians to discontinue vitamin D in individuals because of this study.”

“If you’re monitoring an older adult who is frail and has multiple comorbidities, you want to know what their vitamin D level is [and] provide them an appropriate supplement if needed,” he emphasized.

Some guidelines already reflect the lack of evidence of any role of vitamin D supplementation in the prevention of falls, including those of the 2018 U.S. Preventive Services Task Force, which, in a reversal of its 2012 recommendation, now does not recommend vitamin D supplementation for fall prevention in older persons without osteoporosis or vitamin D deficiency, Dr. Appel and colleagues note.
 

No prevention of falls regardless of baseline vitamin D

As part of STURDY (Study To Understand Fall Reduction and Vitamin D in You), Dr. Appel and colleagues enrolled 688 community-dwelling participants who had an elevated risk of falling, defined as a serum 25-hydroxyvitamin D [25(OH)D] level of 25 to 72.5 nmol/L (10-29 ng/mL).
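
For readers more used to conventional units, serum 25(OH)D converts at roughly 2.5 nmol/L per ng/mL; this small sketch reproduces the arithmetic behind the enrollment window quoted above.

    # Convert serum 25(OH)D from nmol/L to ng/mL using the standard
    # factor of ~2.496 (rounded here to 2.5) nmol/L per ng/mL.
    NMOL_PER_NG_ML = 2.5

    def nmol_to_ng_ml(level_nmol_l: float) -> float:
        return level_nmol_l / NMOL_PER_NG_ML

    for level in (25.0, 55.3, 72.5):
        print(f"{level:5.1f} nmol/L ~= {nmol_to_ng_ml(level):4.1f} ng/mL")
    # 25-72.5 nmol/L maps to 10-29 ng/mL, the enrollment range above; the
    # mean level at enrollment, 55.3 nmol/L, is about 22 ng/mL.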

Participants were a mean age of 77.2 years and had a mean total 25(OH)D level of 55.3 nmol/L at enrollment.

They were randomized to one of four doses of vitamin D3, including 200 IU/day (the control group), or 1,000, 2,000, or 4,000 IU/day.

The highest doses were found to be associated with worse – not better – outcomes, including a shorter time to hospitalization or death, compared with the 1,000-IU/day group. The higher-dose groups were therefore switched to a dose of 1,000 IU/day or lower, and all participants were followed for up to 2 years.

Overall, 63% experienced falls over the course of the study, which, though high, was consistent with the study’s criteria of participants having an elevated fall risk.

Of the 667 participants who completed the trial, no benefit in prevention of falling was seen across any of the doses, compared with the control group dose of 200 IU/day, regardless of participants’ baseline vitamin D levels.

Safety analyses showed that even in the 1,000-IU/day group, a higher risk of first serious fall and first fall with hospitalization was seen compared with the 200-IU/day group.

A limitation is that the study did not have a placebo group. However, “200 IU/day is a very small dose, probably homeopathic,” Dr. Appel said. “It was likely close to a placebo.”
 

 

 

Caveats: comorbidities, subgroups

In his editorial, Dr. Troen notes that other studies, including VITAL (Vitamin D and Omega-3 Trial), also found no reduction in falls with higher vitamin D doses; however, that study did not show any significant risks with the higher doses.

He adds that the current study lacks information on subsets of participants.

“We don’t have enough information about the existing comorbidities and medications that these people are on to be able to pull back the layers. Maybe there is a subgroup that should not be getting 4,000 IU, whereas another subgroup may not be harmed and you may decide that patient can benefit,” he said.

Furthermore, the trial doesn’t address groups such as nursing home residents.

“I have, for instance, 85-year-olds with vitamin D levels of maybe 20 nmol/L with multiple medical issues, but levels that low were not included in the study, so this is a tricky business, but the bottom line is first, do no harm,” he said.

“We really need trials that factor in the multiple different aspects so we can come up, hopefully, with a holistic and interdisciplinary approach, which is usually the best way to optimize care for frail older adults,” he concluded.

The study received funding from the National Institute on Aging.
 

A version of this article originally appeared on Medscape.com.


How should we evaluate the benefit of immunotherapy combinations?

Article Type
Changed
Thu, 12/10/2020 - 09:19

Every medical oncologist who has described a combination chemotherapy regimen to a patient with advanced cancer has likely been asked whether the benefits of tumor shrinkage, disease-free survival (DFS), and overall survival are worth the risks of adverse events (AEs).

Dr. Alan P. Lyss

Single-agent immunotherapy and, more recently, combinations of immunotherapy drugs have been approved for a variety of metastatic tumors. In general, combination immunotherapy regimens have more AEs and a higher frequency of premature treatment discontinuation for toxicity.

Michael Postow, MD, of Memorial Sloan Kettering Cancer Center in New York, reflected on new ways to evaluate the benefits and risks of immunotherapy combinations during a plenary session on novel combinations at the American Association for Cancer Research’s Virtual Special Conference on Tumor Immunology and Immunotherapy.
 

Potential targets

As with chemotherapy drugs, immunotherapy combinations make the most sense when drugs targeting independent processes are employed.

As described in a paper published in Nature in 2011, the process for recruiting the immune system to combat cancer is as follows:

  • Dendritic cells must sample antigens derived from the tumor.
  • The dendritic cells must receive an activation signal so they promote immunity rather than tolerance.
  • The tumor antigen–loaded dendritic cells need to generate protective T-cell responses, instead of T-regulatory responses, in lymphoid tissues.
  • Cancer antigen–specific T cells must enter tumor tissues.
  • Tumor-derived mechanisms for promoting immunosuppression need to be circumvented.

Since each step in the cascade is a potential therapeutic target, there are large numbers of potential drug combinations.
 

Measuring impact

Conventional measurements of tumor response may not be adequately sensitive to the impact from immunotherapy drugs. A case in point is sipuleucel-T, which is approved to treat advanced prostate cancer.

In the pivotal phase 3 trial, only 1 of 341 patients receiving sipuleucel-T achieved a partial response by RECIST criteria. Only 2.6% of patients had a 50% reduction in prostate-specific antigen levels. Nonetheless, a 4.1-month improvement in median overall survival was achieved. These results were published in the New England Journal of Medicine.

The discrepancy between tumor shrinkage and survival benefit for immunotherapy is not unexpected. As many as 10% of patients treated with ipilimumab (ipi) for stage IV malignant melanoma have progressive disease by tumor size but experience prolongation of survival, according to guidelines published in Clinical Cancer Research.

Accurate assessment of the ultimate efficacy of immunotherapy over time would benefit patients and clinicians, since immune checkpoint inhibitors are often administered for several years, are financially costly, and can cause treatment-associated AEs that emerge unpredictably at any time.

Curtailing the duration of ineffective treatment could be valuable from many perspectives.
 

Immunotherapy combinations in metastatic melanoma

In the CheckMate 067 study, there was an improvement in response, progression-free survival (PFS), and overall survival for nivolumab (nivo) plus ipi or nivo alone, in comparison with ipi alone, in patients with advanced melanoma. Initial results from this trial were published in the New England Journal of Medicine in 2017.

At a minimum follow-up of 60 months, the 5-year overall survival was 52% for the nivo/ipi regimen, 44% for nivo alone, and 26% for ipi alone. These results were published in the New England Journal of Medicine in 2019.

The trial was not statistically powered to conclude whether the overall survival for the combination was superior to that of single-agent nivo alone, but both nivo regimens were superior to ipi alone.

Unfortunately, the combination also produced the highest treatment-related AE rates in the 2019 report – 59% with nivo/ipi, 23% with nivo, and 28% with ipi. In the 2017 report, the combination regimen had more than twice as many premature treatment discontinuations as the other two study arms.

Is there a better way to quantify the risk-benefit ratio and explain it to patients?
 

Alternative strategies for assessing benefit: Treatment-free survival

Researchers have proposed treatment-free survival (TFS) as a potential new metric to characterize not only antitumor activity but also toxicity experienced after the cessation of therapy and before initiation of subsequent systemic therapy or death.

TFS is defined as the area between Kaplan-Meier curves from immunotherapy cessation until the reinitiation of systemic therapy or death. All patients who began immunotherapy are included – not just those achieving response or concluding a predefined number of cycles of treatment.

The curves can be partitioned into states with and without toxicity to establish a unique endpoint: time to cessation of both immunotherapy and toxicity.
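
To make the construction concrete, here is a minimal sketch using synthetic data (not the trial data) and the lifelines Python library: the restricted mean TFS over a fixed window is the area under the Kaplan-Meier curve for time to subsequent therapy or death subtracted from the area under the curve for time to death.

    # Synthetic illustration of restricted mean treatment-free survival (TFS):
    # the area between two Kaplan-Meier curves over a 36-month window.
    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.utils import restricted_mean_survival_time

    rng = np.random.default_rng(42)
    n, tau = 200, 36.0  # patients; restriction window in months

    # Months from immunotherapy cessation to death, and to the earlier of
    # starting subsequent systemic therapy or death (both right-censored).
    t_death = rng.exponential(40.0, n)
    t_next_or_death = np.minimum(rng.exponential(15.0, n), t_death)
    observed = rng.random(n) < 0.8  # True = event observed, False = censored

    km_death = KaplanMeierFitter().fit(t_death, observed)
    km_next = KaplanMeierFitter().fit(t_next_or_death, observed)

    # Restricted mean = area under a KM curve up to tau; TFS is the gap.
    rmst_death = restricted_mean_survival_time(km_death, t=tau)
    rmst_next = restricted_mean_survival_time(km_next, t=tau)
    print(f"restricted mean TFS: {rmst_death - rmst_next:.1f} of {tau:.0f} months")

Partitioning each area by the presence of grade 3 or higher AEs, as the investigators did, yields the toxicity-adjusted variants reported below.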

Researchers conducted a pooled analysis of 3-year follow-up data from the 1,077 patients who participated in CheckMate 069, testing nivo/ipi versus nivo alone, and CheckMate 067, comparing nivo/ipi, nivo alone, and ipi alone. The results were published in the Journal of Clinical Oncology.

The TFS without grade 3 or higher AEs was 28% for nivo/ipi, 11% for nivo alone, and 23% for ipi alone. The restricted mean time without either treatment or grade 3 or greater AEs was 10.1 months, 4.1 months, and 8.5 months, respectively.

TFS incentivizes the use of regimens that have:

  • A short duration of treatment
  • Prolonged time to subsequent therapy or death
  • Only mild AEs of brief duration.

A higher TFS corresponds with the goals that patients and their providers would have for a treatment regimen.
 

Adaptive models provide clues about benefit from extended therapy

In contrast to cytotoxic chemotherapy and molecularly targeted agents, benefit from immune-targeted therapy can deepen and persist after treatment discontinuation.

In advanced melanoma, researchers observed that overall survival was similar for patients who discontinued nivo/ipi because of AEs during the induction phase of treatment and those who did not. These results were published in the Journal of Clinical Oncology.

This observation has led to an individualized, adaptive approach to de-escalating combination immunotherapy, described in Clinical Cancer Research. The approach uses “SMART” (sequential multiple assignment randomized trial) designs.

With the SMART approach, each stage of a trial corresponds to an important treatment decision point. The goal is to define the population of patients who can safely discontinue treatment based on response, rather than doing so after the development of AEs.

In the Adapt-IT prospective study, 60 patients with advanced melanoma and poor prognostic features were given two doses of nivo/ipi followed by a CT scan at week 6. Based on the scan, they were triaged either to stop ipi and proceed to maintenance therapy with nivo alone or to continue the combination for an additional two cycles of treatment. Results from this trial were presented at ASCO 2020 (abstract 10003).

The investigators found that 68% of patients had no tumor burden increase at week 6 and could discontinue ipi. For those patients, the response rate of 57% approached the expected results from a full course of ipi.

At median follow-up of 22.3 months, median response duration, PFS, and overall survival had not been reached for the responders who received an abbreviated course of the combination regimen.

There were two observations that suggested the first two cycles of treatment drove not only toxicity but also tumor control:

  • The rate of grade 3-4 toxicity from only two cycles was high (57%).
  • Of the 19 patients (32% of the original 60 patients) who had progressive disease after two cycles of nivo/ipi, there were no responders with continued therapy.

Dr. Postow commented that, in correlative studies conducted as part of Adapt-IT, the Ki-67 of CD8-positive T cells increased after the initial dose of nivo/ipi. However, proliferation did not continue with subsequent cycles (that is, Ki-67 did not continue to rise).

When they examined markers of T-cell stimulation such as inducible costimulator of CD8-positive T cells, the researchers observed the same effect. The “immune boost” occurred with cycle one but not after subsequent doses of the nivo/ipi combination.

Although unproven in clinical trials at this time, these data suggest that response and risks of toxicity may not support giving patients more than one cycle of combination treatment.
 

More nuanced ways of assessing tumor growth

Dr. Postow noted that judgments about treatment effects over time are often made by displaying spider plots of changes from baseline tumor size, beginning at “time zero” – the time at which combination therapy is commenced.

He speculated that it might be worthwhile to give a dose or two of immune-targeted monotherapy (such as a PD-1 or PD-L1 inhibitor alone) before time zero, measure tumor growth before and after the single agent, and reserve combination immunotherapy for those patients who do not experience a dampening of the growth curve.

Patients whose tumor growth kinetics are improved with single-agent treatment could be spared the additional toxicity (and uncertain additive benefit) from the second agent.
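
A minimal sketch of what such a kinetic comparison might look like follows; the measurement schedule, lesion sums, and the 20% dampening threshold are all hypothetical.

    # Hypothetical sketch: compare a tumor's exponential growth rate before
    # and after a single-agent lead-in; escalate only if growth has not slowed.
    import numpy as np

    def growth_rate(days, diam_sum_mm):
        """Exponential growth constant (per day) from serial sums of
        target-lesion diameters, via a least-squares fit on the log scale."""
        slope, _intercept = np.polyfit(days, np.log(diam_sum_mm), 1)
        return slope

    pre = growth_rate([-42.0, -21.0, 0.0], [50.0, 55.0, 61.0])  # before lead-in
    post = growth_rate([0.0, 21.0, 42.0], [61.0, 62.0, 64.0])   # on monotherapy

    # Add the second agent only when monotherapy fails to dampen the curve;
    # the 20% dampening threshold here is arbitrary.
    escalate = post > 0.8 * pre
    print(f"pre {pre:.4f}/day, post {post:.4f}/day, escalate: {escalate}")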
 

Treatment optimization: More than ‘messaging’

Oncology practice has passed through a long era of “more is better,” an era that gave rise to intensive cytotoxic chemotherapy for hematologic and solid tumors in the metastatic and adjuvant settings. In some cases, that approach proved to be curative, but not in all.

More recently, because of better staging, improved outcomes with newer technology and treatments, and concern about immediate- and late-onset health risks, there has been an effort to deintensify therapy when it can be done safely.

Once a treatment regimen and treatment duration become established, however, patients and their physicians are reluctant to deintensify therapy.

Dr. Postow’s presentation demonstrated that, with regard to immunotherapy combinations – as in other realms of medical practice – science can lead the way to treatment optimization for individual patients.

We have the potential to reassure patients that treatment de-escalation is a rational and personalized component of treatment optimization through the combination of:

  • Identifying new endpoints to quantify treatment benefits and risks.
  • SMART trial designs.
  • Innovative ways to assess tumor response during each phase of a treatment course.

Precision assessment of immunotherapy effect in individual patients can be a key part of precision medicine.

Dr. Postow disclosed relationships with Aduro, Array BioPharma, Bristol Myers Squibb, Eisai, Incyte, Infinity, Merck, NewLink Genetics, Novartis, and RGenix.


Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.

Publications
Topics
Sections

Every medical oncologist who has described a combination chemotherapy regimen to a patient with advanced cancer has likely been asked whether the benefits of tumor shrinkage, disease-free survival (DFS), and overall survival are worth the risks of adverse events (AEs).

Dr. Alan P. Lyss

Single-agent immunotherapy and, more recently, combinations of immunotherapy drugs have been approved for a variety of metastatic tumors. In general, combination immunotherapy regimens have more AEs and a higher frequency of premature treatment discontinuation for toxicity.

Michael Postow, MD, of Memorial Sloan Kettering Cancer Center in New York, reflected on new ways to evaluate the benefits and risks of immunotherapy combinations during a plenary session on novel combinations at the American Association for Cancer Research’s Virtual Special Conference on Tumor Immunology and Immunotherapy.
 

Potential targets

As with chemotherapy drugs, immunotherapy combinations make the most sense when drugs targeting independent processes are employed.

As described in a paper published in Nature in 2011, the process for recruiting the immune system to combat cancer is as follows:

  • Dendritic cells must sample antigens derived from the tumor.
  • The dendritic cells must receive an activation signal so they promote immunity rather than tolerance.
  • The tumor antigen–loaded dendritic cells need to generate protective T-cell responses, instead of T-regulatory responses, in lymphoid tissues.
  • Cancer antigen–specific T cells must enter tumor tissues.
  • Tumor-derived mechanisms for promoting immunosuppression need to be circumvented.

Since each step in the cascade is a potential therapeutic target, there are large numbers of potential drug combinations.
 

Measuring impact

Conventional measurements of tumor response may not be adequately sensitive to the impact from immunotherapy drugs. A case in point is sipuleucel-T, which is approved to treat advanced prostate cancer.

In the pivotal phase 3 trial, only 1 of 341 patients receiving sipuleucel-T achieved a partial response by RECIST criteria. Only 2.6% of patients had a 50% reduction in prostate-specific antigen levels. Nonetheless, a 4.1-month improvement in median overall survival was achieved. These results were published in the New England Journal of Medicine.

The discrepancy between tumor shrinkage and survival benefit for immunotherapy is not unexpected. As many as 10% of patients treated with ipilimumab (ipi) for stage IV malignant melanoma have progressive disease by tumor size but experience prolongation of survival, according to guidelines published in Clinical Cancer Research.

Accurate assessment of the ultimate efficacy of immunotherapy over time would benefit patients and clinicians since immune checkpoint inhibitors are often administered for several years, are financially costly, and treatment-associated AEs emerge unpredictably at any time.

Curtailing the duration of ineffective treatment could be valuable from many perspectives.
 

Immunotherapy combinations in metastatic melanoma

In the CheckMate 067 study, there was an improvement in response, progression-free survival (PFS), and overall survival for nivolumab (nivo) plus ipi or nivo alone, in comparison with ipi alone, in patients with advanced melanoma. Initial results from this trial were published in the New England Journal of Medicine in 2017.

At a minimum follow-up of 60 months, the 5-year overall survival was 52% for the nivo/ipi regimen, 44% for nivo alone, and 26% for ipi alone. These results were published in the New England Journal of Medicine in 2019.

The trial was not statistically powered to conclude whether the overall survival for the combination was superior to that of single-agent nivo alone, but both nivo regimens were superior to ipi alone.

Unfortunately, the combination also produced the highest treatment-related AE rates – 59% with nivo/ipi, 23% with nivo, and 28% with ipi in 2019. In the 2017 report, the combination regimen had more than twice as many premature treatment discontinuations as the other two study arms.

Is there a better way to quantify the risk-benefit ratio and explain it to patients?
 

Alternative strategies for assessing benefit: Treatment-free survival

Researchers have proposed treatment-free survival (TFS) as a potential new metric to characterize not only antitumor activity but also toxicity experienced after the cessation of therapy and before initiation of subsequent systemic therapy or death.

TFS is defined as the area between Kaplan-Meier curves from immunotherapy cessation until the reinitiation of systemic therapy or death. All patients who began immunotherapy are included – not just those achieving response or concluding a predefined number of cycles of treatment.

The curves can be partitioned into states with and without toxicity to establish a unique endpoint: time to cessation of both immunotherapy and toxicity.

Researchers conducted a pooled analysis of 3-year follow-up data from the 1,077 patients who participated in CheckMate 069, testing nivo/ipi versus nivo alone, and CheckMate 067, comparing nivo/ipi, nivo alone, and ipi alone. The results were published in the Journal of Clinical Oncology.

The TFS without grade 3 or higher AEs was 28% for nivo/ipi, 11% for nivo alone, and 23% for ipi alone. The restricted mean time without either treatment or grade 3 or greater AEs was 10.1 months, 4.1 months, and 8.5 months, respectively.

TFS incentivizes the use of regimens that have:

  • A short duration of treatment
  • Prolonged time to subsequent therapy or death
  • Only mild AEs of brief duration.

A higher TFS corresponds with the goals that patients and their providers would have for a treatment regimen.
 

Adaptive models provide clues about benefit from extended therapy

In contrast to cytotoxic chemotherapy and molecularly targeted agents, benefit from immune-targeted therapy can deepen and persist after treatment discontinuation.

In advanced melanoma, researchers observed that overall survival was similar for patients who discontinued nivo/ipi because of AEs during the induction phase of treatment and those who did not. These results were published in the Journal of Clinical Oncology.

This observation has led to an individualized, adaptive approach to de-escalating combination immunotherapy, described in Clinical Cancer Research. The approach is dubbed “SMART,” which stands for sequential multiple assignment randomized trial designs.

With the SMART approach, each stage of a trial corresponds to an important treatment decision point. The goal is to define the population of patients who can safely discontinue treatment based on response, rather than doing so after the development of AEs.


Every medical oncologist who has described a combination chemotherapy regimen to a patient with advanced cancer has likely been asked whether the benefits of tumor shrinkage, disease-free survival (DFS), and overall survival are worth the risks of adverse events (AEs).

Dr. Alan P. Lyss

Single-agent immunotherapy and, more recently, combinations of immunotherapy drugs have been approved for a variety of metastatic tumors. In general, combination immunotherapy regimens have more AEs and a higher frequency of premature treatment discontinuation for toxicity.

Michael Postow, MD, of Memorial Sloan Kettering Cancer Center in New York, reflected on new ways to evaluate the benefits and risks of immunotherapy combinations during a plenary session on novel combinations at the American Association for Cancer Research’s Virtual Special Conference on Tumor Immunology and Immunotherapy.
 

Potential targets

As with chemotherapy drugs, immunotherapy combinations make the most sense when drugs targeting independent processes are employed.

As described in a paper published in Nature in 2011, the process for recruiting the immune system to combat cancer is as follows:

  • Dendritic cells must sample antigens derived from the tumor.
  • The dendritic cells must receive an activation signal so they promote immunity rather than tolerance.
  • The tumor antigen–loaded dendritic cells need to generate protective T-cell responses, instead of T-regulatory responses, in lymphoid tissues.
  • Cancer antigen–specific T cells must enter tumor tissues.
  • Tumor-derived mechanisms for promoting immunosuppression need to be circumvented.

Since each step in the cascade is a potential therapeutic target, there are large numbers of potential drug combinations.
 

Measuring impact

Conventional measurements of tumor response may not be adequately sensitive to the impact of immunotherapy drugs. A case in point is sipuleucel-T, which is approved to treat advanced prostate cancer.

In the pivotal phase 3 trial, only 1 of 341 patients receiving sipuleucel-T achieved a partial response by RECIST criteria. Only 2.6% of patients had a 50% reduction in prostate-specific antigen levels. Nonetheless, a 4.1-month improvement in median overall survival was achieved. These results were published in the New England Journal of Medicine.

The discrepancy between tumor shrinkage and survival benefit for immunotherapy is not unexpected. As many as 10% of patients treated with ipilimumab (ipi) for stage IV malignant melanoma have progressive disease by tumor size but experience prolongation of survival, according to guidelines published in Clinical Cancer Research.

Accurate assessment of the ultimate efficacy of immunotherapy over time would benefit patients and clinicians, since immune checkpoint inhibitors are often administered for several years, are financially costly, and can cause treatment-associated AEs that emerge unpredictably at any time.

Curtailing the duration of ineffective treatment could be valuable from many perspectives.
 

Immunotherapy combinations in metastatic melanoma

In the CheckMate 067 study, there was an improvement in response, progression-free survival (PFS), and overall survival for nivolumab (nivo) plus ipi or nivo alone, in comparison with ipi alone, in patients with advanced melanoma. Initial results from this trial were published in the New England Journal of Medicine in 2017.

At a minimum follow-up of 60 months, the 5-year overall survival was 52% for the nivo/ipi regimen, 44% for nivo alone, and 26% for ipi alone. These results were published in the New England Journal of Medicine in 2019.

The trial was not statistically powered to conclude whether the overall survival for the combination was superior to that of single-agent nivo alone, but both nivo regimens were superior to ipi alone.

Unfortunately, the combination also produced the highest rates of grade 3-4 treatment-related AEs – 59% with nivo/ipi, 23% with nivo, and 28% with ipi in the 2019 report. In the 2017 report, the combination regimen had more than twice as many premature treatment discontinuations as the other two study arms.

Is there a better way to quantify the risk-benefit ratio and explain it to patients?
 

Alternative strategies for assessing benefit: Treatment-free survival

Researchers have proposed treatment-free survival (TFS) as a potential new metric to characterize not only antitumor activity but also toxicity experienced after the cessation of therapy and before initiation of subsequent systemic therapy or death.

TFS is defined as the area between the Kaplan-Meier curve for time to immunotherapy cessation and the Kaplan-Meier curve for time to reinitiation of systemic therapy or death. All patients who began immunotherapy are included – not just those achieving response or completing a predefined number of cycles of treatment.

The curves can be partitioned into states with and without toxicity to establish a unique endpoint: time to cessation of both immunotherapy and toxicity.

Researchers conducted a pooled analysis of 3-year follow-up data from the 1,077 patients who participated in CheckMate 069, testing nivo/ipi versus nivo alone, and CheckMate 067, comparing nivo/ipi, nivo alone, and ipi alone. The results were published in the Journal of Clinical Oncology.

The TFS without grade 3 or higher AEs was 28% for nivo/ipi, 11% for nivo alone, and 23% for ipi alone. The restricted mean time without either treatment or grade 3 or greater AEs was 10.1 months, 4.1 months, and 8.5 months, respectively.
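For readers who want to see the arithmetic, below is a minimal sketch of how a restricted mean TFS could be computed as the area between two Kaplan-Meier step functions, using a 36-month horizon like the pooled analysis. The per-patient times are invented and this is not the published analysis code – just an illustration of the definition above.

```python
import numpy as np

def km_survival(times, events):
    """Kaplan-Meier estimate as a step function.
    events: 1 = event observed, 0 = censored."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    t_pts, s, surv = [0.0], [1.0], 1.0
    for t in np.unique(times):
        d = np.sum((times == t) & (events == 1))  # events at time t
        n = np.sum(times >= t)                    # number still at risk
        if d > 0:
            surv *= 1.0 - d / n
            t_pts.append(float(t))
            s.append(surv)
    return np.array(t_pts), np.array(s)

def rmst(t_pts, s, tau):
    """Restricted mean: exact area under the KM step function up to tau."""
    t = np.clip(np.append(t_pts, tau), None, tau)
    return float(np.sum(np.diff(t) * s))

# Hypothetical per-patient times in months: when immunotherapy stopped, and
# when subsequent systemic therapy began (or death occurred).
t_stop, e_stop = [2, 4, 5, 7, 30], [1, 1, 1, 0, 1]
t_next, e_next = [10, 20, 25, 36, 36], [1, 1, 1, 0, 0]

tau = 36.0  # 3-year restriction
tfs = rmst(*km_survival(t_next, e_next), tau) - rmst(*km_survival(t_stop, e_stop), tau)
print(f"restricted mean TFS: {tfs:.1f} months")
```

The published analysis further partitions the curves by toxicity state; the same step-function integration idea applies, with a third curve for time with grade 3 or greater AEs.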

TFS incentivizes the use of regimens that have:

  • A short duration of treatment.
  • Prolonged time to subsequent therapy or death.
  • Only mild AEs of brief duration.

A higher TFS corresponds with the goals that patients and their providers would have for a treatment regimen.
 

Adaptive models provide clues about benefit from extended therapy

In contrast to cytotoxic chemotherapy and molecularly targeted agents, benefit from immune-targeted therapy can deepen and persist after treatment discontinuation.

In advanced melanoma, researchers observed that overall survival was similar for patients who discontinued nivo/ipi because of AEs during the induction phase of treatment and those who did not. These results were published in the Journal of Clinical Oncology.

This observation has led to an individualized, adaptive approach to de-escalating combination immunotherapy, described in Clinical Cancer Research. The approach is dubbed “SMART,” which stands for sequential multiple assignment randomized trial designs.

With the SMART approach, each stage of a trial corresponds to an important treatment decision point. The goal is to define the population of patients who can safely discontinue treatment based on response, rather than doing so after the development of AEs.
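As a schematic of what such a response-based decision point can look like, the sketch below encodes a week-6 triage rule of the kind used in Adapt-IT (described next). The threshold and labels are hypothetical simplifications; real trials apply formal radiographic criteria.

```python
from dataclasses import dataclass

@dataclass
class Week6Scan:
    patient_id: str
    burden_change: float  # fractional change in tumor burden from baseline

def triage(scan: Week6Scan) -> str:
    """Response-adapted decision point: no increase in tumor burden at the
    week-6 scan -> stop ipi and proceed to nivo maintenance; otherwise
    continue nivo/ipi for two more cycles."""
    if scan.burden_change <= 0.0:
        return "stop ipi; nivo maintenance"
    return "continue nivo/ipi for 2 more cycles"

for scan in (Week6Scan("pt01", -0.30), Week6Scan("pt02", 0.15)):
    print(scan.patient_id, "->", triage(scan))
```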

In the Adapt-IT prospective study, 60 patients with advanced melanoma with poor prognostic features were given two doses of nivo/ipi followed by a CT scan at week 6. They were triaged to stopping ipi and proceeding with maintenance therapy with nivo alone or continuing the combination for an additional two cycles of treatment. Results from this trial were presented at ASCO 2020 (abstract 10003).

The investigators found that 68% of patients had no tumor burden increase at week 6 and could discontinue ipi. Their response rate of 57% approached the expected results from a full course of ipi.

At median follow-up of 22.3 months, median response duration, PFS, and overall survival had not been reached for the responders who received an abbreviated course of the combination regimen.

There were two observations that suggested the first two cycles of treatment drove not only toxicity but also tumor control:

  • The rate of grade 3-4 toxicity from only two cycles was high (57%).
  • Of the 19 patients (32% of the original 60) who had progressive disease after two cycles of nivo/ipi, none responded to continued therapy.

Dr. Postow commented that, in correlative studies conducted as part of Adapt-IT, the Ki-67 of CD8-positive T cells increased after the initial dose of nivo/ipi. However, proliferation did not continue with subsequent cycles (that is, Ki-67 did not continue to rise).

When they examined markers of T-cell stimulation, such as the inducible costimulator (ICOS) on CD8-positive T cells, the researchers observed the same effect. The “immune boost” occurred with cycle one but not after subsequent doses of the nivo/ipi combination.

Although unproven in clinical trials at this time, these data suggest that the balance of response and toxicity risk may not support giving patients more than one cycle of combination treatment.
 

More nuanced ways of assessing tumor growth

Dr. Postow noted that judgments about treatment effects over time are often made by displaying spider plots of changes in tumor size from baseline, starting at “time zero” – the time at which combination therapy is commenced.

He speculated that it might be worthwhile to give a dose or two of immune-targeted monotherapy (such as a PD-1 or PD-L1 inhibitor alone) before time zero, measure tumor growth prior to and after the single agent, and reserve using combination immunotherapy only for those patients who do not experience a dampening of the growth curve.

Patients whose tumor growth kinetics are improved with single-agent treatment could be spared the additional toxicity (and uncertain additive benefit) from the second agent.
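One way that kinetic comparison might be operationalized is sketched below, under stated assumptions: fit a log-linear growth rate to tumor measurements before and after the monotherapy lead-in, and add the second agent only when the growth curve is not sufficiently dampened. The 50% dampening threshold and the measurements are hypothetical, offered to illustrate the idea rather than as a validated rule.

```python
import numpy as np

def growth_rate(days, sizes_mm):
    """Least-squares slope of log(tumor size) versus time, per day."""
    return np.polyfit(days, np.log(sizes_mm), 1)[0]

def needs_combination(pre_days, pre_sizes, post_days, post_sizes, dampening=0.5):
    """Escalate only if the post-lead-in growth rate is NOT reduced to
    <= dampening x the pre-treatment rate (threshold hypothetical)."""
    return growth_rate(post_days, post_sizes) > dampening * growth_rate(pre_days, pre_sizes)

# Hypothetical sums of lesion diameters (mm) around a single anti-PD-1 dose
pre = ([0, 14, 28], [40.0, 44.0, 49.0])    # growing before time zero
post = ([28, 42, 56], [49.0, 50.0, 50.5])  # growth dampened afterward
print("add second agent?", needs_combination(*pre, *post))
```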
 

Treatment optimization: More than ‘messaging’

Oncology practice has passed through a long era of “more is better,” an era that gave rise to intensive cytotoxic chemotherapy for hematologic and solid tumors in the metastatic and adjuvant settings. In some cases, that approach proved to be curative, but not in all.

More recently, because of better staging, improved outcomes with newer technology and treatments, and concern about immediate- and late-onset health risks, there has been an effort to deintensify therapy when it can be done safely.

Once a treatment regimen and treatment duration become established, however, patients and their physicians are reluctant to deintensify therapy.

Dr. Postow’s presentation demonstrated that, with regard to immunotherapy combinations – as in other realms of medical practice – science can lead the way to treatment optimization for individual patients.

We have the potential to reassure patients that treatment de-escalation is a rational and personalized component of treatment optimization through the combination of:

  • Identifying new endpoints to quantify treatment benefits and risks.
  • SMART trial designs.
  • Innovative ways to assess tumor response during each phase of a treatment course.

Precision assessment of immunotherapy effect in individual patients can be a key part of precision medicine.

Dr. Postow disclosed relationships with Aduro, Array BioPharma, Bristol Myers Squibb, Eisai, Incyte, Infinity, Merck, NewLink Genetics, Novartis, and RGenix.


Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.


FROM AACR: TUMOR IMMUNOLOGY AND IMMUNOTHERAPY


This month in the journal CHEST®


Editor’s picks

Power Outage: An Ignored Risk Factor for Chronic Obstructive Pulmonary Disease Exacerbations. By Dr. Wangjian Zhang, et al.

PROPHETIC: Prospective Identification of Pneumonia in Hospitalized Patients in the ICU. By Dr. Stephen P. Bergin, et al.

Chronic Beryllium Disease: Update on a Moving Target. By Dr. Maeve MacMurdo, et al.

Development of Learning Curves for Bronchoscopy: Results of a Multicenter Study of Pulmonary Trainees. By Dr. Nha Voduc, et al.

Bias and Racism Teaching Rounds at an Academic Medical Center. By Dr. Quinn Capers, IV, et al.


Three genes could predict congenital Zika infection susceptibility


Three genes that could predict susceptibility to congenital Zika virus (ZIKV) infection have been identified, Irene Rivero-Calle, MD, reported at the annual meeting of the European Society for Paediatric Infectious Diseases, held virtually this year.

ZIKV, an emerging flavivirus, is responsible for one of the most critical pandemic emergencies of the last decade and has been associated with severe neonatal brain disabilities, declared Dr. Rivero-Calle, of the Hospital Clínico Universitario de Santiago de Compostela in Santiago de Compostela, Spain. “We think that understanding the genomic background could explain some of the most relevant symptoms of congenital Zika syndrome (CZS) and could be essential to better comprehend this disease.”

To achieve this understanding, Dr. Rivero-Calle and her colleagues conducted a study aiming to analyze any genetic factors that could explain the variation in phenotypes in newborns from mothers who had a Zika infection during their pregnancy. Additionally, they strove to “elucidate if the possible genetic association is specific to mothers or their newborns, and to check if this genomic background or any genomic ancestry pattern could be related with the phenotype,” she explained.

In their study, Dr. Rivero-Calle and her team analyzed 80 samples, comprising 40 samples from mothers who had been infected by ZIKV during their pregnancy and 40 from their newborns. Of those descendants, 20 were asymptomatic and 20 were symptomatic (13 had CZS, 3 had microcephaly, 2 had a pathologic MRI, 1 had hearing loss, and 1 was born preterm).

Population stratification, which Dr. Rivero-Calle explained “lets us know if the population is African, European, or Native American looking at the genes,” did not show any relation with the phenotype. “We had a mixture of population genomics among all samples,” she said.

Dr. Rivero-Calle and her team then performed three analyses: genotype analysis, an allelic test, and gene analysis. The allelic test and gene-collapsing method highlighted three genes (PANO1, PIDD1, and SLC25A22) as potential determinants of the varying phenotypes in the newborns of ZIKV-infected mothers. Overrepresentation analysis of gene ontology terms showed that PIDD1 and PANO1 are related to apoptosis and cell death, processes closely linked to early infantile epilepsy. This could explain the most severe complications of CZS: seizures, brain damage, microcephaly, and impaired neurodevelopment. In the Reactome and KEGG analyses, PIDD1 was related to the p53 pathway, which is involved in cell death and apoptosis and correlates with microcephaly, a typical phenotypic feature of CZS.
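For orientation, an allelic test of the kind mentioned above compares minor- and major-allele counts between affected and unaffected children, for example with a Fisher exact test. The sketch below is purely illustrative: the counts are invented, and this is not the study’s actual pipeline (it assumes scipy is available).

```python
from scipy.stats import fisher_exact

# Hypothetical minor/major allele counts for one variant (say, in PIDD1);
# each child contributes two alleles, so 20 children per group = 40 alleles.
table = [[14, 26],   # symptomatic newborns: minor, major
         [4, 36]]    # asymptomatic newborns: minor, major

odds_ratio, p_value = fisher_exact(table)
print(f"allelic OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```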

“So, in conclusion, we found three genes which could predict susceptibility to congenital Zika infection; we saw that the functionality of these genes seems to be deeply related with mechanisms which could explain the different phenotypes; and we saw that these three genes only appear in the children’s cohort, so there is no candidate gene in the mother’s genomic background which can help predict the phenotype of the newborn,” Dr. Rivero-Calle declared. “Finally, there is no ancestry pattern associated with disabilities caused by Zika infection.”

Dr. Rivero-Calle reported that this project (ZikAction) has received funding from the European Union’s Horizon 2020 research and innovation program, under grant agreement 734857.



FROM ESPID 2020


C. difficile control could require integrated approach


Clostridioides difficile (C. diff) is a pathogen of both humans and animals, and controlling C. diff infection (CDI) will require an integrated approach that encompasses human health care, veterinary health care, environmental regulation, and public policy. That is the conclusion of a group led by Su-Chen Lim, MD, and Tom Riley, MD, of Edith Cowan University in Australia, who published a review in Clinical Microbiology and Infection.

CDI was generally considered a nuisance infection until the early 21st century, when a hypervirulent fluoroquinolone-resistant strain emerged in North America. The strain is now documented in the United States, Canada, and most countries in Europe.

Another new feature of CDI is increased evidence of community transmission, which was previously rare. Community-associated cases are defined as those in which symptom onset occurred outside the hospital, or within 48 hours of hospital admission, in a patient with no hospitalization in the previous 12 weeks. Community-associated CDI now accounts for 41% of U.S. cases, nearly 30% of Australian cases, and about 14% in Europe, according to recent studies.

Several features of CDI suggest a need for an integrated management plan. The preferred habitat of C. diff is the gastrointestinal tract of mammals, and it likely colonizes all mammalian neonates. Over time, colonization by other microbes likely crowds it out and prevents overgrowth. But widespread use of antimicrobials in animal production can create an environment resembling that of the neonate, allowing C. diff to expand. That has led to food animals becoming a major C. diff reservoir, and whole-genome studies have shown that strains found in humans, food, animals, and the environment are closely related and sometimes genetically indistinguishable, suggesting transmission between humans and animals that may be attributable to contaminated food and environments.

The authors suggest that C. diff infection control should be guided by the One Health initiative, which seeks cooperation between physicians, osteopathic physicians, veterinarians, dentists, nurses, and other scientific and environmental disciplines. The goal is to enhance surveillance and interdisciplinary communication, as well as integrated policies. The authors note that physicians often regard C. diff as primarily a hospital problem and may be unaware of the increased prevalence of community-acquired disease. It is also a significant problem in agriculture, since as many as 50% of piglets succumb to the disease. Other studies have recently shown that asymptomatic carriers of toxigenic strains are likely to transmit the bacteria to C. diff-negative patients, and that asymptomatic carriers cluster with symptomatic patients. In one Cleveland hospital, more than 25% of hospital-associated CDI cases were found to have been colonized prior to admission, suggesting that these were not true hospital-associated cases.

C. diff has been isolated from a wide range of sources, including food animals, meat, seafood, vegetables, household environments, and natural environments like rivers, lakes, and soil. About 20% of calves and 70% of piglets are colonized with C. diff. Prevalence is high in meat products in the United States but lower in Europe, possibly because of different slaughtering practices.

The authors suggest that zoonotic C. diff spread is unlikely to be confined to any geographic region or population, and that widespread C. diff contamination is occurring through food or the environment. This could be occurring because spores can withstand cooking temperatures and disseminate through the air, and even through manure from food animals made into compost or fertilizer.

Veterinary efforts mimicking hospital measures have reduced animal CDI, but there are no rapid diagnostic tests for CDI in animals, making it challenging to control its spread in this context.

The authors call for enhanced antimicrobial stewardship in both human and animal settings, including banning of antimicrobial agents as growth promoters. This has been done in the United States and Europe, but not in Brazil, China, Canada, India, and Australia. They also call for research on inactivation of C. diff spores during waste treatment.

Even better, the authors suggest, would be vaccines developed and employed in both animals and humans. No such vaccine exists for animals; Pfizer has one for humans in a phase 3 clinical trial, but it does not prevent colonization. Others are in development.

The epidemiology of CDI is an ongoing challenge, with emerging new strains and changing social and environmental conditions. “However, it is with the collaborative efforts of industry partners, policymakers, veterinarians, clinicians, and researchers that CDI needs to be approached, a perfect example of One Health. Opening an interdisciplinary dialogue to address CDI and One Health issues has to be the focus of future studies,” the authors concluded.

SOURCE: Lim SC et al. Clin Microbiol Infect. 2020;26:857-863.



FROM CLINICAL MICROBIOLOGY AND INFECTION


Upper GI bleeds in COVID-19 not related to increased mortality


A Spanish study of COVID-19 patients suggests that upper gastrointestinal bleeding (UGB) does not affect in-hospital mortality. It also found that fewer COVID-19–positive patients underwent endoscopy, but the associated delays made no statistically significant difference in in-hospital mortality.

“In-hospital mortality in COVID-19 patients with upper-GI bleeding seemed to be more influenced by COVID-19 than by upper-GI bleeding, and that’s something I think is important for us to know,” Gyanprakash Ketwaroo, MD, associate professor at Baylor College of Medicine, Houston, said in an interview. Dr. Ketwaroo was not involved in the study.

The results weren’t a surprise, but they do provide some reassurance. “It’s probably what I expected. Initially, we thought there might be some COVID-19 related (GI) lesions, but that didn’t seem to be borne out. So we thought the bleeding was related to (the patient) being in a hospital or the typical reasons for bleeding. It’s also what I expected that less endoscopies would be performed in these patients, and even though fewer endoscopies were performed, the outcomes were still similar. I think it’s what most people expected,” said Dr. Ketwaroo.

The study was published online Nov. 25 in the Journal of Clinical Gastroenterology, and led by Rebeca González González, MD, of Severo Ochoa University Hospital in Leganés, Madrid, and Pascual Piñera-Salmerón, MD, of Reina Sofia University General Hospital in Murcia, Spain. The researchers retrospectively analyzed data on 71,904 COVID-19 patients at 62 emergency departments in Spain, and compared 83 patients who had COVID-19 and UGB to two control groups: 249 randomly selected COVID-19 patients without UGB, and 249 randomly selected non-COVID-19 patients with UGB.

They found that 1.11% of COVID-19 patients presented with UGB, compared with 1.78% of non-COVID-19 patients at emergency departments. In patients with COVID-19, risk of UGB was associated with hemoglobin values < 10 g/dL (odds ratio [OR], 34.255; 95% confidence interval [CI], 12.752-92.021), abdominal pain (OR, 11.4; 95% CI, 5.092-25.944), and systolic blood pressure < 90 mm Hg (OR, 11.096; 95% CI, 2.975-41.390).
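For readers who want to see where such figures come from, an unadjusted odds ratio and its Wald 95% CI can be computed from a 2x2 table as sketched below. The cell counts are invented for illustration; the study’s reported estimates come from its own models.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a, b = exposed with / without UGB; c, d = unexposed with / without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: UGB vs. no UGB among patients with and without
# hemoglobin < 10 g/dL
or_, lo, hi = odds_ratio_ci(30, 10, 53, 605)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```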

Compared with non-COVID-19 patients with UGB, those COVID-19 patients with UGB were more likely to have interstitial lung infiltrates (OR, 66.42; 95% CI, 15.364-287.223) and ground-glass opacities (OR, 21.27; 95% CI, 9.720-46.567) in chest radiograph, as well as fever (OR, 34.67; 95% CI, 11.719-102.572) and cough (OR, 26.4; 95% CI, 8.845-78.806).

Rates of gastroscopy and endoscopic treatment were lower in patients with COVID-19 than in the general population (gastroscopy OR, 0.269; 95% CI, 0.160-0.453; endoscopic treatment OR, 0.26; 95% CI, 0.165-0.623). There was no difference between the two groups with respect to endoscopic findings. After adjustment for age and sex, the only significant difference between COVID-19 patients with UGB and COVID-19 patients without UGB was a higher rate of intensive care unit admission (OR, 2.98; 95% CI, 1.16-7.65). Differences between COVID-19 patients with UGB and non–COVID-19 patients with UGB included higher rates of ICU admission (OR, 3.29; 95% CI, 1.28-8.47), prolonged hospitalizations (OR, 2.02; 95% CI, 1.15-3.55), and in-hospital mortality (OR, 2.05; 95% CI, 1.09-3.86).

UGB development was not associated with increased in-hospital mortality in COVID-19 patients (OR, 1.14; 95% CI, 0.59-2.19).

A limitation of the study was that it was performed in Spain, where endoscopies are performed in the emergency department and where thresholds for admission to the intensive care unit differ from those in the United States.

The authors did not report a funding source. Dr. Ketwaroo has no relevant financial disclosures.

SOURCE: González González R et al. J Clin Gastroenterol. doi: 10.1097/MCG.0000000000001465.



FROM THE JOURNAL OF CLINICAL GASTROENTEROLOGY
