HPV Vaccine Shown to Be Highly Effective in Girls Years Later
TOPLINE:
Years after immunization, no cases of cervical cancer were detected among Scottish women vaccinated against human papillomavirus (HPV) at ages 12 or 13 years, screening data show.
METHODOLOGY:
- Cervical cancer is the fourth most common cancer among women worldwide.
- Programs to provide Cervarix, a bivalent vaccine, began in the United Kingdom in 2007.
- After the initiation of the programs, administering the vaccine became part of routine care for girls starting at age 12 years.
- Researchers analyzed 2020 data on 447,845 women born between 1988 and 1996, drawn from the Scottish cervical cancer screening system, to assess the efficacy of Cervarix in lowering rates of cervical cancer.
- They correlated the rate of cervical cancer per 100,000 person-years with data on women regarding vaccination status, age when vaccinated, and deprivation in areas like income, housing, and health.
TAKEAWAY:
- No cases of cervical cancer were found among women who were immunized at ages 12 or 13 years, no matter how many doses they received.
- Women who were immunized between ages 14 and 18 years and received three doses had fewer cases of cervical cancer than unvaccinated women, regardless of deprivation status (3.2 cases per 100,000 women vs 8.4 cases per 100,000).
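As a crude, unadjusted comparison of the two rates reported in the bullet above (not the study's formal estimate), the arithmetic works out as follows:

```latex
% Unadjusted comparison of the reported incidence rates
\text{rate ratio} = \frac{3.2\ \text{per}\ 100{,}000}{8.4\ \text{per}\ 100{,}000} \approx 0.38,
\qquad \text{relative reduction} \approx 1 - 0.38 \approx 62\%
```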
IN PRACTICE:
“Continued participation in screening and monitoring of outcomes is required, however, to assess the effects of changes in vaccines used and dosage schedules since the start of vaccination in Scotland in 2008 and the longevity of protection the vaccines offer.”
SOURCE:
The study was led by Timothy J. Palmer, PhD, Scottish Clinical Lead for Cervical Screening at Public Health Scotland.
LIMITATIONS:
Only 14,645 women had received just one or two doses, which may have affected the statistical analysis.
DISCLOSURES:
The study was funded by Public Health Scotland. A coauthor reports attending an advisory board meeting for HOLOGIC and Vaccitech. Her institution received research funding or gratis support funding from Cepheid, Euroimmun, GeneFirst, SelfScreen, Hiantis, Seegene, Roche, Hologic, and Vaccitech in the past 3 years.
A version of this article appeared on Medscape.com.
Chemo-Free Maintenance Strategies May Boost Survival in TNBC
TOPLINE:
Chemotherapy-free maintenance with olaparib, alone or combined with durvalumab, provided disease control in patients with advanced triple-negative breast cancer (TNBC) who had responded to platinum-based chemotherapy, a small randomized study suggests.
METHODOLOGY:
- Standard first-line therapy for advanced TNBC generally includes taxane- or platinum-based chemotherapy, which poses challenging toxicities. Chemotherapy-free maintenance strategies may provide adequate disease control while improving patient quality of life.
- The researchers evaluated 45 patients with TNBC, treated at five sites in the Republic of Korea, the United States, and Singapore, who had ongoing stable disease or a complete/partial response after first- or second-line platinum-based chemotherapy.
- The patients were randomized 1:1 to receive olaparib 300 mg twice daily with or without durvalumab 1500 mg on day 1 every 4 weeks.
- The authors compared progression-free survival (PFS) with a historical control of continued platinum-based therapy. An improvement to 4 months with maintenance therapy was considered clinically significant.
TAKEAWAY:
- After a follow-up of 9.8 months, patients who received olaparib alone demonstrated median PFS of 4.0 months, and those who received the combination therapy had median PFS of 6.1 months.
- Clinical benefit rates, defined as stable disease for at least 24 weeks or complete/partial response, were reported in 44% of the monotherapy group and 36% of the combination therapy group.
- Sustained clinical benefit was evident irrespective of germline BRCA mutation or programmed death-ligand 1 status, although it tended to be associated with complete or partial response to prior platinum.
- Grade 3-4 adverse events were reported in nine patients (39%) in the olaparib arm and eight patients (36%) in the combination arm. No treatment-related deaths or new safety signals were observed.
IN PRACTICE:
“Maintenance regimens are rarely used in [triple-negative breast cancer] but offer the possibility of more tolerable long-term treatment avoiding some of the chemotherapy-related side effects of more aggressive regimens, as is standard in the first-line treatment of HER2-positive advanced breast cancer,” the researchers concluded.
SOURCE:
This study, led by Tira J. Tan from Duke-NUS Medical School, Singapore, was published online on January 18, 2024, in Clinical Cancer Research.
LIMITATIONS:
The main limitations were the small sample size and lack of a standard control arm. Most patients (76%) were Asian, limiting generalizability. The trial was not designed to compare olaparib monotherapy and olaparib plus durvalumab regimens.
DISCLOSURES:
AstraZeneca Pharmaceuticals LP supported this study. Several authors reported financial support from various sources.
A version of this article appeared on Medscape.com.
Treatment Sequence May Impact Pancreatic Cancer Survival
TOPLINE:
In patients with advanced pancreatic ductal adenocarcinoma, the sequence in which regimens were given across lines of therapy was associated with overall survival, a retrospective analysis suggests.
METHODOLOGY:
- Despite therapeutic advances, survival among patients with unresectable and/or metastatic pancreatic ductal adenocarcinoma has not markedly improved in recent years.
- In the current analysis, researchers evaluated whether treatment sequence could affect survival outcomes in this patient population.
- To this end, researchers conducted a single-institution, retrospective analysis of patients who received different lines of treatment between January 2015 and December 2021.
- The most common first-line therapy was nab-paclitaxel plus S-1 (58%), followed by FOLFIRINOX (10%), nab-paclitaxel plus gemcitabine (8%), gemcitabine alone (7%), and gemcitabine plus oxaliplatin (6%).
- Second-line therapies, in order of frequency, included gemcitabine combination therapy (48%), nab-paclitaxel combination therapy (19%), FOLFIRINOX (10%), and gemcitabine alone (7%).
- Third-line treatments consisted of FOLFIRINOX (31%), irinotecan or oxaliplatin combination therapy (23%), immunotherapy (19%), and gemcitabine combination therapy (10%).
TAKEAWAY:
- Overall, progression occurred in 90% of patients, and the median overall survival was 12.0 months, with only 48% of patients able to start a third-line therapy.
- The researchers focused on three common therapy sequences: nab-paclitaxel plus gemcitabine or nab-paclitaxel combination therapy as first line followed by FOLFIRINOX as second line (line A); nab-paclitaxel combination therapy, then gemcitabine combination therapy, then FOLFIRINOX (line B); and nab-paclitaxel combination therapy, then gemcitabine combination therapy, then oxaliplatin or irinotecan combination therapy (line C).
- Overall, the researchers observed a median overall survival of 14 months among patients receiving line A and C sequences and 18 months with line B.
- Patients receiving line B therapy demonstrated a 52% lower risk for death compared with those receiving line A treatment (hazard ratio [HR], 0.48; P = .018) and a 75% reduced risk for death compared with those on the line C sequence (HR, 0.25; P = .040).
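The percentage reductions quoted above follow directly from the hazard ratios: a hazard ratio (HR) corresponds to roughly a (1 − HR) × 100% lower risk of death, as sketched below.

```latex
% Converting the reported hazard ratios into the quoted risk reductions
1 - \mathrm{HR}_{\text{B vs A}} = 1 - 0.48 = 0.52 \;\Rightarrow\; \text{a 52\% lower risk}
\qquad
1 - \mathrm{HR}_{\text{B vs C}} = 1 - 0.25 = 0.75 \;\Rightarrow\; \text{a 75\% lower risk}
```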
IN PRACTICE:
“Our study provides real-world evidence for the effectiveness of different treatment sequences and underscores the [impact of] treatment sequences on survival outcome when considering the entire management in advanced pancreatic ductal adenocarcinoma,” the authors concluded.
SOURCE:
The study, led by Guanghai Dai, MD, from the Chinese People’s Liberation Army General Hospital, Beijing, was published in BMC Cancer on January 12, 2024.
LIMITATIONS:
The study was a single-center, retrospective analysis.
DISCLOSURES:
The paper was funded by the Beijing Natural Science Foundation. The authors did not declare any relevant financial relationships.
A version of this article appeared on Medscape.com.
Wearable Device Tracks IBD from Sweat
LAS VEGAS — A sweat-sensing wearable device in development by EnLiSense can rapidly detect calprotectin, C-reactive protein (CRP), and interleukin-6 (IL-6), using miniaturized versions of biochemical lab tests.
Patient monitoring relies on identifying trends, whether biomarker levels are increasing or decreasing, according to Shalini Prasad, PhD, who presented the study during a poster session at the annual Crohn’s & Colitis Congress®, a partnership of the Crohn’s & Colitis Foundation and the American Gastroenterological Association. “In a blood test you don’t get that unless you’re willing to sample every month. That’s the benefit [of the device],” said Dr. Prasad, professor of bioengineering at the University of Texas at Dallas and a cofounder of EnLiSense.
The project grew out of EnLiSense’s involvement with the Biomedical Advanced Research and Development Authority (BARDA). “We were tracking infections, and we were looking at inflammatory markers associated with infections: Cytokines and chemokines. We thought it was a natural pivot for us because the disease of inflammation is IBD,” said Dr. Prasad.
The device need only be worn when the physician determines the disease is in a variable state. The patient “will wear it for the duration of time as determined by the clinician,” said Dr. Prasad.
The watch face–sized device, typically worn on the forearm, absorbs sweat and performs automated biochemical analysis independently, then beams its findings to the cloud. “What you get back is concentration [of inflammatory biomarkers]. It is essentially trend line reporting of how the concentration is fluctuating over time for markers,” said Dr. Prasad.
The Crohn’s and Colitis Foundation is supporting the company through its IBD Ventures program. EnLiSense is currently conducting a study tracking patients over 4 weeks to correlate biomarker concentrations in sweat with concentrations in stool.
A key remaining question is how long the device should be worn and during what clinical periods. The technology has the potential to provide too much information. “Just figuring the balance. We’re trying to find the right spot where it makes sense for both the clinician and the patient. This is something that is a work in progress. We don’t want this to be just like any other consumer wearable which gives you something but you’re not sure what it means,” said Dr. Prasad.
The study included 33 patients with IBD who were monitored for between 40 and 130 minutes. The device measured levels of CRP, IL-6, and calprotectin. Serum samples were also measured the same day.
The researchers found higher levels of calprotectin among patients with active disease in perspiration (P = .0260), serum (P = .022), and fecal samples (P = .0411). There were no significant differences between patients with active disease and those in remission with respect to CRP levels in perspiration or serum, or IL-6 in perspiration. Serum IL-6 levels were higher in those with active disease.
There was no significant difference between serum and sweat calprotectin levels among patients who were active or in remission, but the median expression of IL-6 in perspiration was higher in the active group (P = .0016). In the active group, calprotectin was elevated in sweat, serum, and stool.
Levels of calprotectin measured in perspiration correlated with levels in the serum (R2 = 0.7195), as did CRP (R2 = 0.615) and IL-6 (R2 = 0.5411).
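The R² values above summarize how tightly the sweat measurements tracked the same-day serum measurements. Below is a minimal sketch of that kind of calculation; the values are hypothetical placeholders, not data from the poster.

```python
# Minimal sketch: correlating paired sweat and serum biomarker measurements.
# The values below are hypothetical placeholders, not data from the study.
import numpy as np

sweat_calprotectin = np.array([12.0, 35.5, 8.2, 50.1, 22.3, 41.7])  # perspiration levels (arbitrary units)
serum_calprotectin = np.array([0.9, 2.8, 0.7, 4.1, 1.6, 3.3])       # same-day serum levels (arbitrary units)

# Pearson correlation between the two sampling sites, then R^2,
# the statistic reported in the poster (eg, R^2 = 0.7195 for calprotectin)
r = np.corrcoef(sweat_calprotectin, serum_calprotectin)[0, 1]
print(f"r = {r:.3f}, R^2 = {r**2:.3f}")
```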
Treating to Target
The poster caught the interest of Jeremiah Faith, PhD, who attended the session and was asked to comment. “I think patients want to know what’s happening [with their disease], and we could probably give better care if we know day to day the status of someone, especially because every time we test them we get a point in time, but the reality is probably that people are kind of wavy, and knowing the wave is much better,” he said.
He noted that there was not a strong separation between mean perspiration calprotectin values, but he said the ability to take frequent measurements could overcome that weakness. “The difference between active and remission is not as drastic as what you’d see from blood, for example. But it’s the same thing with your watch. Your watch is a really poor sensor of what your heartbeat is doing, but if you measure it every few seconds, and you average over a long period of time, it can actually be more [accurate]. So there’s a lot of potential for this,” said Dr. Faith, associate professor of genetics and genomic sciences at the Icahn School of Medicine at Mount Sinai in New York.
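Dr. Faith’s heart-rate analogy rests on a standard statistical point: averaging many noisy readings shrinks the error of the estimate roughly in proportion to the square root of the number of samples. A toy illustration with made-up numbers (not device data):

```python
# Toy illustration: a noisy sensor sampled often and averaged can track a
# quantity better than any single reading. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
true_level = 100.0                                          # the "real" biomarker level
noise_sd = 20.0                                             # per-reading measurement noise

single_reading = true_level + rng.normal(0, noise_sd)       # one noisy measurement
many_readings = true_level + rng.normal(0, noise_sd, 200)   # frequent sampling

print(f"error of a single reading: {abs(single_reading - true_level):.1f}")
print(f"error of the 200-reading mean: {abs(many_readings.mean() - true_level):.1f}")
# Standard error of the mean falls as noise_sd / sqrt(n): 20 / sqrt(200) ~ 1.4
```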
If perfected, the device could help efforts at treating to target, in which therapies are adjusted to achieve minimal disease. Currently, physicians are forced to adjust doses or change therapies based on infrequent testing. “If this is accurate ... maybe at some point we will have the tools to be smarter about it,” said Dr. Faith.
Dr. Prasad is a cofounder of EnLiSense. Dr. Faith has no relevant financial disclosures.
FROM CROHN’S & COLITIS CONGRESS
Robitussin Cough Syrup Recalled Nationwide Due to Fungus Concerns
The company that makes Robitussin syrups did not specify which microorganisms may be in the products. The recall announcement from the global consumer health products company Haleon stated that the contamination could lead to fungal infections or the presence of fungi or yeasts in a person’s blood. So far, the company has not received any reports of people being sickened by the recalled products.
The recall applies to bottles of Robitussin Honey CF Max Day and Robitussin Honey CF Max Nighttime. Both varieties are for adults. Affected products were sold nationwide and have specific lot numbers printed at the bottom of the back of the bottles. Consumers can view the lot numbers on the FDA’s recall webpage.
People with weakened immune systems have a higher risk of life-threatening health problems due to the cough syrup, the company warned.
“In non-immunocompromised consumers, the population most likely to use the product, life-threatening infections are not likely to occur,” the recall notice from Haleon stated. “However, the occurrence of an infection that may necessitate medical intervention cannot be completely ruled out.”
People who have affected products should stop using them immediately. The company asked that anyone with the products email Haleon at [email protected], or call the company at 800-245-1040 Monday through Friday from 8 a.m. to 6 p.m. Eastern time.
A version of this article appeared on WebMD.com.
Lp(a) Packs a More Powerful Atherogenic Punch Than LDL
TOPLINE:
While low-density lipoprotein (LDL) particles are much more abundant than lipoprotein(a) [Lp(a)] particles and carry the greatest overall risk for coronary heart disease (CHD), Lp(a) appears to be substantially more atherogenic on a per-particle basis, a genetic analysis suggests.
METHODOLOGY:
- To compare the atherogenicity of Lp(a) relative to LDL on a per-particle basis, researchers used a genetic analysis because Lp(a) and LDL both contain one apolipoprotein B (apoB) per particle.
- In a genome-wide association study of 502,413 UK Biobank participants, they identified genetic variants uniquely affecting plasma levels of either Lp(a) or LDL particles.
- For these two genetic clusters, they related the change in apoB to the respective change in CHD risk, which allowed them to directly compare the atherogenicity of LDL and Lp(a), particle to particle.
TAKEAWAY:
- The odds ratio for CHD for a 50 nmol/L higher Lp(a)-apoB was 1.28 (95% CI, 1.24-1.33) compared with 1.04 (95% CI, 1.03-1.05) for the same increment in LDL-apoB.
- Additional supporting evidence was provided by using polygenic scores to rank participants according to the difference in Lp(a)-apoB vs LDL-apoB, which revealed a greater risk for CHD per 50 nmol/L apoB for the Lp(a) cluster (hazard ratio [HR], 1.47; 95% CI, 1.36-1.58) than the LDL cluster (HR, 1.04; 95% CI, 1.02-1.05).
- Based on the data, the researchers estimate that the atherogenicity of Lp(a) is roughly sixfold greater (point estimate of 6.6; 95% CI, 5.1-8.8) than that of LDL on a per-particle basis.
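As a rough cross-check of that sixfold figure (not the authors’ formal calculation, which gave 6.6 from the full genetic analysis), the per-particle ratio can be approximated from the two odds ratios above, since both refer to the same 50 nmol/L increment of apoB on the log-odds scale:

```latex
% Back-of-the-envelope check using the rounded odds ratios reported above
\frac{\ln(\mathrm{OR}_{\mathrm{Lp(a)\text{-}apoB}})}{\ln(\mathrm{OR}_{\mathrm{LDL\text{-}apoB}})}
= \frac{\ln 1.28}{\ln 1.04} \approx \frac{0.247}{0.039} \approx 6.3
```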
IN PRACTICE:
“There are two clinical implications. First, to completely characterize atherosclerotic cardiovascular disease risk, it is imperative to measure Lp(a) in all adult patients at least once. Second, these studies provide a rationale that targeting Lp(a) with potent and specific drugs may lead to clinically meaningful benefit,” wrote the authors of an accompanying commentary on the study.
SOURCE:
The study, with first author Elias Björnson, PhD, University of Gothenburg, Gothenburg, Sweden, and an editorial by Sotirios Tsimikas, MD, University of California, San Diego, and Vera Bittner, MD, University of Alabama at Birmingham, was published in the Journal of the American College of Cardiology.
LIMITATIONS:
The UK Biobank consists primarily of a Caucasian population, and confirmatory studies in more diverse samples are needed. The working range for the Lp(a) assay used in the study did not cover the full range of Lp(a) values seen in the population. Variations in Lp(a)-apoB and LDL-apoB were estimated from genetic analysis and not measured specifically in biochemical assays.
DISCLOSURES:
The study had no commercial funding. Some authors received honoraria from the pharmaceutical industry. A complete list of author disclosures is available with the original article.
A version of this article first appeared on Medscape.com.
ALL: When Should MRD Trigger Stem Cell Transplants?
Allogeneic hematopoietic stem cell transplant (HSCT) remains part of the hematology armamentarium for relapsed/refractory (R/R) patients with Philadelphia chromosome-negative (Ph-negative) acute lymphoblastic leukemia (ALL) who are positive for measurable residual disease (MRD). However, when asked about the best treatment strategy for patients who are MRD-negative, hematologist Mark R. Litzow, MD, of the Mayo Clinic in Rochester, Minnesota, said in an interview, “There is no firm consensus about that.”
Discussing how medicine has evolved over the past 20 to 30 years, Dr. Litzow recalled that HSCT used to be standard treatment for adult patients with ALL. “We felt that in most instances, chemotherapy alone was not going to be effective in curing them. A vast majority would relapse,” he said. Nowadays, however, specialists differ on the use of HSCT in patients with Ph-negative, MRD-negative ALL.
A pair of commentaries in the January issue of The Lancet Haematology tackle this topic from different perspectives. On one hand, hematologist Patrice Chevallier, MD, of the University of Nantes in France, argues that for such patients, HSCT “remains a valid option,” and MRD status shouldn’t be the sole factor used for a decision.
However, hematologist Nicolas Boissel, MD, PhD, of Paris Cité University, contends that detectable early MRD is the “only robust predictor” of HSCT benefit in patients under 60 with Ph-negative ALL, and it has “unproven” benefit in older patients.
As Dr. Chevallier notes, “allogeneic HSCT is indicated in patients defined as having a high risk of relapse. Currently, a high level of residual leukemic cells after treatment is recognized as the strongest, and sometimes sole, criterion defining high-risk patients.”
As first- and second-line therapy in pediatric patients and as first-line therapy in adults, the “rule” is to offer HSCT to MRD-positive patients but not MRD-negative ones, he writes. “In older patients and those who are relapsed or refractory, the recent demonstration of efficient immunotherapies and cell therapies has launched the debate on the role of MRD status and the question of whether or not to transplant patients who are MRD-negative in both settings.”
Dr. Chevallier notes that “there is no standard definition of an MRD-negative status,” and the best timing for evaluation is unknown. Further, he adds, a “variable proportion of MRD-negative patients still relapse after treatment — up to 25% of patients who respond early and more than 50% of patients who respond late.”
He also points out that there’s an 80% chance that patients will convert from MRD negative to MRD positive after blinatumomab therapy, and he highlights the low long-term survival rate (20%) after brexucabtagene autoleucel (Tecartus), a CAR T-cell therapy.
As for older patients, Dr. Chevallier observes that improved chemo-immunotherapy and conditioning regimens could spark a rethinking of the feasibility of HSCT. However, for now, in those patients, “MRD is not decisional, and allogeneic HSCT is not a routine practice,” he writes.
In his commentary, Dr. Boissel points out that there have been no controlled studies of HSCT in the first-remission setting, although he writes that some data suggests that HSCT may be helpful for patients in high-risk genetic subgroups, regardless of MRD status. On the other hand, “converging observations suggest no benefit of HSCT in MRD-positive patients treated with blinatumomab in the front-line setting.”
If MRD monitoring is unavailable, Dr. Boissel adds, “it seems reasonable to use early blast clearance or other baseline high-risk features to indicate HSCT.”
How can hematologists make the best decision about HSCT?
In an interview, City of Hope Medical Center (Duarte, California) hematologist-oncologist Ibrahim T. Aldoss, MD, said that chemotherapy — with or without immunotherapy — can often be enough to treat younger patients without high-risk genetic factors. “Potentially, these patients can be spared from transplants,” he said, although patients with resistant MRD “clearly need transplants.”
The risks of transplants are significant, he noted. While they can reduce the risk of relapse, the risk of dying during remission is higher vs chemotherapy. “So you have to balance the risks that you’re willing to take,” he said, keeping in mind that some patients can be cured with chemotherapy.
In addition, Dr. Aldoss said, acute graft-versus-host disease in the first few months after transplant can become chronic. “Many years later, patients can be struggling to where it actually impacts their daily activity. And unfortunately, patients can die from it.”
In the big picture, “you cannot have a generalized statement about whether you shouldn’t do transplants in every MRD-negative patient,” he said. However, “if you do achieve MRD negativity, most patients likely don’t need transplants.”
The Mayo Clinic’s Dr. Litzow urged colleagues to consider several factors when making decisions. Do patients have a high level of comorbidities that would raise the risk of death from HSCT? He noted that there’s nearly a 20% risk of death from HSCT, and comorbidities can boost the risk to 40%-50%.
Also, does the patient have a suitable donor? While advances have boosted the number of eligible donors, he said, “not everybody has an ideal donor.”
If a patient is MRD-negative but not a good candidate for a transplant, Dr. Litzow said consolidation therapy followed by maintenance therapy may be indicated. “Continue to check their bone marrow and their blood periodically as they’re going through treatment and reassess their MRD status to make sure they’re staying negative. If they turn MRD-positive during the course of their therapy, then we have to step back and rethink the role of transplant.”
As for cost, Dr. Litzow points out that HSCT is very expensive, although ALL is an accepted indication for HSCT. However, “if someone doesn’t have medical insurance, then it can be difficult to consider them having a transplant.”
What’s next? In his commentary, Dr. Boissel writes that his team aims to study whether HSCT is helpful in patients with high-risk B-cell ALL “who reach MRD negativity after a consolidation phase including blinatumomab.”
Dr. Aldoss discloses relationships with Amgen, Kite, Pfizer, Jazz, AbbVie, Sobi, Agios, Autolus, and MacroGenics. Dr. Litzow reports ties with Amgen. Dr. Boissel declares relationships with Amgen, Pfizer, Novartis, and Servier. Dr. Chevallier has no disclosures.
First Cases of Medically Acquired Alzheimer’s Disease Reported
Five people in the United Kingdom have been diagnosed with Alzheimer’s disease resulting from a medical treatment they received decades earlier, new research shows.
The individuals received treatment as children with human growth hormone extracted from pituitary glands of cadavers (c-hGH). Between 1958 and 1985, an estimated 30,000 people worldwide, mostly children, were treated with c-hGH for genetic disorders and growth hormone deficiencies.
The therapy was halted in 1985 after three patients in the US who received the treatment later died of Creutzfeldt-Jakob disease (CJD) transmitted through batches of c-hGH that were contaminated with disease-causing prions.
The new study builds on the investigators’ earlier work that showed the batches of c-hGH also contained amyloid-beta protein and that the protein could be transmitted decades later. These five cases were referred to or reviewed by researchers and clinicians at a prion clinic led by one of the lead researchers.
There are no reports of amyloid-beta transmission through any other medical or surgical procedures, researchers stress, and there is no evidence that amyloid-beta can be passed on during routine patient care or in daily activities.
“However, the recognition of transmission of amyloid-beta pathology in these rare situations should lead us to review measures to prevent accidental transmission via other medical or surgical procedures, in order to prevent such cases occurring in future,” lead author John Collinge, MD, director of the University College London Institute of Prion Diseases, London, England, and leader of the UK’s National Prion Clinic, said in a press release.
“Importantly, our findings also suggest that Alzheimer’s and some other neurological conditions share similar disease processes to CJD, and this may have important implications for understanding and treating Alzheimer’s disease in the future,” Dr. Collinge continued.
The findings were published online January 29 in Nature Medicine.
Building on Earlier Work
The research builds on investigators’ previous 2015 work that found archived samples of c-hGH were also contaminated with amyloid-beta protein. In 2018, mouse studies showed that c-hGH samples stored for decades could still transmit amyloid-beta via injection.
Researchers said the findings suggested that individuals exposed to contaminated c-hGH who did not die from CJD might eventually develop Alzheimer’s disease (AD).
Patients in the new study developed neurological symptoms consistent with AD between the ages of 38 and 55 years. The individual cases were either referred to or reviewed by experts in the National Prion Clinic in the UK between 2017 and 2022. The clinic coordinates the National Prion Monitoring Cohort, a longitudinal study of individuals with confirmed prion diseases.
Of the eight cases referred to or reviewed by the clinic, three had been diagnosed with AD before referral; two others met criteria for an AD diagnosis; and three did not meet the criteria. Three of the patients — two of whom had AD — are now deceased.
All patients in the study received c-hGH prepared using a method called Wilhelmi or Hartree-modified Wilhelmi preparation (HWP).
Biomarker analyses confirmed the AD diagnosis in two patients. Other cases showed either progressive brain volume loss on brain imaging or elevated cerebrospinal fluid total tau and phosphorylated tau, or evidence of amyloid-beta deposits on autopsy.
‘Potentially Transmissible’
The cases offered diverse presentations. Some were not symptomatic and some failed to meet current diagnostic criteria for sporadic Alzheimer’s disease. Treatment duration and frequency differed among those in the study, as did their age at treatment onset and completion. That and other factors could contribute to the diverse phenotype recorded in individuals, investigators note.
Investigators examined and ruled out other factors that might explain the individuals’ cognitive symptoms, including childhood intellectual disability, which has been linked to dementia risk; the underlying conditions that prompted treatment with c-hGH, such as growth hormone deficiency; and cranial radiotherapy, which four of the individuals had received. They also ruled out inherited disease in all five of the cases with samples available for testing.
“Taken together, the only factor common to all of the patients whom we describe is treatment with the HWP subtype of c-hGH,” the authors write. “Given the strong experimental evidence for A-beta transmission from relevant archived HWP c-hGH batches, we conclude that this is the most plausible explanation for the findings observed.”
Investigators say the findings show that, like other prion diseases, AD has three etiologies: sporadic, inherited, and rare acquired forms, or iatrogenic AD.
“The clinical syndrome developed by these individuals can, therefore, be termed iatrogenic Alzheimer’s disease, and Alzheimer’s disease should now be recognized as a potentially transmissible disorder,” the authors write.
“Our cases suggest that, similarly to what is observed in human prion diseases, iatrogenic forms of Alzheimer’s disease differ phenotypically from sporadic and inherited forms, with some individuals remaining asymptomatic despite exposure to A-beta seeds due to protective factors that, at present, are unknown,” they continue.
‘Measure of Skepticism’
In an accompanying editorial, Mathias Jucker, PhD, of the Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany, and Lary C. Walker, PhD, in the Department of Neurology at Emory University, Atlanta, write that the findings should be considered “with a measure of skepticism.”
“The cases presented are diverse and complicated; the individuals had undergone a variety of medical interventions for various disorders earlier in life, and it is difficult to exclude a contribution of these circumstances to the complex disease phenotypes that appeared many years later,” they write.
However, they continue, “there is good reason to take the findings seriously.”
“From a practical standpoint, this report reinforces the potential of amyloid-beta seeds as targets for early prevention, and it underscores the importance of informed caution in the preparation of surgical instruments, handling of tissues, and implementation of therapeutic biologics, particularly those derived from human sources,” Dr. Jucker and Dr. Walker write.
Commenting on the findings for this news organization, Christopher Weber, PhD, director of global science initiatives for the Alzheimer’s Association, said the idea that amyloid-beta is transmissible between individuals has been shown before.
“We’ve known for a long time that it is possible to create abnormal amyloid buildup — similar to that seen in Alzheimer’s – in the brain of an animal by injecting it with amyloid-beta. We also transfer human Alzheimer’s genes into animals to initiate abnormal, Alzheimer’s-like processes in their brains,” he said. “Thus, the idea that amyloid is transferable between individuals is not so novel as implied in the new paper.”
However, the study does highlight the importance of safety measures to prevent the accidental transmission of amyloid-beta, Dr. Weber added.
“It is a reasonable and actionable caution that the scientific and clinical communities must understand the possible risks and ensure that all methods of transmission are eliminated — for example, with complete and conscientious sterilization of surgical instruments,” he said. “Bottom line: We shouldn’t put amyloid-beta into people’s brains, either accidentally or on purpose, and appropriate measures should be in place to ensure that doesn’t happen.”
The study was supported by the Medical Research Council, the National Institute for Health and Care Research (NIHR), the NIHR University College of London Hospital Biomedical Research Centre, Alzheimer’s Research UK, and the Stroke Association. Dr. Collinge is a shareholder and director of D-Gen, Ltd., an academic spin-out company working in the field of prion disease diagnosis, decontamination and therapeutics. Dr. Jucker and Dr. Walker report no conflicts of interest.
A version of this article appeared on Medscape.com.
FROM NATURE MEDICINE
The Emerging Physician-Scientist Crisis in America
Recent reporting has shown that the number of American physicians working as physician-scientists is shrinking. That’s a problem, because physician-scientists are uniquely equipped to make scientific discoveries in the laboratory and translate them to the clinic. Indeed, many of the discoveries that have transformed medicine for the better were made by physician-scientists. For example, Jonas Salk developed the polio vaccine, Timothy Ley sequenced the first cancer genome, and Anthony Fauci coordinated public health responses to both the HIV/AIDS and COVID-19 pandemics. Indicative of their sheer impact, at least a third and as many as half of all Nobel Prizes and Lasker Awards in physiology/medicine have gone to physician-scientists.
So why is the supply of physician-scientists shrinking so precipitously at a time when medical discoveries are being made at a record-high rate? Immunotherapy and proton therapy are transforming cancer care; RNA technology led to COVID vaccines; CRISPR is facilitating gene editing and treatment of diseases like sickle cell anemia. Yet, as exciting as medical science has become, only 1.5% of American doctors work as physician-scientists, more than a threefold drop compared with 30 years ago when the figure was a more robust 4.7%. What’s going on?
Residency training programs at prestigious academic medical centers routinely build dedicated research years into training; for example, neurosurgery residents at academic medical centers will often get 2 years of protected research time. And the National Institutes of Health has training grants dedicated to physician-scientists, such as the K08 award program. Several foundations are also dedicated to supporting early-career physician-scientists. Yet, the number of physicians deciding to become physician-scientists remains low, and, more troubling, the attrition rate of those who do decide to go this route is quite high.
The underlying issue is multifold. First, funding rates from the federal government for grants have become competitive to the point of being unrealistic. For example, the current funding rate for the flagship R01 program from the National Cancer Institute is only 12%. Promotions are typically tied to these grant awards, which means physician-scientists who are unable to acquire substantial grant funding are unable to pay for their research or win promotion — and often exit the physician-scientist track altogether.
Compounding this issue is a lack of mentorship for early-career physician-scientists. With the rise of “careerism” in medicine, senior-level physician-scientists may have less incentive to mentor those who are earlier in their careers. Rather, there seems to be greater reward to “managing up” — that is, spending time to please hospital administrators and departmental leadership. Being involved in countless committees appears to carry more value in advancing an established investigator’s career than does mentorship.
Finally, physician-scientists typically earn less than their clinician colleagues, despite juggling both scientific and clinical responsibilities. While many are comfortable with this arrangement when embarking on this track, the disparity may become untenable after a while, especially as departmental leadership will often turn to physician-scientists to fill clinical coverage gaps when faculty leave the department, or as the medical center expands to satellite centers outside the primary hospital. Indeed, physician-scientists get pulled in several directions, which can lead to burnout and attrition, with many who are highly equipped for this track ultimately hanging up their cleats and seeking more clinical or private industry–oriented opportunities.
Every academic medical center operates differently. Some clearly have done a better job than others of promoting and fostering physician-scientists. What we find in the centers that manage to retain physician-scientists is that leadership plays a major role: if a medical center values physician-scientists, it will foster their success by assembling mentorship committees, establishing clear criteria for promotion and career advancement, protecting research time while maintaining some level of pay equity, advocating for team science approaches, and supporting investigators through gaps in federal funding. Different countries also have different models for physician-scientist training, with Germany, for example, allowing medical residents to have 3 years of protected time to engage in research after their second year of residency.
The stakes here are high. If we can’t address the physician-scientist recruitment and retention crisis in America now, we risk falling behind other countries in our ability to innovate and deliver world-class care.
Dr. Chaudhuri is a tenure-track physician-scientist at Washington University in St. Louis, a Paul and Daisy Soros Fellow, and a Public Voices Fellow of The OpEd Project.
Aadel Chaudhuri, MD, PhD, has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Two-Step Strategy Improves Early-Stage Ovarian Cancer Detection
TOPLINE:
A two-step screening strategy, an annual blood test interpreted with the Risk of Ovarian Cancer Algorithm followed by transvaginal ultrasound for women at elevated risk, detected most screen-identified ovarian cancers at an early stage with high specificity, a new analysis with a 21-year follow-up found.
METHODOLOGY:
- Detecting ovarian cancer at stage I or II could significantly reduce ovarian cancer-related deaths, but only 25%-30% of patients are diagnosed at an early stage.
- In this single-arm prospective analysis, 7,856 healthy postmenopausal women received annual screening for ovarian cancer between 2001 and 2022. Screening involved an annual blood test to detect levels of cancer antigen 125 (CA-125) and track these levels over time.
- Investigators used the Risk of Ovarian Cancer Algorithm (ROCA) to determine whether ovarian cancer risk was normal, intermediate, or high. Those with elevated ROCA scores were referred for transvaginal sonography; those with intermediate scores received follow-up blood tests every 3 months.
- Overall, 92.3% of women were at normal risk, 5.7% were at intermediate risk, and 2% were at high risk and were recommended for transvaginal sonography (a minimal sketch of this triage logic follows this list).
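For readers who want the triage flow at a glance, the following is a minimal, illustrative Python sketch of the two-step routing described above. The ROCA itself is a proprietary longitudinal model of CA-125 change over time and is not reproduced here; the risk categories and follow-up actions come from the study description, while the class and function names are hypothetical.

```python
# Illustrative sketch only: routes a ROCA risk category to the next screening step
# as described in the study summary. The ROCA score calculation itself (a longitudinal
# model of CA-125 change over time) is proprietary and is not implemented here.
from enum import Enum


class RocaRisk(Enum):
    NORMAL = "normal"
    INTERMEDIATE = "intermediate"
    HIGH = "high"


def next_step(risk: RocaRisk) -> str:
    """Return the follow-up action for a given ROCA risk category."""
    if risk is RocaRisk.HIGH:
        return "refer for transvaginal sonography"
    if risk is RocaRisk.INTERMEDIATE:
        return "repeat CA-125 blood test in 3 months"
    return "continue routine annual CA-125 screening"


if __name__ == "__main__":
    for risk in RocaRisk:
        print(f"{risk.value:>12}: {next_step(risk)}")
```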
TAKEAWAY:
- Most women (95.5%) referred for transvaginal ultrasound had one. Of these ultrasounds, most (90%) were negative or revealed benign findings, 5.2% required a repeat ultrasound, and 4.8% (34 patients) showed suspicious findings.
- Of 34 patients with suspicious findings and recommended for surgery, 15 had ovarian cancer and two had borderline tumors, indicating a positive predictive value of 50% (17 of 34 patients) for ovarian cancer. Of these 17 patients, 12 (70.6%) had stage I or II disease.
- Following abnormal ROCA results, seven other women were diagnosed with endometrial tumors (six of which were stage I), indicating a positive predictive value of 74% (25 of 34) for any cancer.
- The specificity of an elevated-risk ROCA result prompting ultrasound was 98%, and the specificity of ROCA plus ultrasound prompting surgery was 99.8%. The sensitivity for detecting ovarian and borderline cancers was 74% (17 of 23; see the arithmetic check after this list).
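As a quick check on the quoted figures, the short calculation below reproduces the positive predictive value and sensitivity from the raw counts reported in the takeaways (34 women taken to surgery, 17 confirmed ovarian or borderline cancers among them, and 23 such cancers in the screened cohort overall, per the limitations note that six were missed). It assumes nothing beyond those reported counts.

```python
# Reproduces the screening metrics quoted above from the reported counts.

def percent(numerator: int, denominator: int) -> float:
    """Simple percentage helper."""
    return 100 * numerator / denominator

surgeries = 34                 # women with suspicious ultrasound findings taken to surgery
confirmed_at_surgery = 17      # 15 ovarian cancers + 2 borderline tumors
total_cohort_cancers = 23      # ovarian/borderline cancers in the cohort (17 detected + 6 missed)

ppv = percent(confirmed_at_surgery, surgeries)                      # ~50%
sensitivity = percent(confirmed_at_surgery, total_cohort_cancers)   # ~74%

print(f"Positive predictive value: {ppv:.0f}%")
print(f"Sensitivity:               {sensitivity:.0f}%")
```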
IN PRACTICE:
“Remarkably, 70% of ovarian cancers detected by the ROCA were early stage,” the authors concluded. Although the trial was not powered to detect reduced mortality, the high specificity, positive predictive value, and shift to identifying earlier-stage cancers “support further development of this strategy,” the investigators said.
LIMITATIONS:
This trial was not powered to detect mortality benefit. Six ovarian cancers and borderline tumors were missed. Only 80% of ovarian cancers express cancer antigen 125, potentially limiting the sensitivity of the algorithm.
SOURCE:
This study, led by Chae Young Han from the University of Texas MD Anderson Cancer Center, Houston, was published online on January 12 in the Journal of Clinical Oncology.
DISCLOSURES:
This study was supported by funds from the NCI Early Detection Research Network, the MD Anderson Ovarian SPOREs, the National Cancer Institute, the Department of Health and Human Services, and others. The authors reported receiving research funding, grants, consulting, and personal fees from various companies, including Curio Science, Fujirebio Diagnostics, GlaxoSmithKline, AstraZeneca, and Genentech.
A version of this article appeared on Medscape.com.
LIMITATIONS:
This trial was not powered to detect mortality benefit. Six ovarian cancers and borderline tumors were missed. Only 80% of ovarian cancers express cancer antigen 125, potentially limiting the sensitivity of the algorithm.
SOURCE:
This study, led by Chae Young Han from the University of Texas MD Anderson Cancer Center, Houston, was published online on January 12 in the Journal of Clinical Oncology.
DISCLOSURES:
This study was supported by funds from the NCI Early Detection Research Network, the MD Anderson Ovarian SPOREs, the National Cancer Institute, the Department of Health and Human Services, and others. The authors reported receiving research funding, grants, consulting, and personal fees from various companies, including Curio Science, Fujirebio Diagnostics, GlaxoSmithKline, AstraZeneca, and Genentech.
A version of this article appeared on Medscape.com.