Dietary Calcium Cuts Colorectal Cancer Risk by 17%
Cancer Research UK (CRUK), which funded the study, said that it demonstrated the benefits of a healthy, balanced diet for lowering cancer risk.
Colorectal cancer is the third most common cancer worldwide. Incidence rates vary markedly, with higher rates observed in high-income countries. The risk increases for individuals who migrate from low- to high-incidence areas, suggesting that lifestyle and environmental factors contribute to its development.
While alcohol and processed meats are established carcinogens, and red meat is classified as probably carcinogenic, there is a lack of consensus regarding the relationships between other dietary factors and colorectal cancer risk. This uncertainty may be due, at least in part, to relatively few studies giving comprehensive results on all food types, as well as dietary measurement errors, and/or small sample sizes.
Study Tracked 97 Dietary Factors
To address these gaps, the research team, led by the University of Oxford in England, tracked the intake of 97 dietary factors in 542,778 women from 2001 for an average of 16.6 years. During this period 12,251 participants developed colorectal cancer. The women completed detailed dietary questionnaires at baseline, with 7% participating in at least one subsequent 24-hour online dietary assessment.
Women diagnosed with colorectal cancer were generally older and taller, more likely to have a family history of bowel cancer, and more likely to have adverse health behaviors, compared with participants overall.
Calcium Intake Showed the Strongest Protective Association
Relative risks (RR) for colorectal cancer were calculated for intakes of all 97 dietary factors, with significant associations found for 17 of them. Calcium intake showed the strongest protective effect, with each additional 300 mg per day – equivalent to a large glass of milk – associated with a 17% reduction in RR.
Six dairy-related factors associated with calcium – dairy milk, yogurt, riboflavin, magnesium, phosphorus, and potassium intakes – also demonstrated inverse associations with colorectal cancer risk. Weaker protective effects were noted for breakfast cereal, fruit, wholegrains, carbohydrates, fibre, total sugars, folate, and vitamin C. However, the team commented that these inverse associations might reflect residual confounding from other lifestyle or dietary factors.
Calcium’s protective role was independent of dairy milk intake. The study, published in Nature Communications, concluded that, while “dairy products help protect against colorectal cancer,” that protection is “driven largely or wholly by calcium.”
Alcohol and Processed Meat Confirmed as Risk Factors
As expected, alcohol showed an association in the opposite direction: each additional 20 g daily – equivalent to one large glass of wine – was associated with a 15% increase in RR. Weaker associations were seen for the combined category of red and processed meat, with each additional 30 g per day associated with an 8% increased RR for colorectal cancer. This association was minimally affected by other diet and lifestyle factors.
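For readers who want to translate these per-increment figures to other intake levels, one common (but here assumed) approach is to treat the relative risk as log-linear in dose, so the RR for an arbitrary amount is the per-increment RR raised to a fractional power. The sketch below is purely illustrative and is not the statistical model used in the study:

```python
def scale_rr(rr_per_increment: float, increment: float, dose: float) -> float:
    """Scale a relative risk quoted per fixed increment to an arbitrary dose.

    Assumes a log-linear dose-response relationship, which is an
    illustrative assumption rather than the study's actual model.
    """
    return rr_per_increment ** (dose / increment)

# Calcium: RR 0.83 per 300 mg/day; rough estimate for 600 mg/day
print(round(scale_rr(0.83, 300, 600), 3))  # 0.83 squared, about 0.689

# Alcohol: RR 1.15 per 20 g/day; rough estimate for 10 g/day
print(round(scale_rr(1.15, 20, 10), 3))  # square root of 1.15, about 1.072
```

Under this assumption, halving or doubling an intake does not simply halve or double the quoted percentage change, because risks multiply rather than add across increments.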
Commenting to the Science Media Centre (SMC), Tom Sanders, professor emeritus of nutrition and dietetics at King’s College London, England, said: “One theory is that the calcium may bind to free bile acids in the gut, preventing the harmful effects of free bile acids on gut mucosa.” However, the lactose content in milk also has effects on large bowel microflora, which may in turn affect risk.
Also commenting to the SMC, David Nunan, senior research fellow at the University of Oxford’s Centre for Evidence Based Medicine, who was not involved in the study, cautioned that the findings were subject to the bias inherent in observational studies. “These biases often inflate the estimated associations compared to controlled experiments,” he said. Nunan advised caution in interpreting the findings, as more robust research, such as randomized controlled trials, would be needed to establish causation.
A version of this article first appeared on Medscape.com.
FROM NATURE COMMUNICATIONS
MRI-Invisible Prostate Lesions: Are They Dangerous?
MRI-invisible prostate lesions. It sounds like the stuff of science fiction and fantasy, a creation from the minds of H.G. Wells, who wrote The Invisible Man, or J.K. Rowling, who authored the Harry Potter series.
But MRI-invisible prostate lesions are real. And what these lesions may, or may not, indicate is the subject of intense debate.
MRI plays an increasingly important role in detecting, diagnosing, and staging prostate cancer, as well as monitoring disease progression. However, on occasion, a puzzling phenomenon arises. Certain prostate lesions that appear when pathologists examine biopsied tissue samples under a microscope are not visible on MRI. The prostate tissue will, instead, appear normal to a radiologist’s eye.
Some experts believe these MRI-invisible lesions are nothing to worry about.
If the clinician can’t see the cancer on MRI, then it simply isn’t a threat, according to Mark Emberton, MD, a pioneer in prostate MRIs and director of interventional oncology at University College London, England.
Laurence Klotz, MD, of the University of Toronto, Ontario, Canada, agreed, noting that “invisible cancers are clinically insignificant and don’t require systematic biopsies.”
Emberton and Klotz compared MRI-invisible lesions to grade group 1 prostate cancer (Gleason score ≤ 6) — the least aggressive category, indicating cancer that is unlikely to spread or kill. For patients on active surveillance, those with MRI-invisible cancers do drastically better than those with visible cancers, Klotz explained.
But other experts in the field are skeptical that MRI-invisible lesions are truly innocuous.
Although statistically an MRI-visible prostate lesion indicates a more aggressive tumor, that is not always the case for every individual, said Brian Helfand, MD, PhD, chief of urology at NorthShore University Health System, Evanston, Illinois.
MRIs can lead to false negatives in about 10%-20% of patients who have clinically significant prostate cancer, though estimates vary.
In one analysis, 16% of men with no suspicious lesions on MRI had clinically significant prostate cancer identified after undergoing a systematic biopsy. Another analysis found that about 35% of MRI-invisible prostate cancers identified via biopsy were clinically significant.
Other studies, however, have indicated that negative MRI results accurately indicate patients at low risk of developing clinically significant cancers. A recent JAMA Oncology analysis, for instance, found that only seven of 233 men (3%) with negative MRI results at baseline who completed 3 years of monitoring were diagnosed with clinically significant prostate cancer.
When a patient has an MRI-invisible prostate tumor, there are a couple of reasons the MRI may not be picking it up, said urologic oncologist Alexander Putnam Cole, MD, assistant professor of surgery, Harvard Medical School, Boston, Massachusetts. “One is that the cancer is aggressive but just very small,” said Cole.
“Another possibility is that the cancer looks very similar to background prostate tissue, which is something that you might expect if you think about more of a low-grade cancer,” he explained.
The experience level of the radiologist interpreting the MRI can also play into the accuracy of the reading.
But Cole agreed that “in general, MRI visibility is associated with molecular and histologic features of progression and aggressiveness and non-visible cancers are less likely to have aggressive features.”
The genomic profiles of MRI-visible and -invisible cancers bear this out.
According to Todd Morgan, MD, chief of urologic oncology at Michigan Medicine, University of Michigan, Ann Arbor, the gene expression in visible disease tends to be linked to more aggressive prostate tumors whereas gene expression in invisible disease does not.
In one analysis, for instance, researchers found that four genes — PHYHD1, CENPF, ALDH2, and GDF15 — associated with worse progression-free survival and metastasis-free survival in prostate cancer also predicted MRI visibility.
“Genes that are associated with visibility are essentially the same genes that are associated with aggressive cancers,” Klotz said.
Next Steps After Negative MRI Result
What do MRI-invisible lesions mean for patient care? If, for instance, a patient has elevated PSA levels but a normal MRI, is a targeted or systematic biopsy warranted?
The overarching message, according to Klotz, is that “you don’t need to find them.” Klotz noted, however, that patients with a negative MRI result should still be followed with periodic repeat imaging.
Several trials support this approach of using MRI to decide who needs a biopsy and delaying a biopsy in men with normal MRIs.
The recent JAMA Oncology analysis found that, among men with negative MRI results, 86% avoided a biopsy over 3 years, with clinically significant prostate cancer detected in only 4% of men across the study period — four in the initial diagnostic phase and seven in the 3-year monitoring phase. However, during the initial diagnostic phase, more than half the men with positive MRI findings had clinically significant prostate cancer detected.
Another recent study found that patients with negative MRI results were much less likely to upgrade to higher Gleason scores over time. Among 522 patients who underwent a systematic and targeted biopsy within 18 months of their grade group 1 designation, 9.2% with negative MRI findings had tumors reclassified as grade group 2 or higher vs 27% with positive MRI findings, and 2.3% with negative MRI findings had tumors reclassified as grade group 3 or higher vs 7.8% with positive MRI findings.
These data suggest that men with grade group 1 cancer and negative MRI result “may be able to avoid confirmatory biopsies until a routine surveillance biopsy in 2-3 years,” according to study author Christian Pavlovich, MD, professor of urologic oncology at the Johns Hopkins University School of Medicine, Baltimore.
Cole said he uses MRI findings to triage who gets a biopsy. When a biopsy is warranted, “I usually recommend adding in some systematic sampling of the other side to assess for nonvisible cancers,” he noted.
Sampling prostate tissue outside the target area “adds maybe 1-2 minutes to the procedure and doesn’t drastically increase the morbidity or risks,” Cole said. It also can help “confirm there is cancer in the MRI target and also confirm there is no cancer in the nonvisible areas.”
According to Klotz, if imaging demonstrates progression, patients should receive a biopsy — in most cases, a targeted biopsy only. And, Klotz noted, skipping routine prostate biopsies in men with negative MRI results can save thousands of men from these procedures, which carry risks for infections and sepsis.
Looking beyond Gleason scores for risk prediction, MRI “visibility is a very powerful risk stratifier,” he said.
A version of this article appeared on Medscape.com.
MRI-invisible prostate lesions. It sounds like the stuff of science fiction and fantasy, a creation from the minds of H.G. Wells, who wrote The Invisible Man, or J.K. Rowling, who authored the Harry Potter series.
But MRI-invisible prostate lesions are real. And what these lesions may, or may not, indicate is the subject of intense debate.
MRI plays an increasingly important role in detecting and diagnosing prostate cancer, staging prostate cancer as well as monitoring disease progression. However, on occasion, a puzzling phenomenon arises. Certain prostate lesions that appear when pathologists examine biopsied tissue samples under a microscope are not visible on MRI. The prostate tissue will, instead, appear normal to a radiologist’s eye.
Some experts believe these MRI-invisible lesions are nothing to worry about.
If the clinician can’t see the cancer on MRI, then it simply isn’t a threat, according to Mark Emberton, MD, a pioneer in prostate MRIs and director of interventional oncology at University College London, England.
Laurence Klotz, MD, of the University of Toronto, Ontario, Canada, agreed, noting that “invisible cancers are clinically insignificant and don’t require systematic biopsies.”
Emberton and Klotz compared MRI-invisible lesions to grade group 1 prostate cancer (Gleason score ≤ 6) — the least aggressive category that indicates the cancer that is not likely to spread or kill. For patients on active surveillance, those with MRI-invisible cancers do drastically better than those with visible cancers, Klotz explained.
But other experts in the field are skeptical that MRI-invisible lesions are truly innocuous.
Although statistically an MRI-visible prostate lesion indicates a more aggressive tumor, that is not always the case for every individual, said Brian Helfand, MD, PhD, chief of urology at NorthShore University Health System, Evanston, Illinois.
MRIs can lead to false negatives in about 10%-20% of patients who have clinically significant prostate cancer, though estimates vary.
In one analysis, 16% of men with no suspicious lesions on MRI had clinically significant prostate cancer identified after undergoing a systematic biopsy. Another analysis found that about 35% of MRI-invisible prostate cancers identified via biopsy were clinically significant.
Other studies, however, have indicated that negative MRI results accurately indicate patients at low risk of developing clinically significant cancers. A recent JAMA Oncology analysis, for instance, found that only seven of 233 men (3%) with negative MRI results at baseline who completed 3 years of monitoring were diagnosed with clinically significant prostate cancer.
When a patient has an MRI-invisible prostate tumor, there are a couple of reasons the MRI may not be picking it up, said urologic oncologist Alexander Putnam Cole, MD, assistant professor of surgery, Harvard Medical School, Boston, Massachusetts. “One is that the cancer is aggressive but just very small,” said Cole.
“Another possibility is that the cancer looks very similar to background prostate tissue, which is something that you might expect if you think about more of a low-grade cancer,” he explained.
The experience level of the radiologist interpreting the MRI can also play into the accuracy of the reading.
But Cole agreed that “in general, MRI visibility is associated with molecular and histologic features of progression and aggressiveness and non-visible cancers are less likely to have aggressive features.”
The genomic profiles of MRI-visible and -invisible cancers bear this out.
According to Todd Morgan, MD, chief of urologic oncology at Michigan Medicine, University of Michigan, Ann Arbor, the gene expression in visible disease tends to be linked to more aggressive prostate tumors whereas gene expression in invisible disease does not.
In one analysis, for instance, researchers found that four genes — PHYHD1, CENPF, ALDH2, and GDF15 — associated with worse progression-free survival and metastasis-free survival in prostate cancer also predicted MRI visibility.
“Genes that are associated with visibility are essentially the same genes that are associated with aggressive cancers,” Klotz said.
Next Steps After Negative MRI Result
What do MRI-invisible lesions mean for patient care? If, for instance, a patient has elevated PSA levels but a normal MRI, is a targeted or systematic biopsy warranted?
The overarching message, according to Klotz, is that “you don’t need to find them.” Klotz noted, however, that patients with a negative MRI result should still be followed with periodic repeat imaging.
Several trials support this approach of using MRI to decide who needs a biopsy and delaying a biopsy in men with normal MRIs.
The recent JAMA Oncology analysis found that, among men with negative MRI results, 86% avoided a biopsy over 3 years, with clinically significant prostate cancer detected in only 4% of men across the study period — four in the initial diagnostic phase and seven in the 3-year monitoring phase. However, during the initial diagnostic phase, more than half the men with positive MRI findings had clinically significant prostate cancer detected.
Another recent study found that patients with negative MRI results were much less likely to upgrade to higher Gleason scores over time. Among 522 patients who underwent a systematic and targeted biopsy within 18 months of their grade group 1 designation, 9.2% with negative MRI findings had tumors reclassified as grade group 2 or higher vs 27% with positive MRI findings, and 2.3% with negative MRI findings had tumors reclassified as grade group 3 or higher vs 7.8% with positive MRI findings.
These data suggest that men with grade group 1 cancer and negative MRI result “may be able to avoid confirmatory biopsies until a routine surveillance biopsy in 2-3 years,” according to study author Christian Pavlovich, MD, professor of urologic oncology at the Johns Hopkins University School of Medicine, Baltimore.
Cole used MRI findings to triage who gets a biopsy. When a biopsy is warranted, “I usually recommend adding in some systematic sampling of the other side to assess for nonvisible cancers,” he noted.
Sampling prostate tissue outside the target area “adds maybe 1-2 minutes to the procedure and doesn’t drastically increase the morbidity or risks,” Cole said. It also can help “confirm there is cancer in the MRI target and also confirm there is no cancer in the nonvisible areas.”
According to Klotz, if imaging demonstrates progression, patients should receive a biopsy — in most cases, a targeted biopsy only. And, Klotz noted, skipping routine prostate biopsies in men with negative MRI results can save thousands of men from these procedures, which carry risks for infections and sepsis.
Looking beyond Gleason scores for risk prediction, MRI “visibility is a very powerful risk stratifier,” he said.
A version of this article appeared on Medscape.com.
MRI-invisible prostate lesions. It sounds like the stuff of science fiction and fantasy, a creation from the minds of H.G. Wells, who wrote The Invisible Man, or J.K. Rowling, who authored the Harry Potter series.
But MRI-invisible prostate lesions are real. And what these lesions may, or may not, indicate is the subject of intense debate.
MRI plays an increasingly important role in detecting and diagnosing prostate cancer, staging prostate cancer as well as monitoring disease progression. However, on occasion, a puzzling phenomenon arises. Certain prostate lesions that appear when pathologists examine biopsied tissue samples under a microscope are not visible on MRI. The prostate tissue will, instead, appear normal to a radiologist’s eye.
Some experts believe these MRI-invisible lesions are nothing to worry about.
If the clinician can’t see the cancer on MRI, then it simply isn’t a threat, according to Mark Emberton, MD, a pioneer in prostate MRIs and director of interventional oncology at University College London, England.
Laurence Klotz, MD, of the University of Toronto, Ontario, Canada, agreed, noting that “invisible cancers are clinically insignificant and don’t require systematic biopsies.”
Emberton and Klotz compared MRI-invisible lesions to grade group 1 prostate cancer (Gleason score ≤ 6) — the least aggressive category that indicates the cancer that is not likely to spread or kill. For patients on active surveillance, those with MRI-invisible cancers do drastically better than those with visible cancers, Klotz explained.
But other experts in the field are skeptical that MRI-invisible lesions are truly innocuous.
Although statistically an MRI-visible prostate lesion indicates a more aggressive tumor, that is not always the case for every individual, said Brian Helfand, MD, PhD, chief of urology at NorthShore University Health System, Evanston, Illinois.
MRIs can lead to false negatives in about 10%-20% of patients who have clinically significant prostate cancer, though estimates vary.
In one analysis, 16% of men with no suspicious lesions on MRI had clinically significant prostate cancer identified after undergoing a systematic biopsy. Another analysis found that about 35% of MRI-invisible prostate cancers identified via biopsy were clinically significant.
Other studies, however, have indicated that negative MRI results accurately indicate patients at low risk of developing clinically significant cancers. A recent JAMA Oncology analysis, for instance, found that only seven of 233 men (3%) with negative MRI results at baseline who completed 3 years of monitoring were diagnosed with clinically significant prostate cancer.
When a patient has an MRI-invisible prostate tumor, there are a couple of reasons the MRI may not be picking it up, said urologic oncologist Alexander Putnam Cole, MD, assistant professor of surgery, Harvard Medical School, Boston, Massachusetts. “One is that the cancer is aggressive but just very small,” said Cole.
“Another possibility is that the cancer looks very similar to background prostate tissue, which is something that you might expect if you think about more of a low-grade cancer,” he explained.
The experience level of the radiologist interpreting the MRI can also play into the accuracy of the reading.
But Cole agreed that “in general, MRI visibility is associated with molecular and histologic features of progression and aggressiveness and non-visible cancers are less likely to have aggressive features.”
The genomic profiles of MRI-visible and -invisible cancers bear this out.
According to Todd Morgan, MD, chief of urologic oncology at Michigan Medicine, University of Michigan, Ann Arbor, the gene expression in visible disease tends to be linked to more aggressive prostate tumors whereas gene expression in invisible disease does not.
In one analysis, for instance, researchers found that four genes — PHYHD1, CENPF, ALDH2, and GDF15 — associated with worse progression-free survival and metastasis-free survival in prostate cancer also predicted MRI visibility.
“Genes that are associated with visibility are essentially the same genes that are associated with aggressive cancers,” Klotz said.
Next Steps After Negative MRI Result
What do MRI-invisible lesions mean for patient care? If, for instance, a patient has elevated PSA levels but a normal MRI, is a targeted or systematic biopsy warranted?
The overarching message, according to Klotz, is that “you don’t need to find them.” Klotz noted, however, that patients with a negative MRI result should still be followed with periodic repeat imaging.
Several trials support this approach of using MRI to decide who needs a biopsy and delaying a biopsy in men with normal MRIs.
The recent JAMA Oncology analysis found that, among men with negative MRI results, 86% avoided a biopsy over 3 years, with clinically significant prostate cancer detected in only 4% of men across the study period — four in the initial diagnostic phase and seven in the 3-year monitoring phase. However, during the initial diagnostic phase, more than half the men with positive MRI findings had clinically significant prostate cancer detected.
Another recent study found that patients with negative MRI results were much less likely to upgrade to higher Gleason scores over time. Among 522 patients who underwent a systematic and targeted biopsy within 18 months of their grade group 1 designation, 9.2% with negative MRI findings had tumors reclassified as grade group 2 or higher vs 27% with positive MRI findings, and 2.3% with negative MRI findings had tumors reclassified as grade group 3 or higher vs 7.8% with positive MRI findings.
These data suggest that men with grade group 1 cancer and negative MRI result “may be able to avoid confirmatory biopsies until a routine surveillance biopsy in 2-3 years,” according to study author Christian Pavlovich, MD, professor of urologic oncology at the Johns Hopkins University School of Medicine, Baltimore.
Cole said he uses MRI findings to triage who gets a biopsy. When a biopsy is warranted, “I usually recommend adding in some systematic sampling of the other side to assess for nonvisible cancers,” he noted.
Sampling prostate tissue outside the target area “adds maybe 1-2 minutes to the procedure and doesn’t drastically increase the morbidity or risks,” Cole said. It also can help “confirm there is cancer in the MRI target and also confirm there is no cancer in the nonvisible areas.”
According to Klotz, if imaging demonstrates progression, patients should receive a biopsy — in most cases, a targeted biopsy only. And, Klotz noted, skipping routine prostate biopsies in men with negative MRI results can save thousands of men from these procedures, which carry risks for infections and sepsis.
Looking beyond Gleason scores for risk prediction, MRI “visibility is a very powerful risk stratifier,” he said.
A version of this article appeared on Medscape.com.
Traumatic Brain Injury May Reactivate Herpes Virus Leading to Neurodegeneration
Traumatic brain injury (TBI) may reactivate dormant herpes simplex virus type 1 (HSV-1) in the brain, setting off a cascade that ends in neurodegeneration, a new study suggested.
Using a three-dimensional (3D) human brain tissue model, researchers observed that quiescent HSV-1 can be reactivated by a mechanical jolt mimicking concussion, leading to signature markers of Alzheimer’s disease, including neuroinflammation, production of amyloid beta and phosphorylated tau (p-tau), and gliosis, a phenotype made worse by repeated head injury.
“This opens the question as to whether antiviral drugs or anti-inflammatory agents might be useful as early preventive treatments after head trauma to stop HSV-1 activation in its tracks and lower the risk of Alzheimer’s disease,” lead investigator Dana Cairns, PhD, with the Department of Biomedical Engineering at Tufts University, Medford, Massachusetts, said in a statement.
But outside experts urged caution in drawing any firm conclusions, pending further study.
The study was published online in the journal Science Signaling.
HSV-1: A Major Alzheimer’s Disease Risk Factor?
TBI is a major risk factor for Alzheimer’s disease and dementia, but the pathways in the brain leading from TBI to dementia are unknown.
HSV-1 is found in over 80% of people; varicella zoster virus (VZV) is found in about 95%. Both viruses are known to enter the brain and lie dormant in neurons and glial cells. Prior evidence indicates that HSV-1 in the brain of APOE-ε4 carriers confers a strong risk for Alzheimer’s disease.
A number of years ago, the team created a 3D model of human brain tissue to study the link between TBI, the viruses, and dementia. The model is 6 mm wide, shaped like a donut, and made of a spongy material of silk protein and collagen saturated with neural stem cells. The cells mature into neurons, communicate with each other, and form a network that mimics the brain environment.
In an earlier study using the model quiescently infected with HSV-1, Cairns and colleagues found that subsequent exposure to VZV created the inflammatory conditions that led to reactivation of HSV-1.
This led them to wonder what would happen if they subjected the brain tissue model to a physical disruption akin to a concussion. Would HSV-1 wake up and start the process of neurodegeneration?
To investigate, they examined the effects of one or more controlled blows to the 3D human brain tissue model in the absence or presence of quiescent HSV-1 infection.
After repeated, mild controlled blows, researchers found that the latently infected 3D brain tissue displayed reactivated HSV-1 and the production and accumulation of amyloid beta and p-tau — which promotes neurodegeneration. The blows also activated gliosis, which is associated with destructive neuroinflammation.
These effects are collectively associated with Alzheimer’s disease, dementia, and chronic traumatic encephalopathy, they pointed out, and were increased with additional injury but were absent in tissue not infected with HSV-1.
“These data suggest that HSV-1 in the brain is pivotal in increasing the risk of Alzheimer’s disease, as other recent studies using cerebral organoids have suggested,” the researchers wrote.
They propose that following brain injury, “whether by infection or mechanical damage, the resulting inflammation induces HSV-1 reactivation in the brain leading to the development of Alzheimer’s disease/dementia and that HSV-1 is a major cause of the disease, especially in APOE4 carriers.”
Future studies should investigate “possible ways of mitigating or stopping the damage caused by head injury, thereby reducing subsequent development of Alzheimer’s disease by implementing efforts to prevent the reactivation of virus in brain such as anti-inflammatory and/or antiviral treatment post-injury,” researchers suggested.
Outside Experts Weigh in
Several outside experts offered perspective on the study in a statement from the UK nonprofit Science Media Centre.
Tara Spires-Jones, PhD, president of the British Neuroscience Association and group leader at the UK Dementia Research Institute, London, England, said that, while the study is interesting, there are limitations.
“The increase in Alzheimer’s-like brain changes in these latent virus-containing cells subjected to injury does not resemble the pathology that is found in the brain of people with Alzheimer’s disease,” Spires-Jones noted.
“These experiments were also in cells grown in artificial conditions without important Alzheimer’s-related factors such as age and blood vessel changes. Finally, these experiments were repeated in a small number of experimental replicates (three times per experiment), so these results will need to be confirmed in more relevant biological systems with larger studies to be sure there is a biological link between latent herpes simplex virus type 1, brain injury, and Alzheimer’s pathology,” Spires-Jones cautioned.
Robert Howard, MD, MRCPsych, University College London (UCL) Division of Psychiatry, said the study suggests a possible mechanism for the association between HSV-1, brain injury, and Alzheimer’s disease.
“However, as so often in science, it is very important to bear in mind that association does not mean causation. Much more research will be needed before this can be seriously considered a plausible mechanism for the development of dementia,” Howard cautioned.
“Avoidance of brain injuries, such as those encountered in some contact sports, is already known to be an important way to prevent dementia, and I’m unconvinced that this reflects anything more complicated than mechanical damage causing death of brain cells,” he added.
Jennifer Pocock, PhD, with UCL Queen Square Institute of Neurology, noted that the role of microglia, which are activated by mild and repetitive TBI, isn’t addressed in the study.
“This paper seems to suggest that only astrocytes contribute to the reported neuroinflammation in brain tissue. Also, the inclusion of APOE3/4 is not clearly defined. Because of this, the findings are likely to represent an over interpretation for the ‘real world’ as the inclusion of microglia may negate or accentuate them, depending on the severity of the TBI,” Pocock said.
The study was funded by the US Army Research Office and Department of Defense. The authors have declared no relevant conflicts of interest. Spires-Jones and Howard had no relevant disclosures related to this study. Pocock has received research funding from AstraZeneca and Daiichi Sankyo.
A version of this article appeared on Medscape.com.
FROM SCIENCE SIGNALING
KRAS Mutations Linked to Varied Treatment Outcomes in Metastatic Pancreatic Cancer
TOPLINE:
METHODOLOGY:
- Researchers conducted a retrospective cohort study analyzing deidentified clinical data from 2,433 patients with metastatic pancreatic ductal adenocarcinoma (PDAC), diagnosed between February 2010 and September 2022.
- They assessed the association of KRAS mutations in metastatic PDAC with the clinical outcomes and responses to first-line chemotherapy regimens.
- Data originated from approximately 280 US cancer clinics, encompassing about 800 sites of care, with comprehensive genomic profiling performed on all patients.
- Analysis focused on median overall survival (OS) and time to next treatment (TTNT) across different KRAS mutation groups, using multivariate Cox proportional hazards models.
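The endpoints above are standard time-to-event quantities. As an illustration of how a median OS is read off survival data — a toy sketch only, with hypothetical follow-up times; the study itself used multivariate Cox models on real clinical data — a minimal Kaplan-Meier estimator looks like this:

```python
# Minimal Kaplan-Meier sketch with hypothetical data (illustrative only).

def kaplan_meier(times, events):
    """Return (time, survival) steps for right-censored data.
    times: follow-up in months; events: 1 = death observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_t  # deaths and censored patients leave the risk set
    return curve

def median_survival(curve):
    """First time point at which survival drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached during follow-up

# Hypothetical cohort of 10 patients
times  = [2, 4, 4, 6, 8, 10, 12, 14, 15, 20]
events = [1, 1, 0, 1, 1, 1,  0,  1,  1,  0]
curve = kaplan_meier(times, events)
print(median_survival(curve))  # → 10
```

The Cox models in the study go a step further, comparing hazards between KRAS groups while adjusting for covariates, but the underlying survival curves are of this form.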
TAKEAWAY:
- Patients with KRAS G12D and G12V mutations showed significantly higher risk for disease progression than those with KRAS wild-type tumors (hazard ratio [HR], 1.15; 95% CI, 1.04-1.29; P = .009 for G12D; HR, 1.16; 95% CI, 1.04-1.30; P = .01 for G12V).
- KRAS G12R mutations were associated with the longest median OS at 13.2 months (95% CI, 10.6-15.2) and longest median TTNT at 6.0 months (95% CI, 5.2-6.6).
- FOLFIRINOX was associated with better outcomes than gemcitabine-based therapies across all patients, with gemcitabine-based regimens carrying a higher risk for treatment progression (HR, 1.19; 95% CI, 1.09-1.29; P < .001) and death (HR, 1.18; 95% CI, 1.07-1.29; P < .001).
- Specifically, when FOLFIRINOX was used as first-line treatment in patients with KRAS G12D and G12V mutations, the therapy was associated with improved TTNT and OS vs gemcitabine with or without nab-paclitaxel.
IN PRACTICE:
“In its totality, these data set a benchmark for future studies on KRAS inhibitors for specific KRAS variants and highlights the groups for which treatment combinations may ultimately be necessary,” the authors concluded.
SOURCE:
The study was led by Carter Norton, Huntsman Cancer Institute in Salt Lake City, Utah. It was published online on January 7 in JAMA Network Open.
LIMITATIONS:
According to the authors, the study’s limitations include the heterogeneity of clinical data collected retrospectively, which is subject to residual confounding. The sample size was limited for certain mutational groups, particularly KRAS G12C, leading to limited statistical power. Additionally, the detection rate of genomic alterations by commercially available assays may be affected by the high stromal content and low cellularity characteristic of PDAC.
DISCLOSURES:
The study was supported by Cancer Center Support grant P30CA042014 from the National Institutes of Health. Heloisa P. Soares, MD, PhD, one of the study authors, disclosed receiving consulting fees from Ipsen, Exelixis Inc, BMS, Novartis AG, AstraZeneca, and TerSera Therapeutics LLC and symposium speaker fees from ITM Radiopharma outside the submitted work. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Can Glucarpidase Help Reverse Methotrexate Kidney Damage?
TOPLINE:
METHODOLOGY:
- Researchers conducted a multicenter cohort study involving 708 adults with methotrexate-associated acute kidney injury from 28 cancer centers across the United States.
- Analysis utilized a sequential target trial emulation framework to compare outcomes between 209 patients who received glucarpidase within 4 days of methotrexate initiation and 499 patients who did not.
- The primary endpoint was kidney recovery at hospital discharge, defined as survival with serum creatinine < 1.5-fold baseline without dialysis dependence.
- Secondary endpoints included time-to-kidney recovery, neutropenia and transaminitis on day 7, and time-to-death.
TAKEAWAY:
- Glucarpidase administration was associated with higher odds of kidney recovery at hospital discharge (adjusted odds ratio [aOR], 2.70; 95% CI, 1.69-4.31) and faster time-to-kidney recovery (adjusted hazard ratio [aHR], 1.88; 95% CI, 1.18-3.33).
- Treatment with glucarpidase reduced the risk for grade ≥ 2 neutropenia (aOR, 0.50; 95% CI, 0.28-0.91) and grade ≥ 2 transaminitis (aOR, 0.31; 95% CI, 0.13-0.77) on day 7.
- Female patients showed greater benefit from glucarpidase treatment than male patients (P = .02 for interaction).
- No significant difference was observed in time-to-death between glucarpidase-treated and glucarpidase-untreated patients (aHR, 0.76; 95% CI, 0.49-1.18).
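Because kidney recovery is a common outcome, the adjusted odds ratio of 2.70 overstates the effect on the risk scale. A hedged sketch using the well-known Zhang-Yu approximation (the 50% baseline recovery rate here is hypothetical, not from the study):

```python
def or_to_rr(odds_ratio, baseline_risk):
    """Approximate a risk ratio from an odds ratio (Zhang-Yu formula)."""
    return odds_ratio / ((1 - baseline_risk) + baseline_risk * odds_ratio)

# With an assumed 50% baseline recovery rate, the reported aOR of 2.70
# corresponds to a risk ratio of roughly 1.46 -- a more modest effect
# than the odds ratio alone suggests.
print(round(or_to_rr(2.70, 0.50), 2))  # → 1.46
```

The conversion matters mainly when the baseline outcome is frequent; for rare outcomes the odds ratio and risk ratio nearly coincide.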
IN PRACTICE:
“These data suggest glucarpidase may improve both renal and extrarenal outcomes in patients with MTX-AKI [methotrexate-acute kidney injury],” the authors of the study wrote.
SOURCE:
This study was led by Shruti Gupta, MD, MPH, and David E. Leaf, MD, MMSc, Brigham and Women’s Hospital in Boston, Massachusetts. It was published online in Blood.
LIMITATIONS:
According to the authors, residual confounding cannot be excluded despite adjustment for multiple variables. While glucarpidase-treated patients had similar distributions of most baseline characteristics, they showed greater severity of illness, including more comorbidities, concomitant nephrotoxic medications, higher 24-hour methotrexate levels, and more severe acute kidney injury. This study was limited to patients treated at large, US-based academic centers, potentially affecting generalizability to smaller hospitals or countries where glucarpidase is unavailable.
DISCLOSURES:
This study was funded by BTG International. Gupta disclosed ties with BTG International, Dana-Farber Cancer Institute’s Wong Foundation, Janssen, AstraZeneca, and the National Institute of Diabetes and Digestive and Kidney Diseases (K23DK125672). Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
TOPLINE:
In adults with methotrexate-associated acute kidney injury, glucarpidase treatment was associated with a greater likelihood of kidney recovery and lower risks for neutropenia and transaminitis, but not with improved survival.
METHODOLOGY:
- Researchers conducted a multicenter cohort study involving 708 adults with methotrexate-associated acute kidney injury from 28 cancer centers across the United States.
- Analysis utilized a sequential target trial emulation framework to compare outcomes between 209 patients who received glucarpidase within 4 days of methotrexate initiation and 499 patients who did not.
- The primary endpoint was kidney recovery at hospital discharge, defined as survival with serum creatinine < 1.5-fold baseline without dialysis dependence.
- Secondary endpoints included time-to-kidney recovery, neutropenia and transaminitis on day 7, and time-to-death.
TAKEAWAY:
- Glucarpidase administration was associated with higher odds of kidney recovery at hospital discharge (adjusted odds ratio [aOR], 2.70; 95% CI, 1.69-4.31) and faster time-to-kidney recovery (adjusted hazard ratio [aHR], 1.88; 95% CI, 1.18-3.33).
- Treatment with glucarpidase reduced the risk for grade ≥ 2 neutropenia (aOR, 0.50; 95% CI, 0.28-0.91) and grade ≥ 2 transaminitis (aOR, 0.31; 95% CI, 0.13-0.77) on day 7.
- Female patients showed greater benefit from glucarpidase treatment than male patients (P = .02 for interaction).
- No significant difference was observed in time-to-death between glucarpidase-treated and glucarpidase-untreated patients (aHR, 0.76; 95% CI, 0.49-1.18).
IN PRACTICE:
“These data suggest glucarpidase may improve both renal and extrarenal outcomes in patients with MTX-AKI [methotrexate-acute kidney injury],” the authors of the study wrote.
SOURCE:
This study was led by Shruti Gupta, MD, MPH, and David E. Leaf, MD, MMSc, Brigham and Women’s Hospital in Boston, Massachusetts. It was published online in Blood.
LIMITATIONS:
According to the authors, residual confounding cannot be excluded despite adjustment for multiple variables. While glucarpidase-treated patients had similar distributions of most baseline characteristics, they showed greater severity of illness, including more comorbidities, concomitant nephrotoxic medications, higher 24-hour methotrexate levels, and more severe acute kidney injury. This study was limited to patients treated at large, US-based academic centers, potentially affecting generalizability to smaller hospitals or countries where glucarpidase is unavailable.
DISCLOSURES:
This study was funded by BTG International. Gupta disclosed ties with BTG International, Dana-Farber Cancer Institute’s Wong Foundation, Janssen, AstraZeneca, and the National Institute of Diabetes and Digestive and Kidney Diseases (K23DK125672). Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
AI-Enhanced ECG Used to Predict Hypertension and Associated Risks
TOPLINE:
An artificial intelligence-enhanced ECG model, AIRE-HTN, predicted incident hypertension and associated cardiovascular risks, adding predictive value to traditional markers.
METHODOLOGY:
- Researchers conducted a development and external validation prognostic cohort study in a secondary care setting to identify individuals at risk for incident hypertension.
- They developed AIRE-HTN, which was trained on a derivation cohort from the Beth Israel Deaconess Medical Center in Boston, involving 1,163,401 ECGs from 189,539 patients (mean age, 57.7 years; 52.1% women; 64.5% White individuals).
- External validation was conducted on 65,610 ECGs from a UK-based volunteer cohort, drawn from an equal number of patients (mean age, 65.4 years; 51.5% women; 96.3% White individuals).
- Incident hypertension was evaluated in 19,423 individuals without hypertension from the medical center cohort and in 35,806 individuals without hypertension from the UK cohort.
TAKEAWAY:
- AIRE-HTN predicted incident hypertension with a C-index of 0.70 (95% CI, 0.69-0.71) in both cohorts. Those in the quartile with the highest AIRE-HTN scores had a fourfold increased risk for incident hypertension (P < .001).
- The model’s predictive accuracy was maintained in individuals without left ventricular hypertrophy and those with normal ECGs and baseline blood pressure, indicating its robustness.
- The model was significantly additive to traditional clinical markers, with a continuous net reclassification index of 0.44 for the medical center cohort and 0.32 for the UK cohort.
- AIRE-HTN was an independent predictor of cardiovascular death (hazard ratio per 1-SD increase in score [HR], 2.24), heart failure (HR, 2.60), myocardial infarction (HR, 3.13), ischemic stroke (HR, 1.23), and chronic kidney disease (HR, 1.89) in outpatients from the medical center cohort (all P < .001), with largely consistent findings in the UK cohort.
IN PRACTICE:
“Results of exploratory and phenotypic analyses suggest the biological plausibility of these findings. Enhanced predictability could influence surveillance programs and primordial prevention,” the authors wrote.
SOURCE:
The study was led by Arunashis Sau, PhD, and Joseph Barker, MRes, National Heart and Lung Institute, Imperial College London, England. It was published online on January 2, 2025, in JAMA Cardiology.
LIMITATIONS:
In one cohort, hypertension was defined using International Classification of Diseases codes, which may lack granularity and not align with contemporary guidelines. The findings were not validated against ambulatory monitoring standards. The performance of the model in different populations and clinical settings remains to be explored.
DISCLOSURES:
The authors acknowledged receiving support from Imperial’s British Heart Foundation Centre for Excellence Award and disclosed receiving support from the British Heart Foundation, the National Institute for Health Research Imperial College Biomedical Research Centre, the EJP RD Research Mobility Fellowship, the Medical Research Council, and the Sir Jules Thorn Charitable Trust. Some authors reported receiving grants, personal fees, advisory fees, or laboratory work fees outside the submitted work.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Does Watch and Wait Increase Distant Metastasis Risk in Rectal Cancer?
TOPLINE:
The new study highlights the importance of timely surgical intervention to improve distant metastases–free survival rates.
METHODOLOGY:
- Organ preservation has become an attractive alternative to surgery for patients with rectal cancer who achieve a clinical complete response after neoadjuvant therapy, with the risk for local regrowth after initial clinical complete response being around 25%-30%.
- The new study aimed to compare the risk for distant metastases between patients with local regrowth after watch and wait and patients with near-complete pathologic response managed by total mesorectal excision.
- A total of 508 patients with local regrowth were included from the International Watch & Wait Database, and 893 patients with near-complete pathologic response were included from the Spanish Rectal Cancer Project.
- The primary endpoint was distant metastases–free survival at 3 years from the decision to watch and wait or total mesorectal excision, and the secondary endpoints included possible risk factors associated with distant metastases.
TAKEAWAY:
- Patients with local regrowth had a significantly higher rate of distant metastases (rate, 22.8% vs 10.2%; P ≤ .001) than those with near-complete pathologic response managed by total mesorectal excision.
- Distant metastases–free survival at 3 years was significantly worse for patients with local regrowth (rate, 75% vs 87%; P < .001).
- Independent risk factors for distant metastases included local regrowth (vs total mesorectal excision at reassessment; P = .001), ypT3-4 status (P = .016), and ypN+ status (P = .001) at the time of surgery.
- Patients with local regrowth had worse distant metastases–free survival across all pathologic stages than those managed by total mesorectal excision.
IN PRACTICE:
“Patients with local regrowth appear to have a higher risk for subsequent distant metastases development than patients with near-complete pathologic response managed by total mesorectal excision at restaging irrespective of final pathology,” the authors wrote.
SOURCE:
This study was led by Laura M. Fernandez, MD, of the Champalimaud Foundation in Lisbon, Portugal. It was published online in Journal of Clinical Oncology.
LIMITATIONS:
This study’s limitations included the heterogeneity in defining clinical complete response and the decision to watch and wait across different institutions. The majority of patients did not receive total neoadjuvant therapy regimens, which may have affected the generalizability of the findings. The study had a considerable amount of follow-up losses, which could have introduced bias.
DISCLOSURES:
This study was supported by the European Society of Surgical Oncology, the Champalimaud Foundation, the Bas Mulder Award, the Alpe d’HuZes Foundation, the Dutch Cancer Society, the European Research Council Advanced Grant, and the National Institute of Health and Research Manchester Biomedical Research Centre. Fernandez disclosed receiving grants from Johnson & Johnson. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Rethink Fall Risk Assessment When Choosing Between Gabapentin and Duloxetine for Older Adults
TOPLINE:
Among older adults treated for neuropathic pain or fibromyalgia, gabapentin initiation was associated with fewer fall-related visits than duloxetine initiation, with a weighted cumulative incidence of 84.44 vs 158.21 per 1000 person-years at 180 days. Incident gabapentin use showed a 48% lower hazard of falls at 6-month follow-up, with no increase in severe fall-related events.
METHODOLOGY:
- Researchers conducted a new user, active comparator cohort study using IBM MarketScan Research Databases from January 2014 to December 2021.
- Analysis included 57,086 adults aged 65 years or older with postherpetic neuralgia, diabetic neuropathy, or fibromyalgia, excluding those with depression, anxiety, seizures, or cancer in the prior 365 days.
- The primary outcome measured the hazard of experiencing any fall-related visit in the 6 months after initiating gabapentin or duloxetine until treatment discontinuation.
- Secondary outcomes examined severe fall-related events, including falls with hip fractures or emergency department visits/hospitalizations associated with falls.
TAKEAWAY:
- The analytic cohort included 52,152 gabapentin users and 4934 duloxetine users, with a median follow-up duration of 30 days (interquartile range, 30-90 days).
- Weighted cumulative incidence of fall-related visits per 1000 person-years at 30, 90, and 180 days was 103.60, 90.44, and 84.44 for gabapentin users vs 203.43, 177.73, and 158.21 for duloxetine users.
- At 6-month follow-up, incident gabapentin users demonstrated lower hazard of falls (hazard ratio, 0.52; 95% CI, 0.43-0.64), with no difference in hazards of severe falls.
- Results remained consistent across sensitivity and subgroup analyses.
IN PRACTICE:
“One bioplausible explanation for our results is that gabapentin is a highly titratable medication and many in our cohort started on low doses. Alternatively, duloxetine is usually titrated only once or twice. Thus, although it may be that gabapentin is simply safer than duloxetine from a falls perspective, it may also be likely that we are measuring specific clinical scenarios, the peri-initiation and titration period, in which gabapentin may be less likely to cause falls than duloxetine,” the authors wrote.
SOURCE:
The study was led by Alexander Chaitoff, MD, MPH, Department of Internal Medicine, University of Michigan School of Medicine in Ann Arbor, and Center for Healthcare Delivery Sciences, Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School in Boston. It was published online in Annals of Internal Medicine.
LIMITATIONS:
Despite the design aimed at limiting confounding effects, the observational nature of the study introduces potential bias. Claims data may undercount falls for which patients do not seek care, though this limitation likely affects both medication groups equally. The commercial claims database includes only Medicare supplemental insurance beneficiaries, potentially limiting generalizability. Regional variations in prescribing patterns could not be accounted for in the analysis.
DISCLOSURES:
No funding was provided for this study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
TOPLINE:
Among older adults initiating treatment for postherpetic neuralgia, diabetic neuropathy, or fibromyalgia, gabapentin was associated with a lower risk for fall-related visits than duloxetine, with a weighted cumulative incidence of 84.44 vs 158.21 per 1000 person-years at 180 days. Incident gabapentin use showed a 48% lower hazard of falls at 6-month follow-up, with no increase in severe fall-related events.
METHODOLOGY:
- Researchers conducted a new user, active comparator cohort study using IBM MarketScan Research Databases from January 2014 to December 2021.
- Analysis included 57,086 adults aged 65 years or older with postherpetic neuralgia, diabetic neuropathy, or fibromyalgia, excluding those with depression, anxiety, seizures, or cancer in the prior 365 days.
- The primary outcome measured the hazard of experiencing any fall-related visit in the 6 months after initiating gabapentin or duloxetine until treatment discontinuation.
- Secondary outcomes examined severe fall-related events, including falls with hip fractures or emergency department visits/hospitalizations associated with falls.
TAKEAWAY:
- The analytic cohort included 52,152 gabapentin users and 4934 duloxetine users, with a median follow-up duration of 30 days (interquartile range, 30-90 days).
- Weighted cumulative incidence of fall-related visits per 1000 person-years at 30, 90, and 180 days was 103.60, 90.44, and 84.44 for gabapentin users vs 203.43, 177.73, and 158.21 for duloxetine users.
- At 6-month follow-up, incident gabapentin users demonstrated lower hazard of falls (hazard ratio, 0.52; 95% CI, 0.43-0.64), with no difference in hazards of severe falls.
- Results remained consistent across sensitivity and subgroup analyses.
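The "48% lower hazard" phrasing in the topline follows directly from the hazard ratio above; the snippet below is a simple illustration using only the figures reported in this summary (it is not study code):

```python
# Illustrative arithmetic using the figures reported above (not study code).

hr = 0.52  # hazard ratio for falls, gabapentin vs duloxetine
pct_lower_hazard = (1 - hr) * 100
print(f"{pct_lower_hazard:.0f}% lower hazard")  # 48% lower hazard

# Weighted cumulative incidence of fall-related visits
# per 1000 person-years at 180 days, by drug
gabapentin, duloxetine = 84.44, 158.21
relative_reduction = (duloxetine - gabapentin) / duloxetine * 100
print(f"{relative_reduction:.0f}% relative reduction at 180 days")
```

Note that the relative difference in cumulative incidence (about 47%) and the hazard-ratio-based figure (48%) are distinct quantities that happen to be similar here.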
IN PRACTICE:
“One bioplausible explanation for our results is that gabapentin is a highly titratable medication and many in our cohort started on low doses. Alternatively, duloxetine is usually titrated only once or twice. Thus, although it may be that gabapentin is simply safer than duloxetine from a falls perspective, it may also be likely that we are measuring specific clinical scenarios, the peri-initiation and titration period, in which gabapentin may be less likely to cause falls than duloxetine,” the authors wrote.
SOURCE:
The study was led by Alexander Chaitoff, MD, MPH, Department of Internal Medicine, University of Michigan School of Medicine in Ann Arbor, and Center for Healthcare Delivery Sciences, Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School in Boston. It was published online in Annals of Internal Medicine.
LIMITATIONS:
Despite the design aimed at limiting confounding effects, the observational nature of the study introduces potential bias. Claims data may undercount falls for which patients do not seek care, though this limitation likely affects both medication groups equally. The commercial claims database includes only Medicare supplemental insurance beneficiaries, potentially limiting generalizability. Regional variations in prescribing patterns could not be accounted for in the analysis.
DISCLOSURES:
No funding was provided for this study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Cost Barriers Influence Adherence to Asthma Care
TOPLINE:
Nearly one in six adults with asthma in the United States is nonadherent to medications due to costs, with younger patients, women, and those without insurance having an increased likelihood of being nonadherent.
METHODOLOGY:
- Researchers evaluated the prevalence and determinants of cost-related nonadherence (CRN) to medications among adults with asthma in the United States between 2011 and 2022. Data were obtained from the National Health Interview Survey (NHIS) conducted by the National Center for Health Statistics.
- The analysis included 30,793 adults with asthma from the NHIS, representing 8.1% of the US population.
- CRN was defined through three components: skipping medication doses, taking less medication, or delaying medication refills to save money over the past 12 months.
- CRN prevalence, factors associated with CRN, and asthma-related adverse events were analyzed.
TAKEAWAY:
- Overall, 17.8% of US adults with asthma reported CRN; 11.6% skipped medication, 12.4% took less medication, and 15.1% delayed refilling medications to save money.
- Patients aged > 60 years were less likely to report CRN than those aged 18-40 years and 41-60 years; women were more likely to report CRN to medications than men (both P < .01).
- Patients who were current or former smokers or had two or more comorbidities, no health insurance coverage, or a family income below 400% of the federal poverty level had an increased likelihood of reporting CRN.
- Compared with patients without CRN, those who reported CRN had almost double the odds of experiencing asthma attacks (adjusted odds ratio [aOR], 1.95; 95% CI, 1.78-2.13) and increased emergency room visits for asthma (aOR, 1.63; 95% CI, 1.44-1.84).
IN PRACTICE:
“The present study reinforces the recommendation that patients with asthma are best controlled when they are prescribed and take medications that are strongly recommended by clinical guidelines,” the authors wrote. “For healthcare providers, it is imperative to monitor patient’s adherence to medications to prevent asthma exacerbations, especially when treating patients with financial concerns,” they further added.
SOURCE:
The study was led by Chun-Tse Hung, School of Pharmacy, College of Pharmacy, Taipei Medical University, Taipei, Taiwan. It was published online in Thorax.
LIMITATIONS:
The reliance on self-reported data introduced potential recall bias, and the absence of medical records may have led to misclassification of disease status. The study could not evaluate the effect of asthma severity due to limited measures in the NHIS. Some important variables reflecting economic indicators, such as the consumer price index, could not be included due to limited measures in the NHIS.
DISCLOSURES:
No disclosures or conflicts of interest statements were provided in the study.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Choline Alfoscerate Has Modest Benefits on Cognition in Type 2 Diabetes
TOPLINE:
In older adults with type 2 diabetes (T2D) and mild cognitive impairment, 12 months of treatment with choline alfoscerate modestly improved Mini-Mental State Examination scores and physical health scores compared with placebo, with no serious adverse events reported.
METHODOLOGY:
- Prior studies have demonstrated the efficacy of choline alfoscerate, a phospholipid metabolite naturally found in the brain, in improving cognitive function in patients with neurodegenerative conditions, but its use in patients with T2D remains unexplored.
- Researchers at a hospital in Korea enrolled patients aged over 60 years with T2D and mild cognitive impairment (assessed by Mini-Mental State Examination [MMSE] scores of 25-28), who were randomly assigned to receive either 1200 mg/d choline alfoscerate or placebo for 12 months.
- The primary efficacy endpoint was the change in the total MMSE score from baseline to month 6; secondary efficacy endpoints included changes in cognitive performance and quality of life, measured by the 36-Item Short Form Health Survey, at 6 and 12 months.
TAKEAWAY:
- Thirty-six patients (average age, 71.8 years; 25% men) with an average diabetes duration of 12.1 years were randomized to receive choline alfoscerate (n = 18) or placebo (n = 18).
- At 6 months, there was a modest but nonsignificant improvement in MMSE score with choline alfoscerate vs placebo (P = .059).
- After 12 months, the choline alfoscerate group showed an increase in the MMSE score from 26.2 to 27.1, whereas the placebo group showed a slight decline from 26.6 to 25.8, which represented a significant improvement for the treatment arm (P < .001).
- Physical health scores were significantly superior in the choline alfoscerate group vs the placebo group at 6 months (P = .014), with similar observations at 12 months (P = .039).
- No serious adverse events were reported in either group.
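The 12-month MMSE result above can be restated as a between-group difference in change scores; the snippet below is an illustrative calculation with the figures reported in this summary (not an analysis of the trial data):

```python
# Illustrative arithmetic using the MMSE figures reported above (not trial code).

treat_baseline, treat_12mo = 26.2, 27.1     # choline alfoscerate group
placebo_baseline, placebo_12mo = 26.6, 25.8  # placebo group

treat_change = treat_12mo - treat_baseline        # +0.9 points
placebo_change = placebo_12mo - placebo_baseline  # -0.8 points
between_group = treat_change - placebo_change     # 1.7-point difference
print(f"Between-group difference in MMSE change: {between_group:.1f} points")
```

This framing makes clear that the treatment effect reflects both a small gain in the treatment arm and a small decline in the placebo arm.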
IN PRACTICE:
“Choline alfoscerate could be considered an anticipated therapeutic option to preserve cognitive function and subsequently physical health in elderly patients with diabetes and mild cognitive impairment,” the authors wrote.
SOURCE:
The study was led by Minji Sohn, PhD, Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea, and published online in Diabetes, Obesity and Metabolism.
LIMITATIONS:
The study population primarily comprised non–insulin-dependent patients with controlled glycemia and minimal comorbidities, which may have limited the applicability of the results to a broader population. The small sample size may have contributed to the lack of statistical significance in some outcomes. Moreover, the 12-month study duration may not have been sufficient to investigate the long-term effects of choline alfoscerate.
DISCLOSURES:
This study was funded by Daewoong Pharmaceutical through subcontracting with Seoul National University Bundang Hospital. The authors declared no competing interests.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.