Low Serum Caffeine Level Could Indicate Early Parkinson’s Disease
Low serum caffeine and caffeine metabolite levels after an overnight fast may be a sensitive way to detect Parkinson’s disease, according to the results of a case–control study published online ahead of print January 3 in Neurology.
In the study, levels of caffeine and its metabolites were lower in patients with Parkinson’s disease and motor dysfunction, compared with those without motor dysfunction. The investigators detected no differences in serum levels of caffeine metabolites between patients with mild Parkinson’s disease and those with severe Parkinson’s disease, said Motoki Fujimaki, MD, of Juntendo University School of Medicine in Tokyo, and colleagues.
A Single-Center Study
Previous research had shown that people who drank four or more cups of coffee per day had a more than fivefold lower risk of developing Parkinson’s disease. In mouse models of Parkinson’s disease, caffeine and two of its metabolites had a neuroprotective effect. Those results suggested that serum caffeine might be useful as a blood marker for Parkinson’s disease.
To test that idea, Dr. Fujimaki and associates recruited 31 healthy controls (18 women) and 108 patients with Parkinson’s disease but no dementia (50 women). The control group’s mean caffeine intake of 115.81 mg/day was similar to that of patients with Parkinson’s disease (107.50 mg/day).
Serum caffeine levels measured after an overnight fast showed that a cutoff of 33.04 pmol/10 µL identified Parkinson’s disease with an area under the curve (AUC) of 0.78 (sensitivity, 76.9%; specificity, 74.2%). Inclusion of the primary caffeine metabolites theophylline, theobromine, and paraxanthine increased the AUC to 0.87. When the researchers included all 11 measurable metabolites, the AUC increased further to 0.98.
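To illustrate how a single biomarker cutoff translates into a sensitivity, a specificity, and an AUC, here is a minimal Python sketch. Only the 33.04 pmol/10 µL cutoff comes from the article; the serum levels, group means, and spreads are synthetic assumptions, not the study’s data.

```python
# Minimal sketch: sensitivity/specificity at one cutoff, plus AUC over all
# cutoffs. Serum levels are SYNTHETIC; only the 33.04 cutoff comes from the
# article. Units mimic the reported pmol/10 uL scale.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
controls = rng.normal(loc=80.0, scale=30.0, size=31)  # 31 healthy controls
cases = rng.normal(loc=25.0, scale=15.0, size=108)    # 108 Parkinson's patients

cutoff = 33.04  # pmol/10 uL, as reported
sensitivity = np.mean(cases < cutoff)      # cases correctly flagged (low level)
specificity = np.mean(controls >= cutoff)  # controls correctly cleared

# Low levels indicate disease, so negate levels to get a "higher = sicker" score.
levels = np.concatenate([controls, cases])
truth = np.concatenate([np.zeros(31), np.ones(108)])
auc = roc_auc_score(truth, -levels)

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, AUC={auc:.2f}")
```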
Genetic analyses revealed no significant differences in the frequencies of caffeine metabolism–associated genetic variants between patients and controls.
The study was limited by the fact that it was conducted at a single university hospital, and the patient population did not include many severe cases. The association should also be studied in other Parkinson’s disease patient populations, according to the authors.
Did Treatment Effects Influence the Findings?
A key question raised by the study is what caused the decrease in serum caffeine concentrations in patients with Parkinson’s disease, said David G. Munoz, MD, of the Department of Laboratory Medicine and Pathobiology at the University of Toronto, and Shinsuke Fujioka, MD, of the Department of Neurology at Fukuoka University in Japan, in an accompanying editorial. Almost all of the patients were receiving treatment, which could have affected serum levels, they added. The researchers looked for, but did not find, an association between serum caffeine metabolite levels and levodopa-equivalent doses.
“The validity of the study depends on whether caffeine metabolism may be affected by treatment,” said Drs. Munoz and Fujioka. “To demonstrate the utility of caffeine metabolites unequivocally, a future study will have to reproduce these results in patients with untreated Parkinson’s disease or subjects at high risk of Parkinson’s disease, such as those with prodromal signs of Parkinson’s disease.”
—Jim Kling
Suggested Reading
Fujimaki M, Saiki S, Li Y, et al. Serum caffeine and metabolites are reliable biomarkers of early Parkinson disease. Neurology. 2018 Jan 3 [Epub ahead of print].
Munoz DG, Fujioka S. Caffeine and Parkinson disease: A possible diagnostic and pathogenic breakthrough. Neurology. 2018 Jan 3 [Epub ahead of print].
Embracing Life’s Simple 7 slashes PAD risk
ANAHEIM, CALIF. – Adherence to the American Heart Association’s widely publicized “Life’s Simple 7” program, which addresses key modifiable cardiovascular health factors, is associated with a substantially lower risk of developing peripheral arterial disease, Parveen Garg, MD, said at the American Heart Association scientific sessions.
Until this analysis of data from the landmark ARIC (Atherosclerosis Risk in Communities) study, the relationship between Life’s Simple 7 and peripheral arterial disease (PAD) had not been studied. The relationship is worth examining: more than 8 million Americans have PAD, and nearly 40% of them do not have concomitant coronary or cerebrovascular disease, which raised the question of whether Life’s Simple 7 applies to PAD risk, noted Dr. Garg of the University of Southern California, Los Angeles.
ARIC is a National Heart, Lung, and Blood Institute–sponsored prospective study of nearly 16,000 black or white individuals who were middle-aged at enrollment and have been followed for more than 2 decades. Dr. Garg’s analysis focused on 12,865 participants who were free of CHD, heart failure, prior stroke, and PAD at baseline, and have been followed for a median of 24 years.
As background, the metrics for Life’s Simple 7 consist of total cholesterol, blood pressure, blood glucose, smoking status, body mass index, physical activity, and adherence to a healthy diet score. Each element can be scored 2 points for ideal, 1 for intermediate, and 0 for poor. The composite Life’s Simple 7 score is rated optimal at 10-14 points, average at 5-9, and inadequate at 0-4.
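The scoring scheme just described is simple enough to express directly in code. The following Python sketch implements the 0-2 points per metric and the composite categories exactly as defined above; the metric names and the example ratings are illustrative placeholders, not ARIC data.

```python
# Sketch of the Life's Simple 7 scoring described above. Metric names and
# the example ratings are illustrative placeholders, not ARIC data.
POINTS = {"ideal": 2, "intermediate": 1, "poor": 0}
METRICS = ["total_cholesterol", "blood_pressure", "blood_glucose",
           "smoking_status", "body_mass_index", "physical_activity", "diet"]

def simple7_score(ratings: dict) -> int:
    """Sum 0-2 points across the seven metrics (composite range 0-14)."""
    return sum(POINTS[ratings[m]] for m in METRICS)

def simple7_category(score: int) -> str:
    """Map the composite score to the study's categories."""
    if score >= 10:
        return "optimal"     # 10-14 points
    if score >= 5:
        return "average"     # 5-9 points
    return "inadequate"      # 0-4 points

example = {m: "intermediate" for m in METRICS}  # hypothetical patient
example["smoking_status"] = "ideal"
score = simple7_score(example)
print(score, simple7_category(score))  # 8 average
```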
During follow-up, 3.4% of ARIC participants developed PAD sufficiently severe to involve hospitalization. The incidence rate was 5.2 cases per 1,000 person-years among the 1,008 subjects categorized as having an inadequate Life’s Simple 7 score, 1.1 cases per 1,000 person-years among the 8,395 people in the average category, and just 0.4 cases per 1,000 person-years among the 3,462 individuals in the optimal Life’s Simple 7 group.
Compared with subjects in the inadequate category, those in the average group were 56% less likely to develop PAD. Those in the optimal Life’s Simple 7 category had an 86% reduction in risk.
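As a rough plausibility check on the numbers above, the following Python sketch converts the reported rates back into approximate case counts and crude rate ratios. It assumes person-years can be approximated as group size times the 24-year median follow-up, and its crude reductions ignore the adjustments behind the reported 56% and 86% figures.

```python
# Rough plausibility check, NOT the study's analysis: approximate person-years
# as group size x 24-year median follow-up, then back out case counts and
# crude rate ratios from the reported incidence rates.
groups = {  # category: (n subjects, rate per 1,000 person-years)
    "inadequate": (1_008, 5.2),
    "average": (8_395, 1.1),
    "optimal": (3_462, 0.4),
}
FOLLOW_UP_YEARS = 24  # median follow-up; a crude stand-in for true person-years

base_rate = groups["inadequate"][1]
for name, (n, rate) in groups.items():
    person_years = n * FOLLOW_UP_YEARS
    implied_cases = rate / 1_000 * person_years
    # Unadjusted ratio, so it exceeds the reported 56%/86% adjusted reductions.
    crude_reduction = 1 - rate / base_rate
    print(f"{name}: ~{implied_cases:.0f} implied cases, "
          f"crude reduction {crude_reduction:.0%}")
```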
In a multivariate analysis fully adjusted for demographics, alcohol consumption, aspirin use, study site, left ventricular hypertrophy, and other potential confounders, each Life’s Simple 7 component scored at the ideal level was associated with a 28% reduction in the risk of incident PAD.
The inverse relationship between Life’s Simple 7 score and PAD risk was stronger in women than men. However, the association didn’t differ by race.
Dr. Garg noted that his study undoubtedly underestimates the true incidence of PAD in the ARIC population, since a hospital diagnosis was required. Also, to date he and his coinvestigators have only analyzed the results in terms of baseline Life’s Simple 7 score. It would be useful to also document the impact of change in the score over time.
Session moderator David C. Goff Jr., MD, observed, “This is very consistent with evidence in CHD that people who are in ideal cardiovascular health status have about an 80%-90% lower risk of cardiovascular mortality and a 70%-80% reduction in risk of total mortality compared with people who are in poor cardiovascular health status.”
“This study really does provide additional evidence that if we could get more people into the ideal cardiovascular health range, we’d probably see less atherosclerotic cardiovascular disease in general,” added Dr. Goff, who is director of the division of cardiovascular sciences at the National Heart, Lung, and Blood Institute.
Dr. Garg reported having no financial conflicts of interest.
REPORTING FROM THE AHA SCIENTIFIC SESSIONS
Key clinical point: The Life’s Simple 7 public health program points the way to reduced risk of PAD.
Major finding: Being in the optimal category of cardiovascular health by the American Heart Association’s Life’s Simple 7 metric was associated with an 86% lower risk of developing PAD, compared with being in poor cardiovascular health.
Study details: This biracial prospective observational study includes nearly 16,000 white and black Americans.
Disclosures: The ARIC study is funded by the NHLBI. The presenter reported having no financial conflicts.
Can Walking Protect Cognition in Amyloid-Positive Older Adults?
BOSTON—Walking appears to moderate cognitive decline in people with elevated levels of amyloid in the brain, according to a four-year observational study described at the Clinical Trials on Alzheimer’s Disease conference.
Among a group of cognitively normal older adults with beta-amyloid brain plaques, those who walked the most had significantly less decline in memory and thinking than those who walked little, said Dylan Kirn, MPH. Walking was not associated with biomarkers of Alzheimer’s disease such as brain glucose utilization, amyloid accumulation, or hippocampal volume, but it was associated with significantly better scores over time on a composite cognitive measure.
“We should be careful in interpreting these data, because this is an observational cohort, and we cannot make claims regarding causality or the mechanism by which physical activity may be influencing cognitive decline,” said Mr. Kirn, Clinical Research Project Manager at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital in Boston. “But I find these results interesting and novel, and I think they support further investigation.”
The Harvard Aging Brain Study
The research is part of the ongoing Harvard Aging Brain Study, a longitudinal investigation of cognitively normal elderly individuals that seeks to identify the earliest changes in molecular, functional, and structural imaging markers signaling a transition from normal cognition to progressive cognitive decline and preclinical Alzheimer’s disease. The walking study included 255 subjects with a mean age of 73. Participants were highly educated, with a mean of 16 years’ schooling. About 24% of the population was amyloid-positive on PET imaging. All participants were cognitively normal, with a Clinical Dementia Rating scale score of 0. Activity was measured at baseline with a pedometer worn for seven consecutive days; only people who walked at least 100 steps per day were included in the analysis.
In addition to amyloid PET imaging, subjects underwent an 18F-fluorodeoxyglucose (FDG) PET scan to assess brain glucose utilization, and MRI to measure hippocampal volume changes and white matter hyperintensities (WMHs). Changes in all of these biomarkers can herald the onset of Alzheimer’s disease.
The study’s primary outcome was the relationship between physical activity, as measured by number of walking steps per day, and changes on the Preclinical Alzheimer’s Cognitive Composite (PACC) test. This relatively new cognitive scale is gaining increasing use in clinical trials. The PACC is a composite of the Digit Symbol Substitution Test score from the Wechsler Adult Intelligence Scale–Revised, the Mini-Mental State Examination, the Total Recall score from the Free and Cued Selective Reminding Test, and the Delayed Recall score on the Logical Memory IIa subtest of the Wechsler Memory Scale. It correlates well with amyloid accumulation in the brain, said Mr. Kirn.
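Composite scales of this kind are typically built by z-scoring each component against a reference distribution and averaging. The Python sketch below shows that general recipe; the anchoring choice (sample baseline mean and SD) and the example scores are assumptions for illustration, since the study’s exact PACC computation is not described here.

```python
# General recipe for a PACC-style composite: z-score each component test
# against a reference, then average. The reference (sample baseline mean/SD)
# and the scores below are assumptions for illustration.
import numpy as np

def pacc_composite(scores, ref_mean, ref_sd):
    """scores: (n_subjects, 4) array; returns one composite z per subject."""
    z = (np.asarray(scores, float) - ref_mean) / ref_sd
    return z.mean(axis=1)  # equal weighting of the four components

# Columns: Digit Symbol, MMSE, FCSRT Total Recall, Logical Memory Delayed Recall
baseline = np.array([[45, 29, 46, 12],
                     [50, 30, 48, 14],
                     [38, 28, 44, 10]], dtype=float)
mu = baseline.mean(axis=0)
sd = baseline.std(axis=0, ddof=1)
print(pacc_composite(baseline, mu, sd))  # ~0 on average at baseline
```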
The cohort was followed for as long as six years (median follow-up, four years), and PACC scores were calculated annually. The investigators examined the relationship between walking at baseline and PACC decline during the study period in two multivariate models. One model controlled for age, sex, and years of education, and the second controlled for those variables plus cortical WMHs, bilateral hippocampal volume (HV), and FDG PET in brain regions typically affected by Alzheimer’s disease.
The investigators sorted physical activity into tertiles by the average number of steps per day over the seven-day measuring period. The middle tertile was the mean (ie, 5,616 steps/day), the top tertile was one standard deviation above the mean (ie, 8,482 steps/day), and the bottom tertile was one standard deviation below the mean (ie, 2,751 steps/day). Amyloid-positive patients were further categorized as having high or low brain amyloid load.
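For illustration, a tertile split of step counts can be done with simple quantile cuts, as in the Python sketch below. The step data are synthetic, and plain tertile cuts will only approximate the mean and ±1 SD anchors quoted above.

```python
# Sketch of a tertile split on steps/day. The step counts are SYNTHETIC;
# plain tertile cuts only approximate the mean (5,616) and +/- 1 SD
# (2,751 and 8,482 steps/day) anchors quoted above.
import numpy as np

rng = np.random.default_rng(1)
steps = rng.normal(5_616, 2_900, size=255).clip(min=100)  # >=100 steps/day rule

t1, t2 = np.quantile(steps, [1 / 3, 2 / 3])
tertile = np.digitize(steps, [t1, t2])  # 0 = low, 1 = middle, 2 = high
print(f"cut points: {t1:.0f} and {t2:.0f} steps/day; "
      f"group sizes: {np.bincount(tertile)}")
```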
No Relationship Between Activity and Biomarkers
The researchers found no significant relationships between any of the biomarkers and any level of physical activity in either of the analyses, said Mr. Kirn. When looking at the time-linked changes in the PACC, however, they found significant differences. Subjects who walked at least the mean number of steps per day were much more likely to maintain a stable cognitive score, while those who walked the fewest steps declined by about a quarter of a point on the PACC. The difference in decline between the high-activity and low-activity subjects was statistically significant, even when the investigators controlled for amyloid burden and other Alzheimer’s disease biomarkers.
The level of physical activity at baseline was a particularly strong predictor of cognitive health among amyloid-positive subjects. Those in the high-activity group maintained a steady score on the PACC. Those in the mean activity group declined slightly, and those in the low activity group showed a sharp decline, losing almost a full point on the PACC by the end of follow-up.
In the amyloid-negative group, the researchers found no association between cognition and activity. PACC scores improved for all groups during the study period, which probably reflects a practice effect, said Mr. Kirn.
“We observed that physical activity was significantly predictive of cognitive decline in high-amyloid participants, but not in low-amyloid participants,” he said. “Individuals with high amyloid and low physical activity at baseline had the steepest decline in cognition over time. But in those with high amyloid and high physical activity at baseline, we did not see a tremendous amount of decline.”
The study suggests that pedometers may help stratify patients for clinical trials or assess cognitive risk in elderly subjects. “Most studies that have looked at physical activity and dementia use a self-reported activity level, so the results have been varied,” said Mr. Kirn. “These findings support consideration of objectively measured physical activity in clinical research, and perhaps in stratification for risk of cognitive decline.”
—Michele G. Sullivan
Psychiatric issues common among hepatitis C inpatients
Adult inpatients with hepatitis C are much more likely to have mental health comorbidities, compared with those who do not have hepatitis C, according to the Agency for Healthcare Research and Quality.
All four of the comorbidities examined skewed younger, and the oldest patients (aged 73 years and older) with hepatitis C presented with each condition at about the same rate as the non–hepatitis C population. The proportion of hepatitis C–related inpatient stays involving alcohol abuse, for example, was 20.5% for patients aged 18-51 years, 23.3% for those aged 52-72 years, and 5.8% for those aged 73 years and older, according to data from the National Inpatient Sample, which includes more than 95% of all discharges from community (short-term, nonfederal, nonrehabilitation) hospitals in the United States.
ROBOT trial compares surgical approaches to esophagectomy
SAN FRANCISCO – Patients undergoing esophagectomy had less morbidity and pain, and similarly good oncologic outcomes, when the surgery was performed by robot-assisted laparoscopy instead of by the open technique, a phase 3 clinical trial has found.
Investigators of the ROBOT (Robot-assisted Thoracolaparoscopic Esophagectomy vs. Open Transthoracic Esophagectomy) trial, led by Pieter C. van der Sluis, MD, a surgeon at the University Medical Center Utrecht, the Netherlands, randomized 112 patients with resectable esophageal cancer to open transthoracic esophagectomy – considered to be the gold standard – or robot-assisted minimally invasive thoracolaparoscopic esophagectomy.
“Robot-assisted minimally invasive thoracolaparoscopic esophagectomy versus open transthoracic esophagectomy improves postoperative outcome. There were no differences in oncologic outcomes, and our oncologic outcomes were in concordance with the highest standards nowadays,” Dr. van der Sluis summarized. “This trial provides evidence for the minimally invasive approach over the open approach, and especially the robot-assisted minimally invasive esophagectomy.”
The investigators will report a full cost comparison separately. “We see that costs are lower, though not significantly lower, with the robot,” he said, giving a preview. “We are going to show that the real costs of the operation are in the complications. When you have complications that involve the ICU and reoperations, some patients are in the hospital for months after the surgery. So by investing a little extra money in the surgical procedure, you might actually get it back by reducing the complications.”
When asked by an attendee why the trial did not compare robotic esophagectomy with thoracoscopic esophagectomy, Dr. van der Sluis noted that such comparison is complicated by many factors; for example, the challenge of finding surgeons skilled in both techniques, and the likelihood of small differences in outcomes, potentially requiring enrollment of thousands of patients to have adequate study power. “We concluded that such a trial might not be feasible,” he said.
Parsing the findings
“The complication rates [in this trial] are very high in the robotic and open groups, much higher than reported in some well-controlled prospective and retrospective studies,” commented session attendee Kenneth Meredith, MD, FACS, professor at Florida State University, Sarasota, and director of gastrointestinal oncology, Sarasota Memorial Institute for Cancer Care.
He wondered how extensive the investigators’ experience with robotics was and how many cases they had done on their learning curve. Data from his group suggest that surgeons must perform 29 cases of robotic esophagectomy before the complication rate drops (Dis Esophagus. 2017;30:1-7).
“That’s more than half of the patients in the robotic arm of their study,” he noted in an interview. “I find this needs to be explained. If the authors are past their learning curve, why were the complication rates so high?” Additionally, the 80% complication rate in the open group “is among the highest I’ve seen in many years.”
The lack of significant differences in complete resection rate and in lymph node harvest was also surprising, as he and other robotics users have found that this technique can improve these outcomes, Dr. Meredith added. This could likewise be a learning curve phenomenon.
Although ROBOT’s comparison of robotic with open esophagectomy is relevant, “it would have been more relevant to compare robotic to minimally invasive esophagectomy [MIE],” he maintained, as MIE has been shown to improve outcomes relative to open surgery (Lancet. 2012;379:1887-92).
“There are many high-volume centers in MIE but not necessarily robotics. The two are often mutually exclusive, and a multicenter trial in which each center performs high volumes of their respective technique, rather than mandating each center perform an operation they may not be facile in,” would be practical, Dr. Meredith concluded.
Study details
“The main objective in our trial was to reduce surgical trauma and reduce the percentage of complications,” Dr. van der Sluis told attendees of the symposium, sponsored by the American Gastroenterological Association, the American Society for Clinical Oncology, the American Society for Radiation Oncology, and the Society of Surgical Oncology.
Results showed that compared with peers in the open surgery group, patients in the robotic-assisted surgery group specifically had a lower rate of pulmonary complications (32% vs. 58%, P = .005), largely due to a reduction in rate of pneumonia (28% vs. 55%, P = .005), and a lower rate of cardiac complications (22% vs. 47%, P = .006), almost entirely due to a reduction in rate of atrial fibrillation (22% vs. 46%, P = .01).
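To show the kind of arithmetic behind such comparisons, the Python sketch below runs a two-sided Fisher exact test on the pulmonary-complication percentages, assuming 56 patients per arm (112 randomized). The event counts are back-calculated from the percentages, so this approximates, rather than reproduces, the trial’s analysis.

```python
# Approximate re-test of the pulmonary-complication comparison (32% vs. 58%).
# Event counts are back-calculated assuming 56 patients per arm (112
# randomized), so this is an illustration, not the trial's actual analysis.
from scipy.stats import fisher_exact

robot_events, robot_n = 18, 56  # ~32% of 56
open_events, open_n = 32, 56    # ~57% of 56

table = [[robot_events, robot_n - robot_events],
         [open_events, open_n - open_events]]
odds_ratio, p_value = fisher_exact(table)  # two-sided by default
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```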
There was a trend toward fewer wound infections with robotics (4% vs. 14%, P = .09), with a large difference in thoracic wound infections (0% vs. 9%, P = .06).
The two groups were statistically indistinguishable on rates of anastomotic leakage (24% and 20%) and recurrent laryngeal nerve injury (9% and 11%). The fairly high rate of anastomotic leakage was likely due to the center’s use of cervical anastomosis at the time of the trial, according to Dr. van der Sluis; they have since started using thoracic anastomosis, and will report results with that technique soon.
There was also no significant difference between groups in the rate of in-hospital mortality (4% with robotic surgery and 2% with open surgery), median hospital length of stay (14 and 16 days), and ICU length of stay (1 day in each group).
Patients in the robotics group more commonly had functional recovery within 2 weeks (70% vs. 51%, P = .04). And on the Quality of Life Questionnaire Core 30, they had better scores for health-related quality of life at discharge (57.9 vs. 44.6, P = .02) and at 6 weeks (68.7 vs. 57.6, P = .03), and for physical functioning at discharge (54.5 vs. 41.0, P = .03) and at 6 weeks (69.3 vs. 58.6, P = .049).
The two groups were similar on rates of R0 resection (93% and 96%) and median number of lymph nodes retrieved (27 and 25), reported Dr. van der Sluis. Pain during the first 14 days after surgery was lower for the robotics group (P = .003).
With a median follow-up of 40 months, the robotics and open groups did not differ significantly on disease-free survival (median, 26 and 28 months) and overall survival (not reached in either group).
Dr. van der Sluis disclosed no relevant conflicts of interest.
SOURCE: van der Sluis PC et al. 2018 GI Cancer Symposium, Abstract 156148
SAN FRANCISCO – Patients undergoing had less morbidity and pain and similarly good oncologic outcomes, when the surgery was performed by robot-assisted laparoscopy instead of by the open technique, a phase 3 clinical trial has found.
Investigators of the ROBOT (Robot-assisted Thoracolaparoscopic Esophagectomy vs. Open Transthoracic Esophagectomy) trial, led by Pieter C. van der Sluis, MD, a surgeon at the University Medical Center Utrecht, the Netherlands, randomized 112 patients with resectable esophageal cancer to open transthoracic esophagectomy – considered to be the gold standard – or robot-assisted minimally invasive thoracolaparoscopic esophagectomy.
“Robot-assisted minimally invasive thoracolaparoscopic esophagectomy versus open transthoracic esophagectomy improves postoperative outcome. There were no differences in oncologic outcomes, and our oncologic outcomes were in concordance with the highest standards nowadays,” Dr. van der Sluis summarized. “This trial provides evidence for the minimally invasive approach over the open approach, and especially the robot-assisted minimally invasive esophagectomy.”
The investigators will report a full cost comparison separately. “We see that costs are lower, though not significantly lower, with the robot,” he said, giving a preview. “We are going to show that the real costs of the operation are in the complications. When you have complications that involve the ICU and reoperations, some patients are in the hospital for months after the surgery. So by investing a little extra money in the surgical procedure, you might actually get it back by reducing the complications.”
When asked by an attendee why the trial did not compare robotic esophagectomy with thoracoscopic esophagectomy, Dr. van der Sluis noted that such comparison is complicated by many factors; for example, the challenge of finding surgeons skilled in both techniques, and the likelihood of small differences in outcomes, potentially requiring enrollment of thousands of patients to have adequate study power. “We concluded that such a trial might not be feasible,” he said.
Parsing the findings
“The complication rates [in this trial] are very high in the robotic and open groups, much higher than reported in some well-controlled prospective and retrospective studies,” commented session attendee Kenneth Meredith, MD, FACS, professor at Florida State University, Sarasota, and director of gastrointestinal oncology, Sarasota Memorial Institute for Cancer Care.
He wondered how extensive the investigators’ experience with robotics was and how many cases they had done on their learning curve. Data from his group suggest that surgeons must perform 29 cases of robotic esophagectomy before the complication rate drops (Dis Esophagus. 2017;30:1-7).
“That’s more then half of the patients in the robotic arm of their study,” he noted in an interview. “I find this needs to be explained. If the authors are past their learning curve, why were the complication rates so high?” Additionally, the 80% rate in the open group “is among the highest I’ve seen in many years.”
The lack of significant differences in complete resection rate and in lymph node harvest was also surprising, as he and other robotics users have found that this technique can improve these outcomes, Dr. Meredith added. This could likewise be a learning curve phenomenon.
Although ROBOT’s comparison of robotic with open esophagectomy is relevant, “it would have been more relevant to compare robotic to minimally invasive esophagectomy [MIE],” he maintained, as MIE has been shown to improve outcomes relative to open surgery (Lancet. 2012;379:1887-92).
“There are many high-volume centers in MIE but not necessarily robotics. The two are often mutually exclusive, and a multicenter trial in which each center performs high volumes of their respective technique, rather then mandating each center perform an operation they may not be facile in,” would be practical, Dr. Meredith concluded.
Study details
“The main objective in our trial was to reduce surgical trauma and reduce the percentage of complications,” Dr. van der Sluis told attendees of the symposium, sponsored by the American Gastroenterological Association, the American Society for Clinical Oncology, the American Society for Radiation Oncology, and the Society of Surgical Oncology.
Results showed that compared with peers in the open surgery group, patients in the robotic-assisted surgery group specifically had a lower rate of pulmonary complications (32% vs. 58%, P = .005), largely due to a reduction in rate of pneumonia (28% vs. 55%, P = .005), and a lower rate of cardiac complications (22% vs. 47%, P = .006), almost entirely due to a reduction in rate of atrial fibrillation (22% vs. 46%, P = .01).
There was a trend toward fewer wound infections with robotics (4% vs. 14%, P = .09), with a large difference in thoracic wound infections (0% vs. 9%, P = .06).
The two groups were statistically indistinguishable on rates of anastomotic leakage (24% and 20%) and recurrent laryngeal nerve injury (9% and 11%). The fairly high rate of anastomotic leakage was likely due to the center’s use of cervical anastomosis at the time of the trial, according to Dr. van der Sluis; they have since started using thoracic anastomosis, and will report results with that technique soon.
There was also no significant difference between groups in the rate of in-hospital mortality (4% with robotic surgery and 2% with open surgery), median hospital length of stay (14 and 16 days), and ICU length of stay (1 day in each group).
Patients in the robotics group more commonly had functional recovery within 2 weeks (70% vs. 51%, P = .04). And on the Quality of Life Questionnaire Core 30, they had better scores for health-related quality of life at discharge (57.9 vs 44.6, P = .02) and at 6 weeks (68.7 vs. 57.6, P = .03), and for physical functioning at discharge (54.5 vs. 41.0, P = .03) and 6 weeks (69.3 vs. 58.6, P = .049).
The two groups were similar on rates of R0 resection (93% and 96%) and median number of lymph nodes retrieved (27 and 25), reported Dr. van der Sluis. Pain during the first 14 days after surgery was lower for the robotics group (P = .003).
With a median follow-up of 40 months, the robotics and open groups did not differ significantly on disease-free survival (median, 26 and 28 months) and overall survival (not reached in either group).
Dr. van der Sluis disclosed no relevant conflicts of interest.
SOURCE: van der Sluis PC et al. 2018 GI Cancer Symposium, Abstract 156148
SAN FRANCISCO – Patients undergoing had less morbidity and pain and similarly good oncologic outcomes, when the surgery was performed by robot-assisted laparoscopy instead of by the open technique, a phase 3 clinical trial has found.
Investigators of the ROBOT (Robot-assisted Thoracolaparoscopic Esophagectomy vs. Open Transthoracic Esophagectomy) trial, led by Pieter C. van der Sluis, MD, a surgeon at the University Medical Center Utrecht, the Netherlands, randomized 112 patients with resectable esophageal cancer to open transthoracic esophagectomy – considered to be the gold standard – or robot-assisted minimally invasive thoracolaparoscopic esophagectomy.
“Robot-assisted minimally invasive thoracolaparoscopic esophagectomy versus open transthoracic esophagectomy improves postoperative outcome. There were no differences in oncologic outcomes, and our oncologic outcomes were in concordance with the highest standards nowadays,” Dr. van der Sluis summarized. “This trial provides evidence for the minimally invasive approach over the open approach, and especially the robot-assisted minimally invasive esophagectomy.”
The investigators will report a full cost comparison separately. “We see that costs are lower, though not significantly lower, with the robot,” he said, giving a preview. “We are going to show that the real costs of the operation are in the complications. When you have complications that involve the ICU and reoperations, some patients are in the hospital for months after the surgery. So by investing a little extra money in the surgical procedure, you might actually get it back by reducing the complications.”
When asked by an attendee why the trial did not compare robotic esophagectomy with thoracoscopic esophagectomy, Dr. van der Sluis noted that such a comparison is complicated by many factors; for example, the challenge of finding surgeons skilled in both techniques, and the likelihood of small differences in outcomes, potentially requiring enrollment of thousands of patients to have adequate study power. “We concluded that such a trial might not be feasible,” he said.
Parsing the findings
“The complication rates [in this trial] are very high in the robotic and open groups, much higher than reported in some well-controlled prospective and retrospective studies,” commented session attendee Kenneth Meredith, MD, FACS, professor at Florida State University, Sarasota, and director of gastrointestinal oncology, Sarasota Memorial Institute for Cancer Care.
He wondered how extensive the investigators’ experience with robotics was and how many cases they had done on their learning curve. Data from his group suggest that surgeons must perform 29 cases of robotic esophagectomy before the complication rate drops (Dis Esophagus. 2017;30:1-7).
“That’s more than half of the patients in the robotic arm of their study,” he noted in an interview. “I find this needs to be explained. If the authors are past their learning curve, why were the complication rates so high?” Additionally, the 80% complication rate in the open group “is among the highest I’ve seen in many years.”
The lack of significant differences in complete resection rate and in lymph node harvest was also surprising, as he and other robotics users have found that this technique can improve these outcomes, Dr. Meredith added. This could likewise be a learning curve phenomenon.
Although ROBOT’s comparison of robotic with open esophagectomy is relevant, “it would have been more relevant to compare robotic to minimally invasive esophagectomy [MIE],” he maintained, as MIE has been shown to improve outcomes relative to open surgery (Lancet. 2012;379:1887-92).
“There are many high-volume centers in MIE but not necessarily robotics. The two are often mutually exclusive, and a multicenter trial in which each center performs high volumes of their respective technique, rather than mandating each center perform an operation they may not be facile in,” would be practical, Dr. Meredith concluded.
Study details
“The main objective in our trial was to reduce surgical trauma and reduce the percentage of complications,” Dr. van der Sluis told attendees of the symposium, sponsored by the American Gastroenterological Association, the American Society for Clinical Oncology, the American Society for Radiation Oncology, and the Society of Surgical Oncology.
Results showed that compared with peers in the open surgery group, patients in the robotic-assisted surgery group specifically had a lower rate of pulmonary complications (32% vs. 58%, P = .005), largely due to a reduction in rate of pneumonia (28% vs. 55%, P = .005), and a lower rate of cardiac complications (22% vs. 47%, P = .006), almost entirely due to a reduction in rate of atrial fibrillation (22% vs. 46%, P = .01).
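As a rough plausibility check on rate comparisons like these, a standard two-proportion z-test applied to the reported percentages yields P values of about the size the investigators cite, if one assumes roughly 56 patients per arm (the trial randomized 112 in total; the exact per-arm counts are not given here, so the even split is an assumption). The sketch below is purely illustrative and is not the trial's own analysis.

```python
import math

def two_proportion_p(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided two-proportion z-test with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

# Pulmonary complications, assuming 56 patients per arm: 18/56 ~ 32%, 33/56 ~ 59%
print(round(two_proportion_p(18, 56, 33, 56), 4))  # ~0.004, in line with the reported P = .005
```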
There was a trend toward fewer wound infections with robotics (4% vs. 14%, P = .09), with a large difference in thoracic wound infections (0% vs. 9%, P = .06).
The two groups were statistically indistinguishable on rates of anastomotic leakage (24% and 20%) and recurrent laryngeal nerve injury (9% and 11%). The fairly high rate of anastomotic leakage was likely due to the center’s use of cervical anastomosis at the time of the trial, according to Dr. van der Sluis; they have since started using thoracic anastomosis, and will report results with that technique soon.
There was also no significant difference between groups in the rate of in-hospital mortality (4% with robotic surgery and 2% with open surgery), median hospital length of stay (14 and 16 days), and ICU length of stay (1 day in each group).
Patients in the robotics group more commonly had functional recovery within 2 weeks (70% vs. 51%, P = .04). And on the Quality of Life Questionnaire Core 30, they had better scores for health-related quality of life at discharge (57.9 vs. 44.6, P = .02) and at 6 weeks (68.7 vs. 57.6, P = .03), and for physical functioning at discharge (54.5 vs. 41.0, P = .03) and 6 weeks (69.3 vs. 58.6, P = .049).
The two groups were similar on rates of R0 resection (93% and 96%) and median number of lymph nodes retrieved (27 and 25), reported Dr. van der Sluis. Pain during the first 14 days after surgery was lower for the robotics group (P = .003).
With a median follow-up of 40 months, the robotics and open groups did not differ significantly on disease-free survival (median, 26 and 28 months) and overall survival (not reached in either group).
Dr. van der Sluis disclosed no relevant conflicts of interest.
SOURCE: van der Sluis PC et al. 2018 GI Cancer Symposium, Abstract 156148
REPORTING FROM THE 2018 GI CANCERS SYMPOSIUM
Key clinical point: Patients with esophageal cancer undergoing esophagectomy are less likely to experience complications when the surgery is performed robotically.
Major finding: Compared with open transthoracic esophagectomy, robot-assisted minimally invasive thoracolaparoscopic esophagectomy had a lower rate of surgery-related postoperative complications of modified Clavien-Dindo classification (MCDC) grade 2 or higher (59% vs. 80%).
Data source: A single-center phase 3 randomized controlled trial among 112 patients with resectable esophageal cancer.
Disclosures: Dr. van der Sluis disclosed no relevant conflicts of interest.
Source: van der Sluis PC et al. 2018 GI Cancer Symposium, Abstract 156148
Trial of clozapine, risperidone halted in MS
SAN DIEGO – New Zealand researchers halted a small trial that was testing the use of the antipsychotics clozapine and risperidone to treat progressive multiple sclerosis because significant side effects caused participants to withdraw.
The adverse events appeared even though the doses were much smaller than those routinely given to patients with psychiatric illnesses. “The neurologists realized it was in the participants’ best interest to stop,” said study lead author Anne Camille La Flamme, PhD, of Victoria University of Wellington (New Zealand). “Adverse events included dizziness, muscle weakness, and falls.”
The researchers launched the study – a blinded, randomized, placebo-controlled trial – to learn whether the two antipsychotic drugs, also known by the brand names Clozaril and Risperdal, have potential as treatments for progressive multiple sclerosis.
Previous in-vitro research had linked the drugs to anti-inflammatory effects in the central nervous system, Dr. La Flamme said, and researchers believed that the progressive form of MS might be especially vulnerable to their effects because of high immune system involvement.
The researchers planned to randomly assign 36 patients, 12 per arm, to placebo, clozapine, or risperidone.
For clozapine, “the doses were very low, much lower than you’d expect for psychiatric use,” Dr. La Flamme said. A typical dose for psychiatric disorders is about 350 mg/day, she said, and the trial aimed to use 100-150 mg/day with an eye toward preventing dose-dependent side effects.
As for risperidone, a typical dose is about 4 mg/day, and the trial began at 2 mg/day and would increase to 3.5 mg/day, she said.
Three subjects in the clozapine group had to withdraw within 2 weeks, when their doses had reached an average of only 35 mg/day. Two of the three subjects in the risperidone group withdrew within 4 months.
In light of the adverse effects, “it was deemed not wise to continue,” Dr. La Flamme said.
The placebo group, meanwhile, completed the trial at 178 days and had adverse effects that were more indicative of MS, she said.
What happened? One possibility is that disability from MS made the adverse events more evident, Dr. La Flamme said. Another possible explanation is that the underlying MS physiology changed the targets of the medications, she said.
“We have no conclusive evidence that would suggest one over the other,” she said. “But a lot of the evidence supports the idea that it’s a change in the physiology, that something about those pathways has been altered.”
It’s clear, she said, that the doses of the drugs in the trial were not appropriate. However, a big question remains: “We do not know whether these medicines are effective at reducing neuroinflammation.”
It’s possible, she said, that a “whisper of a dose” could still be effective. “It may get back to how these agents metabolize and become an active form.”
The study was funded by New Zealand’s Ministry of Business, Innovation and Employment. Dr. La Flamme disclosed that the study team has a patent for repurposing of clozapine and risperidone to treat MS.
SOURCE: La Flamme A et al. ACTRIMS Forum 2018, abstract P031.
REPORTING FROM ACTRIMS Forum 2018
C7 Nerve Transfer May Reduce Spastic Arm Paralysis
Patients with spastic arm paralysis who receive a contralateral C7 nerve graft from their nonparalyzed side to their paralyzed side may have greater improvement in arm function and reduction in spasticity after a year, compared with patients who undergo rehabilitation alone, according to research published January 4 in the New England Journal of Medicine.
The researchers randomly assigned 36 patients who had had unilateral arm paralysis for at least five years to either surgical C7 nerve transfer plus rehabilitation or rehabilitation alone. Participants in the surgery group had an average increase of 17.7 points on the Fugl-Meyer score, while those in the rehabilitation-only group had an average increase of 2.6 points.
To evaluate spasticity, the researchers used the Modified Ashworth Scale, which is scored from 0 to 5. A higher score indicates greater spasticity. Patients who received surgery had improvement from baseline in all five areas measured, and none worsened. The smallest difference between the two groups was in thumb extension. On this measure, 15 surgery patients had a one- or two-unit improvement and three had no change, while seven controls had a one- or two-unit improvement, seven had no improvement, and four had a one-unit worsening. At one year, 16 (89%) patients in the surgery group were able to accomplish three or more of the functional tasks that researchers gave them, whereas none of the controls could do so.
“The majority of clinical improvements coincided with physiologic evidence of connectivity between the hemisphere on the side of the donor nerve and the paralyzed arm,” said lead author Mou-Xiong Zheng, MD, PhD, a hand surgeon at Huashan Hospital at Fudan University in Shanghai, and colleagues.
A Modification to a Previous Surgical Method
Damage to the contralateral cerebral hemisphere after stroke arises from interruption of the inhibitory activity of upper motor neurons. This interruption causes spasticity, along with hand weakness and loss of fractionated fine motor control. In previous studies, researchers have observed activity in the cerebral hemisphere on the same side of paralysis during stroke recovery, but Dr. Zheng and coauthors asserted that connections between the hand and that part of the brain are “sparse,” thus limiting the body’s ability to compensate for spasticity and functional loss.
The latest findings are consistent with those of earlier studies, including one by Dr. Zheng’s coauthors that suggested that the paralyzed hand could be connected to the unaffected hemisphere by transferring a cervical spine nerve from the nonparalyzed side. Researchers previously found this treatment effective for injuries of the brachial plexus. Of the five nerves of the brachial plexus, Dr. Zheng and coauthors chose the C7 nerve because it accounts for about 20% of the nerve fibers in the brachial bundle. Severing the nerve typically results in transient weakness and numbness in the arm on the same side. When they evaluated the hand on the side of the donor graft, the researchers found no significant changes in power, tactile threshold, or two-point discrimination as a result of surgery.
The authors’ surgical approach was a modification of the C7 nerve transfer method that Dr. Zheng and coauthors had previously reported. The operation involved making an incision at the superior aspect of the sternum, mobilizing the donor C7 nerve on the nonparalyzed side, and routing it between the spinal column and esophagus. Then, an anastomosis was performed directly with the C7 nerve on the paralyzed side.
Rehabilitation therapy for the surgery group and controls was identical. Rehabilitation sessions took place four times weekly for 12 months at a single facility, although surgery patients wore an immobilizing cast after their operations.
The nature of the study population (ie, men of various ages with various causes of the underlying cerebral lesions) makes it difficult to draw general conclusions from the findings, Dr. Zheng and coauthors noted. “A larger cohort, followed for a longer period, would be necessary to determine whether cervical nerve transfer results in safe, consistent, and long-term improvements in the function of an arm that is chronically paralyzed as a result of a cerebral lesion,” the authors concluded.
Results Need Clarification
The results that Dr. Zheng and coauthors reported “are exciting, but need clarification and confirmation,” said Robert J. Spinner, MD, Chair of the Department of Neurologic Surgery; Alexander Y. Shin, MD, Consultant in the Department of Orthopedic Surgery; and Allen T. Bishop, MD, Consultant in the Department of Orthopedic Surgery; all at the Mayo Clinic in Rochester, Minnesota, in an accompanying editorial.
Among the questions Dr. Spinner and coauthors raised about the study are whether distal muscles can functionally reinnervate in a year, and whether C7 neurotomy on the paralyzed side led to improvements in spasticity and function. “The C7 neurotomy itself, associated with an immediate reduction in spasticity, represents a major advance for some patients with brain injury who have poor function and spasticity,” they noted. Improvement of the damaged motor cortex, which ongoing physical therapy may enhance, may also contribute to a reduction in spasticity.
Dr. Spinner and coauthors also cited a previous trial by some of Dr. Zheng’s colleagues in which 49% of patients with brachial plexus injury had motor recovery within seven years. “The presence of physiological connectivity observed in the trials does not necessarily equate with functional recovery,” the authors stated.
Future studies of surgical C7 nerve transfer in patients with one-sided arm paralysis should include patients who have C7 neurotomy without nerve transfer, said Dr. Spinner and colleagues. They also noted that because Dr. Zheng and colleagues perform a relatively high volume of these operations, their results might not be easy to reproduce elsewhere.
“Factors other than technical ones, including differences in BMI and limb length across different populations, may lead to different surgical outcomes,” said Dr. Spinner and coauthors. Future research should focus on ways to enhance or speed up nerve regeneration, improve plasticity, and maximize rehabilitation, they added.
—Richard Mark Kirkner
Suggested Reading
Spinner RJ, Shin AY, Bishop AT. Rewiring to regain function in patients with spastic hemiplegia. N Engl J Med. 2018;378(1):83-84.
Zheng MX, Hua XY, Feng JT, et al. Trial of contralateral seventh cervical nerve transfer for spastic arm paralysis. N Engl J Med. 2018;378(1):22-34.
FDA’s standards for approving generics are questioned
TORONTO – The Food and Drug Administration’s standards for demonstrating pharmacokinetic bioequivalence between two inhaled products, which allow for single-batch comparisons of approved and generic candidate products, need to be revised to address batch-to-batch variability, suggested a presenter at the CHEST annual meeting.
Marketing approval of a new generic drug in the United States, including orally inhaled products, generally requires a demonstration of pharmacokinetic bioequivalence to a reference listed product. The standard criterion for statistical bioequivalence applied by the FDA requires the pharmacokinetics of the generic to be within about 10% of the branded product.
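For context, that criterion is usually operationalized as an average-bioequivalence test: the 90% confidence interval for the geometric mean ratio (GMR) of a pharmacokinetic measure such as AUC, computed on the log scale, must fall within 0.80-1.25. The sketch below, with made-up paired AUC values, shows the shape of that calculation; it is not the analysis from any of the studies discussed, and a real crossover analysis would also model period and sequence effects.

```python
import math
from scipy.stats import t as t_dist

def gmr_90ci(test_auc, ref_auc):
    """90% CI for the geometric mean ratio (test/reference) from paired AUCs.

    Average bioequivalence is concluded when this interval lies within
    0.80-1.25. Simplified paired analysis, for illustration only.
    """
    logs = [math.log(t / r) for t, r in zip(test_auc, ref_auc)]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    half = t_dist.ppf(0.95, n - 1) * sd / math.sqrt(n)
    return math.exp(mean - half), math.exp(mean + half)

# Made-up AUC values for ten subjects who received both products
test = [98, 105, 110, 92, 101, 99, 107, 95, 103, 100]
ref  = [100, 100, 104, 96, 98, 102, 101, 97, 105, 99]
lo, hi = gmr_90ci(test, ref)
print(f"GMR 90% CI: {lo:.3f}-{hi:.3f} -> bioequivalent if within 0.80-1.25")
```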
In early pharmacokinetic bioequivalence studies, Elise Burmeister Getz, PhD, and her colleagues compared single batches of their generic candidate OT329 Solis 100/50 with single batches of Advair Diskus 100/50 in five individual studies, and also compared single batches of Advair Diskus 100/50 with one another. They found Advair Diskus 100/50 batches that differed from each other by more than 30%.
“When patients differ from one another, we put many patients in the trial. And when batches differ from one another, we should be putting many batches in the trial,” Dr. Burmeister Getz, director of clinical pharmacology at Oriel Therapeutics, said at the CHEST meeting. “If we want a robust assessment of bioequivalence and not just a check-the-box exercise, we really need to have product sampling that’s aligned with product variability.”
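Her sampling argument is easy to see in a toy simulation: if a product's batch-to-batch variability is nontrivial, the observed test/reference ratio from a single-batch study swings widely with the luck of the draw, while averaging over many batches pins it down. The 15% between-batch standard deviation below is an arbitrary assumption for illustration, not a figure from the studies.

```python
import random, statistics

random.seed(1)

def observed_ratio(n_batches: int, between_batch_sd: float = 0.15) -> float:
    """Ratio of mean exposures for two products whose true ratio is 1.0,
    when each product's batches vary log-normally around label strength."""
    def product_mean():
        return statistics.mean(random.lognormvariate(0.0, between_batch_sd)
                               for _ in range(n_batches))
    return product_mean() / product_mean()

# Spread of the observed ratio across 1,000 hypothetical studies
for n in (1, 16):
    ratios = [observed_ratio(n) for _ in range(1000)]
    print(f"{n:>2} batch(es) per product: ratio spans {min(ratios):.2f}-{max(ratios):.2f}")
```

In this toy setup, a single-batch comparison can easily stray outside the 0.80-1.25 goalposts even though the two products are truly identical, whereas a 16-batch comparison rarely does.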
When the researchers combined the data in a meta-analysis, bioequivalence was demonstrated, but the pooled analysis could not be used for FDA registration because of its retrospective nature.
They later conducted a prospective study with multiple batches of both the generic and branded drugs. This multiple-batch bioequivalence study involved 96 healthy subjects using 16 batches each of Advair Diskus and Oriel’s OT329 Solis 100/50. A single inhalation was administered to healthy adult subjects in a randomized crossover design, and blood samples were collected predose and up to 48 hours after inhalation.
By the FDA’s definition, the generic candidate fell within the bioequivalence goalposts, Dr. Burmeister Getz noted.
The issue of pharmacokinetic variance is not unique to Advair Diskus, but she and her colleagues don’t understand why different batches show such wide variability, Dr. Burmeister Getz noted.
“The advantage of this multibatch approach is that the results of the bioequivalence assessment aren’t dependent on the single batch that happened to be chosen for the study. They are generalizable to the product because the product has been robustly represented in the study,” Dr. Burmeister Getz told attendees.
Oriel makes OT329 Solis 100/50, a fully substitutable generic to Advair Diskus 100/50, which is indicated for treating asthma. Both are multidose dry powder oral inhalation products containing fluticasone propionate, to reduce inflammation in the lungs, and salmeterol, to relax muscles in the airways, for the maintenance treatment of asthma. Advair Diskus at higher doses is indicated for asthma and COPD.
An FDA response?
Asked what the FDA makes of the batch-to-batch variability data, Dr. Burmeister Getz answered simply, “We don’t know.” Before she and her colleagues ran the 16-batch-per-product study, they submitted their protocol to the FDA for review, but 1 year later, they still hadn’t received a response.
“Sponsors are apparently allowed to simply pick their batch in a careful and, dare I say, manipulative way, to gain the result they want. With a single-batch study, the selection of batch will absolutely determine the outcome of the study,” she said.
In vitro bioequivalence studies are already required to use multiple batches, she noted.
This research was funded by Oriel Therapeutics, an indirect wholly-owned subsidiary of Novartis AG.
AT CHEST 2017
Key clinical point: The FDA’s standards for demonstrating pharmacokinetic bioequivalence between two inhaled products need to be revised to address batch-to-batch variability.
Major finding: Investigators found Advair Diskus 100/50 batches that were more than 30% different from each other.
Data source: Pharmacokinetic bioequivalence studies comparing batches of Advair Diskus 100/50 to each other, and to batches of the generic candidate OT329 Solis 100/50.
Disclosures: This research was funded by Oriel Therapeutics, an indirect wholly-owned subsidiary of Novartis AG. Dr. Burmeister Getz is director of clinical pharmacology at Oriel Therapeutics.
Two biomarkers predict immunotherapy response for NSCLC
Two biomarkers were correlated with poor outcomes for patients with non–small cell lung cancer (NSCLC) who were treated with immune checkpoint inhibitors, according to the results of a multicenter retrospective study.
Furthermore, these two biomarkers – derived neutrophil to lymphocyte ratio (dNLR) and lactate dehydrogenase (LDH) – could make up a lung immune prognostic index (LIPI) to help predict response to immune checkpoint inhibitor therapy, Laura Mezquita, MD, of the Institut Gustave Roussy in Villejuif, France, and her associates reported in JAMA Oncology.
The median overall survival for the population evaluated was 10.1 months (95% CI, 9.0-11.7 months), and the population’s median progression-free survival was 4.0 months (95% CI, 3.4-5.0 months).
Overall survival was independently associated with both a dNLR greater than three (HR, 2.22) and an LDH level greater than the upper limit of normal (HR, 2.51), they reported.
The researchers sought to examine dNLR and LDH after previously reported data had shown that those biomarkers predict immunotherapy response for other types of disease.
“We hypothesized that the combination of baseline dNLR and LDH could be correlated with resistance to ICI [immune checkpoint inhibitor] therapy ... and could be used to develop a lung immune prognostic index,” Dr. Mezquita and her associates wrote.
The researchers examined whether, at baseline, patients had a dNLR greater than three or an LDH level greater than the upper limit of normal. They then derived the LIPI by categorizing patients based on whether they met both biomarker criteria (“poor”), only one (“intermediate”), or neither (“good”), and analyzed the results accordingly.
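Concretely, the categorization the investigators describe amounts to a few lines of scoring logic, sketched below. The dNLR formula (neutrophil count divided by leukocytes minus neutrophils) is the usual definition of the derived ratio rather than something spelled out in this article, and the example values are hypothetical.

```python
def dnlr(neutrophils: float, leukocytes: float) -> float:
    """Derived neutrophil-to-lymphocyte ratio: neutrophils / (leukocytes - neutrophils)."""
    return neutrophils / (leukocytes - neutrophils)

def lipi_group(dnlr_value: float, ldh: float, ldh_uln: float) -> str:
    """LIPI category from the two baseline criteria described in the article:
    dNLR greater than 3 and LDH above the upper limit of normal (ULN)."""
    risk_factors = int(dnlr_value > 3) + int(ldh > ldh_uln)
    return {0: "good", 1: "intermediate", 2: "poor"}[risk_factors]

# Hypothetical patient: neutrophils 6.0 and leukocytes 7.5 (x10^9/L); LDH 480 U/L vs. ULN 250 U/L
value = dnlr(6.0, 7.5)                  # 4.0, above the cutoff of 3
print(lipi_group(value, 480.0, 250.0))  # both criteria met -> "poor"
```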
Researchers evaluated 431 patients using the LIPI: 15% of these patients were in the poor group, 48% were in the intermediate group, and 38% were in the good group.
Patients in the poor group experienced worse median overall survival (4.8 months) and progression-free survival (2 months) than did those who had an LIPI considered intermediate (OS, 10 months; PFS, 3.7 months) or good (OS, 16.5 months; PFS, 6.3 months), according to Dr. Mezquita and her associates.
The investigators noted that the retrospective nature of the study and unknown clinical and pathological data may have limited these findings, but they concluded that LIPI with PD-L1 expression ought to be explored in future prospective trials and that LIPI could be used for patient stratification in future randomized trials.
The researchers reported no disclosures.
SOURCE: Mezquita L et al. JAMA Oncol. 2018 Jan 11. doi: 10.1001/jamaoncol.2017.4771.
Two biomarkers were correlated with poor outcomes for patients with non–small cell lung cancer (NSCLC) who were treated with immune checkpoint inhibitors, according to the results of a multicenter retrospective study.
Furthermore, these two biomarkers – derived neutrophil to lymphocyte ratio (dNLR) and lactate dehydrogenase (LDH) – could make up a lung immune prognostic index (LIPI) to help predict response to immune checkpoint inhibitor therapy, Laura Mezquita, MD, of the Institut Gustave Roussy in Villejuif, France, and her associates reported in JAMA Oncology.
The median overall survival for the population evaluated was 10.1 months (95% CI, 9.0-11.7 months), and the population’s median progression-free survival was 4.0 months (95% confidence interval, 3.4-5.0 months).
Overall survival was independently associated with both a dNLR greater than three (HR, 2.22) and an LDH level greater than the upper limit of normal (HR, 2.51), they reported.
The researchers sought to examine dNLR and LDH after previously reported data had shown that those biomarkers predict immunotherapy response for other types of disease.
“We hypothesized that the combination of baseline dNLR and LDH could be correlated with resistance to ICI [immune checkpoint inhibitor] therapy ... and could be used to develop a long immune prognostic index,” Dr. Mezquita and her associates wrote.
The researchers examined whether, at baseline, patients had a dNLR greater than three or an LDH level greater than the upper limit of normal. They then derived the LIPI by categorizing patients based on whether they met both biomarker criteria (“poor”), only one (“intermediate”), or neither (“good”), and analyzed the results accordingly.
Researchers evaluated 431 patients using the LIPI: 15% of these patients were in the poor group, 48% were in the intermediate group, and 38% were in the good group.
Patients in the poor group experienced worse median overall survival (4.8 months) and progression-free survival (2 months) than did those who had an LIPI considered intermediate (OS, 10 months; PFS, 3.7 months) or good (OS, 16.5 months; PFS, 6.3 months), according to Dr. Mezquita and her associates.
The investigators noted that the retrospective nature of the study and unknown clinical and pathological data may have limited these findings, but they concluded that LIPI with PD-L1 expression ought to be explored in future prospective trials and that LIPI could be used for patient stratification in future randomized trials.
The researchers reported no disclosures.
SOURCE: Mezquita L et al. Jama Oncol. 2018 Jan 11. doi: 10.1001/jamaoncol.2017.4771.
Two biomarkers were correlated with poor outcomes for patients with non–small cell lung cancer (NSCLC) who were treated with immune checkpoint inhibitors, according to the results of a multicenter retrospective study.
Furthermore, these two biomarkers – derived neutrophil to lymphocyte ratio (dNLR) and lactate dehydrogenase (LDH) – could make up a lung immune prognostic index (LIPI) to help predict response to immune checkpoint inhibitor therapy, Laura Mezquita, MD, of the Institut Gustave Roussy in Villejuif, France, and her associates reported in JAMA Oncology.
The median overall survival for the population evaluated was 10.1 months (95% CI, 9.0-11.7 months), and the population’s median progression-free survival was 4.0 months (95% confidence interval, 3.4-5.0 months).
Overall survival was independently associated with both a dNLR greater than three (HR, 2.22) and an LDH level greater than the upper limit of normal (HR, 2.51), they reported.
The researchers sought to examine dNLR and LDH after previously reported data had shown that those biomarkers predict immunotherapy response for other types of disease.
“We hypothesized that the combination of baseline dNLR and LDH could be correlated with resistance to ICI [immune checkpoint inhibitor] therapy ... and could be used to develop a long immune prognostic index,” Dr. Mezquita and her associates wrote.
The researchers examined whether, at baseline, patients had a dNLR greater than three or an LDH level greater than the upper limit of normal. They then derived the LIPI by categorizing patients based on whether they met both biomarker criteria (“poor”), only one (“intermediate”), or neither (“good”), and analyzed the results accordingly.
Researchers evaluated 431 patients using the LIPI: 15% of these patients were in the poor group, 48% were in the intermediate group, and 38% were in the good group.
Patients in the poor group experienced worse median overall survival (4.8 months) and progression-free survival (2 months) than did those who had an LIPI considered intermediate (OS, 10 months; PFS, 3.7 months) or good (OS, 16.5 months; PFS, 6.3 months), according to Dr. Mezquita and her associates.
The investigators noted that the retrospective nature of the study and unknown clinical and pathological data may have limited these findings, but they concluded that LIPI with PD-L1 expression ought to be explored in future prospective trials and that LIPI could be used for patient stratification in future randomized trials.
The researchers reported no disclosures.
SOURCE: Mezquita L et al. JAMA Oncol. 2018 Jan 11. doi: 10.1001/jamaoncol.2017.4771.
FROM JAMA ONCOLOGY
Key clinical point: A lung immune prognostic index (LIPI) based on baseline dNLR and LDH was associated with worse outcomes in patients with NSCLC treated with immune checkpoint inhibitors.
Major finding: Median OS was 4.8 months and median PFS was 2.0 months for patients with advanced NSCLC who had both a dNLR greater than three and an LDH level greater than the upper limit of normal.
Study details: A multicenter retrospective study that included a test cohort, a validation cohort, and a control cohort.
Disclosures: The researchers reported no disclosures.
Source: Mezquita L et al. JAMA Oncol. 2018 Jan 11. doi: 10.1001/jamaoncol.2017.4771.
USPSTF: Screen all pregnant women for syphilis
The U.S. Preventive Services Task Force issued a draft recommendation that all pregnant women be screened for syphilis infection.
The recommendation, released Feb. 6, follows an evidence review of studies conducted since the task force’s most recent recommendation in 2009, which also called for universal screening of pregnant women.
“Despite consistent recommendations and legal mandates, screening for syphilis in pregnancy continues to be suboptimal in certain populations,” the evidence review noted. The rate of congenital syphilis in the United States nearly doubled from 2012 to 2016.
“Because the early stages of syphilis often don’t cause any symptoms, screening helps identify the infection in pregnant women who may not realize they have the disease,” task force member Chien-Wen Tseng, MD, of the University of Hawaii, said in a statement.
Treatment is most effective early in pregnancy and can reduce the chances of congenital syphilis. The draft recommendation calls for pregnant women to be tested at the first prenatal visit, or at delivery if the woman has not received prenatal care.
Comments can be submitted until March 5 at www.uspreventiveservicestaskforce.org/tfcomment.htm.