First-of-its kind guideline on lipid monitoring in endocrine diseases

Article Type
Changed
Tue, 05/03/2022 - 15:08

Endocrine diseases of any type – not just diabetes – can represent a cardiovascular risk and patients with those disorders should be screened for high cholesterol, according to a new clinical practice guideline from the Endocrine Society.

“The simple recommendation to check a lipid panel in patients with endocrine diseases and calculate cardiovascular risk may be practice changing because that is not done routinely,” Connie Newman, MD, chair of the Endocrine Society committee that developed the guideline, said in an interview.

“Usually the focus is on assessment and treatment of the endocrine disease, rather than on assessment and treatment of atherosclerotic cardiovascular disease risk,” said Newman, an adjunct professor of medicine in the department of medicine, division of endocrinology, diabetes & metabolism, at New York University.

Whereas diabetes, well-known for its increased cardiovascular risk profile, is commonly addressed in other cardiovascular and cholesterol practice management guidelines, the array of other endocrine diseases are not typically included.

“This guideline is the first of its kind,” Dr. Newman said. “The Endocrine Society has not previously issued a guideline on lipid management in endocrine disorders [and] other organizations have not written guidelines on this topic. 

“Rather, guidelines have been written on cholesterol management, but these do not describe cholesterol management in patients with endocrine diseases such as thyroid disease [hypothyroidism and hyperthyroidism], Cushing’s syndrome, acromegaly, growth hormone deficiency, menopause, male hypogonadism, and obesity,” she noted.

But these conditions carry a host of cardiovascular risk factors that may require careful monitoring and management.

“Although endocrine hormones, such as thyroid hormone, cortisol, estrogen, testosterone, growth hormone, and insulin, affect pathways for lipid metabolism, physicians lack guidance on lipid abnormalities, cardiovascular risk, and treatment to reduce lipids and cardiovascular risk in patients with endocrine diseases,” she explained.

Vinaya Simha, MD, an internal medicine specialist at the Mayo Clinic in Rochester, Minn., agrees that the guideline is notable in addressing an unmet need.

Recommendations that stand out to Dr. Simha include the suggestion of adding eicosapentaenoic acid (EPA) ethyl ester to reduce the risk of cardiovascular disease in adults with diabetes or atherosclerotic cardiovascular disease who have elevated triglyceride levels despite statin treatment.

James L. Rosenzweig, MD, an endocrinologist at Hebrew SeniorLife in Boston, agreed that this is an important addition to an area that needs more guidance.

“Many of these clinical situations can exacerbate dyslipidemia and some also increase the cardiovascular risk to a greater extent in combination with elevated cholesterol and/or triglycerides,” he said in an interview. 

“In many cases, treatment of the underlying disorder appropriately can have an important impact in resolving the lipid disorder. In others, more aggressive pharmacological treatment is indicated,” he said.

“I think that this will be a valuable resource, especially for endocrinologists, but it can be used as well by providers in other disciplines.”
 

Key recommendations for different endocrine conditions

The guideline, published in the Journal of Clinical Endocrinology & Metabolism, details those risks and provides evidence-based recommendations on their management and treatment.

Key recommendations include:

  • Obtain a lipid panel and evaluate cardiovascular risk factors in all adults with endocrine disorders.
  • In patients with  and risk factors for cardiovascular disease, start statin therapy in addition to lifestyle modification to reduce cardiovascular risk. “This could mean earlier treatment because other guidelines recommend consideration of therapy at age 40,” Dr. Newman said.
  • Statin therapy is also recommended for adults over 40 with  with a duration of diabetes of more than 20 years and/or microvascular complications, regardless of their cardiovascular risk score. “This means earlier treatment of patients with type 1 diabetes with statins in order to reduce cardiovascular disease risk,” Dr. Newman noted.
  • In patients with hyperlipidemia, rule out  as the cause before treating with lipid-lowering medications. And among patients who are found to have hypothyroidism, reevaluate the lipid profile when the patient has thyroid hormone levels in the normal range.
  • Adults with persistent endogenous Cushing’s syndrome should have their lipid profile monitored. Statin therapy should be considered in addition to lifestyle modifications, irrespective of the cardiovascular risk score.
  • In postmenopausal women, high cholesterol or triglycerides should be treated with statins rather than hormone therapy.
  • Evaluate and treat lipids and other cardiovascular risk factors in women who enter menopause early (before the age of 40-45 years).
 

 

Nice summary of ‘risk-enhancing’ endocrine disorders

Dr. Simha said in an interview that the new guideline is “probably the first comprehensive statement addressing lipid treatment in patients with a broad range of endocrine disorders besides diabetes.”

“Most of the treatment recommendations are congruent with other current guidelines such as the American College of Cardiology/American Heart Association [guidelines], but there is specific mention of which endocrine disorders represent enhanced cardiovascular risk,” she explained.

The new recommendations are notable for including “a nice summary of how different endocrine disorders affect lipid values, and also which endocrine disorders need to be considered as ‘risk-enhancing factors,’ ” Dr. Simha noted.

“The use of EPA in patients with hypertriglyceridemia is novel, compared to the ACC/AHA recommendation. This reflects new data which is now available,” she added.

The American Association of Clinical Endocrinologists also just issued a new algorithm on lipid management and prevention of cardiovascular disease in which treatment of hypertriglyceridemia is emphasized.

In addition, the new Endocrine Society guideline “also mentions an LDL [cholesterol] treatment threshold of 70 mg/dL, and 55 mg/dL in some patient categories, which previous guidelines have not,” Dr. Simha noted.

Overall, Dr. Newman added that the goal of the guideline is to increase awareness of key issues with endocrine diseases that may not necessarily be on clinicians’ radars.

“We hope that it will make a lipid panel and cardiovascular risk evaluation routine in adults with endocrine diseases and cause a greater focus on therapies to reduce heart disease and stroke,” she said.

Dr. Newman, Dr. Simha, and Dr. Rosenzweig reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

Endocrine diseases of any type – not just diabetes – can represent a cardiovascular risk and patients with those disorders should be screened for high cholesterol, according to a new clinical practice guideline from the Endocrine Society.

“The simple recommendation to check a lipid panel in patients with endocrine diseases and calculate cardiovascular risk may be practice changing because that is not done routinely,” Connie Newman, MD, chair of the Endocrine Society committee that developed the guideline, said in an interview.

“Usually the focus is on assessment and treatment of the endocrine disease, rather than on assessment and treatment of atherosclerotic cardiovascular disease risk,” said Newman, an adjunct professor of medicine in the department of medicine, division of endocrinology, diabetes & metabolism, at New York University.

Whereas diabetes, well-known for its increased cardiovascular risk profile, is commonly addressed in other cardiovascular and cholesterol practice management guidelines, the array of other endocrine diseases are not typically included.

“This guideline is the first of its kind,” Dr. Newman said. “The Endocrine Society has not previously issued a guideline on lipid management in endocrine disorders [and] other organizations have not written guidelines on this topic. 

“Rather, guidelines have been written on cholesterol management, but these do not describe cholesterol management in patients with endocrine diseases such as thyroid disease [hypothyroidism and hyperthyroidism], Cushing’s syndrome, acromegaly, growth hormone deficiency, menopause, male hypogonadism, and obesity,” she noted.

But these conditions carry a host of cardiovascular risk factors that may require careful monitoring and management.

“Although endocrine hormones, such as thyroid hormone, cortisol, estrogen, testosterone, growth hormone, and insulin, affect pathways for lipid metabolism, physicians lack guidance on lipid abnormalities, cardiovascular risk, and treatment to reduce lipids and cardiovascular risk in patients with endocrine diseases,” she explained.

Vinaya Simha, MD, an internal medicine specialist at the Mayo Clinic in Rochester, Minn., agrees that the guideline is notable in addressing an unmet need.

Recommendations that stand out to Dr. Simha include the suggestion of adding eicosapentaenoic acid (EPA) ethyl ester to reduce the risk of cardiovascular disease in adults with diabetes or atherosclerotic cardiovascular disease who have elevated triglyceride levels despite statin treatment.

James L. Rosenzweig, MD, an endocrinologist at Hebrew SeniorLife in Boston, agreed that this is an important addition to an area that needs more guidance.

“Many of these clinical situations can exacerbate dyslipidemia and some also increase the cardiovascular risk to a greater extent in combination with elevated cholesterol and/or triglycerides,” he said in an interview. 

“In many cases, treatment of the underlying disorder appropriately can have an important impact in resolving the lipid disorder. In others, more aggressive pharmacological treatment is indicated,” he said.

“I think that this will be a valuable resource, especially for endocrinologists, but it can be used as well by providers in other disciplines.”
 

Key recommendations for different endocrine conditions

The guideline, published in the Journal of Clinical Endocrinology & Metabolism, details those risks and provides evidence-based recommendations on their management and treatment.

Key recommendations include:

  • Obtain a lipid panel and evaluate cardiovascular risk factors in all adults with endocrine disorders.
  • In patients with  and risk factors for cardiovascular disease, start statin therapy in addition to lifestyle modification to reduce cardiovascular risk. “This could mean earlier treatment because other guidelines recommend consideration of therapy at age 40,” Dr. Newman said.
  • Statin therapy is also recommended for adults over 40 with  with a duration of diabetes of more than 20 years and/or microvascular complications, regardless of their cardiovascular risk score. “This means earlier treatment of patients with type 1 diabetes with statins in order to reduce cardiovascular disease risk,” Dr. Newman noted.
  • In patients with hyperlipidemia, rule out  as the cause before treating with lipid-lowering medications. And among patients who are found to have hypothyroidism, reevaluate the lipid profile when the patient has thyroid hormone levels in the normal range.
  • Adults with persistent endogenous Cushing’s syndrome should have their lipid profile monitored. Statin therapy should be considered in addition to lifestyle modifications, irrespective of the cardiovascular risk score.
  • In postmenopausal women, high cholesterol or triglycerides should be treated with statins rather than hormone therapy.
  • Evaluate and treat lipids and other cardiovascular risk factors in women who enter menopause early (before the age of 40-45 years).
 

 

Nice summary of ‘risk-enhancing’ endocrine disorders

Dr. Simha said in an interview that the new guideline is “probably the first comprehensive statement addressing lipid treatment in patients with a broad range of endocrine disorders besides diabetes.”

“Most of the treatment recommendations are congruent with other current guidelines such as the American College of Cardiology/American Heart Association [guidelines], but there is specific mention of which endocrine disorders represent enhanced cardiovascular risk,” she explained.

The new recommendations are notable for including “a nice summary of how different endocrine disorders affect lipid values, and also which endocrine disorders need to be considered as ‘risk-enhancing factors,’ ” Dr. Simha noted.

“The use of EPA in patients with hypertriglyceridemia is novel, compared to the ACC/AHA recommendation. This reflects new data which is now available,” she added.

The American Association of Clinical Endocrinologists also just issued a new algorithm on lipid management and prevention of cardiovascular disease in which treatment of hypertriglyceridemia is emphasized.

In addition, the new Endocrine Society guideline “also mentions an LDL [cholesterol] treatment threshold of 70 mg/dL, and 55 mg/dL in some patient categories, which previous guidelines have not,” Dr. Simha noted.

Overall, Dr. Newman added that the goal of the guideline is to increase awareness of key issues with endocrine diseases that may not necessarily be on clinicians’ radars.

“We hope that it will make a lipid panel and cardiovascular risk evaluation routine in adults with endocrine diseases and cause a greater focus on therapies to reduce heart disease and stroke,” she said.

Dr. Newman, Dr. Simha, and Dr. Rosenzweig reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Endocrine diseases of any type – not just diabetes – can represent a cardiovascular risk and patients with those disorders should be screened for high cholesterol, according to a new clinical practice guideline from the Endocrine Society.

“The simple recommendation to check a lipid panel in patients with endocrine diseases and calculate cardiovascular risk may be practice changing because that is not done routinely,” Connie Newman, MD, chair of the Endocrine Society committee that developed the guideline, said in an interview.

“Usually the focus is on assessment and treatment of the endocrine disease, rather than on assessment and treatment of atherosclerotic cardiovascular disease risk,” said Newman, an adjunct professor of medicine in the department of medicine, division of endocrinology, diabetes & metabolism, at New York University.

Whereas diabetes, well-known for its increased cardiovascular risk profile, is commonly addressed in other cardiovascular and cholesterol practice management guidelines, the array of other endocrine diseases are not typically included.

“This guideline is the first of its kind,” Dr. Newman said. “The Endocrine Society has not previously issued a guideline on lipid management in endocrine disorders [and] other organizations have not written guidelines on this topic. 

“Rather, guidelines have been written on cholesterol management, but these do not describe cholesterol management in patients with endocrine diseases such as thyroid disease [hypothyroidism and hyperthyroidism], Cushing’s syndrome, acromegaly, growth hormone deficiency, menopause, male hypogonadism, and obesity,” she noted.

But these conditions carry a host of cardiovascular risk factors that may require careful monitoring and management.

“Although endocrine hormones, such as thyroid hormone, cortisol, estrogen, testosterone, growth hormone, and insulin, affect pathways for lipid metabolism, physicians lack guidance on lipid abnormalities, cardiovascular risk, and treatment to reduce lipids and cardiovascular risk in patients with endocrine diseases,” she explained.

Vinaya Simha, MD, an internal medicine specialist at the Mayo Clinic in Rochester, Minn., agrees that the guideline is notable in addressing an unmet need.

Recommendations that stand out to Dr. Simha include the suggestion of adding eicosapentaenoic acid (EPA) ethyl ester to reduce the risk of cardiovascular disease in adults with diabetes or atherosclerotic cardiovascular disease who have elevated triglyceride levels despite statin treatment.

James L. Rosenzweig, MD, an endocrinologist at Hebrew SeniorLife in Boston, agreed that this is an important addition to an area that needs more guidance.

“Many of these clinical situations can exacerbate dyslipidemia and some also increase the cardiovascular risk to a greater extent in combination with elevated cholesterol and/or triglycerides,” he said in an interview. 

“In many cases, treatment of the underlying disorder appropriately can have an important impact in resolving the lipid disorder. In others, more aggressive pharmacological treatment is indicated,” he said.

“I think that this will be a valuable resource, especially for endocrinologists, but it can be used as well by providers in other disciplines.”
 

Key recommendations for different endocrine conditions

The guideline, published in the Journal of Clinical Endocrinology & Metabolism, details those risks and provides evidence-based recommendations on their management and treatment.

Key recommendations include:

  • Obtain a lipid panel and evaluate cardiovascular risk factors in all adults with endocrine disorders.
  • In patients with  and risk factors for cardiovascular disease, start statin therapy in addition to lifestyle modification to reduce cardiovascular risk. “This could mean earlier treatment because other guidelines recommend consideration of therapy at age 40,” Dr. Newman said.
  • Statin therapy is also recommended for adults over 40 with  with a duration of diabetes of more than 20 years and/or microvascular complications, regardless of their cardiovascular risk score. “This means earlier treatment of patients with type 1 diabetes with statins in order to reduce cardiovascular disease risk,” Dr. Newman noted.
  • In patients with hyperlipidemia, rule out  as the cause before treating with lipid-lowering medications. And among patients who are found to have hypothyroidism, reevaluate the lipid profile when the patient has thyroid hormone levels in the normal range.
  • Adults with persistent endogenous Cushing’s syndrome should have their lipid profile monitored. Statin therapy should be considered in addition to lifestyle modifications, irrespective of the cardiovascular risk score.
  • In postmenopausal women, high cholesterol or triglycerides should be treated with statins rather than hormone therapy.
  • Evaluate and treat lipids and other cardiovascular risk factors in women who enter menopause early (before the age of 40-45 years).
 

 

Nice summary of ‘risk-enhancing’ endocrine disorders

Dr. Simha said in an interview that the new guideline is “probably the first comprehensive statement addressing lipid treatment in patients with a broad range of endocrine disorders besides diabetes.”

“Most of the treatment recommendations are congruent with other current guidelines such as the American College of Cardiology/American Heart Association [guidelines], but there is specific mention of which endocrine disorders represent enhanced cardiovascular risk,” she explained.

The new recommendations are notable for including “a nice summary of how different endocrine disorders affect lipid values, and also which endocrine disorders need to be considered as ‘risk-enhancing factors,’ ” Dr. Simha noted.

“The use of EPA in patients with hypertriglyceridemia is novel, compared to the ACC/AHA recommendation. This reflects new data which is now available,” she added.

The American Association of Clinical Endocrinologists also just issued a new algorithm on lipid management and prevention of cardiovascular disease in which treatment of hypertriglyceridemia is emphasized.

In addition, the new Endocrine Society guideline “also mentions an LDL [cholesterol] treatment threshold of 70 mg/dL, and 55 mg/dL in some patient categories, which previous guidelines have not,” Dr. Simha noted.

Overall, Dr. Newman added that the goal of the guideline is to increase awareness of key issues with endocrine diseases that may not necessarily be on clinicians’ radars.

“We hope that it will make a lipid panel and cardiovascular risk evaluation routine in adults with endocrine diseases and cause a greater focus on therapies to reduce heart disease and stroke,” she said.

Dr. Newman, Dr. Simha, and Dr. Rosenzweig reported no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article

‘Landmark’ study pushed detection of covert consciousness in TBI

Article Type
Changed
Mon, 11/30/2020 - 14:27

Compelling advances in the ability to detect signs of consciousness in unconscious patients who have experienced traumatic brain injury (TBI) are leading to unprecedented changes in the field. There is now hope of improving outcomes and even sparing lives of patients who may otherwise have been mistakenly assessed as having no chance of recovery.

Dr. Brian L. Edlow

A recent key study represents a tipping point in the mounting evidence of the ability to detect “covert consciousness” in patients with TBI who are in an unconscious state. That research, published in the New England Journal of Medicine in June 2019, linked the promising signals of consciousness in comatose patients, detected only on imaging, with remarkable outcomes a year later.

“This was a landmark study,” said Brian L. Edlow, MD, in a presentation on the issue of covert consciousness at the virtual annual meeting of the American Neurological Association.

“Importantly, it is the first compelling evidence that early detection of covert consciousness also predicts 1-year outcomes in the Glasgow Outcome Scale Extended (GOSE), showing that covert consciousness in the ICU appears to be relevant for predicting long-term outcomes,” said Dr. Edlow, who is associate director of the Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, in Boston.

The researchers showed that 15% of unconscious patients with acute brain injury in the study exhibited significant brain activity on EEG in response to stimuli that included verbal commands such as envisioning that they are playing tennis.

Although other studies have shown similar effects with task-based stimuli, the New England Journal of Medicine study further showed that a year later, the patients who had shown signs of covert consciousness, also called “cognitive motor dissociation” (CMD), were significantly more likely to have a good functional outcome, said the study’s senior author, Jan Claassen, MD, director of critical care neurology at Columbia University, New York, who also presented at the ANA session.

“Importantly, a year later after injury, we found that 44% of patients with CMD and only 14% of non-CMD patients had a good functional outcome, defined as a GOSE score indicating a state where they can at least take care of themselves for 8 hours in a day,” he said.

“[Whether] these patients in a CMD state represent a parallel state or a transitory state on the road to recovery remains to be shown,” he said.

Jennifer Frontera, MD, a professor in the department of neurology at NYU Langone Health in New York and comoderator of the session, agreed that the research is “remarkable.”

“Also,” she said, “it is practical, since many could potentially apply and validate his algorithms, since EEG technology is portable and widely available.”
 

Research has ushered in a ‘sea change’ in neurocritical care

The research has helped push forward recommendations on the treatment of unconscious patients, Dr. Edlow said. “This has led to a sea change in our field just over the last 2 years, with multiple guidelines published suggesting that it may be time for us to consider incorporating task-based fMRI and EEG techniques into our clinical assessment of patients with disorders of consciousness,” Dr. Edlow said.

Among those updating their recommendations was the American Academy of Neurology, which revised guidelines on practice parameters for patients in a persistent vegetative state. Those guidelines had not been updated since 1995.

Although concluding that “no diagnostic assessment procedure had moderate or strong evidence for use,” the guidelines acknowledge that “it is possible that a positive electromyographic (EMG) response to command, EEG reactivity to sensory stimuli, laser-evoked potentials, and the Perturbational Complexity Index can distinguish a minimally conscious state from vegetative state/unresponsive wakefulness syndrome (VS/UWS).”

Earlier this year, the European Academy of Neurology followed suit with updated guidelines of its own. In the EAN guideline, the academy’s Panel on Coma, Disorders of Consciousness recommends that task-based fMRI, EEG, and other advanced assessments be performed as part of a composite assessment of consciousness and that a patient’s best performance or highest level of consciousness on any of those tests should be a reflection of their diagnosis, Dr. Edlow explained.

“What this means is that our field is moving toward a multimodal assessment of consciousness in the ICU as well as beyond, in the subacute to chronic setting, whereby the behavioral exam, advanced DG, and advanced MRI methods all also contribute to the diagnosis of consciousness,” he said.

The standard for assessment of disorders of consciousness is the Coma Recovery Scale–Revised, with a 25-item scale for diagnosis, prediction of outcome, and assessment of potential treatment efficacy.

But much uncertainty can remain despite the assessment, Dr. Claassen said. “Behavioral assessments of patients with acute brain injury are challenging because examinations fluctuate, and there’s variability between assessors,” he said. “Nevertheless, patients and their families demand guidance from us.”

Dr. Edlow pointed out that the largest study to date of the causes of death among patients with TBI in the ICU underscores the need for better assessments.

The study of more than 600 patients at six level l trauma centers in Canada showed that 70% of patients who died in the ICU from TBI did so as the result of the withdrawal of life-sustaining therapy. However, only about a half (57%) had an unreactive pupil, and only about a quarter (23.7%) had evidence of herniation on CT, findings that are commonly associated with a poor prognosis.

“What emerges from this is that the manner in which the clinicians communicated the prognosis to families was a primary determinant of decisions to withdraw life-sustaining therapy,” Dr. Edlow said.
 

Negative response not necessarily conclusive

Dr. Edlow added a word of caution that the science is still far from perfect. He noted that, for 25% of healthy patients who are given a motor imagery task, neuroimaging might not show a response, implying that the lack of a signal may not be conclusive.

He described the case of a patient who was comatose at the time she was scanned on day 3 after injury and who showed no responses to language, music, or motor imagery during the MRI, yet a year later, she was functionally independent, back in the workforce, and had very few residual symptoms from her trauma.

“So if a patient does not show a response, that does not prove the patient is not conscious, and it does not prove that the patient is likely to have a poor outcome,” Dr. Edlow said. Such cases underscore the need for more advances in understanding the inner workings of brain injury.

Dr. Edlow and his colleagues are embarking on a trial of the effects of intravenous methylphenidate in targeting the stimulation of dopaminergic circuits within the subcortical ascending arousal network in patients with severe brain injuries.

“The scientific premise of the trial is that personalized brain network mapping in the ICU can identify patients whose connectomes are amenable to neuromodulation,” Dr. Edlow and his colleague report in an article in Neurocritical Care.

The trial, called STIMPACT (Stimulant Therapy Targeted to Individualized Connectivity Maps to Promote ReACTivation of Consciousness), is part of the newly launched Connectome-based Clinical Trial Platform, which the authors describe as “a new paradigm for developing and testing targeted therapies that promote early recovery of consciousness in the ICU.”

Such efforts are essential, given the high stakes of TBI outcomes, Dr. Edlow said.

“Let’s be clear about the stakes of an incorrect prognosis,” he said. “If we’re overly pessimistic, then a patient who could have potential for meaningful recovery will likely die in our ICU. On the other hand, if we are overly optimistic, then a patient could end up in a vegetative or minimally conscious state that he or she may never have found to be acceptable,” he said.
 

Access to technologies a ‘civil right?’

Some ethicists in the field are recommending that patients be given access to the advanced techniques as a civil right, similar to the rights described in the Convention on the Rights of Persons With Disabilities, which was adopted by the United Nations in 2008, Dr. Edlow noted.

“So the question that we as clinicians are going to face moving forward from an ethical standpoint is, if we have access to these techniques, is it an ethical obligation to offer them now?” he said.

Dr. Edlow underscored the need to consider the reality that “there are profound issues relating to resource allocation and access to these advanced techniques, but we’re going to have to consider this together as we move forward.”

Dr. Edlow has received funding from the National Institutes of Health. Dr. Claassen is a minority shareholder with ICE Neurosystems. Dr. Frontera has disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.

Meeting/Event
Issue
Neurology Reviews- 28(12)
Publications
Topics
Sections
Meeting/Event
Meeting/Event

Compelling advances in the ability to detect signs of consciousness in unconscious patients who have experienced traumatic brain injury (TBI) are leading to unprecedented changes in the field. There is now hope of improving outcomes and even sparing lives of patients who may otherwise have been mistakenly assessed as having no chance of recovery.

Dr. Brian L. Edlow

A recent key study represents a tipping point in the mounting evidence of the ability to detect “covert consciousness” in patients with TBI who are in an unconscious state. That research, published in the New England Journal of Medicine in June 2019, linked the promising signals of consciousness in comatose patients, detected only on imaging, with remarkable outcomes a year later.

“This was a landmark study,” said Brian L. Edlow, MD, in a presentation on the issue of covert consciousness at the virtual annual meeting of the American Neurological Association.

“Importantly, it is the first compelling evidence that early detection of covert consciousness also predicts 1-year outcomes in the Glasgow Outcome Scale Extended (GOSE), showing that covert consciousness in the ICU appears to be relevant for predicting long-term outcomes,” said Dr. Edlow, who is associate director of the Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, in Boston.

The researchers showed that 15% of unconscious patients with acute brain injury in the study exhibited significant brain activity on EEG in response to stimuli that included verbal commands such as envisioning that they are playing tennis.

Although other studies have shown similar effects with task-based stimuli, the New England Journal of Medicine study further showed that a year later, the patients who had shown signs of covert consciousness, also called “cognitive motor dissociation” (CMD), were significantly more likely to have a good functional outcome, said the study’s senior author, Jan Claassen, MD, director of critical care neurology at Columbia University, New York, who also presented at the ANA session.

“Importantly, a year after injury, we found that 44% of patients with CMD and only 14% of non-CMD patients had a good functional outcome, defined as a GOSE score indicating a state where they can at least take care of themselves for 8 hours in a day,” he said.

“[Whether] these patients in a CMD state represent a parallel state or a transitory state on the road to recovery remains to be shown,” he said.

Jennifer Frontera, MD, a professor in the department of neurology at NYU Langone Health in New York and comoderator of the session, agreed that the research is “remarkable.”

“Also,” she said, “it is practical, since many could potentially apply and validate his algorithms, since EEG technology is portable and widely available.”
 

Research has ushered in a ‘sea change’ in neurocritical care

The research has helped push forward recommendations on the treatment of unconscious patients, Dr. Edlow said. “This has led to a sea change in our field just over the last 2 years, with multiple guidelines published suggesting that it may be time for us to consider incorporating task-based fMRI and EEG techniques into our clinical assessment of patients with disorders of consciousness,” he said.

Among those updating their recommendations was the American Academy of Neurology, which revised guidelines on practice parameters for patients in a persistent vegetative state. Those guidelines had not been updated since 1995.

Although concluding that “no diagnostic assessment procedure had moderate or strong evidence for use,” the guidelines acknowledge that “it is possible that a positive electromyographic (EMG) response to command, EEG reactivity to sensory stimuli, laser-evoked potentials, and the Perturbational Complexity Index can distinguish a minimally conscious state from vegetative state/unresponsive wakefulness syndrome (VS/UWS).”

Earlier this year, the European Academy of Neurology followed suit with updated guidelines of its own. In the EAN guideline, the academy’s Panel on Coma and Disorders of Consciousness recommends that task-based fMRI, EEG, and other advanced assessments be performed as part of a composite assessment of consciousness and that a patient’s best performance or highest level of consciousness on any of those tests should be a reflection of their diagnosis, Dr. Edlow explained.

“What this means is that our field is moving toward a multimodal assessment of consciousness in the ICU as well as beyond, in the subacute to chronic setting, whereby the behavioral exam, advanced EEG, and advanced MRI methods all also contribute to the diagnosis of consciousness,” he said.

The standard for assessment of disorders of consciousness is the Coma Recovery Scale–Revised, with a 25-item scale for diagnosis, prediction of outcome, and assessment of potential treatment efficacy.

But much uncertainty can remain despite the assessment, Dr. Claassen said. “Behavioral assessments of patients with acute brain injury are challenging because examinations fluctuate, and there’s variability between assessors,” he said. “Nevertheless, patients and their families demand guidance from us.”

Dr. Edlow pointed out that the largest study to date of the causes of death among patients with TBI in the ICU underscores the need for better assessments.

The study of more than 600 patients at six level I trauma centers in Canada showed that 70% of patients who died in the ICU from TBI did so as the result of the withdrawal of life-sustaining therapy. However, only about half (57%) had an unreactive pupil, and only about a quarter (23.7%) had evidence of herniation on CT, findings that are commonly associated with a poor prognosis.

“What emerges from this is that the manner in which the clinicians communicated the prognosis to families was a primary determinant of decisions to withdraw life-sustaining therapy,” Dr. Edlow said.
 

Negative response not necessarily conclusive

Dr. Edlow added a word of caution that the science is still far from perfect. He noted that neuroimaging may fail to show a response in about 25% of healthy individuals given a motor imagery task, meaning that the absence of a signal is not necessarily conclusive.

He described the case of a patient who was comatose at the time she was scanned on day 3 after injury and who showed no responses to language, music, or motor imagery during the MRI, yet a year later, she was functionally independent, back in the workforce, and had very few residual symptoms from her trauma.

“So if a patient does not show a response, that does not prove the patient is not conscious, and it does not prove that the patient is likely to have a poor outcome,” Dr. Edlow said. Such cases underscore the need for more advances in understanding the inner workings of brain injury.

Dr. Edlow and his colleagues are embarking on a trial of the effects of intravenous methylphenidate in targeting the stimulation of dopaminergic circuits within the subcortical ascending arousal network in patients with severe brain injuries.

“The scientific premise of the trial is that personalized brain network mapping in the ICU can identify patients whose connectomes are amenable to neuromodulation,” Dr. Edlow and his colleague report in an article in Neurocritical Care.

The trial, called STIMPACT (Stimulant Therapy Targeted to Individualized Connectivity Maps to Promote ReACTivation of Consciousness), is part of the newly launched Connectome-based Clinical Trial Platform, which the authors describe as “a new paradigm for developing and testing targeted therapies that promote early recovery of consciousness in the ICU.”

Such efforts are essential, given the high stakes of TBI outcomes, Dr. Edlow said.

“Let’s be clear about the stakes of an incorrect prognosis,” he said. “If we’re overly pessimistic, then a patient who could have potential for meaningful recovery will likely die in our ICU. On the other hand, if we are overly optimistic, then a patient could end up in a vegetative or minimally conscious state that he or she may never have found to be acceptable,” he said.
 

Access to technologies a ‘civil right?’

Some ethicists in the field are recommending that patients be given access to the advanced techniques as a civil right, similar to the rights described in the Convention on the Rights of Persons With Disabilities, which was adopted by the United Nations in 2006, Dr. Edlow noted.

“So the question that we as clinicians are going to face moving forward from an ethical standpoint is, if we have access to these techniques, is it an ethical obligation to offer them now?” he said.

Dr. Edlow underscored the need to consider the reality that “there are profound issues relating to resource allocation and access to these advanced techniques, but we’re going to have to consider this together as we move forward.”

Dr. Edlow has received funding from the National Institutes of Health. Dr. Claassen is a minority shareholder with ICE Neurosystems. Dr. Frontera has disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.


Article Source

FROM ANA 2020

Publish date: October 30, 2020

Time-restricted eating shows no weight-loss benefit in RCT

Article Type
Changed
Wed, 10/07/2020 - 12:05

 

The popular new weight-loss approach of eating within a restricted window of time during the day, allowing for an extended period of fasting – also known as intermittent fasting – does not result in greater weight loss, compared with nonrestricted meal timing, results from a randomized clinical trial show.

“I was very surprised by all of [the results],” senior author Ethan J. Weiss, MD, said in an interview.

“Part of the reason we did the study was because I had been doing time-restricted eating myself for years and even recommending it to friends and patients as an effective weight-loss tool,” said Dr. Weiss, of the Cardiovascular Research Institute, University of California, San Francisco.

“But no matter how you slice it, prescription of time-restricted eating – at least this version – is not a very effective weight-loss strategy,” Dr. Weiss said.

The study, published online in JAMA Internal Medicine by Dylan A. Lowe, PhD, also of the University of California, San Francisco, involved 116 participants who were randomized to a 12-week regimen of either three structured meals per day or time-restricted eating, with instructions to eat only between 12:00 p.m. and 8:00 p.m. and to completely abstain from eating at other times.

The participants were not given any specific instructions regarding caloric or macronutrient intake “so as to offer a simple, real-world recommendation to free-living individuals,” the authors wrote.

Although some prior research has shown improvements in measures such as glucose tolerance with time-restricted eating, studies showing weight loss with the approach, including one recently reported by Medscape Medical News, have been small and lacked control groups.

“To my knowledge this is the first randomized, controlled trial and definitely the biggest,” Dr. Weiss said. “I think it is the most comprehensive dataset available in people, at least for this intervention.”
 

Participants used app to log details

At baseline, participants had a mean weight of 99.2 kg (approximately 219 lb). Their mean age was 46.5 years and 60.3% were men. They were drawn from anywhere in the United States and received study surveys through a custom mobile study application on the Eureka Research Platform. They were given a Bluetooth weight scale to use daily, which was connected with the app, and randomized to one of the two interventions. A subset of 50 participants living near San Francisco underwent in-person testing.

At the end of the 12 weeks, those in the time-restricted eating group (n = 59) did have a significant decrease in weight, compared with baseline (−0.94 kg; P = .01), while weight loss in the consistent-meal group (n = 57) was not significant (−0.68 kg; P = .07).

But importantly, the difference in weight loss between the groups was not significant (−0.26 kg; P = .63).

There were no significant differences in secondary outcomes of fasting insulin, glucose, hemoglobin A1c, or blood lipids within or between the time-restricted eating and consistent-meal group either. Nor were there any significant differences in resting metabolic rate.

Although participants did not self-report their caloric intake, the authors used mathematical modeling developed at the National Institutes of Health to estimate that differences in intake between the groups were not significant.

Rates of adherence to the diets were 92.1% in the consistent-meal group versus 83.5% in the time-restricted group.
 

 

 

Not all diets are equal: Time-restricted eating group lost more lean mass

In a subset analysis, loss of lean mass was significantly greater in the time-restricted eating group, compared with the consistent-meals group, in terms of both appendicular lean mass (P = .009) and the appendicular lean mass index (P = .005).

In fact, as much as 65% of the weight lost (1.10 kg of the average 1.70 kg) in the time-restricted eating group consisted of lean mass, while much less was fat mass (0.51 kg).

“The proportion of lean mass loss in this study (approximately 65%) far exceeds the normal range of 20%-30%,” the authors wrote. “In addition, there was a highly significant between-group difference in appendicular lean mass.”

Appendicular lean mass correlates with nutritional and physical status, and its reduction can lead to weakness, disability, and impaired quality of life.

“This serves as a caution for patient populations at risk for sarcopenia because time-restricted eating could exacerbate muscle loss,” the authors asserted.

Furthermore, previous studies suggest that loss of lean mass during weight loss is positively linked with weight regain.

While a limitation of the work is that self-reported measures of energy or macronutrient or protein intake were not obtained, the authors speculated that the role of protein intake could be linked to the greater loss of lean mass.

“Given the loss of appendicular lean mass in participants in the time-restricted eating arm and previous reports of decreased protein consumption from time-restricted eating, it is possible that protein intake was altered by time-restricted eating in this cohort, and this clearly warrants future study,” they wrote.

Dr. Weiss said the findings underscore that not all weight loss in dieting is beneficial.

“Losing 1 kg of lean mass (is not equal) to a kilogram of fat,” he said. “Indeed, if one loses 0.65 kg of lean mass and only 0.35 kg of fat mass, that is an intervention I’d probably pass on.”
 

Time-restricted eating is popular, perhaps because it’s easy?

Time-restricted eating has gained popularity in recent years.

The approach “is attractive as a weight-loss option in that it does not require tedious and time-consuming methods such as calorie counting or adherence to complicated diets,” the authors noted. “Indeed, we found that self-reported adherence to the time-restricted eating schedule was high; however, in contrast to our hypothesis, there was no greater weight loss with time-restricted eating compared with the consistent meal timing.”

They explain that the 12 p.m. to 8 p.m. window for eating was chosen because they thought people might find it easier culturally to skip breakfast than dinner, the more social meal.

However, an 8 p.m. cutoff is somewhat late given there is some suggestion that fasting several hours before bedtime is most beneficial, Dr. Weiss noted. So it may be worth examining different time windows.

“I am very intrigued about looking at early time-restricted eating – 6 a.m. to 2 p.m.,” for example, he said. “It is on our list.”

Meanwhile, the study results support previous research showing no effect on weight outcomes in relation to skipping breakfast.

The study received funding from the UCSF cardiology division’s Cardiology Innovations Award Program and the National Institute of Diabetes and Digestive and Kidney Diseases, with additional support from the James Peter Read Foundation. Dr. Weiss has reported nonfinancial support from Mocacare and nonfinancial support from iHealth Labs during the conduct of the study. He also is a cofounder and equity stakeholder of Keyto, and owns stock and was formerly on the board of Virta.

A version of this article originally appeared on Medscape.com.


 

The popular new weight-loss approach of eating within a restricted window of time during the day, allowing for an extended period of fasting – also known as intermittent fasting – does not result in greater weight loss, compared with nonrestricted meal timing, results from a randomized clinical trial show.

“I was very surprised by all of [the results],” senior author Ethan J. Weiss, MD, said in an interview.

“Part of the reason we did the study was because I had been doing time-restricted eating myself for years and even recommending it to friends and patients as an effective weight-loss tool,” said Dr. Weiss, of the Cardiovascular Research Institute, University of California, San Francisco.

“But no matter how you slice it, prescription of time-restricted eating – at least this version – is not a very effective weight-loss strategy,” Dr. Weiss said.

The study, published online in JAMA Internal Medicine by Dylan A. Lowe, PhD, also of the University of California, San Francisco, involved 116 participants who were randomized to a 12-week regimen of either three structured meals per day or time-restricted eating, with instructions to eat only between 12:00 p.m. and 8:00 p.m. and to completely abstain from eating at other times.

The participants were not given any specific instructions regarding caloric or macronutrient intake “so as to offer a simple, real-world recommendation to free-living individuals,” the authors wrote.

Although some prior research has shown improvements in measures such as glucose tolerance with time-restricted eating, studies showing weight loss with the approach, including one recently reported by Medscape Medical News, have been small and lacked control groups.

“To my knowledge this is the first randomized, controlled trial and definitely the biggest,” said Dr. Weiss. “I think it is the most comprehensive dataset available in people, at least for this intervention.”
 

Participants used app to log details

At baseline, participants had a mean weight of 99.2 kg (approximately 219 lb). Their mean age was 46.5 years, and 60.3% were men. Participants were drawn from across the United States and received study surveys through a custom mobile study application on the Eureka Research Platform. They were given a Bluetooth weight scale to use daily, which was connected with the app, and randomized to one of the two interventions. A subset of 50 participants living near San Francisco underwent in-person testing.

At the end of the 12 weeks, those in the time-restricted eating group (n = 59) did have a significant decrease in weight, compared with baseline (−0.94 kg; P = .01), while weight loss in the consistent-meal group (n = 57) was not significant (−0.68 kg; P = .07).

But importantly, the difference in weight loss between the groups was not significant (−0.26 kg; P = .63).

There were no significant differences in the secondary outcomes of fasting insulin, glucose, hemoglobin A1c, or blood lipids within or between the time-restricted eating and consistent-meal groups. Nor were there any significant differences in resting metabolic rate.

Although participants did not self-report their caloric intake, the authors used mathematical modeling developed at the National Institutes of Health to estimate that differences in intake between the groups were not significant.

Rates of adherence to the diets were 92.1% in the consistent-meal group versus 83.5% in the time-restricted group.

Not all diets are equal: Time-restricted eating group lost more lean mass

In a subset analysis, loss of lean mass was significantly greater in the time-restricted eating group, compared with the consistent-meals group, in terms of both appendicular lean mass (P = .009) and the appendicular lean mass index (P = .005).

In fact, as much as 65% of the weight lost (1.10 kg of the average 1.70 kg) in the time-restricted eating group consisted of lean mass, while much less was fat mass (0.51 kg).

“The proportion of lean mass loss in this study (approximately 65%) far exceeds the normal range of 20%-30%,” the authors wrote. “In addition, there was a highly significant between-group difference in appendicular lean mass.”

Appendicular lean mass correlates with nutritional and physical status, and its reduction can lead to weakness, disability, and impaired quality of life.

“This serves as a caution for patient populations at risk for sarcopenia because time-restricted eating could exacerbate muscle loss,” the authors asserted.

Furthermore, previous studies suggest that the loss of lean mass in such studies is positively linked with weight regain.

While a limitation of the work is that self-reported measures of energy or macronutrient or protein intake were not obtained, the authors speculated that the role of protein intake could be linked to the greater loss of lean mass.

“Given the loss of appendicular lean mass in participants in the time-restricted eating arm and previous reports of decreased protein consumption from time-restricted eating, it is possible that protein intake was altered by time-restricted eating in this cohort, and this clearly warrants future study,” they wrote.

Dr. Weiss said the findings underscore that not all weight loss in dieting is beneficial.

“Losing 1 kg of lean mass (is not equal) to a kilogram of fat,” he said. “Indeed, if one loses 0.65 kg of lean mass and only 0.35 kg of fat mass, that is an intervention I’d probably pass on.”
 

Time-restricted eating is popular, perhaps because it’s easy?

Time-restricted eating has gained popularity in recent years.

The approach “is attractive as a weight-loss option in that it does not require tedious and time-consuming methods such as calorie counting or adherence to complicated diets,” the authors noted. “Indeed, we found that self-reported adherence to the time-restricted eating schedule was high; however, in contrast to our hypothesis, there was no greater weight loss with time-restricted eating compared with the consistent meal timing.”

They explain that the 12 p.m. to 8 p.m. window for eating was chosen because they thought people might find it easier culturally to skip breakfast than dinner, the more social meal.

However, an 8 p.m. cutoff is somewhat late given there is some suggestion that fasting several hours before bedtime is most beneficial, Dr. Weiss noted. So it may be worth examining different time windows.

“I am very intrigued about looking at early time-restricted eating – 6 a.m. to 2 p.m.,” for example, he said. “It is on our list.”

Meanwhile, the study results support previous research showing no effect on weight outcomes in relation to skipping breakfast.

The study received funding from the UCSF cardiology division’s Cardiology Innovations Award Program and the National Institute of Diabetes and Digestive and Kidney Diseases, with additional support from the James Peter Read Foundation. Dr. Weiss has reported nonfinancial support from Mocacare and nonfinancial support from iHealth Labs during the conduct of the study. He also is a cofounder and equity stakeholder of Keyto, and owns stock and was formerly on the board of Virta.

A version of this article originally appeared on Medscape.com.


Keep desiccated thyroid as a treatment option for hypothyroidism

Article Type
Changed
Wed, 09/23/2020 - 15:33

 

For patients with hypothyroidism who underwent treatment with desiccated thyroid, there were no significant differences in the time spent in normal ranges of thyroid stimulating hormone (TSH) over 3 years, compared with patients who received the standard therapy of synthetic levothyroxine (T4), new research shows.

The findings are “unanticipated ... given concerns for variability between batches of desiccated thyroid cited by national guidelines,” wrote the authors of the study, which was published this month in the Annals of Family Medicine.

In the trial, patients who had been treated for hypothyroidism at Kaiser Permanente Colorado were matched retrospectively into groups of 450 patients each according to whether they were treated with desiccated thyroid or synthetic levothyroxine.

After a follow-up of 3 years, TSH values within normal ranges (0.320-5.500 μIU/mL) were seen at approximately the same rate among those treated with desiccated thyroid and those who received levothyroxine (79.1% vs. 79.3%; P = .905).

“This study showed that after 3 years TSH values in both groups remained within reference ranges approximately 80% of the time,” said Rolake Kuye, PharmD, and colleagues with Kaiser Permanente, in Denver, Colorado.

In an accompanying editorial, Jill Schneiderhan, MD, and Suzanna Zick, ND, MPH, of the University of Michigan, Ann Arbor, say the overall results indicate that the continued use of desiccated thyroid is warranted in some cases.

“Keeping desiccated thyroid medications as an option in our tool kit will allow for improved shared decision-making, while allowing for patient preference, and offer an option for those patients who remain symptomatic on levothyroxine monotherapy,” they advised.
 

Some variability still seen with desiccated thyroid

Desiccated thyroid (dehydrated porcine thyroid) was long the standard of care for hypothyroidism and is still commonly used, despite having been replaced beginning in the 1970s by synthetic levothyroxine in light of evidence that desiccated thyroid was associated with more variability in thyroid hormone levels.

Desiccated thyroid is still sold legally by prescription in the United States under the names Nature-Throid, Thyroid USP, and Armour Thyroid and is currently used by up to 30% of patients with hypothyroidism, according to recent estimates.

Consistent with concerns about variability in thyroid hormone levels, the new study did show greater variability in TSH levels with desiccated thyroid when assessed on a visit-to-visit basis.

Dr. Kuye and coauthors therefore recommended that, “[f]or providers targeting a tighter TSH goal in certain patients, the decreased TSH variability with levothyroxine could be clinically meaningful.”
 

This long-term investigation is “much needed”

This new study adds important new insight to the ongoing debate over hypothyroidism treatment, said Dr. Schneiderhan and Dr. Zick in their editorial.

“[The study authors] begin a much-needed investigation into whether patients prescribed synthetic levothyroxine compared with desiccated thyroid had differences in TSH stability over the course of 3 years.

“Further prospective studies are needed to confirm these results and to explore differences in more diverse patient populations, such as Hashimoto’s thyroiditis, as well as on quality of life and other important patient-reported outcomes such as fatigue and weight gain,” the editorialists added.

“This study does, however, provide helpful information that desiccated thyroid products are a reasonable choice for treating some hypothyroid patients.”
 
For 60% of patients in both groups, TSH levels were within reference range for whole study

In the study, Dr. Kuye and colleagues matched patients (average age, 63 years; 90% women) in terms of characteristics such as race, comorbidities, and cholesterol levels.

Patients were excluded if they had been prescribed more than one agent for the treatment of hypothyroidism or if they had comorbid conditions, including a history of thyroid cancer or other related comorbidities, as well as pregnancy.

With respect to visit-to-visit TSH level variability, the lower rate among patients prescribed levothyroxine in comparison with patients prescribed desiccated thyroid was statistically significant (1.25 vs. 1.44; P = .015). However, for 60% of patients in both groups, all TSH values measured during the study period were within reference ranges (P = .951).

The median number of TSH laboratory studies obtained during the study was four in the synthetic levothyroxine group and three for patients prescribed desiccated thyroid (P = .578).

There were some notable differences between the groups. Patients in the desiccated thyroid group had a lower body mass index (P = .032), lower hemoglobin A1c levels (P = .041), and lower baseline TSH values (2.4 vs. 3.4 μIU/mL; P = .001), compared with those prescribed levothyroxine.

Limitations include the fact that the authors could not account for potentially important variables such as rates of adherence, differences in prescriber practice between agents, or the concurrent use of other medications.
 

Subjective outcomes not assessed: “One-size-fits-all approach doesn’t work”

The authors note they were not able to assess subjective outcomes, which, as noted by the editorialists, are particularly important in hypothyroidism.

“Emerging evidence shows that for many patients, symptoms persist despite normal TSH values,” Dr. Schneiderhan and Dr. Zick write.

They cite as an example a large study that found significant impairment in psychological well-being among patients treated with thyroxine replacement, despite their achieving normal TSH levels.

In addition, synthetic levothyroxine is associated with other uncertainties, such as complexities in the conversion of T4 to triiodothyronine (T3) that may disrupt thyroid metabolism in some patients.

In addition, there are differences in the amounts of thyroid replacement needed by certain groups, such as patients who have undergone thyroidectomies.

“The one-size-fits-all approach for treating hypothyroidism does not work ... for all patients,” they concluded.

The study authors and editorialists have disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.


Atypical fractures with bisphosphonates highest in Asians, study confirms

Article Type
Changed
Thu, 08/20/2020 - 12:57

The latest findings regarding the risk for atypical femur fracture (AFF) with use of bisphosphonates for osteoporosis show a significant increase in risk when treatment extends beyond 5 years. The risk is notably higher among Asian women, compared with White women. However, the benefits in fracture reduction still appear to far outweigh the risk for AFF.

The research, published in the New England Journal of Medicine, importantly adds to findings from smaller studies by showing effects in a population of nearly 200,000 women in a diverse cohort, said Angela M. Cheung, MD, PhD.

“This study answers some important questions – Kaiser Permanente Southern California is a large health maintenance organization with a diverse racial population,” said Dr. Cheung, director of the Center of Excellence in Skeletal Health Assessment and osteoporosis program at the University of Toronto.

“This is the first study that included a diverse population to definitively show that Asians are at a much higher risk of atypical femur fractures than Caucasians,” she emphasized.

Although AFFs are rare, concerns about them remain pressing in the treatment of osteoporosis, Dr. Cheung noted. “This is a big concern for clinicians – they want to do no harm.”
 

Risk for AFF increases with longer duration of bisphosphonate use

For the study, Dennis M. Black, PhD, of the departments of epidemiology and biostatistics and orthopedic surgery at the University of California, San Francisco, and colleagues identified women aged 50 years or older enrolled in the Kaiser Permanente Southern California system who were treated with bisphosphonates and were followed from January 2007 to November 2017.

Among the 196,129 women identified in the study, 277 AFFs occurred.

After multivariate adjustment, compared with those treated for less than 3 months, for women who were treated for 3-5 years, the hazard ratio for experiencing an AFF was 8.86. For therapy of 5-8 years, the HR increased to 19.88, and for those treated with bisphosphonates for 8 years or longer, the HR was 43.51.

The risk for AFF declined quickly upon bisphosphonate discontinuation; compared with current users, the HR dropped to 0.52 within 3-15 months after the last bisphosphonate use. It declined to 0.26 at more than 4 years after discontinuation.

The risk for AFF with bisphosphonate use was higher for Asian women than for White women (HR, 4.84); this did not apply to any other ethnic groups (HR, 0.99).
Other risk factors for AFF included shorter height (HR, 1.28 per 5-cm decrement), greater weight (HR, 1.15 per 5-kg increment), and glucocorticoid use (HR, 2.28 for glucocorticoid use of 1 or more years).

Among White women, the number of fractures prevented with bisphosphonate use far outweighed the risk for bisphosphonate-associated AFFs.

For example, among White women, during a 3-year treatment period, there were two bisphosphonate-associated AFFs, whereas 149 hip fractures and 541 clinical fractures were prevented, the authors wrote.

After 5 years, there were eight AFFs, but 286 hip fractures and 859 clinical fractures were prevented.

Although the risk-benefit ratio among Asian women still favored prevention of fractures, the difference was less pronounced – eight bisphosphonate-associated AFFs had occurred at 3 years, whereas 91 hip fractures and 330 clinical fractures were prevented.

The authors noted that previous studies have also shown Asian women to be at a disproportionately higher risk for AFF.

An earlier Kaiser Permanente Southern California case series showed that 49% of 142 AFFs occurred in Asian patients, despite the fact that those patients made up only 10% of the study population.
Various factors could cause higher risk in Asian women

The reasons for the increased risk among Asian women are likely multifactorial and could include greater medication adherence among Asian women, genetic differences in drug metabolism and bone turnover, and, notably, increased lateral stress caused by bowed Asian femora, the authors speculated.

Further questions include whether the risk is limited to Asians living outside of Asia and whether cultural differences in diet or physical activity are risk factors, they added.

“At this early stage, further research into the cause of the increased risk among women of Asian ancestry is warranted,” they wrote.

Although the risk for AFF may be higher among Asian women, the incidence of hip and other osteoporotic fractures is lower among Asians as well as other non-White persons, compared with White persons, they added.

The findings have important implications in how clinicians should discuss treatment options with different patient groups, Dr. Cheung said.

“I think this is one of the key findings of the study,” she added. “In this day and age of personalized medicine, we need to keep the individual patient in mind, and that includes their racial/ethnic background, genetic characteristics, sex, medical conditions and medications, etc. So it is important for physicians to pay attention to this. The risk-benefit ratio of these drugs for Asians will be quite different, compared to Caucasians.”
 

No link between traditional fracture risk factors and AFF, study shows

Interestingly, although older age, previous fractures, and lower bone mineral density are key risk factors for hip and other osteoporotic fractures in the general population, they do not significantly increase the risk for AFF with bisphosphonate use, the study also showed.

“In fact, the oldest women in our cohort, who are at highest risk for hip and other fractures, were at lowest risk for AFF,” the authors wrote.

The collective findings “add to the risk-benefit balance of bisphosphonate treatment in these populations and could directly affect decisions regarding treatment initiation and duration.”

Notable limitations of the study include the fact that most women were treated with one particular bisphosphonate, alendronate, and that other bisphosphonates were underrepresented, Dr. Cheung said.

“This study examined bisphosphonate therapy, but the vast majority of the women were exposed to alendronate, so whether women on risedronate or other bisphosphonates have similar risks is unclear,” she observed.

“In addition, because they can only capture bisphosphonate use using their database, any bisphosphonate exposure prior to joining Kaiser Permanente will not be captured. So the study may underestimate the total cumulative duration of bisphosphonate use,” she added.

The study received support from Kaiser Permanente and discretionary funds from the University of California, San Francisco. The study began with a pilot grant from Merck Sharp & Dohme, which had no role in the conduct of the study. Dr. Cheung has served as a consultant for Amgen. She chaired and led the 2019 International Society for Clinical Densitometry Position Development Conference on Detection of Atypical Femur Fractures and currently is on the Osteoporosis Canada Guidelines Committee.

A version of this article originally appeared on Medscape.com.


Evidence mounts for COVID-19 effects on thyroid gland

Article Type
Changed
Thu, 08/26/2021 - 16:01

Rates of thyrotoxicosis are significantly higher among patients who are critically ill with COVID-19 than among patients who are critically ill but do not have COVID-19, suggesting an atypical form of thyroiditis related to the novel coronavirus infection, according to new research.

“We suggest routine assessment of thyroid function in patients with COVID-19 requiring high-intensity care because they frequently present with thyrotoxicosis due to a form of subacute thyroiditis related to SARS-CoV-2,” the authors wrote in correspondence published online in The Lancet Diabetes & Endocrinology.

However, notably, the study – which compared critically ill ICU patients who had COVID-19 with those who did not have COVID-19 or who had milder cases of COVID-19 – indicates that thyroid disorders do not appear to increase the risk of developing COVID-19, first author Ilaria Muller, MD, PhD, of the department of endocrinology, IRCCS Fondazione Ca’ Granda Ospedale Maggiore Policlinico, Milan, said in an interview.

“It is important to highlight that we did not find an increased prevalence of preexisting thyroid disorders in COVID-19 patients (contrary to early media reports),” she said. “So far, clinical observations do not support this fear, and we need to reassure people with thyroid disorders, since such disorders are very common among the general population.”

Yet the findings add to emerging evidence of a COVID-19/thyroid relationship, Angela M. Leung, MD, said in an interview.

“Given the health care impacts of the current COVID-19 pandemic worldwide, this study provides some insight on the potential systemic inflammation, as well as thyroid-specific inflammation, of the SARS-CoV-2 virus that is described in some emerging reports,” she said.

“This study joins at least six others that have reported a clinical presentation resembling subacute thyroiditis in critically ill patients with COVID-19,” noted Dr. Leung, of the division of endocrinology, diabetes, and metabolism in the department of medicine at the University of California, Los Angeles.
 

Thyroid function analysis in those with severe COVID-19

Dr. Muller explained that preliminary data from her institution showed thyroid abnormalities in patients who were severely ill with COVID-19. She and her team extended the evaluation to include thyroid data and other data on 93 patients with COVID-19 who were admitted to high-intensity care units (HICUs) in Italy during the 2020 pandemic.

Those data were compared with data on 101 critically ill patients admitted to the same HICUs in 2019 who did not have COVID-19. A third group of 52 patients with COVID-19 who were admitted to low-intensity care units (LICUs) in Italy in 2020 was also included in the analysis.

The mean age of the patients in the HICU 2020 group was 65.3 years; in the HICU 2019 group, it was 73 years; and in the LICU group, it was 70 years (P = .001). In addition, the HICU 2020 group included more men than the other two groups (69% vs. 56% and 48%; P = .03).

Of note, only 9% of patients in the HICU 2020 group had preexisting thyroid disorders, compared with 21% in the LICU group and 23% in the HICU 2019 group (P = .017).

These findings suggest that “such conditions are not a risk factor for SARS-CoV-2 infection or severity of COVID-19,” the authors wrote.

The patients with the preexisting thyroid conditions were excluded from the thyroid function analysis.

A significantly higher proportion of patients in the HICU 2020 group (13; 15%) were thyrotoxic upon admission, compared with just 1 (1%) of 78 patients in the HICU 2019 group (P = .002) and 1 (2%) of 41 patients in the LICU group (P = .025).
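As a quick sanity check on a comparison like this, a one-sided Fisher exact test can be run on the reported counts using only the Python standard library. Note that the HICU 2020 denominator of 87 is inferred here from "13; 15%" (after exclusion of preexisting thyroid disease) and is therefore an assumption, not a reported figure; the HICU 2019 denominator of 78 is as reported:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability, under the hypergeometric null, that the top-left cell
    is at least as large as the observed a."""
    row1, col1, total = a + b, a + c, a + b + c + d
    return sum(
        comb(col1, x) * comb(total - col1, row1 - x)
        for x in range(a, min(col1, row1) + 1)
    ) / comb(total, row1)

# 13 of ~87 thyrotoxic patients in HICU 2020 vs. 1 of 78 in HICU 2019
# (87 is inferred from "13; 15%" and is an assumption, not a reported figure)
p = fisher_one_sided(13, 87 - 13, 1, 78 - 1)
```

The resulting one-sided p-value is well under .01, consistent in direction with the significant difference the authors report (their P = .002 may come from a different, likely two-sided, test).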

Among the 14 patients in the two COVID-19 groups who had thyrotoxicosis, the majority (nine; 64%) were male.

Among those in the HICU 2020 group, serum thyroid-stimulating hormone concentrations were lower than in either of the other two groups (P = .018), and serum free thyroxine (free T4) concentrations were higher than in the LICU group (P = .016) but not the HICU 2019 group.

Differences compared with other infection-related thyroiditis

Although thyrotoxicosis relating to subacute viral thyroiditis can result from a wide variety of viral infections, there are some key differences with COVID-19, Dr. Muller said.

“Thyroid dysfunction related to SARS-CoV-2 seems to be milder than that of classic subacute thyroiditis due to other viruses,” she explained. Furthermore, thyroid dysfunction associated with other viral infections is more common in women, whereas there were more male patients with the COVID-19–related atypical thyroiditis.

In addition, the thyroid effects developed early in the course of COVID-19, whereas with other viruses they usually emerge after the infection.

Patients did not demonstrate the neck pain that is common with classic viral thyroiditis, and the thyroid abnormalities appear to correlate with the severity of COVID-19, whereas they are seen even in patients with mild symptoms when other viral infections are the cause.

In addition to the risk for subacute viral thyroiditis, critically ill patients in general are at risk of developing nonthyroidal illness syndrome, with alterations in thyroid function. However, thyroid hormone measures in the patients severely ill with COVID-19 were not consistent with that syndrome.

A subanalysis of eight HICU 2020 patients with thyroid dysfunction who were followed for 55 days after discharge showed that two experienced hyperthyroidism but likely not from COVID-19; in the remaining six, thyroid function normalized.

Dr. Muller speculated that, when ill with COVID-19, the patients likely had a combination of SARS-CoV-2–related atypical thyroiditis and nonthyroidal illness syndrome, known as T4 toxicosis.
 

Will there be any long-term effects?

Importantly, it remains unknown whether the novel coronavirus has longer-term effects on the thyroid, Dr. Muller said.

“We cannot predict what will be the long-lasting thyroid effects after COVID-19,” she said.

With classic subacute viral thyroiditis, “After a few years ... 5%-20% of patients develop permanent hypothyroidism, [and] the same might happen in COVID-19 patients,” she hypothesized. “We will follow our patients long term to answer this question – this study is already ongoing.”

In the meantime, diagnosis of thyroid dysfunction in patients with COVID-19 is important, inasmuch as it could worsen patients’ already critical condition, Dr. Muller stressed.

“The gold-standard treatment for thyroiditis is steroids, so the presence of thyroid dysfunction might represent an additional indication to such treatment in COVID-19 patients, to be verified in properly designed clinical trials,” she advised.
 

ACE2 cell receptors highly expressed in thyroid

Dr. Muller and colleagues also noted recent research showing that ACE2 – demonstrated to be a key host-cell entry receptor for both SARS-CoV and SARS-CoV-2 – is expressed at even higher levels in the thyroid than in the lungs, where the virus causes COVID-19’s notorious pulmonary effects.

Dr. Muller said the implications of ACE2 expression in the thyroid remain to be elucidated.

“If ACE2 is confirmed to be expressed at higher levels, compared with the lungs in the thyroid gland and other tissues, i.e., small intestine, testis, kidney, heart, etc, dedicated studies will be needed to correlate ACE2 expression with the organs’ susceptibility to SARS-CoV-2 reflected by clinical presentation,” she said.

Dr. Leung added that, as a take-home message from these and the other thyroid/COVID-19 studies, “data are starting to show us that COVID-19 infection may cause thyrotoxicosis that is possibly related to thyroid and systemic inflammation. However, the serum thyroid function test abnormalities seen in COVID-19 patients with subacute thyroiditis are also likely exacerbated to a substantial extent by nonthyroidal illness physiology.”

The authors have disclosed no relevant financial relationships. Dr. Leung is on the advisory board of Medscape Diabetes and Endocrinology.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

Rates of thyrotoxicosis are significantly higher among patients who are critically ill with COVID-19 than among patients who are critically ill but who do not not have COVID-19, suggesting an atypical form of thyroiditis related to the novel coronavirus infection, according to new research.

“We suggest routine assessment of thyroid function in patients with COVID-19 requiring high-intensity care because they frequently present with thyrotoxicosis due to a form of subacute thyroiditis related to SARS-CoV-2,” the authors wrote in correspondence published online in The Lancet Diabetes and Endocrinology.

However, notably, the study – which compared critically ill ICU patients who had COVID-19 with those who did not have COVID-19 or who had milder cases of COVID-19 – indicates that thyroid disorders do not appear to increase the risk of developing COVID-19, first author Ilaria Muller, MD, PhD, of the department of endocrinology, IRCCS Fondazione Ca’ Granda Ospedale Maggiore Policlinico, Milan, said in an interview.

“It is important to highlight that we did not find an increased prevalence of preexisting thyroid disorders in COVID-19 patients (contrary to early media reports),” she said. “So far, clinical observations do not support this fear, and we need to reassure people with thyroid disorders, since such disorders are very common among the general population.”

Yet the findings add to emerging evidence of a COVID-19/thyroid relationship, Angela M. Leung, MD, said in an interview.

“Given the health care impacts of the current COVID-19 pandemic worldwide, this study provides some insight on the potential systemic inflammation, as well as thyroid-specific inflammation, of the SARS-Cov-2 virus that is described in some emerging reports,” she said.

“This study joins at least six others that have reported a clinical presentation resembling subacute thyroiditis in critically ill patients with COVID-19,” noted Dr. Leung, of the division of endocrinology, diabetes, and metabolism in the department of medicine at the University of California, Los Angeles.
 

Thyroid function analysis in those with severe COVID-19

Dr. Muller explained that preliminary data from her institution showed thyroid abnormalities in patients who were severely ill with COVID-19. She and her team extended the evaluation to include thyroid data and other data on 93 patients with COVID-19 who were admitted to high-intensity care units (HICUs) in Italy during the 2020 pandemic.

Those data were compared with data on 101 critically ill patients admitted to the same HICUs in 2019 who did not have COVID-19. A third group of 52 patients with COVID-19 who were admitted to low-intensity care units (LICUs) in Italy in 2020 were also included in the analysis.

The mean age of the patients in the HICU 2020 group was 65.3 years; in the HICU 2019 group, it was 73 years; and in the LICU group, it was 70 years (P = .001). In addition, the HICU 2020 group included more men than the other two groups (69% vs. 56% and 48%; P = .03).

Of note, only 9% of patients in the HICU 2020 group had preexisting thyroid disorders, compared with 21% in the LICU group and 23% in the HICU 2019 group (P = .017).

These findings suggest that “such conditions are not a risk factor for SARS-CoV-2 infection or severity of COVID-19,” the authors wrote.

The patients with the preexisting thyroid conditions were excluded from the thyroid function analysis.

A significantly higher proportion of patients in the HICU 2020 group (13; 15%) were thyrotoxic upon admission, compared with just 1 (1%) of 78 patients in the HICU 2019 group (P = .002) and one (2%) of 41 patients in the LICU group (P = .025).

Among the 14 patients in the two COVID-19 groups who had thyrotoxicosis, the majority were male (9; 64%)

Among those in the HICU 2020 group, serum thyroid-stimulating hormone concentrations were lower than in either of the other two groups (P = .018), and serum free thyroxine (free T4) concentrations were higher than in the LICU group (P = .016) but not the HICU 2019 group.
 

 

 

Differences compared with other infection-related thyroiditis

Although thyrotoxicosis relating to subacute viral thyroiditis can result from a wide variety of viral infections, there are some key differences with COVID-19, Dr. Muller said.

“Thyroid dysfunction related to SARS-CoV-2 seems to be milder than that of classic subacute thyroiditis due to other viruses,” she explained. Furthermore, thyroid dysfunction associated with other viral infections is more common in women, whereas there were more male patients with the COVID-19–related atypical thyroiditis.

In addition, the thyroid effects developed early with COVID-19, whereas they usually emerge after the infections by other viruses.

Patients did not demonstrate the neck pain that is common with classic viral thyroiditis, and the thyroid abnormalities appear to correlate with the severity of COVID-19, whereas they are seen even in patients with mild symptoms when other viral infections are the cause.

In addition to the risk for subacute viral thyroiditis, critically ill patients in general are at risk of developing nonthyroidal illness syndrome, with alterations in thyroid function. However, thyroid hormone measures in the patients severely ill with COVID-19 were not consistent with that syndrome.


Rates of thyrotoxicosis are significantly higher among patients who are critically ill with COVID-19 than among patients who are critically ill but do not have COVID-19, suggesting an atypical form of thyroiditis related to the novel coronavirus infection, according to new research.

“We suggest routine assessment of thyroid function in patients with COVID-19 requiring high-intensity care because they frequently present with thyrotoxicosis due to a form of subacute thyroiditis related to SARS-CoV-2,” the authors wrote in correspondence published online in The Lancet Diabetes and Endocrinology.

However, notably, the study – which compared critically ill ICU patients who had COVID-19 with those who did not have COVID-19 or who had milder cases of COVID-19 – indicates that thyroid disorders do not appear to increase the risk of developing COVID-19, first author Ilaria Muller, MD, PhD, of the department of endocrinology, IRCCS Fondazione Ca’ Granda Ospedale Maggiore Policlinico, Milan, said in an interview.

“It is important to highlight that we did not find an increased prevalence of preexisting thyroid disorders in COVID-19 patients (contrary to early media reports),” she said. “So far, clinical observations do not support this fear, and we need to reassure people with thyroid disorders, since such disorders are very common among the general population.”

Yet the findings add to emerging evidence of a COVID-19/thyroid relationship, Angela M. Leung, MD, said in an interview.

“Given the health care impacts of the current COVID-19 pandemic worldwide, this study provides some insight on the potential systemic inflammation, as well as thyroid-specific inflammation, of the SARS-Cov-2 virus that is described in some emerging reports,” she said.

“This study joins at least six others that have reported a clinical presentation resembling subacute thyroiditis in critically ill patients with COVID-19,” noted Dr. Leung, of the division of endocrinology, diabetes, and metabolism in the department of medicine at the University of California, Los Angeles.
 

Thyroid function analysis in those with severe COVID-19

Dr. Muller explained that preliminary data from her institution showed thyroid abnormalities in patients who were severely ill with COVID-19. She and her team extended the evaluation to include thyroid data and other data on 93 patients with COVID-19 who were admitted to high-intensity care units (HICUs) in Italy during the 2020 pandemic.

Those data were compared with data on 101 critically ill patients admitted to the same HICUs in 2019 who did not have COVID-19. A third group of 52 patients with COVID-19 who were admitted to low-intensity care units (LICUs) in Italy in 2020 were also included in the analysis.

The mean age of the patients in the HICU 2020 group was 65.3 years; in the HICU 2019 group, it was 73 years; and in the LICU group, it was 70 years (P = .001). In addition, the HICU 2020 group included more men than the other two groups (69% vs. 56% and 48%; P = .03).

Of note, only 9% of patients in the HICU 2020 group had preexisting thyroid disorders, compared with 21% in the LICU group and 23% in the HICU 2019 group (P = .017).

These findings suggest that “such conditions are not a risk factor for SARS-CoV-2 infection or severity of COVID-19,” the authors wrote.

The patients with the preexisting thyroid conditions were excluded from the thyroid function analysis.

A significantly higher proportion of patients in the HICU 2020 group (13; 15%) were thyrotoxic upon admission, compared with just 1 (1%) of 78 patients in the HICU 2019 group (P = .002) and one (2%) of 41 patients in the LICU group (P = .025).
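As a rough sanity check on the reported P value, the two thyrotoxicosis rates can be compared with a two-proportion z-test. This is a normal approximation, not necessarily the exact test the authors used, and the HICU 2020 denominator of 85 evaluable patients is an assumption (93 admitted minus those with preexisting thyroid disease), since only the percentage is reported:

```python
from math import erfc, sqrt

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided P value for a difference in two proportions (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))  # two-sided tail probability of the standard normal

# HICU 2020: 13 thyrotoxic of ~85 evaluable (85 is an assumption);
# HICU 2019: 1 of 78. The result lands near the reported P = .002.
print(round(two_proportion_p(13, 85, 1, 78), 4))
```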

Among the 14 patients in the two COVID-19 groups who had thyrotoxicosis, the majority were male (9; 64%).

Among those in the HICU 2020 group, serum thyroid-stimulating hormone concentrations were lower than in either of the other two groups (P = .018), and serum free thyroxine (free T4) concentrations were higher than in the LICU group (P = .016) but not the HICU 2019 group.

Differences compared with other infection-related thyroiditis

Although thyrotoxicosis relating to subacute viral thyroiditis can result from a wide variety of viral infections, there are some key differences with COVID-19, Dr. Muller said.

“Thyroid dysfunction related to SARS-CoV-2 seems to be milder than that of classic subacute thyroiditis due to other viruses,” she explained. Furthermore, thyroid dysfunction associated with other viral infections is more common in women, whereas there were more male patients with the COVID-19–related atypical thyroiditis.

In addition, the thyroid effects developed early in the course of COVID-19, whereas with other viruses they usually emerge after the infection has resolved.

Patients did not demonstrate the neck pain that is common with classic viral thyroiditis, and the thyroid abnormalities appear to correlate with the severity of COVID-19, whereas they are seen even in patients with mild symptoms when other viral infections are the cause.

In addition to the risk for subacute viral thyroiditis, critically ill patients in general are at risk of developing nonthyroidal illness syndrome, with alterations in thyroid function. However, thyroid hormone measures in the patients severely ill with COVID-19 were not consistent with that syndrome.

A subanalysis of eight HICU 2020 patients with thyroid dysfunction who were followed for 55 days after discharge showed that two experienced hyperthyroidism but likely not from COVID-19; in the remaining six, thyroid function normalized.

Dr. Muller speculated that, when ill with COVID-19, the patients likely had a combination of SARS-CoV-2–related atypical thyroiditis and nonthyroidal illness syndrome, known as T4 toxicosis.
 

Will there be any long-term effects?

Importantly, it remains unknown whether the novel coronavirus has longer-term effects on the thyroid, Dr. Muller said.

“We cannot predict what will be the long-lasting thyroid effects after COVID-19,” she said.

With classic subacute viral thyroiditis, “After a few years ... 5%-20% of patients develop permanent hypothyroidism, [and] the same might happen in COVID-19 patients,” she hypothesized. “We will follow our patients long term to answer this question – this study is already ongoing.”

In the meantime, diagnosis of thyroid dysfunction in patients with COVID-19 is important, inasmuch as it could worsen the already critical conditions of patients, Dr. Muller stressed.

“The gold-standard treatment for thyroiditis is steroids, so the presence of thyroid dysfunction might represent an additional indication to such treatment in COVID-19 patients, to be verified in properly designed clinical trials,” she advised.
 

ACE2 cell receptors highly expressed in thyroid

Dr. Muller and colleagues also noted recent research showing that ACE2 – demonstrated to be a key host-cell entry receptor for both SARS-CoV and SARS-CoV-2 – is expressed at even higher levels in the thyroid than in the lungs, where the virus causes COVID-19’s notorious pulmonary effects.

Dr. Muller said the implications of ACE2 expression in the thyroid remain to be elucidated.

“If ACE2 is confirmed to be expressed at higher levels, compared with the lungs, in the thyroid gland and other tissues (e.g., small intestine, testis, kidney, heart), dedicated studies will be needed to correlate ACE2 expression with the organs’ susceptibility to SARS-CoV-2 as reflected by clinical presentation,” she said.

Dr. Leung added that, as a take-home message from these and the other thyroid/COVID-19 studies, “data are starting to show us that COVID-19 infection may cause thyrotoxicosis that is possibly related to thyroid and systemic inflammation. However, the serum thyroid function test abnormalities seen in COVID-19 patients with subacute thyroiditis are also likely exacerbated to a substantial extent by nonthyroidal illness physiology.”

The authors have disclosed no relevant financial relationships. Dr. Leung is on the advisory board of Medscape Diabetes and Endocrinology.

A version of this article originally appeared on Medscape.com.


Urine screen as part of triple test improves ID of adrenal cancer

Article Type
Changed
Wed, 08/05/2020 - 08:29

A strategy that includes a urine steroid test along with imaging characteristics and tumor size criteria can significantly improve the challenging diagnosis of adrenocortical cancer, helping to avoid unnecessary, and often unsuccessful, further imaging and even surgery, new research shows.

“A triple-test strategy of tumor diameter, imaging characteristics, and urine steroid metabolomics improves detection of adrenocortical carcinoma, which could shorten time to surgery for patients with ... carcinoma and help to avoid unnecessary surgery in patients with benign tumors,” the authors say in research published online July 23 in The Lancet Diabetes & Endocrinology.

The triple-test strategy can be expected to make its way into international guidelines, notes joint lead author Irina Bancos, MD, an associate professor of endocrinology at the Mayo Clinic, Rochester, Minn., in a press statement issued by the University of Birmingham (England), which also had a number of researchers involved in the study.

“The findings of this study will feed into the next international guidelines on the management of adrenal tumors and the implementation of the new test will hopefully improve the overall outlook for patients diagnosed with adrenal tumors,” Dr. Bancos emphasized.

More imaging has led to detection of more adrenal tumors

Advances in CT and MRI have increased the ability to detect adrenal incidentalomas, which are now picked up on about 5% of scans, and the widespread use of imaging has compounded the prevalence of such findings, particularly in older people.

Adrenocortical carcinomas represent only about 2%-12% of adrenal incidentalomas, but the prognosis is very poor, and early detection and surgery can improve outcomes, so findings of any adrenal tumor typically trigger additional multimodal imaging to rule out malignancy.



Evidence is lacking on the accuracy of imaging in determining whether such masses are truly cancerous or benign, and the additional procedures add costs and expose patients to radiation that may ultimately have no benefit. However, a previous proof-of-concept study from the same authors did show that the presence of excess adrenal steroid hormones in the urine is a key indicator of adrenal tumors, and other research has supported the findings.

All three tests together give best predictive value: EURINE-ACT

To further validate this work, the authors conducted the EURINE-ACT trial, a prospective 14-center study that is the first of its kind to evaluate the efficacy of a screening strategy for adrenocortical carcinoma that combines urine steroid profiling with tumor size and imaging characteristics.

The study of 2,017 participants with newly diagnosed adrenal masses, recruited from January 2011 to July 2016 from specialist centers in 11 different countries, assessed the diagnostic accuracy of three components: maximum tumor diameter (≥4 cm vs. <4 cm), imaging characteristics (positive vs. negative), and urine steroid metabolomics (low, medium, or high risk of adrenocortical carcinoma), separately and in combination.

Of the patients, 98 (4.9%) had adrenocortical carcinoma confirmed clinically, histopathologically, or biochemically.

Tumors with diameters of 4 cm or larger were identified in 488 patients (24.2%) and were observed in the vast majority of patients with adrenocortical carcinoma (96 of 98), for a positive predictive value (PPV) of 19.7%.

Likewise, the PPV for imaging characteristics was 19.7%. However, increasing the unenhanced CT tumor attenuation threshold to 20 Hounsfield units (HU) from the recommended 10 HU increased specificity for adrenocortical carcinoma (80.0% vs. 64.0%) while maintaining sensitivity (99.0% vs. 100.0%).

Comparatively, a urine steroid metabolomics result suggesting a high risk of adrenocortical carcinoma had a PPV of 34.6%.

A total of 106 patients (5.3%) met the criteria for all three measures, and the PPV for all three was 76.4%.

Using the criteria, 70 patients (3.5%) were classified as being at moderate risk of adrenocortical carcinoma and 1,841 (91.3%) at low risk, for a negative predictive value (NPV) of 99.7%.
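The reported predictive values follow directly from the counts given in the study; a quick arithmetic check (the back-calculated count of confirmed carcinomas in the triple-positive group is an inference from the reported PPV, not a figure stated in the article):

```python
def ppv(true_pos, test_pos):
    """Positive predictive value, as a percentage: confirmed cases among test positives."""
    return 100 * true_pos / test_pos

# Tumor diameter >=4 cm: 96 of the 98 carcinomas fell among 488 test positives
print(round(ppv(96, 488), 1))  # 19.7, matching the reported PPV

# Triple test: 106 patients met all three criteria with a reported PPV of 76.4%,
# implying roughly 0.764 * 106 = 81 confirmed carcinomas in that group (an inference)
print(round(0.764 * 106))
```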

“Use of radiation-free, noninvasive urine steroid metabolomics has a higher PPV than two standard imaging tests, and best performance was seen with the combination of all three tests,” the authors state.

Limit urine test to patients with larger tumors

They note that the use of the combined diagnostic strategy would have led to additional imaging in only 488 (24.2%) of the study’s 2,017 patients, compared with the 2,737 scans that were actually conducted before reaching a diagnostic decision.

“Implementation of urine steroid metabolomics in the routine diagnostic assessment of newly discovered adrenal masses could reduce the number of imaging procedures required to diagnose adrenocortical carcinoma and avoid unnecessary surgery of benign adrenal tumors, potentially yielding beneficial effects with respect to patient burden and health care costs,” they stress.

And regarding imaging parameters, “we also showed that using a cutoff of 20 HU for unenhanced CT tumor attenuation increases the accuracy of imaging characteristic assessment for exclusion of adrenocortical carcinoma, compared with the currently recommended cutoff of 10 HU, which has immediate implications for clinical practice,” they emphasize.

In an accompanying editorial, Adina F. Turcu, MD, of the division of metabolism, endocrinology, and diabetes, University of Michigan, Ann Arbor, and Axel K. Walch, MD, of the Helmholtz Zentrum München–German Research Centre for Environmental Health, agree. “The introduction of urine steroid metabolomics into routine clinical practice would provide major advantages,” they state.

However, they point out that, although the overall negative predictive value of the test was excellent, the specificity was weak.

“Thus, urine steroid metabolomics should be limited to patients who have adrenal nodules larger than 4 cm and have qualitative imaging characteristics suggestive of malignancy,” say Dr. Turcu and Dr. Walch.

The EURINE-ACT study results suggest this subgroup would represent only about 12% of all patients with adrenal incidentalomas, they add.

Issues that remain to be addressed with regard to the implementation of the screening strategy include how to best respond to patients who are classified as having intermediate or moderate risk of malignancy, and whether the diagnostic value of steroid metabolomics could be refined by adding analytes or parameters, the editorialists conclude.

The study was funded by the European Commission, U.K. Medical Research Council, Wellcome Trust, U.K. National Institute for Health Research, U.S. National Institutes of Health, the Claire Khan Trust Fund at University Hospitals Birmingham Charities, and the Mayo Clinic Foundation for Medical Education and Research.
 

A version of this article originally appeared on Medscape.com.


New osteoporosis recommendations from AACE help therapy selection

Article Type
Changed
Fri, 07/24/2020 - 15:23

Recommendations on use of the new dual-action anabolic agent romosozumab (Evenity, Amgen) and how to safely transition between osteoporosis agents are two of the issues addressed in the latest clinical practice guidelines for the diagnosis and treatment of postmenopausal osteoporosis from the American Association of Clinical Endocrinologists and American College of Endocrinology.

“This guideline is a practical tool for endocrinologists, physicians in general, regulatory bodies, health-related organizations, and interested laypersons regarding the diagnosis, evaluation, and treatment of postmenopausal osteoporosis,” the authors wrote.

The guidelines focus on 12 key clinical questions related to postmenopausal osteoporosis, with 52 specific recommendations, each graded according to the level of evidence.

They also include a treatment algorithm to help guide choice of therapy.
 

Reiterating role of FRAX in the diagnosis of patients with osteopenia

Among key updates is an emphasis on the role of the Fracture Risk Assessment Tool (FRAX) in the diagnosis of osteoporosis in patients with osteopenia.

While patients have traditionally been diagnosed with osteoporosis based on the presence of low bone mineral density (BMD) in the absence of fracture, the updated guidelines indicate that osteoporosis may be diagnosed in patients with osteopenia and an increased fracture risk using FRAX.

“The use of FRAX and osteopenia to diagnose osteoporosis was first proposed by the National Bone Health Alliance years ago, and in the 2016 guideline, we agreed with it,” Pauline M. Camacho, MD, cochair of the guidelines task force, said in an interview.

“We reiterate in the 2020 guideline that we feel this is a valid diagnostic criteria,” said Dr. Camacho, professor of medicine and director of the Osteoporosis and Metabolic Bone Disease Center at Loyola University Chicago, Maywood, Ill. “It makes sense because when the thresholds are met by FRAX in patients with osteopenia, treatment is recommended. Therefore, why would they not fulfill treatment criteria for diagnosing osteoporosis?”

An increased risk of fracture based on a FRAX score may also be used to determine pharmacologic therapy, as can other traditional factors such as a low T score or a fragility fracture, the guidelines stated.
 

High risk vs. very high risk guides choice of first therapy

Another key update clarifies the stratification of patients as high risk versus very high risk, a distinction that determines the initial choice of agent and the duration of therapy.

Specifically, patients should be considered at very high fracture risk if they meet any of the following criteria: a recent fracture (e.g., within the past 12 months); fractures while on approved osteoporosis therapy; multiple fractures; fractures while on drugs causing skeletal harm (e.g., long-term glucocorticoids); a very low T score (e.g., less than −3.0); a high risk for falls or a history of injurious falls; or a very high fracture probability by FRAX (e.g., major osteoporosis fracture >30%, hip fracture >4.5%) or another validated fracture risk algorithm.

Meanwhile, patients should be considered at high risk if they have been diagnosed with osteoporosis but do not meet the criteria for very high fracture risk.
 

Romosozumab brought into the mix

Another important update provides information on the role of one of the newest osteoporosis agents on the market, the anabolic drug romosozumab, a monoclonal antibody directed against sclerostin.

The drug’s approval by the Food and Drug Administration in 2019 for postmenopausal women at high risk of fracture was based on two large trials that showed dramatic increases in bone density through modeling as well as remodeling.

Those studies specifically showed significant reductions in radiographic vertebral fractures with romosozumab, compared with placebo and alendronate.

Dr. Camacho noted that romosozumab “will likely be for the very high risk group and those who have maxed out on teriparatide or abaloparatide.”

Romosozumab can safely be used in patients with prior radiation exposure, the guidelines noted.



Importantly, because of reports of a higher risk of serious cardiovascular events with romosozumab, compared with alendronate, romosozumab comes with a black-box warning that it should not be used in patients at high risk for cardiovascular events or who have had a recent myocardial infarction or stroke.

“Unfortunately, the very high risk group is often the older patients,” Dr. Camacho noted.

“The drug should not be given if there is a history of myocardial infarction or stroke in the past year,” she emphasized. “Clinical judgment is needed to decide who is at risk for cardiovascular complications.”

Notably, teriparatide and abaloparatide have black-box warnings of their own regarding risk for osteosarcoma.

Switching therapies

Reflecting the evolving data on osteoporosis drug holidays, the guidelines also addressed the issue and the clinical challenges of switching therapies.

“In 2016, we said drug holidays are not recommended, and the treatment can be continued indefinitely, [however] in 2020, we felt that if some patients are no longer high risk, they can be transitioned off the drug,” Dr. Camacho said.

For teriparatide and abaloparatide, the FDA recommends treatment be limited to no more than 2 years, and for romosozumab, 1 year.

The updated guidelines recommend that, upon discontinuation of an anabolic agent (e.g., abaloparatide, romosozumab, or teriparatide), patients be switched to an antiresorptive agent, such as denosumab or a bisphosphonate, to prevent loss of BMD and of antifracture efficacy.

Discontinuation of denosumab, however, can have notably negative effects. Clinical trials show rapid decreases in BMD when denosumab treatment is stopped after 2 or 8 years, as well as rapid loss of protection from vertebral fractures.

Therefore, if denosumab is going to be discontinued, there should be a proper transition to an antiresorptive agent for a limited time, such as one infusion of the bisphosphonate zoledronate.
 

Communicate the risks with and without treatment to patients

The authors underscored that, in addition to communicating the potential risk and expected benefits of osteoporosis treatments, clinicians should make sure patients fully appreciate the risk of fractures and their consequences, such as pain, disability, loss of independence, and death, when no treatment is given.

“It is incumbent on the clinician to provide this information to each patient in a manner that is fully understood, and it is equally important to learn from the patient about cultural beliefs, previous treatment experiences, fears, and concerns,” they wrote.

In estimating patients’ fracture risk, the T score must be combined with clinical risk factors, particularly advanced age and previous fracture, and clinicians should recognize that absolute fracture risk is more useful than a risk ratio in developing treatment plans.

“Treatment recommendations may be quite different; an early postmenopausal woman with a T score of −2.5 has osteoporosis, although fracture risk is much lower than an 80-year-old woman with the same T score,” the authors explained.

Dr. Camacho reported financial relationships with Amgen and Shire. Disclosures for other task force members are detailed in the guidelines.

A version of this article originally appeared on Medscape.com.


Epilepsy after TBI linked to worse 12-month outcomes

Article Type
Changed
Thu, 07/30/2020 - 12:02

The severity of head injury in traumatic brain injury (TBI) is significantly linked with the risk of developing posttraumatic epilepsy and seizures, and posttraumatic epilepsy itself further worsens outcomes at 12 months, findings from an analysis of a large, prospective database suggest. “We found that patients essentially have a 10-times greater risk of developing posttraumatic epilepsy and seizures at 12 months [post injury] if the presenting Glasgow Coma Scale (GCS) is less than 8,” said lead author John F. Burke, MD, PhD, University of California, San Francisco, in presenting the findings as part of the virtual annual meeting of the American Association of Neurological Surgeons.

Assessing risk factors

While posttraumatic epilepsy represents an estimated 20% of all cases of symptomatic epilepsy, many questions remain on those most at risk and on the long-term effects of posttraumatic epilepsy on TBI outcomes. To probe those issues, Dr. Burke and colleagues turned to the multicenter TRACK-TBI database, which has prospective, longitudinal data on more than 2,700 patients with traumatic brain injuries and is considered the largest source of prospective data on posttraumatic epilepsy.

Using the criteria of no previous epilepsy and having 12 months of follow-up, the team identified 1,493 patients with TBI. In addition, investigators identified 182 orthopedic controls (included and prospectively followed because they have injuries but not specifically head trauma) and 210 controls who are friends of the patients and who do not have injuries but allow researchers to control for socioeconomic and environmental factors.

Of the 1,493 patients with TBI, 41 (2.7%) were determined to have posttraumatic epilepsy, assessed according to a National Institute of Neurological Disorders and Stroke epilepsy screening questionnaire, which is designed to identify patients with posttraumatic epilepsy symptoms. There were no reports of epilepsy symptoms using the screening tool among the controls. Dr. Burke noted that the 2.7% was in agreement with historical reports.

In comparing patients with TBI who did and did not have posttraumatic epilepsy, no differences were observed between the groups in terms of gender, although there was a trend toward younger age among those with posttraumatic epilepsy (mean age, 35.4 years with posttraumatic epilepsy vs. 41.5 years without; P = .05).

A major risk factor for the development of posttraumatic epilepsy was presenting GCS scores. Among those with scores of less than 8, indicative of severe injury, the rate of posttraumatic epilepsy was 6% at 6 months and 12.5% at 12 months. In contrast, those with TBI presenting with GCS scores between 13 and 15, indicative of minor injury, had an incidence of posttraumatic epilepsy of 0.9% at 6 months and 1.4% at 12 months.

Imaging findings in the two groups showed that hemorrhage detected on CT imaging was associated with a significantly higher risk for posttraumatic epilepsy (P < .001).

“The main takeaway is that any hemorrhage in the brain is a major risk factor for developing seizures,” Dr. Burke said. “Whether it is subdural, epidural blood, subarachnoid or contusion, any blood confers a very [high] risk for developing seizures.”

Posttraumatic epilepsy was linked to poorer longer-term outcomes even for patients with lesser injury: Among those with TBI and GCS scores of 13-15, the mean Glasgow Outcome Scale Extended (GOSE) score at 12 months among those without posttraumatic epilepsy was 7, indicative of a good recovery with minor deficits, whereas the mean GOSE score for those with posttraumatic epilepsy was 4.6, indicative of moderate to severe disability (P < .001).

“It was surprising to us that PTE-positive patients had a very significant decrease in GOSE, compared to PTE-negative patients,” Dr. Burke said. “There was a nearly 2-point drop in the GOSE and that was extremely significant.”

A multivariate analysis showed there was still a significant independent risk for a poor GOSE score with posttraumatic epilepsy after controlling for GCS score, head CT findings, and age (P < .001).

The authors also looked at mood outcomes using the Brief Symptom Inventory–18, which showed significantly worse scores in those with posttraumatic epilepsy after multivariate adjustment (P = .01). Cognitive outcomes on the Rivermead cognitive metric were also significantly worse with posttraumatic epilepsy (P = .001).

“On all metrics tested, posttraumatic epilepsy worsened outcomes,” Dr. Burke said.

He noted that the study has some key limitations, including the 12-month follow-up. A previous study showed a linear increase in posttraumatic epilepsy with follow-up extending up to 30 years. “The fact that we found 41 patients at 12 months indicates there are probably more that are out there who are going to develop seizures, but because we don’t have the follow-up we can’t look at that.”

Although the screening questionnaires are effective, “the issue is these people are not being seen by an epileptologist or having scalp EEG done, and we need a more accurate way to do this,” he said. A new study, TRACK-TBI EPI, will address those limitations and a host of other issues with a 5-year follow-up.
 

 

 

Capturing the nuances of brain injury

Commenting on the study as a discussant, neurosurgeon Uzma Samadani, MD, PhD, of the Minneapolis Veterans Affairs Medical Center and CentraCare in Minneapolis, suggested that future work should focus on issues including the wide-ranging mechanisms that could explain the seizure activity.

“For example, it’s known that posttraumatic epilepsy or seizures can be triggered by abnormal conductivity due to multiple different mechanisms associated with brain injury, such as endocrine dysfunction, cortical-spreading depression, and many others,” said Dr. Samadani, who has been a researcher on the TRACK-TBI study.

Factors ranging from genetic differences to comorbid conditions such as alcoholism can play a role in brain injury susceptibility, Dr. Samadani added. Furthermore, outcome measures currently available simply may not capture the unknown nuances of brain injury.

“We have to ask, are these an all-or-none phenomena, or is aberrant electrical activity after brain injury a continuum of dysfunction?” Dr. Samadani speculated.

“I would caution that we are likely underestimating the non–easily measurable consequences of brain injury,” she said. “And the better we can quantitate susceptibility, classify the nature of injury and target acute management, the less posttraumatic epilepsy/aberrant electrical activity our patients will have.”

Dr. Burke and Dr. Samadani disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Issue
Neurology Reviews- 28(8)

“We have to ask, are these an all-or-none phenomena, or is aberrant electrical activity after brain injury a continuum of dysfunction?” Dr. Samadani speculated.

“I would caution that we are likely underestimating the non–easily measurable consequences of brain injury,” she said. “And the better we can quantitate susceptibility, classify the nature of injury and target acute management, the less posttraumatic epilepsy/aberrant electrical activity our patients will have.”

Dr. Burke and Dr. Samadani disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

The severity of head injury in traumatic brain injury (TBI) is significantly linked with the risk of developing posttraumatic epilepsy and seizures, and posttraumatic epilepsy itself further worsens outcomes at 12 months, findings from an analysis of a large, prospective database suggest. “We found that patients essentially have a 10-times greater risk of developing posttraumatic epilepsy and seizures at 12 months [post injury] if the presenting Glasgow Coma Scale (GCS) score is less than 8,” said lead author John F. Burke, MD, PhD, University of California, San Francisco, in presenting the findings as part of the virtual annual meeting of the American Association of Neurological Surgeons.

Assessing risk factors

While posttraumatic epilepsy represents an estimated 20% of all cases of symptomatic epilepsy, many questions remain on those most at risk and on the long-term effects of posttraumatic epilepsy on TBI outcomes. To probe those issues, Dr. Burke and colleagues turned to the multicenter TRACK-TBI database, which has prospective, longitudinal data on more than 2,700 patients with traumatic brain injuries and is considered the largest source of prospective data on posttraumatic epilepsy.

Using the criteria of no previous epilepsy and 12 months of follow-up, the team identified 1,493 patients with TBI. Investigators also identified 182 orthopedic controls (patients included and prospectively followed because they have injuries, but not head trauma) and 210 uninjured friend controls, who allow researchers to account for socioeconomic and environmental factors.

Of the 1,493 patients with TBI, 41 (2.7%) were determined to have posttraumatic epilepsy, assessed according to a National Institute of Neurological Disorders and Stroke epilepsy screening questionnaire, which is designed to identify patients with posttraumatic epilepsy symptoms. There were no reports of epilepsy symptoms using the screening tool among the controls. Dr. Burke noted that the 2.7% was in agreement with historical reports.

In comparing patients with TBI who did and did not have posttraumatic epilepsy, no differences were observed between the groups in terms of gender, although there was a trend toward younger age among those with PTE (mean age, 35.4 years with posttraumatic epilepsy vs. 41.5 years without; P = .05).

A major risk factor for the development of posttraumatic epilepsy was the presenting GCS score. Among those with scores of less than 8, indicative of severe injury, the rate of posttraumatic epilepsy was 6% at 6 months and 12.5% at 12 months. In contrast, those with TBI presenting with GCS scores between 13 and 15, indicative of mild injury, had an incidence of posttraumatic epilepsy of 0.9% at 6 months and 1.4% at 12 months.
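The gap between these two strata underlies Dr. Burke's roughly tenfold figure; a quick back-of-the-envelope check of the quoted 12-month rates (illustrative arithmetic only, using the percentages reported above):

```python
# 12-month incidence of posttraumatic epilepsy by presenting GCS,
# as reported in the TRACK-TBI analysis above
rate_severe = 0.125  # GCS < 8 (severe injury)
rate_mild = 0.014    # GCS 13-15 (mild injury)

relative_risk = rate_severe / rate_mild
print(round(relative_risk, 1))  # 8.9 -- in line with the "~10-times greater risk" quoted
```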

Imaging findings in the two groups showed that hemorrhage detected on CT imaging was associated with a significantly higher risk for posttraumatic epilepsy (P < .001).

“The main takeaway is that any hemorrhage in the brain is a major risk factor for developing seizures,” Dr. Burke said. “Whether it is subdural, epidural blood, subarachnoid or contusion, any blood confers a very [high] risk for developing seizures.”

Posttraumatic epilepsy was linked to poorer longer-term outcomes even for patients with lesser injury: Among those with TBI and GCS of 13-15, the mean Glasgow Outcome Scale Extended (GOSE) score at 12 months among those without posttraumatic epilepsy was 7, indicative of a good recovery with minor deficits, whereas the mean GOSE score for those with PTE was 4.6, indicative of moderate to severe disability (P < .001).

“It was surprising to us that PTE-positive patients had a very significant decrease in GOSE, compared to PTE-negative patients,” Dr. Burke said. “There was a nearly 2-point drop in the GOSE and that was extremely significant.”

A multivariate analysis showed there was still a significant independent risk for a poor GOSE score with posttraumatic epilepsy after controlling for GCS score, head CT findings, and age (P < .001).

The authors also looked at mood outcomes using the Brief Symptom Inventory–18, which showed significantly worse scores in those with posttraumatic epilepsy after multivariate adjustment (P = .01). Additionally, cognitive outcomes on the Rivermead cognitive metric were significantly worse with posttraumatic epilepsy (P = .001).

“On all metrics tested, posttraumatic epilepsy worsened outcomes,” Dr. Burke said.

He noted that the study has some key limitations, including the 12-month follow-up. A previous study showed a linear increase in posttraumatic epilepsy incidence over follow-up of up to 30 years. “The fact that we found 41 patients at 12 months indicates there are probably more out there who are going to develop seizures, but because we don’t have the follow-up we can’t look at that.”

Although the screening questionnaires are effective, “the issue is these people are not being seen by an epileptologist or having scalp EEG done, and we need a more accurate way to do this,” he said. A new study, TRACK-TBI EPI, will address those limitations and a host of other issues with a 5-year follow-up.

Capturing the nuances of brain injury

Commenting on the study as a discussant, neurosurgeon Uzma Samadani, MD, PhD, of the Minneapolis Veterans Affairs Medical Center and CentraCare in Minneapolis, suggested that the future work should focus on issues including the wide-ranging mechanisms that could explain the seizure activity.

“For example, it’s known that posttraumatic epilepsy or seizures can be triggered by abnormal conductivity due to multiple different mechanisms associated with brain injury, such as endocrine dysfunction, cortical-spreading depression, and many others,” said Dr. Samadani, who has been a researcher on the TRACK-TBI study.

Factors ranging from genetic differences to comorbid conditions such as alcoholism can play a role in brain injury susceptibility, Dr. Samadani added. Furthermore, outcome measures currently available simply may not capture the unknown nuances of brain injury.

“We have to ask, is this an all-or-none phenomenon, or is aberrant electrical activity after brain injury a continuum of dysfunction?” Dr. Samadani speculated.

“I would caution that we are likely underestimating the non–easily measurable consequences of brain injury,” she said. “And the better we can quantitate susceptibility, classify the nature of injury and target acute management, the less posttraumatic epilepsy/aberrant electrical activity our patients will have.”

Dr. Burke and Dr. Samadani disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

Issue
Neurology Reviews- 28(8)
Article Source

FROM AANS 2020

Publish date: July 7, 2020

FDA approves first oral somatostatin analog for acromegaly

Article Type
Changed
Mon, 06/29/2020 - 15:04

The Food and Drug Administration has approved oral octreotide (Mycapssa, Chiasma) delayed-release capsules for the long-term maintenance treatment of patients with acromegaly who previously responded to and tolerated octreotide or lanreotide injections.


“People living with acromegaly experience many challenges associated with injectable therapies and are in need of new treatment options,” Jill Sisco, president of Acromegaly Community, a patient support group, said in a Chiasma press release.

“The entire acromegaly community has long awaited oral therapeutic options and it is gratifying to see that the FDA has now approved the first oral somatostatin analog (SSA) therapy with the potential to make a significant impact in the lives of people with acromegaly and their caregivers,” she added.

Acromegaly, a rare, chronic disease usually caused by a benign pituitary tumor that leads to excess production of growth hormone and insulin-like growth factor-1 (IGF-1), can be cured through the successful surgical removal of the pituitary tumor. However, management of the disease remains a lifelong challenge for many who must rely on chronic injections.

The new oral formulation of octreotide is the first and only oral somatostatin analog approved by the FDA.

The approval was based on the results of the 9-month, phase 3 pivotal CHIASMA OPTIMAL clinical trial, involving 56 adults with acromegaly controlled by injectable SSAs.

The patients, who were randomized 1:1 to octreotide capsules or placebo, were dose-titrated from 40 mg/day up to a maximum of 80 mg/day, equaling two capsules in the morning and two in the evening.

The study met its primary endpoint. Overall, 58% of patients taking octreotide maintained IGF-1 response compared with 19% of those on placebo at the end of 9 months (P = .008), with response defined as an average of the last two IGF-1 levels (assessed at weeks 34 and 36) at or below the upper limit of normal.
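As a rough plausibility check on that primary endpoint, one can approximate the comparison with a Fisher exact test, assuming the 56 patients split evenly at 28 per arm and rounding 58% and 19% to 16 and 5 responders; these counts are back-calculated illustrations, not the trial's actual analysis:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables (with the same margins) that are
    no more probable than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of x responders in the first arm
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical rounded counts: 58% of 28 -> 16 responders vs. 19% of 28 -> 5
p = fisher_exact_two_sided(16, 28 - 16, 5, 28 - 5)
print(p < 0.05)  # True: the separation is significant even in this crude sketch
```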

The trial also met its secondary endpoints, which included the proportion of patients who maintain growth hormone response at week 36 compared with screening; time to loss of response; and proportion of patients requiring reversion to prior treatment.

Safety data were favorable. Adverse reactions to the drug, detailed in the prescribing information, include cholelithiasis and associated complications; hyperglycemia and hypoglycemia; thyroid function abnormalities; cardiac function abnormalities; decreased vitamin B12 levels; and abnormal Schilling’s test results.

Results from the clinical trial “are encouraging for patients with acromegaly,” the study’s principal investigator, Susan Samson, MD, PhD, of Baylor College of Medicine, Houston, said in the Chiasma statement.

“Based on data from the CHIASMA OPTIMAL trial showing patients on therapy being able to maintain mean IGF-1 levels within the normal range at the end of treatment, I believe oral octreotide capsules hold meaningful promise for patients with this disease and will address a long-standing unmet treatment need,” she added.

Chiasma reports that it expects Mycapssa to be available in the fourth quarter of 2020, pending FDA approval of a planned manufacturing supplement to the approved new drug application.

The company further plans to provide patient support services including assistance with insurance providers and specialty pharmacies and support in incorporating treatment into patients’ daily routines.

Despite effective biochemical control of growth hormone, many patients with acromegaly continue to suffer symptoms, mainly because of comorbidities, so it is important that these are also adequately treated, a consensus group concluded earlier this year.

The CHIASMA OPTIMAL trial was funded by Chiasma.
 

A version of this article originally appeared on Medscape.com.

