Pig kidneys show ‘life-sustaining’ function in human

A pair of genetically modified pig kidneys filtered blood and produced urine for 7 days after being transplanted into a brain-dead patient – marking another important step toward opening up a new supply of much-needed organs for those with end-stage kidney disease.

A team of researchers in Alabama removed a brain-dead person’s kidneys and transplanted two kidneys that had been taken from a genetically modified pig. The researchers monitored the patient’s response to the organs and tracked the kidneys’ function over a 7-day period. The findings were published in JAMA Surgery.

During the first 24 hours after transplantation, the pig kidneys made more than 37 liters of urine. “It was really a remarkable thing to see,” lead investigator Jayme Locke, MD, professor of surgery and the Arnold G. Diethelm Endowed Chair in Transplantation Surgery, University of Alabama at Birmingham, said in a press release.

The recipient was given standard maintenance immunosuppression – tacrolimus, mycophenolate mofetil, and prednisone. The target tacrolimus level (8-10 ng/mL) was reached by postoperative day 2 and was maintained through study completion.

At the end of the study, the serum creatinine level was 0.9 mg/dL, and creatinine clearance was 200 mL/min. Creatinine levels are an indicator of kidney function and demonstrate the organ’s ability to filter waste from blood, according to Roger Lord, PhD, senior lecturer (medical sciences) in the School of Behavioural and Health Sciences, Australian Catholic University, who was not involved in the research.
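
Measured creatinine clearance of the kind reported above is conventionally calculated from a timed urine collection. The short Python sketch below shows that standard formula for illustration only; the urine creatinine and volume figures are hypothetical values chosen to be consistent with the reported serum creatinine (0.9 mg/dL) and clearance (about 200 mL/min), and the article does not state how the investigators measured clearance.

# Measured creatinine clearance from a timed urine collection (standard formula):
#   CrCl (mL/min) = urine creatinine (mg/dL) x urine volume (mL)
#                   / (serum creatinine (mg/dL) x collection time (min))
# The urine values below are hypothetical; only the serum creatinine and the
# resulting ~200 mL/min clearance are reported in the article.

def creatinine_clearance(urine_cr_mg_dl: float, urine_vol_ml: float,
                         serum_cr_mg_dl: float, minutes: float) -> float:
    """Return measured creatinine clearance in mL/min."""
    return (urine_cr_mg_dl * urine_vol_ml) / (serum_cr_mg_dl * minutes)

# Hypothetical 24-hour collection: 2,592 mL of urine at 100 mg/dL creatinine,
# with the reported serum creatinine of 0.9 mg/dL.
print(creatinine_clearance(100, 2592, 0.9, 24 * 60))  # -> 200.0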

This is the first time a standard immunosuppression regimen has been shown to be potentially sufficient to support xenotransplantation with pig kidneys, and the first such transplant in which creatinine clearance was achieved.

The finding comes less than 2 years after the same team published results from a similar experiment. In that transplant, the investigators didn’t observe significant creatinine excretion into the urine.

In the team’s previous attempts, kidney function was delayed because the brain-dead recipients had deteriorated physiologically. This time, the subject was stable, and the team was able to observe urine production within 4 minutes of restoration of blood flow to the transplanted pig organs.

“This new work firmly establishes that the xenografts not only make urine but provide life-sustaining kidney function by clearing serum creatinine,” Locke said in an interview. “This is the first time in history this has been shown.”

The investigators are hoping animal-sourced organs could become an alternative for human transplantation, potentially solving the serious shortage of human organs available for patients on transplant waiting lists.

Organ transplantation can treat patients with advanced kidney disease and kidney failure, but there are not enough human organs available to meet the need. More than 92,000 people in the United States are waiting for a kidney, according to the American Kidney Fund.

Organ rejection is a risk with xenotransplants – animal-to-human organ transplants. Investigators in this study used kidneys from pigs with 10 gene modifications. The modifications were intended to decrease the likelihood of the organs being rejected by a human host.

The kidneys were still viable at the end of the 7-day period. In addition, there was no microscopic blood clot formation, another indicator of normal kidney function, according to Dr. Lord, who provided comments to the UK Science Media Centre.

The long-term outcomes of animal-to-human organ transplantation remain unclear. Dr. Lord described the operation as a “first step” toward demonstrating that genetically modified, transplanted pig kidneys can function normally, clearing creatinine over a 7-day period.

Dr. Locke and colleagues said: “Future research in living human recipients is necessary to determine long-term xenograft kidney function and whether xenografts could serve as a bridge or destination therapy for end-stage kidney disease.

“Because our study represents a single case, generalizability of the findings is limited. This study showcases xenotransplant as a viable potential solution to an organ shortage crisis responsible for thousands of preventable deaths annually,” they concluded.

A version of this article first appeared on Medscape.com.

Pain 1 year after MI tied to all-cause mortality

Patients reporting moderate or extreme pain a year after a myocardial infarction (MI) – even pain due to other health conditions – are more likely to die within the next 8 years than those without post-MI pain, new research suggests.

In the analysis of post-MI health data for more than 18,300 Swedish adults, those with moderate pain were 35% more likely to die from any cause during follow-up, compared with those with no pain, and those with extreme pain were more than twice as likely to die.

Furthermore, pain was a stronger predictor of mortality than smoking.

“For a long time, pain has been regarded as merely a symptom of disease rather than a disease” in its own right, Linda Vixner, PT, PhD, of Dalarna University in Falun, Sweden, said in an interview.

Updated definitions of chronic pain in the ICD-11, as well as a recent study using data from the UK Biobank showing that chronic pain is associated with an increased risk of cardiovascular disease, prompted the current study, which looks at the effect of pain on long-term survival after an MI.

“We did not expect that pain would have such a strong impact on the risk of death, and it also surprised us that the risk was more pronounced than that of smoking,” Dr. Vixner said. “Clinicians should consider pain an important cardiovascular risk factor.”

The study was published online in the Journal of the American Heart Association.

‘Experienced pain’ prognostic

The investigators analyzed data from the SWEDEHEART registry on 18,376 patients who had an MI during 2004-2013. The mean age of the patients was 62 years, and 75% were men. Maximum follow-up was 8.5 years (median, 3.37 years).

Self-reported levels of experienced pain according to the EuroQol five-dimension instrument were recorded 12 months after hospital discharge.
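
The three pain categories used in this analysis correspond to the response levels of the EQ-5D pain/discomfort item. Below is a minimal Python sketch of that mapping, assuming the three-level (EQ-5D-3L) version of the instrument; the variable names are illustrative and are not the registry’s actual field names.

# EQ-5D-3L pain/discomfort response levels and the analysis categories.
EQ5D_PAIN = {
    1: "no pain or discomfort",
    2: "moderate pain or discomfort",
    3: "extreme pain or discomfort",
}

def pain_category(response_level: int) -> str:
    """Map an EQ-5D pain/discomfort response level to its label."""
    return EQ5D_PAIN[response_level]

print(pain_category(2))  # -> moderate pain or discomfort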

Moderate pain was reported by 38.2% of patients and extreme pain by 4.5%.

Women were overrepresented in the extreme pain category (7.5% of women vs. 3.6% of men), as were current smokers and patients with diabetes, previous MI, previous stroke, previous percutaneous coronary intervention, non–ST-segment elevation MI, and any kind of chest pain. Patients classified as physically inactive were also overrepresented in this category.

In addition, those with extreme pain had a higher body mass index and waist circumference 12 months after hospital discharge.

Most (73%) of the 7,889 patients who reported no pain at the 2-month follow-up after MI were also pain-free at the 12-month follow-up, and 65% of those experiencing pain at 2 months were also experiencing pain at 12 months.

There were 1,067 deaths during follow-up. Compared with patients reporting no pain, the adjusted hazard ratio for all-cause mortality was 1.35 for moderate pain and 2.06 for extreme pain.
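
For readers less familiar with hazard ratios, the “35% more likely” and “more than twice as likely” phrasing above follows directly from these values. A minimal Python sketch of that arithmetic; it is an interpretation aid only, not a re-analysis of the study data.

# An adjusted hazard ratio (HR) compares the instantaneous risk of death with the
# no-pain reference group: HR 1.35 is a 35% higher hazard, HR 2.06 roughly double.
for label, hr in [("moderate pain", 1.35), ("extreme pain", 2.06)]:
    print(f"{label}: HR {hr:.2f} -> {(hr - 1) * 100:.0f}% higher hazard vs. no pain")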

As noted, pain was a stronger mortality predictor than smoking: the C-statistic for pain was 0.60, versus 0.55 for smoking.
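
The C-statistic quantifies discrimination: 0.5 is no better than chance and 1.0 is perfect. The Python sketch below computes a pairwise C-statistic for a binary outcome and a single predictor on made-up data, purely to illustrate the concept; the study itself used time-to-event methods, and these toy numbers deliberately exaggerate the association.

def c_statistic(pred, died):
    """Proportion of (death, survivor) pairs in which the death had the higher
    predictor value; ties count as half."""
    pairs = concordant = 0.0
    for i, (p_i, d_i) in enumerate(zip(pred, died)):
        for p_j, d_j in zip(pred[i + 1:], died[i + 1:]):
            if d_i == d_j:
                continue  # compare only a death with a survivor
            pairs += 1
            death_p, surv_p = (p_i, p_j) if d_i else (p_j, p_i)
            concordant += 1.0 if death_p > surv_p else 0.5 if death_p == surv_p else 0.0
    return concordant / pairs

pain = [2, 1, 0, 2, 0, 1, 1, 0]   # 0 = none, 1 = moderate, 2 = extreme (made up)
died = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = died during follow-up (made up)
print(round(c_statistic(pain, died), 2))  # -> 0.94 on this toy data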

“Clinicians managing patients after MI should recognize the need to consider experienced pain as a prognostic factor comparable to persistent smoking and to address this when designing individually adjusted [cardiac rehabilitation] and secondary prevention treatments,” the authors write.

Pain should be assessed at follow-up after MI, they add, and, as Dr. Vixner suggested, it should be “acknowledged as an important risk factor.”

Managing risks

“These findings parallel prior studies and my own clinical experience,” American Heart Association volunteer expert Gregg C. Fonarow, MD, interim chief of the division of cardiology at the University of California, Los Angeles, and director, Ahmanson-UCLA Cardiomyopathy Center, told this news organization.

“There are many potential causes for patient-reported pain in the year after a heart attack,” he said, including a greater cardiovascular risk burden, more comorbid conditions, less physical activity, and chronic use of nonsteroidal anti-inflammatory medications or opioids for pain control – all of which can contribute to the increased risk of mortality.

Factors beyond those evaluated and adjusted for in the observational study may contribute to the observed associations, he added. “Socioeconomic factors were not accounted for [and] there was no information on the types, doses, and frequency of pain medication use.”

“Clinicians managing patients with prior MI should carefully assess experienced pain and utilize this information to optimize risk factor control recommendations, inform treatment decisions, and consider in terms of prognosis,” he advised.

Further studies should evaluate whether the associations hold true for other patient populations, Dr. Fonarow said. “In addition, intervention trials could evaluate if enhanced management strategies in these higher-risk patients with self-reported pain can successfully lower the mortality risk.”

Dr. Vixner sees a role for physical activity in lowering the mortality risk.

“One of the core treatments for chronic pain is physical activity,” she said. “It positively influences quality of life, activities of daily living, pain intensity, and overall physical function, and reduces the risk of social isolation” and cardiovascular diseases.

Her team recently developed the “eVISualisation of physical activity and pain” (eVIS) intervention, which aims to promote healthy physical activity levels in persons living with chronic pain. The intervention is currently being evaluated in an ongoing registry-based, randomized controlled trial.

The study was supported by Svenska Försäkringsföreningen, Dalarna University, and Region Dalarna. Dr. Vixner and coauthors have reported no relevant financial relationships. Dr. Fonarow has disclosed consulting for Abbott, Amgen, AstraZeneca, Bayer, Cytokinetics, Eli Lilly, Johnson & Johnson, Medtronic, Merck, Novartis, and Pfizer.

A version of this article first appeared on Medscape.com.

Two historical events that changed the field of gastroenterology

During the Presidential Plenary Session at the annual Digestive Disease Week® (DDW) in May 2023, attendees heard about two major historical events that helped shape the field of gastroenterology.

University of Michigan
Dr. Joel D. Howell

The first event took place in 1822 at Fort Mackinac, on what is today known as Mackinac Island, in northern Lake Huron in Michigan. Alexis St. Martin, a French-Canadian fur trapper, was standing outside the general store when a shotgun blast accidentally struck him in the stomach. Ordinarily, this would have been a fatal wound, but St. Martin miraculously survived – with a gastric fistula that permanently exposed the interior of his stomach.

William Beaumont, the post surgeon at Fort Mackinac, engaged in a series of experiments – purportedly 238 – to study human digestion. In one experiment, Dr. Beaumont would pull food in and out of the stomach to study digestion. In another, he would withdraw fluid from the stomach to observe digestion outside of the body. The experiments caused St. Martin considerable discomfort. He eventually went back to Canada but returned when the U.S. Army agreed to compensate him for some of his expenses. Today, the experiments would be called into question as having crossed ethical boundaries. Dr. Beaumont published the results of his experiments in a book that established the fundamentals of our current understanding of digestion. The experiments arguably mark the first example of gastrointestinal research in the United States.

The second historical event – the invention of the fiber-optic endoscope – also occurred in Michigan. At the University of Michigan, Basil Hirschowitz, MD, invented a flexible, fiber-optic instrument that could be used to look into the stomach, and perhaps even the duodenum. He first tried the invention on himself, and in 1957, he demonstrated it at the national meeting of the American Gastroscopic Society by reading a telephone directory through the new device.

The instrument was soon adopted for clinical use by physicians. Whether the fiber-optic machine was superior for visualizing the stomach was hotly debated, but what was very clear was that the fiber-optic tool was more comfortable for patients. By the mid-1960s, the fiber-optic invention had become the instrument of choice for gastrointestinal endoscopy. Many advances have since been made to the original instrument.

Dr. Howell is the Elizabeth Farrand Professor and a professor of internal medicine, history, and health management and policy at the University of Michigan, Ann Arbor. He has no financial disclosures.

A 75-year-old White woman presented with diffuse erythema, scale, and pruritus on her scalp

Dermatomyositis is a rare autoimmune condition characterized by muscle inflammation, muscle weakness, and distinctive skin findings. The classic presentation is symmetric proximal muscle weakness, and an associated underlying malignancy is relatively common in adult patients. The etiology is unknown.

Courtesy Lucas Shapiro and Dr. Natalie Y. Nasser

Some studies suggest people with certain HLA subtypes are at higher risk, and various infectious and pharmacological triggers are suspected to play a role in the pathogenesis of dermatomyositis. Infectious causes include Coxsackie B, enterovirus, and parvovirus. Drugs such as antineoplastic agents, antibiotics, and NSAIDs have been found to be triggers.

The pathogenesis of dermatomyositis involves immune-mediated damage to muscle capillaries and the endothelium of arterioles. In the typical humoral immune response, complement activation occurs. One mechanism of damage in dermatomyositis occurs when the membrane attack complex formed at the end of the complement process deposits in blood vessels, causing inflammation. B cells, autoantibodies, and interferon overexpression may also play a role in damaging the vasculature and muscle fibers. Hypoxia leads to muscular atrophy, resulting in degeneration and death of the fibers. On muscle biopsy, a perivascular and perimysial inflammatory infiltrate, perifascicular atrophy, and microangiopathy may be present. Skin histology reveals vacuolar changes in the basal layer, a lymphocytic infiltrate, and increased mucin production in the dermis.

On clinical examination, patients have proximal muscle weakness and a skin rash that may include Gottron’s papules, heliotrope erythema, the V-sign, shawl sign, holster sign, scalp erythema, midfacial erythema, and photosensitivity. Scalp erythema in dermatomyositis is strongly associated with pruritus, alopecia, and telogen effluvium. Patients with dermatomyositis may also experience small fiber neuropathy.

Serologies for this patient, who had previously been diagnosed with and treated for dermatomyositis, were significant for a positive ANA at a titer of 1:2560. Anti-Jo-1 antibody was negative. Her liver function tests, aldolase, creatine kinase, sedimentation rate, C-reactive protein, and serum protein electrophoresis were normal. Imaging revealed mild chronic interstitial lung disease. A malignancy workup was negative.
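
For context on the 1:2560 result, an ANA titer is the greatest serum dilution that still tests positive, usually read off a doubling-dilution series. A minimal Python sketch of such a series, assuming a 1:40 screening dilution; starting dilutions vary by laboratory.

# Doubling dilutions from an assumed 1:40 screening dilution; a titer of 1:2560
# means the serum remained positive through the seventh step of this series.
start = 40
series = [start * 2 ** i for i in range(7)]
print(series)  # -> [40, 80, 160, 320, 640, 1280, 2560]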

Dr. Donna Bilu Martin

Treatment of dermatomyositis involves lifestyle changes and pharmacologic therapy. Because of the intense photosensitivity, patients should be diligent with sun protection. Methotrexate, azathioprine, and mycophenolate mofetil are considered first-line therapies for dermatomyositis. Therapies such as cyclophosphamide, rituximab, IVIg, and plasmapheresis may also be indicated in severe or refractory cases. Additionally, patients with pulmonary involvement should be given systemic steroids. The side effects of these drugs must be considered in the context of the patient’s demographics, comorbidities, and lifestyle.

This case and the photos were submitted by Lucas Shapiro, BS, of Nova Southeastern University College of Osteopathic Medicine, Fort Lauderdale, Fla., and Natalie Y. Nasser, MD, of Kaiser Permanente Riverside Medical Center, Riverside, Calif. The column was edited by Dr. Bilu Martin.

Dr. Bilu Martin is a board-certified dermatologist in private practice at Premier Dermatology, MD, in Aventura, Fla. More diagnostic cases are available at mdedge.com/dermatology. To submit a case for possible publication, send an email to [email protected].

References

1. Qudsiya Z and Waseem M. Dermatomyositis, in “StatPearls.” Treasure Island, Fla.: StatPearls Publishing, 2023 Jan.

2. Kamperman RG et al. Int J Mol Sci. 2022 Apr 13;23(8):4301.

3. Kassamali B et al. Int J Womens Dermatol. 2021 Sep 24;7(5 Pt A):576-82.

4. Vázquez-Herrera NE et al. Skin Appendage Disord. 2018 Aug;4(3):187-99.

Courtesy Lucas Shapiro and Dr. Natalie Y. Nasser
A 75-year-old White woman presented with diffuse erythema, scale, and pruritus on her scalp. She had periorbital erythema, as well as erythema on the distal interphalangeal joints. Her medications included prednisone, mycophenolate mofetil, and hydroxychloroquine for a longstanding diagnosis, and her prednisone and hydroxychloroquine dosages had been lowered. She also exhibited shoulder and proximal arm muscle weakness.

CDC alerts clinicians to signs of alpha-gal syndrome

The Centers for Disease Control and Prevention has issued a report alerting clinicians to emerging cases of alpha-gal syndrome (AGS) linked with tick bites.

AGS causes patients to become allergic to mammalian (red) meat and products derived from it, and in some cases the reaction can be life-threatening. Symptoms typically start 2-6 hours after eating meat.

The American Gastroenterological Association published a Clinical Practice Update in February notifying gastroenterologists that a subset of AGS patients present with abdominal pain, nausea, diarrhea, or vomiting, without skin changes or anaphylaxis. If AGS is suspected, serum testing for alpha-gal–specific immunoglobulin E (IgE) antibodies should be performed.

“It is important for gastroenterologists to be aware of this condition and to be capable of diagnosing and treating it in a timely manner,” wrote authors of the clinical practice update in Clinical Gastroenterology and Hepatology.

A Morbidity and Mortality Weekly Report demonstrates that health care providers’ knowledge of AGS is low. Of the 1,500 health care providers surveyed, 42% had never heard of the syndrome, and another 35% were not confident in their ability to diagnose or manage affected patients.

This lack of awareness is concerning because the range of the lone star tick – the species primarily associated with the syndrome – is expanding. The knowledge gaps may lead to delayed or overlooked diagnoses.

“Improved health care provider education might facilitate a rapid diagnosis of AGS, improve patient care, and support public health understanding of this emerging condition,” write the report authors, led by Ann Carpenter, DVM, with the CDC.

Another Morbidity and Mortality Weekly Report, with lead author Johanna S. Salzer, DVM, PhD, of the CDC, also issued on July 28, notes that specific symptoms and severity of AGS vary and no cure or treatment is currently available. From 2010 to 2018, there were more than 34,000 suspected cases of AGS in the United States, but current knowledge of where the cases have occurred is limited, the study authors write.

According to the report, the suspected AGS cases were concentrated in areas where the lone star tick is known to be found, particularly throughout Arkansas, Kentucky, Missouri, and Suffolk County, N.Y.

The report also notes that, “during 2017-2021, there was an annual increase in positive test results for AGS in the United States. More than 90,000 suspected AGS cases were identified during the study period, and the number of new suspected cases increased by approximately 15,000 each year during the study.”

An AGS diagnosis “can be made with GI distress and increased serum alpha-gal IgE antibodies whose symptoms are relieved adequately on an alpha-gal avoidance diet that eliminates pork, beef, and mammalian-derived products,” the practice update says.

Patients whose symptoms also include facial swelling, urticaria, and trouble breathing should be referred to allergists, the AGA update states.

Patients should also be counseled to avoid further tick bites because additional bites can worsen the allergy.

The authors declare no relevant financial relationships.

Better than dialysis? Artificial kidney could be the future

Article Type
Changed
Thu, 08/24/2023 - 19:22

Nearly 90,000 patients in the United States are waiting for a lifesaving kidney transplant, yet only about 25,000 kidney transplants were performed last year. Thousands die each year while they wait. Others are not suitable transplant candidates.

Half a million people are on dialysis, the only alternative to transplantation for those with kidney failure. Dialysis greatly impacts their work, relationships, and quality of life.

Researchers from The Kidney Project hope to solve this public health crisis with a futuristic approach: an implantable bioartificial kidney. That hope is slowly approaching reality. Early prototypes have been tested successfully in preclinical research, and clinical trials could lie ahead.

This news organization spoke with the two researchers who came up with the idea: nephrologist William Fissell, MD, of Vanderbilt University in Nashville, Tenn., and Shuvo Roy, PhD, a biomedical engineer at the University of California, San Francisco. This interview has been edited for length and clarity.
 

Question: Could you summarize the clinical problem with chronic kidney disease?

Dr. Fissell:
Dialysis treatment, although lifesaving, is incomplete. Healthy kidneys do a variety of things that dialysis cannot provide. Transplant is absolutely the best remedy, but donor organs are vanishingly scarce. Our goal has been to develop a mass-produced, universal donor kidney to render the issue of scarcity – scarcity of time, scarcity of resources, scarcity of money, scarcity of donor organs – irrelevant.

Do you envision your implantable, bioartificial kidney as a bridge to transplantation? Or can it be even more, like a bionic organ, as good as a natural organ and thus better than a transplant?

Dr. Roy:
We see it initially as a bridge to transplantation or as a better option than dialysis for those who will never get a transplant. We’re not trying to create the “Six Million Dollar Man.” The goal is to keep patients off dialysis – to deliver some, but probably not all, of the benefits of a kidney transplant in a mass-produced device that anybody can receive.

Dr. Fissell: The technology is aimed at people in stage 5 renal disease, the final stage, when kidneys are failing, and dialysis is the only option to maintain life. We want to make dialysis a thing of the past, put dialysis machines in museums like the iron lung, which was so vital to keeping people alive several decades ago but is mostly obsolete today.

How did you two come up with this idea? How did you get started working together?

Dr. Roy:
I had just begun my career as a research biomedical engineer when I met Dr. William Fissell, who was then contemplating a career in nephrology. He opened my eyes to the problems faced by patients affected by kidney failure. Through our discussions, we quickly realized that while we could improve dialysis machines, patients needed and deserved something better – a treatment that improves their health while also allowing them to keep a job, travel readily, and consume food and drink without restrictions. Basically, something that works more like a kidney transplant.

How does the artificial kidney differ from dialysis?

Dr. Fissell:
Dialysis is an intermittent stop-and-start treatment. The artificial kidney is continuous, around-the-clock treatment. There are a couple of advantages to that. The first is that you can maintain your body’s fluid balance. In dialysis, you get rid of 2-3 days’ worth of fluid in a couple of hours, and that’s very stressful to the heart and maybe to the brain as well. The second advantage is that patients will be able to eat a normal diet. Some waste products that are byproducts of our nutritional intake are slow to leave the body. So in dialysis, we restrict the diet severely and add medicines to soak up extra phosphorus. With a continuous treatment, you can balance excretion and intake.

The other aspect is that dialysis requires an immense amount of disposables. Hundreds of liters of water per patient per treatment, hundreds of thousands of dialysis cartridges and IV bags every year. The artificial kidney doesn’t need a water supply, disposable sorbent, or cartridges.
 

How does the artificial kidney work?

Dr. Fissell:
Just like a healthy kidney. We have a unit that filters the blood so that red blood cells, white blood cells, platelets, antibodies, albumin – all the good stuff that your body worked hard to synthesize – stays in the blood, but a watery soup of toxins and waste is separated out. In a second unit, called the bioreactor, kidney cells concentrate those wastes and toxins into urine.

Dr. Roy: We used a technology called silicon micro-machining to invent an entirely new membrane that mimics a healthy kidney’s filters. It filters the blood just using the patient’s heart as a pump. No electric motors, no batteries, no wires. This lets us have something that’s completely implanted.

We also developed a cell culture of kidney cells that function in an artificial kidney. Normally, cells in a dish don’t fully adopt the features of a cell in the body. We looked at the literature around 3-D printing of organs. We learned that, in addition to fluid flow, stiff scaffolds, like cell culture dishes, trigger specific signals that keep the cells from functioning. We overcame that by looking at the physical microenvironment of the cells –  not the hormones and proteins, but instead the fundamentals of the laboratory environment. For example, most organs are soft, yet plastic lab dishes are hard. By using tools that replicated the softness and fluid flow of a healthy kidney, remarkably, these cells functioned better than on a plastic dish.
 

Would patients need immunosuppressive or anticoagulation medication?

Dr. Fissell:
They wouldn’t need either. The structure and chemistry of the device prevents blood from clotting. And the membranes in the device are a physical barrier between the host immune system and the donor cells, so the body won’t reject the device.

What is the state of the technology now?

Dr. Fissell:
We have shown the function of the filters and the function of the cells, both separately and together, in preclinical in vivo testing. What we now need to do is construct clinical-grade devices and complete sterility and biocompatibility testing to initiate a human trial. That’s going to take between $12 million and $15 million in device manufacturing.

So it’s more a matter of money than time until the first clinical trials?

Dr. Roy: Yes, exactly. We don’t like to say that a clinical trial will start by such-and-such year. From the very start of the project, we have been resource limited.

A version of this article first appeared on Medscape.com.

Liver transplant in CRC: Who might benefit?

Article Type
Changed
Thu, 10/05/2023 - 19:31

For carefully selected patients with colorectal cancer (CRC), a liver transplant may offer long-term survival and potentially even cure unresectable liver metastases.

A Norwegian review of 61 patients who had liver transplants for unresectable colorectal liver metastases found that half of patients were still alive at 5 years and that about one in five appeared to be cured at 10 years.

“It seems likely that there is a small group of patients with unresectable colorectal liver metastases who should be considered for transplant, and long-term survival and possibly cure are achievable in these patients with appropriate selection,” Ryan Ellis, MD, and Michael D’Angelica, MD, wrote in a commentary published alongside the study in JAMA Surgery.

The core question, however, is how to identify patients who will benefit the most from a liver transplant, said Dr. Ellis and Dr. D’Angelica, both surgical oncologists in the Hepatopancreatobiliary Service at Memorial Sloan Kettering Cancer Center, New York. Looking closely at who did well in this analysis can offer clues to appropriate patient selection, the editorialists said.

Three decades ago, the oncology community had largely abandoned liver transplant in this population after studies showed overall 5-year survival of less than 20%. Some patients, however, did better, which prompted the Norwegian investigators to attempt to refine patient selection.

In the current prospective nonrandomized study, 61 patients had liver transplants for unresectable metastases at Oslo University Hospital from 2006 to 2020.

The researchers reported a median overall survival of 60.3 months, with about half of patients (50.4%) alive at 5 years.

Most patients (78.3%) experienced a relapse after liver transplant, with a median time to relapse of 9 months and with most occurring within 2 years of transplant. Median overall survival from time of relapse was 37.1 months, with 5-year survival at nearly 35% in this group and with one patient still alive 156 months after relapse.

The remaining 21.7% of patients (n = 13) did not experience a relapse post-transplant at their last follow-up.

Given the variety of responses to liver transplant, how can experts differentiate patients who will benefit most from those who won’t?

The researchers looked at several factors, including Oslo score and Fong Clinical Risk Score. The Oslo score assesses overall survival among liver transplant patients, while the Fong score predicts recurrence risk for patients with CRC liver metastasis following resection. These scores assign one point for each adverse prognostic factor.

Among the 10 patients who had an Oslo score of 0, median overall survival was 151.6 months, and the 5-year and 10-year survival rates reached nearly 89%. Among the 27 patients with an Oslo score of 1, median overall survival was 60.3 months, and 5-year overall survival was 54.7%. No patients with an Oslo score of 4 lived for 5 years.

As for FCRS, median overall survival was 164.9 months among those with a score of 1, 90.5 months among those with a score of 2, 59.9 months for those with a score of 3, 32.8 months for those with a score of 4, and 25.3 months for those with the highest score of 5 (P < .001). Overall, these patients had 5-year overall survival of 100%, 63.9%, 49.4%, 33.3%, and 0%, respectively.

In addition to Oslo and Fong scores, metabolic tumor volume on PET scan (PET-MTV) was also a good prognostic factor for survival. Among the 40 patients with MTV values less than 70 cm3, 5-year overall survival was nearly 67%, while those with values above 70 cm3 had a 5-year overall survival of 23.3%.

Additional harbingers of low 5-year survival, in addition to higher Oslo and Fong scores and PET-MTV above 70 cm3, included a tumor size greater than 5.5 cm, progressive disease while receiving chemotherapy, primary tumors in the ascending colon, tumor burden scores of 9 or higher, and nine or more liver lesions.

Overall, the current analysis can help oncologists identify patients who may benefit from a liver transplant.

The findings indicate that “patients with liver-only metastases and favorable pretransplant prognostic scoring [have] long-term survival comparable with conventional indications for liver transplant, thus providing a potential curative treatment option in patients otherwise offered only palliative care,” said investigators led by Svein Dueland, MD, PhD, a member of the Transplant Oncology Research Group at Oslo University Hospital.

Perhaps “the most compelling argument in favor of liver transplant lies in the likely curative potential evidenced by the 13 disease-free patients,” Dr. Ellis and Dr. D’Angelica wrote.

But even some patients who had early recurrences did well following transplant. The investigators noted that early recurrences in this population aren’t as dire as in other settings because they generally manifest as slow-growing lung metastases that can be caught early and resected with curative intent.

A major hurdle to broader use of liver transplants in this population is the scarcity of donor grafts. To manage demand, the investigators suggested “extended-criteria donor grafts” – grafts that don’t meet ideal criteria – and the use of the RAPID technique for liver transplant, which opens the door to using one graft for two patients and using living donors with low risk to the donor.

Another challenge will be identifying patients with unresectable colorectal liver metastases who may experience long-term survival following transplant and possibly a cure. “We all will need to keep a sharp eye out for these patients – they might be hard to find!” Dr. Ellis and Dr. D’Angelica wrote.

The study was supported by Oslo University Hospital, the Norwegian Cancer Society, and South-Eastern Norway Regional Health Authority. The investigators and editorialists report no relevant financial relationships.

A version of this article first appeared on Medscape.com.

The good, bad, and ugly of direct-to-consumer advertising

Article Type
Changed
Wed, 08/30/2023 - 12:14

Case 1: A 48-year-old female with a 10-year history of left-sided ulcerative colitis (UC) has been well controlled on an injectable biologic for 5 years. Her last colonoscopy, one year ago, was normal, without erythema or friability. She presents for an interim visit due to an increase in stool frequency from 2 to 4 bowel movements a day. She denies urgency, nocturnal bowel movements, or blood in her stool. She is concerned about a disease flare and wonders if she is on the right medication. She just saw a TV ad for a new oral UC medication and wants to switch because she prefers an oral medication to an injectable one. Physical exam is normal, and in-office flexible sigmoidoscopy demonstrates no change in her colon appearance. You advise her to stay on the biologic because she is still considered well controlled. She insists on being switched to the new oral medicine. When you probe her more, she says that the TV ad she saw shows people getting the medicine leading normal lives, which has enormous appeal to her.



Case 2: A 52-year-old healthy male is referred for colonoscopy evaluation. He reports no change in bowel habits and rare blood in his stool, and he thinks his uncle had colon cancer at age 48. He is anxious and not very receptive to having a procedure. He recently saw a TV advertisement promoting non-colonoscopy-based colon cancer screening. You recommend a colonoscopy based on his family history, but he insists on stool-based screening.



Case 3: A 32-year-old female with moderately to well-controlled IBD asks you to review a new “multi-omic” profile of her gut microbiome that she saw advertised on social media. The test report she provides contains a snapshot of her microbiome, including abundances of single species and predicted functions for these bacteria, from a single stool sample collected and submitted 6 months ago. You counsel her on the role of the gut microbiome in IBD and explain that there is not yet enough knowledge or technology to incorporate these test results into clinical care. The patient is frustrated and wants to know why you’re “behind the times.”


These cases may sound familiar to a practicing gastroenterologist. The platform driving all three of these clinical encounters, direct-to-consumer advertising (DTCA), is a legal mechanism by which a commercial entity can communicate directly with the consumer about a medicine or device, bypassing a health care professional.

In the 1960s, Congress granted the Food and Drug Administration regulatory authority over prescription drug labeling and advertising. This included ensuring that ads (1) were not false or misleading, (2) presented a “fair balance” of drug risks and benefits, (3) included facts “material” to a drug’s advertised uses, and (4) included a “brief summary” of all risks described in the drug’s labeling.

Direct-to-consumer advertising increased dramatically in the late 1990s after the FDA eased regulations around risk information by requiring ads to include only “major risks” and provide resources directing consumers to full risk information.

In 2022, the top 10 pharmaceutical ad spenders combined for a total of about $1.7 billion in TV ad spend, with the two top categories being inflammation and diabetes.

The role of the FDA in regulating DTCA of at-home tests is still evolving and largely depends on the intended use of the test results and the health claims used to market the test (that is, whether the test is designed simply to return general information, as in case 3, where DTCA regulations may not apply, or is marketed with specific medical claims or diagnostic interpretations, as in case 2, where DTCA regulations clearly apply).

DTCA has transformed clinical interactions between patients and health care providers since its inception. It has both potential benefits and potential risks. DTCA can serve to increase disease awareness (e.g., the need for colon cancer screening). It may also prompt patients who might otherwise disregard “red flag” signs and symptoms to seek medical evaluation (e.g., rectal bleeding). DTCA can also alert health care providers to new treatment options for diseases within their scope of practice and encourage them to expand their armamentarium.

In bioethics terms, DTCA can be beneficial in facilitating patient autonomy and promoting justice. For example, DTCA can “even the playing field” by ensuring that patients have equitable access to information about available treatments regardless of their socioeconomic status. In doing so, it can empower patients and allow them to have more meaningful discussions with their health care providers and make more informed decisions. In addition, patients may be introduced to alternative testing modalities (i.e., stool-based CRC screening) that, while not necessarily the best screening modality given individual risk (as in Case 2), may offer benefit with greater acceptance compared to inaction (i.e., no screening). Last, the idea of direct-to-consumer “omics” profiling has empowered patients as “citizen scientists” and led to the advent of “biohacking” among the lay population. In doing so, it has challenged the previous bounds of patient autonomy in healthcare by broadening the types of personal health data available to individuals, even when the clinical utility of this data may not yet be clear.

On the flip side, it is undeniable that DTCA of medical products is driven by commercial interests. Branded drugs are primarily, if not exclusively, promoted in DTCA, but these drugs may not always align with standard of care treatment recommendations. A study published in February 2023 in JAMA found that drugs with lower added clinical benefit and higher total sales were associated with higher DTCA spending.

With patients entering medical encounters with preconceived notions of what drugs they want to be prescribed based on media exposure, the ability of health care providers to provide sound medical advice regarding treatment may be circumvented. A patient’s preferred therapy based on exposure to DTCA may sharply contrast with their provider’s recommendation based on their experience and expertise and knowledge of the patient’s unique clinical history.
 

 

 

Unreasonable expectations

An additional potential downside of DTCA is that it can instill unreasonable expectations on the part of patients that the advertised product will benefit them. While DTCA is required to be fair and balanced in reporting benefits and risks, it is difficult to meaningfully address nuanced clinical risks in a brief TV ad, and patients may come away with a skewed view of the risk-benefit equation. Furthermore, social media advertising and associated formats may not provide the same level of digestible information as other forms of media and are targeted to individuals likely to identify with the product. Finally, as stated above, only branded products (vs. generics) are advertised. Branded products are generally more costly, and where less expensive and equally effective therapies exist, societal costs need to be considered. This can lead to inequities in distributive justice, which is the societal allocation of resources. The more the health care market is driven toward higher costs in one segment, the fewer resources are available in another. This may affect regions where health care resources are limited.
 

Shared decision-making

Returning to the three cases above, in case 1 the UC patient’s awareness of new treatment options has prompted a shared decision-making discussion. She has a renewed interest in exploring a different route of medication administration because of DTCA. Although the patient is seemingly well controlled on her current IBD therapy, and her minor fluctuations in symptoms might otherwise be a reason simply to observe more closely, she sees this as a reason to change her treatment based on her impression from the DTCA.

Regarding case 2, increased disease awareness and acceptance of CRC screening are themselves positive outcomes. Although commercially driven, the advertised test benefits society by decreasing disease burden and offers a ready alternative with established benefits compared with no screening.

Regarding the proactive “omics-curious” IBD patient in case 3, despite the patient’s disappointment with the DTCA-promoted test’s limited clinical utility at this time, the patient may be communicating a general openness to novel approaches to treating IBD and to advancing her understanding of her disease.

So where does that leave you as a clinician?

As of today, if you live in the U.S., DTCA is a reality. As you navigate day-to-day patient interactions, it is important to keep in mind that your first obligation is to your patients and their best medical interests. Having a well-informed and engaged patient is a positive effect of DTCA over the past 30 years, despite the challenging discussions you sometimes are forced to have. In many cases, patients are more self-aware of and engaged in their health and are acting on it due to direct acquisition of information from DTCA platforms. As a clinician, you have an ethical obligation to educate your patients and manage expectations, even when those expectations may be formed on the basis of DTCA with inherent conflicts of interest for promoting a product. Moreover, though certain products may be trendy or promoted on a popular-science basis, the underlying technology (e.g., stool-based screening or metagenomics) and/or resultant data are likely not well understood by the typical lay patient, such that it may be difficult for a patient to comprehend how the product may not be particularly informative or appropriate with respect to their personal medical history without additional counseling. Despite the potentially awkward clinician-patient dynamic precipitated by DTCA, these moments do offer an opportunity for you to build rapport with your patients by taking the time to fill in the gaps in their understanding of their treatment plans, alternatives, individual risk factors, and/or disease while gaining a greater appreciation for what they may personally prioritize with respect to their own health. Ultimately, as we transition further toward precision medicine approaches in health care, shared interest in individualized health decisions, at least partially informed by DTCA, is a positive outcome.

Dr. Sloan is chief medical officer at Abivax. He is a member of the AGA Ethics Committee for COI. David A. Drew, PhD, is assistant professor of medicine, Harvard Medical School, Boston, and director of the Massachusetts General Hospital Biobanking, Clinical & Translational Epidemiology Unit. He is a member of the AGA Ethics Committee.

Publications
Topics
Sections



First-line therapy in T2D: Has metformin been ‘dethroned’?

Article Type
Changed
Thu, 08/17/2023 - 08:24

Initially approved by the U.S. Food and Drug Administration (FDA) in 1994, metformin has been the preferred first-line glucose-lowering agent for patients with type 2 diabetes (T2D) owing to its effectiveness, low hypoglycemia risk, weight neutrality, long clinical track record of safety, and affordability. However, the advent of newer glucose-lowering agents with evidence-based cardiovascular (CV) and renal benefits calls into question whether metformin should continue to be the initial pharmacotherapy for all patients with T2D. To help determine whether metformin has been “dethroned” as first-line treatment for T2D, here is a brief review of recent evidence and current guideline recommendations.

Cardiovascular outcome trials transform standard of care

In 2008, the FDA issued guidance to industry to ensure that CV risk is more thoroughly addressed during development of T2D therapies. This guidance required dedicated trials to establish the CV safety of new glucose-lowering therapies. Findings from the resulting cardiovascular outcome trials (CVOTs), and from the large renal and heart failure (HF) outcome trials that followed, have prompted frequent and substantial updates to major guidelines. On the basis of this evidence, contemporary clinical practice guidelines have shifted from a traditional glucocentric treatment approach to a holistic management approach that emphasizes organ protection through heart-kidney-metabolic risk reduction.

Per the 2008 FDA guidance, dipeptidyl peptidase-4 (DPP-4) inhibitors, glucagonlike peptide-1 (GLP-1) receptor agonists, and sodium-glucose cotransporter-2 (SGLT2) inhibitors were evaluated in large dedicated CVOTs. Findings from several CVOTs established GLP-1 receptor agonist and SGLT2 inhibitor CV safety and unexpectedly demonstrated reduced rates of major adverse cardiovascular events (MACE) relative to placebo. The LEADER and EMPA-REG OUTCOME trials were the first CVOTs to report cardioprotective benefits of the GLP-1 receptor agonist liraglutide and the SGLT2 inhibitor empagliflozin, respectively. The LEADER trial reported a significant 13% relative risk reduction for its primary composite MACE outcome, and the EMPA-REG OUTCOME trial similarly reported a 14% relative risk reduction for MACE. After CVOTs on other GLP-1 receptor agonists and SGLT2 inhibitors reported CV benefit, clinical practice guidelines began to recommend use of these agents in at-risk patients to mitigate CV risk.
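
For readers converting between the effect measures used in these trials, the percentage reductions quoted above map directly onto the reported hazard ratios when the hazard ratio is treated as an approximate relative risk; a minimal worked example:

\[ \text{RRR} = 1 - \text{HR} \;\Longrightarrow\; \text{HR} \approx 1 - 0.13 = 0.87 \ \text{(LEADER)}, \qquad \text{HR} \approx 1 - 0.14 = 0.86 \ \text{(EMPA-REG OUTCOME)} \]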

During the period when most CVOTs were designed and conducted, a majority of trial participants were receiving metformin at baseline. Inclusion of a small subset of metformin-naive participants in these trials allowed for several post hoc and meta-analyses investigating the impact of background metformin use on the overall CV benefits reported. Depending on the trial, baseline metformin use in large GLP-1 receptor agonist CVOTs ranged from 66% to 81%. For instance, 76% of participants in the LEADER trial were receiving metformin at baseline, but a post hoc analysis found no heterogeneity for the observed CV benefit based on background metformin use. Similarly, a subgroup analysis of pooled data from the SUSTAIN-6 and PIONEER 6 trials of injectable and oral formulations of semaglutide, respectively, reported similar CV outcomes for participants, regardless of concomitant metformin use. When looking at the GLP-1 receptor agonist class overall, a meta-analysis of seven CVOTs, which included participants with established atherosclerotic cardiovascular disease (ASCVD) and those with multiple ASCVD risk factors, concluded that GLP-1 receptor agonist therapy reduced the overall incidence of MACE in participants not receiving concomitant metformin at baseline.

Similar analyses have examined the impact of background metformin use on CV outcomes with SGLT2 inhibitors. An analysis of EMPA-REG OUTCOME found that empagliflozin improved CV outcomes and reduced mortality irrespective of background metformin, sulfonylurea, or insulin use. Of note, this analysis suggested a greater risk reduction for incident or worsening nephropathy in patients not on concomitant metformin (hazard ratio, 0.47; 95% confidence interval, 0.37-0.59; P = .01), when compared with those taking metformin at baseline (HR, 0.68; 95% CI, 0.58-0.79; P = .01). In addition, a meta-analysis of six large outcome trials found consistent benefits of SGLT2 inhibition on CV, kidney, and mortality outcomes regardless of background metformin treatment. Therefore, although CVOTs on GLP-1 receptor agonists and SGLT2 inhibitors were not designed to assess the impact of background metformin use on CV outcomes, available evidence supports the CV benefits of these agents independent of metformin use.
 

 

 

Individualizing care to attain cardiorenal-metabolic goals

Three dedicated SGLT2 inhibitor renal outcome trials have been published to date: CREDENCE, DAPA-CKD, and EMPA-KIDNEY. All three studies confirmed the positive secondary renal outcomes observed in SGLT2 inhibitor CVOTs: reduced progression of kidney disease, HF-associated hospital admissions, and CV-related death. The observed renal and CV benefits in the CREDENCE trial were consistent across different levels of kidney function. Similarly, a meta-analysis of five SGLT2 inhibitor trials in patients with HF demonstrated a decreased risk for CV-related death and admission for HF, irrespective of baseline heart function. The ongoing FLOW trial is the first dedicated kidney outcome trial to evaluate the effectiveness of a GLP-1 receptor agonist (semaglutide) in slowing the progression and worsening of chronic kidney disease (CKD) in patients with T2D.

As previously noted, findings from the LEADER and EMPA-REG OUTCOME trials demonstrated the beneficial effects of GLP-1 receptor agonists and SGLT2 inhibitors not only on MACE but also on secondary HF and kidney disease outcomes. These findings have supported a series of dedicated HF and kidney outcome trials that further inform the standard of care for patients with these key comorbidities. Indeed, the American Diabetes Association’s 2023 Standards of Care in Diabetes updated its recommendations and algorithm for the use of glucose-lowering medications in the management of T2D. The current ADA recommendations stress cardiorenal risk reduction while concurrently achieving and maintaining glycemic and weight management goals. On the basis of evolving outcome trial data, GLP-1 receptor agonists and SGLT2 inhibitors with evidence of benefit are recommended for patients with established ASCVD or at high risk for ASCVD. Further, the Standards preferentially recommend SGLT2 inhibitors for patients with HF and/or CKD. Because evidence suggests no heterogeneity of MACE benefit by hemoglobin A1c for GLP-1 receptor agonists, and no heterogeneity of HF or CKD benefit by A1c for SGLT2 inhibitors, these agents are recommended for cardiorenal risk reduction regardless of the need to lower glucose.

The 2023 update to the American Association of Clinical Endocrinology Consensus Statement: Type 2 Diabetes Management Algorithm similarly recommends the use of GLP-1 receptor agonists and SGLT2 inhibitors to improve cardiorenal outcomes. To further emphasize the importance of prescribing agents with proven organ-protective benefits, the AACE consensus statement provides a complications-centric algorithm to guide therapeutic decisions for risk reduction in patients with key comorbidities (for instance, ASCVD, HF, CKD) and a separate glucocentric algorithm to guide selection and intensification of glucose-lowering agents in patients without key comorbidities to meet individualized glycemic targets. Within the complications-centric algorithm, AACE recommends GLP-1 receptor agonists and SGLT2 inhibitors as first-line treatment for cardiorenal risk reduction regardless of background metformin use or A1c level.

In addition to emphasizing GLP-1 receptor agonists and SGLT2 inhibitors for organ protection, guidelines now recommend SGLT2 inhibitors as standard-of-care therapy in patients with T2D and CKD with an estimated glomerular filtration rate ≥ 20 mL/min per 1.73 m2, and in patients with HF irrespective of ejection fraction or a diagnosis of diabetes. Overall, a common thread within current guidelines is the importance of individualized therapy based on patient- and medication-specific factors.
 

 

 

Optimizing guideline-directed medical therapy

Results from the DISCOVER trial found that GLP-1 receptor agonist and SGLT2 inhibitor use was less likely in the key patient subgroups most likely to benefit from therapy, including patients with peripheral artery disease and CKD. Factors contributing to underutilization of newer cardiorenal protective glucose-lowering therapies range from cost and access barriers to clinician-level barriers (for example, lack of knowledge on CKD, lack of familiarity with CKD practice guidelines). Addressing these issues and helping patients work through financial and other access barriers is essential to optimize the utilization of these therapies and improve cardiorenal and metabolic outcomes.

So, has metformin been “dethroned” as a first-line therapy for T2D? As is often the case in medicine, the answer depends on the individual patient and clinical situation. Metformin remains an important first-line treatment, in combination with lifestyle interventions, to help patients with T2D without key cardiorenal comorbidities achieve individualized glycemic targets. However, based on evidence demonstrating cardiorenal protective benefits along with improved glycemia and weight loss, GLP-1 receptor agonists and SGLT2 inhibitors may be considered as first-line treatment for patients with T2D who have, or are at high risk for, ASCVD, HF, or CKD, regardless of the need for additional glucose-lowering agents and independent of background metformin. Ultimately, the choice of first-line therapy for patients with T2D should be informed by individualized treatment goals, preferences, and cost-related access. Continued efforts to increase patient access to GLP-1 receptor agonists and SGLT2 inhibitors as first-line treatment when indicated are essential to ensure optimal treatment and outcomes.
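
As a rough illustration of the decision logic summarized above, the complications-first approach can be sketched in a few lines of Python. This is a simplified, hypothetical encoding for illustration only (not a clinical decision tool), and the function name and inputs are shorthand introduced here rather than anything defined by the ADA or AACE.

# Simplified, illustrative sketch of the complications-first logic described above.
# Hypothetical helper for illustration only; not a clinical decision tool.

def first_line_options(ascvd: bool, high_ascvd_risk: bool, hf: bool, ckd: bool) -> list[str]:
    options = []
    if hf or ckd:
        # SGLT2 inhibitors are preferentially recommended for HF and/or CKD.
        options.append("SGLT2 inhibitor with proven benefit")
    if ascvd or high_ascvd_risk:
        # Either class with proven CV benefit, regardless of A1c or background metformin.
        options.append("GLP-1 receptor agonist or SGLT2 inhibitor with proven CV benefit")
    if not options:
        # Without key cardiorenal comorbidities, metformin plus lifestyle remains first line.
        options.append("metformin plus lifestyle intervention, titrated to an individualized A1c target")
    return options

print(first_line_options(ascvd=False, high_ascvd_risk=False, hf=False, ckd=True))
# ['SGLT2 inhibitor with proven benefit']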

Dr. Neumiller is professor, department of pharmacotherapy, Washington State University, Spokane. He disclosed ties with Bayer, Boehringer Ingelheim, and Eli Lilly. Dr. Alicic is clinical professor, department of medicine, University of Washington; and associate director of research, Inland Northwest Washington, Providence St. Joseph Health, Spokane. She disclosed ties with Providence St. Joseph Health, Boehringer Ingelheim/Lilly, and Bayer.

A version of this article appeared on Medscape.com.


Certain genes predict abdominal fat regain after weight loss

Article Type
Changed
Wed, 08/16/2023 - 09:55

Genetic effects on abdominal obesity may be more pronounced than those on general obesity during weight regain, a new study suggests.

People with a genetic predisposition for abdominal adiposity regained more weight around their waist after weight loss than other people.

However, people with a genetic predisposition for a higher body mass index did not regain more weight after weight loss than others.

These findings come from a secondary analysis of data from participants in the Look AHEAD trial who had type 2 diabetes and overweight or obesity, had lost at least 3% of their initial weight after 1 year of intensive lifestyle intervention or control, and were then followed for another 3 years.

The study showed that change in waist circumference (a measure of abdominal obesity) is regulated by a pathway separate from that of overall obesity during weight regain, the researchers report in their paper, published in Diabetes.

“These findings are the first of their kind and provide new insights into the mechanisms of weight regain,” they conclude.

“It was already known in the scientific literature that genes that are associated with abdominal fat deposition are different from the ones associated with overall obesity,” Malene Revsbech Christiansen, a PhD student, and Tuomas O. Kilpeläinen, PhD, associate professor, Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, said in a joint email to this news organization.

Genetic variants associated with overall obesity are expressed in the central nervous system, whereas genetic variants associated with waist circumference are expressed in adipose tissue and might be involved in insulin sensitivity or in fat cell shape and differentiation, influencing how much adipose cells can expand in size or in number.

If those genes can function as targets for therapeutic agents, this might benefit patients who possess the genetic variants that predispose them to a higher waist-to-hip ratio adjusted for BMI (WHR-adjBMI), they said.

“However, this is a preliminary study that discovered an association between genetic variants and abdominal fat changes during weight loss,” they cautioned.

Further study is needed, they said, to test the associations in people without obesity and type 2 diabetes and to investigate this research question in people who underwent bariatric surgery or took weight-loss medications, “especially now that Wegovy has increased in popularity.”

“Genetic profiling,” they noted, “is becoming more popular as the prices go down, and future treatments are moving towards precision medicine, where treatments are tailored towards individuals rather than ‘one size fits all.’ ”

In the future, genetic tests might identify people who are more predisposed to abdominal fat deposition, hence needing more follow-up and help with lifestyle changes.

“For now, it does not seem realistic to test individuals for all these 481 [genetic] variants [predisposing to abdominal adiposity]. Each of these genetic variants predisposes, but is not deterministic, for the outcome, because of their individual small effects on waist circumference.”

“It should be stated,” they added, “that changing the diet, physical activity pattern, and behavior are still the main factors when losing weight and maintaining a healthy body.”    
 

Maintaining weight loss is the big challenge

“Lifestyle interventions typically result in an average weight loss of 7%-10% within 6 months; however, maintaining the weight loss is a significant challenge, as participants often regain an average one-third of the lost weight within 1 year and 50%-100% within 5 years,” the researchers write.

They aimed to study whether genetic predisposition to general or abdominal obesity predicts weight gain after weight loss, based on data from 822 women and 593 men in the Look AHEAD trial.

On average, after the first year, participants in the intensive lifestyle group had lost 24 lbs (10.9 kg) and 3.55 inches (9 cm) around the waist, while participants in the control group had lost 15 lbs (6.8 kg) and 1.98 inches (5 cm) around the waist.

From year 1 to year 2, participants in the intensive lifestyle group regained 6.09 lbs and 0.98 inches around the waist, and participants in the control group lost 1.41 lbs and 0.17 inches around the waist.

From year 1 to year 4, participants in the intensive lifestyle group regained 11.05 lbs and 1.92 inches around the waist, and participants in the control group lost 2.24 lbs and 0.76 inches around the waist.

From genome-wide association studies (GWAS) in about 700,000 mainly White individuals of European origin, the researchers constructed a genetic risk score based on 894 independent single nucleotide polymorphisms (SNPs) associated with high BMI and another genetic risk score based on 481 SNPs associated with high WHR-adjBMI.
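The report does not describe the scoring formula itself; as a rough illustration, a genetic risk score of this kind is usually computed as a weighted sum of a person's risk-allele counts, with each SNP weighted by its effect size from the GWAS. The Python sketch below uses hypothetical SNP identifiers, weights, and genotypes, not values from the study.

```python
# Minimal sketch of a weighted genetic risk score (polygenic score).
# SNP IDs, effect sizes, and genotypes are invented for illustration only.

# Per-SNP effect sizes (betas) for the trait, e.g. WHR-adjBMI, from GWAS summary statistics
effect_sizes = {"rs0001": 0.021, "rs0002": -0.015, "rs0003": 0.008}

# One person's genotypes coded as risk-allele counts (0, 1, or 2 copies per SNP)
genotypes = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

def genetic_risk_score(genotypes: dict, effect_sizes: dict) -> float:
    """Weighted sum of risk-allele counts; a higher score indicates greater
    genetic predisposition to the trait."""
    return sum(effect_sizes[snp] * count
               for snp, count in genotypes.items()
               if snp in effect_sizes)

print(genetic_risk_score(genotypes, effect_sizes))  # 0.021*2 + (-0.015)*0 + 0.008*1 = 0.05
```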

Having a genetic predisposition to higher WHR-adjBMI predicted an increase in abdominal obesity after weight loss, whereas having a genetic predisposition to higher BMI did not predict weight regain.

“These results suggest that genetic effects on abdominal obesity may be more pronounced than those on general obesity during weight regain,” the researchers conclude.

The researchers were supported by grants from the Novo Nordisk Foundation and the Danish Diabetes Academy (funded by the Novo Nordisk Foundation). The authors report no relevant financial relationships.

A version of this article appeared on Medscape.com.
