US Multi-Society Task Force lowers recommended CRC screening age

The U.S. Multi-Society Task Force on Colorectal Cancer (CRC) has lowered the recommended age to start CRC screening from 50 to 45 years of age for all average-risk individuals.

Although no studies have directly demonstrated the result of lowering the age of screening, lead author Swati G. Patel, MD, of the University of Colorado Anschutz Medical Center, Aurora, and colleagues suggested that the increasing incidence of advanced CRC among younger individuals, coupled with the net benefit of screening, warrants a lower age threshold.

“Recent data ... show that CRC incidence rates in individuals ages 50 to 64 have increased by 1% annually between 2011 and 2016,” the authors wrote in Gastroenterology. “Similarly, CRC incidence and mortality rates in persons under age 50, termed early-age onset CRC (EAO-CRC), are also increasing.”

The task force of nine experts, representing the American Gastroenterological Association, the American College of Gastroenterology, and the American Society for Gastrointestinal Endoscopy, conducted a literature review and generated recommendations using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) criteria. In addition to recommending a lower age for initial screening, Dr. Patel and colleagues provided guidance for cessation of screening among older individuals.
 

Guidance for screening initiation

According to the authors, the present risk of CRC among younger individuals mirrors the historical risk for older individuals before screening was prevalent.

“The current CRC incidence rates in individuals ages 45 to 49 are similar to the incidence rates observed in 50-year-olds in 1992, before widespread CRC screening was performed,” they wrote.

Elevated rates among younger people have been disproportionately driven by rectal cancer, according to the authors. From 2006 to 2015, incidence of rectal cancer among Americans under 50 increased 1.7% per year, compared with 0.7% per year for colon cancer, based on data from the North American Association of Central Cancer Registries.

Associated mortality rates also increased, the authors noted. From 1999 to 2019, mortality from colon cancer among people aged 45-49 years increased from 6.4 to 6.6 deaths per 100,000 individuals, while deaths from rectal cancer increased from 1.3 to 1.7 per 100,000, according to the CDC. Concurrently, CRC-associated mortality rates among older individuals generally declined.

While these findings suggest a growing disease burden among the under-50-year age group, controlled data demonstrating the effects of earlier screening are lacking, Dr. Patel and colleagues noted. Still, they predicted that expanded screening would generate a net benefit.

“Although there are no CRC screening safety data for average-risk individuals [younger than] 50, there are ample data that colonoscopy for other indications (screening based on family history, symptom evaluation, etc.) is safer when comparing younger versus older individuals,” they wrote.

Supporting this claim, the authors cited three independently generated microsimulation models from the Agency for Healthcare Research and Quality that “showed a favorable balance of life-years gained compared with adverse events,” given 100% compliance.
 

Guidance for screening cessation

As with younger individuals, minimal data are available to determine the best time for screening cessation, according to the task force.

“There are no randomized or observational studies after 2017 that enrolled individuals over age 75 to inform the appropriate time to stop CRC screening,” the authors wrote. “In our search of 37 relevant articles, only one presented primary data for when to stop screening.”

The one available study showed that some individuals older than 74 do, in fact, benefit from screening.

“For example,” Dr. Patel and colleagues wrote, “women without a history of screening and no comorbidities benefitted from annual fecal immunochemical test (FIT) screening until age 90, whereas unscreened men with or without comorbidities benefited from annual FIT screening until age 88. Conversely, screening was not beneficial beyond age 66 in men or women with severe comorbidities.”

The task force therefore recommended personalized screening for individuals 76-85 years of age “based on the balance of benefits and harms and individual patient clinical factors and preferences.”

Screening for individuals 86 years and older, according to the task force, is unnecessary.

The authors disclosed relationships with Olympus America, Bayer Pharmaceuticals, Janssen Pharmaceuticals, and others.

This article was updated on Jan. 3, 2022.

AGA Clinical Practice Update: Commentary on surveillance after ESD for dysplasia and early-stage GI cancer

The American Gastroenterological Association recently published a Clinical Practice Update Commentary outlining surveillance strategies following endoscopic submucosal dissection (ESD) of dysplasia and early gastrointestinal cancer considered pathologically curative.

The suggested practice advice, authored by Andrew Y. Wang, MD, of the University of Virginia, Charlottesville, and colleagues, offers surveillance timelines and modalities based on neoplasia type and location, with accompanying summaries of the relevant literature.

“Long-term U.S. data about ESD outcomes for early GI neoplasia are only beginning to emerge,” the authors wrote in Gastroenterology. “As such, the current clinical practice regarding endoscopic surveillance intervals and the need for other testing (such as radiographic imaging) after ESD considered curative by histopathology is extrapolated from data derived from Asia and other countries, from concepts learned from polypectomy and piecemeal endoscopic mucosal resection (EMR), and from guideline recommendations after local surgical resection.”

The authors went on to suggest that current recommendations for post-ESD surveillance, including international guidelines, “are based more so on expert opinion than rigorous evidence.”

The present update was written to offer additional clarity in this area by providing “a reasonable framework for clinical care and launch points for future research to refine and standardize optimal post-ESD surveillance strategies.”

Foremost, Dr. Wang and colleagues suggested that post-ESD surveillance is necessary because of a lack of standardization concerning the definition of complete resection, along with variable standards of pathological assessment in Western countries, compared with Japan, where pathologists use 2-3 mm serial sectioning and special stains to detect lymphovascular invasion, “which is essential to accurate histopathologic diagnosis and determination of curative resection.”

According to the authors, surveillance endoscopy should be performed with a high-definition endoscope augmented with dye-based or electronic chromoendoscopy, and ideally with optical magnification.

“Although no supporting data are available at this time, it is prudent and may be reasonable to obtain central and peripheral biopsies of the post-ESD scar,” the authors wrote, noting that relevant mucosa should be checked for metachronous lesions.
 

Esophageal dysplasia and esophageal squamous cell carcinoma

Following curative resection of low-grade or high-grade esophageal squamous dysplasia, the authors suggested follow-up esophagogastroduodenoscopy (EGD) initially at intervals of 6-12 months, while advising against endoscopic ultrasonography and radiographic surveillance.

In contrast, Dr. Wang and colleagues suggested that superficial esophageal squamous cell carcinoma removed by ESD may benefit from a shorter interval of endoscopic surveillance, with a range of 3-6 months for first and second follow-up EGDs. Clinicians may also consider endoscopic ultrasonography with each EGD, plus an annual CT scan of the abdomen and chest, for 3-5 years.

“A limitation of ESD is that the at-risk esophagus is left in place, and there is a possibility of developing local recurrence or metachronous neoplasia,” the authors wrote. “Although local recurrence after ESD deemed pathologically curative of esophageal squamous cell carcinoma is infrequent, the development of metachronous lesions is not.”
 

Barrett’s dysplasia and esophageal adenocarcinoma

For all patients, curative removal of Barrett’s dysplasia or esophageal adenocarcinoma should be followed by endoscopy with mucosal ablative therapy at 2-3 months, with treatments every 2-3 months until complete eradication of intestinal metaplasia is achieved, according to Dr. Wang and colleagues.

After complete eradication, patients should undergo endoscopic surveillance at 3-12 months, depending on the degree of dysplasia or T-stage of adenocarcinoma, with subsequent surveillance at intervals ranging from 6 months to 3 years, again depending on disease type.

“Endoscopic resection of visible Barrett’s neoplasia without treatment of Barrett’s esophagus has been associated with significant recurrence rates, so the objective of treatment should be endoscopic resection of visible or nodular dysplasia, followed by complete ablation of any remaining Barrett’s esophagus and associated (flat and/or invisible) dysplasia,” the authors wrote.
 

Gastric dysplasia and gastric adenocarcinoma

According to the update, after curative resection of gastric dysplasia, first follow-up endoscopy should be conducted at 6-12 months. Second follow-up should be conducted at 12 months for low-grade dysplasia versus 6-12 months for high-grade dysplasia, with annual exams thereafter.

For T1a early gastric cancer, the first two follow-up endoscopies should be performed at 6-month intervals, followed by annual exams. T1b Sm1 disease should be surveilled more aggressively, with 3- to 6-month intervals for the first and second follow-up EGDs, plus CT scans of the abdomen and chest and/or endoscopic ultrasound every 6-12 months for 3-5 years.

“For lesions where a curative resection was achieved based on clinical criteria and histopathologic examination, surveillance is performed primarily to detect metachronous gastric cancers,” the authors wrote.
 

Colonic dysplasia and adenocarcinoma

According to the authors, adenomas with low-grade dysplasia or serrated sessile lesions without dysplasia removed by ESD should be rechecked by colonoscopy at 1 year and then 3 years, followed by adherence to U.S. Multi-Society Task Force recommendations.

For traditional serrated adenomas, serrated sessile lesions with dysplasia, adenomas with high-grade dysplasia, carcinoma in situ, intramucosal carcinoma, or dysplasia in the setting of inflammatory bowel disease, first follow-up colonoscopy should be conducted at 6-12 months, 1 year later, then 3 years after that, followed by reversion to USMSTF recommendations, although patients with IBD may benefit from annual colonoscopy.

Finally, patients with superficial T1 colonic adenocarcinoma should be screened more frequently, with colonoscopies at 3-6 months, 6 months, and 1 year, followed by adherence to USMSTF recommendations.

“The current Japanese guideline suggests that recurrence or metastasis after endoscopic resection of T1 (Sm) colonic carcinomas occurs mainly within 3-5 years,” the authors noted.
 

Rectal dysplasia and adenocarcinoma

Best practice advice suggestions for rectal dysplasia and adenocarcinoma are grouped similarly to the above advice for colonic lesions.

For lower-grade lesions, first follow-up with flexible sigmoidoscopy is suggested after 1 year, then 3 years, followed by reversion to USMSTF recommendations. Higher-grade dysplastic lesions should be checked after 6-12 months, 1 year, then 3 years, followed by adherence to USMSTF guidance, again excluding patients with IBD, who may benefit from annual exams.

Patients with superficial T1 rectal adenocarcinoma removed by ESD deemed pathologically curative should be checked with flexible sigmoidoscopy at 3-6 months, again at 3-6 months after the first sigmoidoscopy, then every 6 months for a total of 5 years from the time of ESD, followed by adherence to USMSTF recommendations.

At 1 year following ESD, patients should undergo colonoscopy, which can take the place of one of the follow-up flexible sigmoidoscopy exams; if an advanced adenoma is found, colonoscopy should be repeated after 1 year, versus 3 years if no advanced adenomas are found, followed by adherence to USMSTF recommendations.

Patients with superficial T1 rectal adenocarcinoma should also undergo endoscopic ultrasound or pelvic MRI with contrast every 3-6 months for 2 years, followed by 6-month intervals for a total of 5 years. Annual CT of the chest and abdomen may also be considered for 3-5 years.

Call for research

Dr. Wang and colleagues concluded their update with a call for research.

“We acknowledge that the level of evidence currently available to support much of our surveillance advice is generally low,” they wrote. “The intent of this clinical practice update was to propose surveillance strategies after potentially curative ESD for various GI neoplasms, which might also serve as reference points to stimulate research that will refine future clinical best practice advice.”

The article was supported by the AGA. The authors disclosed relationships with MicroTech, Olympus, Lumendi, U.S. Endoscopy, Boston Scientific, Steris, and others.

This article was updated Dec. 15, 2021.

ESD vs. cEMR: Rates of complete remission in Barrett’s compared

Treatment with endoscopic submucosal dissection (ESD) is associated with higher rates of complete remission of dysplasia at 2 years, compared with cap-assisted endoscopic mucosal resection (cEMR) in patients with Barrett’s esophagus with dysplasia or early-stage intramucosal esophageal adenocarcinoma (EAC), according to study findings.

Despite the seeming advantage of ESD over cEMR, the study found similar rates of complete remission of intestinal metaplasia (CRIM) between the treatment groups at 2 years.

The study authors explained that ESD, a recent development in endoscopic resection, allows for en bloc resection of larger lesions in dysplastic Barrett’s and EAC and features less diagnostic uncertainty, compared with cEMR. Findings from the study highlight the importance of this newer technique but also emphasize the utility of both treatments. “In expert hands both sets of procedures appear to be safe and well tolerated,” wrote study authors Don Codipilly, MD, of the Mayo Clinic in Rochester, Minn., and colleagues in Clinical Gastroenterology and Hepatology.

Given the lack of comparative data on the long-term outcomes of cEMR versus ESD in patients with neoplasia associated with Barrett’s esophagus, Dr. Codipilly and colleagues examined histologic outcomes in a prospectively maintained database of 537 patients who underwent endoscopic eradication therapy for Barrett’s esophagus or EAC at the Mayo Clinic between 2006 and 2020. Only patients who had undergone either cEMR (n = 456) or ESD (n = 81) followed by endoscopic ablation were included in the analysis.

The primary endpoint of the study was the rate and time to complete remission of dysplasia (CRD), which was defined by the absence of dysplasia on biopsy from the gastroesophageal junction and tubular esophagus during at least one surveillance endoscopy. Researchers also examined the rates of complications, such as clinically significant intraprocedural or postprocedural bleeding that required hospitalization, perforation, receipt of red blood cells within 30 days of the initial procedure, and stricture formation that required dilation within 120 days of the index procedure.

Patients in the ESD group had a longer mean length of resected specimens (23.9 vs. 10.9 mm; P < .01) as well as higher rates of en bloc (97.5% vs. 41.9%; P < .01) and R0 resection (58% vs. 20.2%; P < .01). Patients were generally balanced on other basic baseline demographics, including age, sex distribution, and smoking status.

Over a median 11.2-year follow-up period, 420 patients in the cEMR group achieved CRD. In the ESD group, 48 patients achieved CRD over a median 1.4-year follow-up period. The 2-year cumulative probability of CRD was lower in patients who received cEMR than in those who received ESD (75.8% vs. 85.6%, respectively). In a univariate analysis, the likelihood of achieving CRD was lower with cEMR than with ESD (hazard ratio, 0.41; 95% CI, 0.31-0.54; P < .01).
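
A 2-year cumulative probability like those above is typically read off a Kaplan-Meier curve. The following is a minimal sketch, not the authors' actual analysis: it estimates a 2-year cumulative probability of CRD per treatment group with the Python lifelines library, on invented data with assumed column names (years_to_crd, crd_achieved, treatment).

import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical per-patient data: years of follow-up, whether CRD was
# reached (1) or the patient was censored (0), and treatment group.
df = pd.DataFrame({
    "years_to_crd": [0.8, 1.5, 2.4, 0.6, 3.1, 1.1],
    "crd_achieved": [1, 1, 0, 1, 1, 0],
    "treatment": ["cEMR", "cEMR", "cEMR", "ESD", "ESD", "ESD"],
})

for group, sub in df.groupby("treatment"):
    kmf = KaplanMeierFitter()
    kmf.fit(sub["years_to_crd"], event_observed=sub["crd_achieved"])
    # Kaplan-Meier estimates the probability of remaining free of CRD;
    # the cumulative probability of CRD at 2 years is its complement.
    p_free = kmf.survival_function_at_times(2.0).iloc[0]
    print(f"{group}: 2-year cumulative probability of CRD = {1 - p_free:.1%}")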

According to the multivariate analysis, the two independent predictors of CRD were ESD (hazard ratio, 2.38; P < .01) and shorter Barrett’s segment length (HR, 1.11; P < .01).

The investigators also assessed whether advances in cEMR technique contributed to the findings in an analysis of patients who underwent cEMR (n = 48) or ESD (n = 80) from 2015 to 2019. In this analysis, the likelihood of CRD with cEMR remained lower than with ESD (HR, 0.67; 95% CI, 0.45-0.99). Additionally, the likelihood of achieving CRD with cEMR was higher in 2013-2019 (n = 129) than in 2006-2012 (n = 112) (HR, 2.09; 95% CI, 1.59-2.75; P < .01).

Demographic and clinical variables were incorporated into a Cox proportional hazards model to identify factors associated with a decreased likelihood of CRD. This analysis found that a decreased likelihood of CRD was associated with longer Barrett’s esophagus segment length (HR, 0.90; P < .01) and treatment with cEMR versus ESD (HR, 0.42; P < .01).
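
The multivariable analysis can be illustrated in the same spirit. Below is a hypothetical sketch of a Cox proportional hazards fit with lifelines, again on invented data with assumed column names; it is not the study's code, only a demonstration of how hazard ratios such as those reported are obtained.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical dataset: treatment indicator and Barrett's segment
# length as covariates for time to CRD.
df = pd.DataFrame({
    "years_to_crd": [0.8, 1.5, 2.4, 0.6, 3.1, 1.1, 2.0, 0.9],
    "crd_achieved": [1, 1, 0, 1, 1, 0, 1, 1],
    "cemr": [1, 1, 1, 0, 0, 0, 1, 0],  # 1 = cEMR, 0 = ESD
    "segment_length_cm": [3, 6, 8, 2, 4, 9, 5, 3],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_crd", event_col="crd_achieved")
# The exp(coef) column holds the hazard ratios; values below 1 mean a
# lower likelihood of reaching CRD at any given time.
cph.print_summary()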

Over median follow-up periods of 7.8 years in the cEMR group and 1.1 years in the ESD group, approximately 78.5% and 40.7% of patients, respectively, achieved CRIM. While those in the ESD group achieved CRIM earlier, the cumulative probabilities of CRIM were similar by 2 years (59.3% vs. 50.6%; HR, 0.74; 95% CI, 0.52-1.07; P = .11). Shorter Barrett’s esophagus segment was the only independent predictor of CRIM (HR, 1.16; P < .01).

The researchers noted that the study population may have included patients with more severe disease than the general population, which may limit the generalizability of the findings. The lack of a randomized design was cited as a further limitation.

Despite these findings, the researchers explained that “continued monitoring for additional outcomes such as recurrence are required for further elucidation of the optimal role of these procedures in the management of” neoplasia associated with Barrett’s esophagus.

The study was funded by the National Cancer Institute and the Freeman Foundation. The researchers reported no conflicts of interest with any pharmaceutical companies.

Still waiting to see superiority

When compared with cap-assisted EMR (cEMR), endoscopic submucosal dissection (ESD) of visible abnormalities within a Barrett’s segment leads to higher R0 resection rates in patients with Barrett’s related neoplasia. However, its superiority over cEMR with regards to clinical and histological outcomes has remained in question. The current study by Codipilly and colleagues attempts to address this issue by comparing histologic outcomes of cEMR versus ESD in dysplastic Barrett’s.

After following 537 patients who underwent cEMR or ESD, the study found those who underwent ESD were more likely to achieve complete remission of dysplasia (CRD) at 2 years (85.6% vs. 75.8%, respectively; P < .01), with a hazard ratio of 2.38 (P < .01), likely attributable to the higher rates of en bloc (97.5%) and R0 resection (58%) in the ESD group. However, regarding complete remission of intestinal metaplasia (CRIM), there was no difference between the two groups after 2 years, suggesting mid-term outcomes remain the same between both resection techniques, so long as ablation of the remaining Barrett’s segment is performed.

Since therapies that achieve CRIM, rather than primarily CRD, decrease risk of recurrence, the current study suggests ESD is not superior to cEMR in preventing recurrence for Barrett’s related neoplasia, and either technique may be employed based on local expertise. However, ESD is more effective for achieving CRD and may be preferable for lesions greater than 15 mm or lesions where superficial submucosal invasion is suspected and providing an accurate histopathologic specimen would help direct appropriate oncologic therapy. Further, long-term randomized clinical trials are needed to address differences in recurrence between both treatment modalities.

Salmaan Jawaid, MD, is an assistant professor of medicine in interventional endoscopy at Baylor College of Medicine, Houston. He has no relevant conflicts of interest.

Publications
Topics
Sections
Body

 

When compared with cap-assisted EMR (cEMR), endoscopic submucosal dissection (ESD) of visible abnormalities within a Barrett’s segment leads to higher R0 resection rates in patients with Barrett’s related neoplasia. However, its superiority over cEMR with regards to clinical and histological outcomes has remained in question. The current study by Codipilly and colleagues attempts to address this issue by comparing histologic outcomes of cEMR versus ESD in dysplastic Barrett’s.

Baylor College of Medicine
Dr. Salmaan Jawaid
After following 537 patients who underwent cEMR and ESD, the study found those who underwent ESD were more likely to achieve clinical remission of dysplasia (CRD) at 2 years (75.8% vs. 85.6% respectively; P < .01) with a hazard ratio of 2.38 (P < .01), likely attributed to the higher rates of en bloc (97.5%) and R0 resection (58%) in the ESD group. However, regarding clinical remission of intestinal metaplasia (CRIM), there was no difference between the two groups after 2 years, suggesting mid-term outcomes remain the same between both resection techniques, so long as ablation is performed of the remaining Barrett’s segment.

Since therapies that achieve CRIM, rather than primarily CRD, decrease risk of recurrence, the current study suggests ESD is not superior to cEMR in preventing recurrence for Barrett’s related neoplasia, and either technique may be employed based on local expertise. However, ESD is more effective for achieving CRD and may be preferable for lesions greater than 15 mm or lesions where superficial submucosal invasion is suspected and providing an accurate histopathologic specimen would help direct appropriate oncologic therapy. Further, long-term randomized clinical trials are needed to address differences in recurrence between both treatment modalities.

Salmaan Jawaid, MD, is an assistant professor of medicine in interventional endoscopy at Baylor College of Medicine, Houston. He has no relevant conflicts of interest.

Body

 

When compared with cap-assisted EMR (cEMR), endoscopic submucosal dissection (ESD) of visible abnormalities within a Barrett’s segment leads to higher R0 resection rates in patients with Barrett’s related neoplasia. However, its superiority over cEMR with regards to clinical and histological outcomes has remained in question. The current study by Codipilly and colleagues attempts to address this issue by comparing histologic outcomes of cEMR versus ESD in dysplastic Barrett’s.

Baylor College of Medicine
Dr. Salmaan Jawaid
After following 537 patients who underwent cEMR and ESD, the study found those who underwent ESD were more likely to achieve clinical remission of dysplasia (CRD) at 2 years (75.8% vs. 85.6% respectively; P < .01) with a hazard ratio of 2.38 (P < .01), likely attributed to the higher rates of en bloc (97.5%) and R0 resection (58%) in the ESD group. However, regarding clinical remission of intestinal metaplasia (CRIM), there was no difference between the two groups after 2 years, suggesting mid-term outcomes remain the same between both resection techniques, so long as ablation is performed of the remaining Barrett’s segment.

Since therapies that achieve CRIM, rather than primarily CRD, decrease risk of recurrence, the current study suggests ESD is not superior to cEMR in preventing recurrence for Barrett’s related neoplasia, and either technique may be employed based on local expertise. However, ESD is more effective for achieving CRD and may be preferable for lesions greater than 15 mm or lesions where superficial submucosal invasion is suspected and providing an accurate histopathologic specimen would help direct appropriate oncologic therapy. Further, long-term randomized clinical trials are needed to address differences in recurrence between both treatment modalities.

Salmaan Jawaid, MD, is an assistant professor of medicine in interventional endoscopy at Baylor College of Medicine, Houston. He has no relevant conflicts of interest.

Title
Still waiting to see superiority
Still waiting to see superiority

Treatment with endoscopic submucosal dissection (ESD) is associated with higher rates of complete remission of dysplasia at 2 years, compared with cap-assisted endoscopic mucosal resection (cEMR) in patients with Barrett’s esophagus with dysplasia or early-stage intramucosal esophageal adenocarcinoma (EAC), according to study findings.

Despite the seeming advantage of ESD over cEMR, the study found similar rates of complete remission of intestinal metaplasia (CRIM) between the treatment groups at 2 years.


Treatment with endoscopic submucosal dissection (ESD) is associated with higher rates of complete remission of dysplasia at 2 years, compared with cap-assisted endoscopic mucosal resection (cEMR) in patients with Barrett’s esophagus with dysplasia or early-stage intramucosal esophageal adenocarcinoma (EAC), according to study findings.

Despite the seeming advantage of ESD over cEMR, the study found similar rates of complete remission of intestinal metaplasia (CRIM) between the treatment groups at 2 years.

The study authors explained that ESD, a recent development in endoscopic resection, allows for en bloc resection of larger lesions in dysplastic Barrett’s and EAC and features less diagnostic uncertainty, compared with cEMR. Findings from the study highlight the importance of this newer technique but also emphasize the utility of both treatments. “In expert hands both sets of procedures appear to be safe and well tolerated,” wrote study authors Don Codipilly, MD, of the Mayo Clinic in Rochester, Minn., and colleagues in Clinical Gastroenterology and Hepatology.

Given the lack of comparative data on the long-term outcomes of cEMR versus ESD in patients with neoplasia associated with Barrett’s esophagus, Dr. Codipilly and colleagues examined histologic outcomes in a prospectively maintained database of 537 patients who underwent endoscopic eradication therapy for Barrett’s esophagus or EAC at the Mayo Clinic between 2006 and 2020. Only patients who had undergone either cEMR (n = 456) or ESD (n = 81) followed by endoscopic ablation were included in the analysis.

The primary endpoint of the study was the rate and time to complete remission of dysplasia (CRD), which was defined by the absence of dysplasia on biopsy from the gastroesophageal junction and tubular esophagus during at least one surveillance endoscopy. Researchers also examined the rates of complications, such as clinically significant intraprocedural or postprocedural bleeding that required hospitalization, perforation, receipt of red blood cells within 30 days of the initial procedure, and stricture formation that required dilation within 120 days of the index procedure.

Patients in the ESD group had a longer mean length of resected specimens (23.9 vs. 10.9 mm; P < .01) as well as higher rates of en bloc (97.5% vs. 41.9%; P < .01) and R0 resection (58% vs. 20.2%; P < .01). Patients were generally balanced on other basic baseline demographics, including age, sex distribution, and smoking status.

Over a median follow-up of 11.2 years, 420 patients in the cEMR group achieved CRD; in the ESD group, 48 patients achieved CRD over a median follow-up of 1.4 years. The 2-year cumulative probability of CRD was lower with cEMR than with ESD (75.8% vs. 85.6%, respectively). In a univariate analysis, the likelihood of achieving CRD was lower with cEMR than with ESD (hazard ratio, 0.41; 95% CI, 0.31-0.54; P < .01).
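Cumulative probabilities such as these are the kind of estimate a Kaplan-Meier analysis of time-to-CRD data produces. The sketch below is a rough, hypothetical illustration only — the study's dataset and code are not public, and the file and column names are invented — showing how such a 2-year estimate could be computed in Python with the lifelines package:

# Hypothetical sketch: Kaplan-Meier estimate of the cumulative probability of
# CRD at 2 years, by resection technique. Invented columns:
#   years_followup - time to CRD or censoring, in years
#   achieved_crd   - 1 if CRD was reached, 0 if censored
#   procedure      - "cEMR" or "ESD"
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("barretts_cohort.csv")  # hypothetical file

for group in ("cEMR", "ESD"):
    sub = df[df["procedure"] == group]
    km = KaplanMeierFitter()
    km.fit(sub["years_followup"], event_observed=sub["achieved_crd"])
    # The KM "survival" function here is the probability of NOT yet having
    # reached CRD, so the cumulative probability of CRD is its complement.
    p_crd_2y = 1.0 - km.survival_function_at_times(2.0).iloc[0]
    print(f"{group}: cumulative probability of CRD at 2 years = {p_crd_2y:.1%}")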

According to multivariate analysis, the two independent predictors of CRD were ESD (HR, 2.38; P < .01) and shorter Barrett’s segment length (HR, 1.11; P < .01).

The investigators also assessed whether refinements in cEMR technique contributed to the findings by comparing patients who underwent cEMR (n = 48) with those who underwent ESD (n = 80) from 2015 to 2019. In this analysis, the likelihood of CRD remained lower with cEMR than with ESD (HR, 0.67; 95% CI, 0.45-0.99). Additionally, the likelihood of achieving CRD with cEMR was higher in 2013-2019 (n = 129) than in 2006-2012 (n = 112) (HR, 2.09; 95% CI, 1.59-2.75; P < .01).

Demographic and clinical variables were incorporated into a Cox proportional hazards model to identify factors associated with a decreased likelihood of CRD. This analysis found that a decreased likelihood of CRD was associated with longer Barrett’s esophagus segment length (HR, 0.90; P < .01) and with treatment with cEMR rather than ESD (HR, 0.42; P < .01).
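Mechanically, a multivariable Cox model of this sort can be fit in a few lines. The sketch below is illustrative only — the column names are hypothetical, reusing the invented dataset above — and simply shows how adjusted hazard ratios for segment length and resection technique would be obtained:

# Hypothetical sketch of the Cox proportional hazards model described above.
# Invented columns: years_followup, achieved_crd (event indicator),
# segment_length_cm (Barrett's segment length), is_cemr (1 = cEMR, 0 = ESD).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("barretts_cohort.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(
    df[["years_followup", "achieved_crd", "segment_length_cm", "is_cemr"]],
    duration_col="years_followup",
    event_col="achieved_crd",
)
# The exp(coef) column is the hazard ratio for each covariate; the study
# reported HR 0.90 per unit of segment length and HR 0.42 for cEMR vs. ESD.
cph.print_summary()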

Over median follow-up periods of 7.8 years in the cEMR group and 1.1 years in the ESD group, 78.5% and 40.7% of patients, respectively, achieved CRIM. Although patients in the ESD group achieved CRIM earlier, the cumulative probabilities of CRIM were similar by 2 years (59.3% vs. 50.6%; HR, 0.74; 95% CI, 0.52-1.07; P = .11). A shorter Barrett’s esophagus segment was the only independent predictor of CRIM (HR, 1.16; P < .01).

The researchers noted that the study population may have included patients with more severe disease than the general population, which may limit the generalizability of the findings. The lack of a randomized design was cited as an additional limitation.

Despite these findings, the researchers noted that “continued monitoring for additional outcomes such as recurrence are required for further elucidation of the optimal role of these procedures in the management of” neoplasia associated with Barrett’s esophagus.

The study was funded by the National Cancer Institute and the Freeman Foundation. The researchers reported no conflicts of interest with any pharmaceutical companies.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


COVID-19 vaccines: Lower serologic response among IBD, rheumatic diseases

Consider the booster shot

Patients with immune-mediated inflammatory diseases (IMIDs), such as inflammatory bowel disease and rheumatic conditions, have a reduced serologic response to a two-dose vaccination regimen with mRNA COVID-19 vaccines, according to the findings of a meta-analysis.

“These results suggest that IMID patients receiving mRNA vaccines should complete the vaccine series without delay and support the strategy of providing a third dose of the vaccine,” wrote study authors Atsushi Sakuraba, MD, of the University of Chicago Medicine, and colleagues in Gastroenterology.

During the COVID-19 pandemic, concerns were raised about the susceptibility of patients with pre-existing conditions to infection with the novel coronavirus, the authors noted. Likewise, ongoing concerns have centered on the risk of worse COVID-19–related outcomes among patients with IMIDs who are treated with immunosuppressive agents.

Since the onset of the pandemic, several registries have been established to gauge the incidence and prognosis of COVID-19 in patients with IMID, including the Surveillance Epidemiology of Coronavirus Under Research Exclusion (SECURE)–Inflammatory Bowel Disease (IBD) registry and the COVID-19 Global Rheumatology Alliance (C19-GRA), which includes patients with rheumatic diseases.

Authorization of COVID-19 mRNA vaccines provided hope that the COVID-19 pandemic could soon come to an end given the overwhelming safety and efficacy data supporting the use of these vaccines for preventing hospitalization and death. Despite these data, little is known regarding the efficacy of mRNA COVID-19 vaccines in patients with IMIDs and/or patients treated with immunosuppressive therapies, as these patients were excluded from the regulatory vaccine studies.

The study by Dr. Sakuraba and colleagues was a meta-analysis of 25 observational studies that reported serologic response rates to COVID-19 vaccination in a pooled cohort of 5,360 patients with IMIDs. Data regarding the reference population, medications, vaccination, and proportion of patients who achieved a serologic response were extracted from the observational studies and included in the meta-analysis.

In the analyzed studies, serologic response was evaluated separately after one or two vaccine doses. The researchers also examined the post-vaccine serologic response rate in patients with IMIDs versus controls without IMIDs.

A total of 23 studies used the BNT162b2 or mRNA-1273 vaccines, while 3 studies reported that 50% to 75.9% of patients received the AZD1222 vaccine. Some studies also included patients who received other COVID-19 vaccines, including CoronaVac, BBV152, and Ad26.COV2.S.

While 6 studies assessed serologic response to COVID-19 after just 1 dose, 20 studies assessed the post-vaccination serologic response following 2 doses. In most cases, researchers evaluated serologic response at 2 to 3 weeks after the first dose. After the second vaccine dose, most studies examined serologic response at 1 to 3 weeks.

The pooled serologic response after 1 dose of the mRNA vaccines was 73.2% (95% CI, 65.7%-79.5%). In a multivariate meta-regression analysis, studies with a greater proportion of patients with IMIDs taking anti-tumor necrosis factor (anti-TNF) therapies reported significantly lower serologic response rates (coefficient, –2.60; 95% CI, –4.49 to –0.72; P = .0069). The investigators indicated this “likely contributed to the difference in serologic response rates and overall heterogeneity.”

Studies with patients with IBD reported a lower serologic response rate compared with studies that included patients with rheumatoid arthritis (49.2% vs. 65.0%, respectively), which the investigators explained was likely reflective of the increased use of anti-TNF agents in patients with IBD.

After 2 doses of the mRNA vaccines, the pooled serologic response was 83.4% (95% CI, 76.8%-88.4%). Multivariate meta-regression found that studies with a greater proportion of patients receiving anti-CD20 treatments reported significantly lower serologic responses (coefficient, –6.08; 95% CI, –9.40 to –2.76; P < .001). Older age was also significantly associated with a lower serologic response after 2 doses (coefficient, –0.044; 95% CI, –0.083 to –0.0050; P = .027).
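Pooled rates such as the 83.4% two-dose estimate are typically produced by a random-effects meta-analysis of study-level proportions. The sketch below — with invented responder/total counts, not the studies' data — illustrates one standard recipe: logit-transform each proportion, estimate between-study variance with the DerSimonian-Laird method, and back-transform the pooled estimate:

# Hypothetical sketch: random-effects pooling of study-level response rates
# via logit transformation and the DerSimonian-Laird estimator.
import numpy as np

responders = np.array([80, 150, 45, 200, 60])   # invented counts
totals = np.array([100, 170, 60, 230, 80])

p = responders / totals
y = np.log(p / (1 - p))                         # logit of each study's rate
v = 1 / responders + 1 / (totals - responders)  # approximate variance of each logit

w = 1 / v                                       # fixed-effect (inverse-variance) weights
y_fe = (w * y).sum() / w.sum()
Q = (w * (y - y_fe) ** 2).sum()                 # Cochran's Q heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1 / (v + tau2)                           # random-effects weights
y_re = (w_re * y).sum() / w_re.sum()
se = (1 / w_re.sum()) ** 0.5
expit = lambda x: 1 / (1 + np.exp(-x))          # back-transform logit -> proportion
lo, hi = expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)
print(f"Pooled response: {expit(y_re):.1%} (95% CI, {lo:.1%}-{hi:.1%})")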

For the non-mRNA COVID-19 vaccines, the rates of serologic response after 2 doses were 93.5% with AZD1222, 22.9% with CoronaVac, and 55.6% with BBV152.

Compared with controls without IMIDs, patients with IMIDs were significantly less likely to achieve a serologic response following 2 mRNA vaccine doses (odds ratio, 0.086; 95% CI, 0.036-0.206; P < .001). The investigators noted that there were not enough studies to compare serologic response rates to adenoviral or inactivated vaccines between patients and controls.

In terms of limitations, the researchers wrote that additional studies examining humoral and cellular immunity to COVID-19 vaccines are needed to determine vaccine efficacy and durability in patients with IMIDs. Additionally, there is a need for studies with larger patient populations to determine serologic response to COVID-19 vaccines in the broader IMID population.

The researchers reported no funding for the study and no relevant conflicts of interest with the pharmaceutical industry.

Consider the booster shot

Messenger RNA vaccines against COVID-19 play a central role in controlling the pandemic. There has been no clear evidence about the efficacy of vaccination against various vaccine-preventable diseases in patients with IMIDs, including IBD, but this global pandemic has led to huge progress in the field. This study by Sakuraba et al. helps us interpret that information by putting 25 recent studies together. Unfortunately, but not unexpectedly, patients with IMIDs were shown to have a lower serologic response to the vaccine, especially if they were treated with anti-TNF therapy. However, this study was unable to show the influence of other immunosuppressive therapies, such as steroids, antimetabolites, and biologics. It also remains unclear whether these patients’ antibody titers decrease sooner than those in the general population.

Large-scale registries of patients with IBD suggest that the disease itself is not a risk factor for severe COVID-19; however, lower vaccine effectiveness may put this patient population at a serious disadvantage compared with others. The results of this study therefore strongly suggest that it is critical for patients with IBD not only to complete the standard two-dose vaccination but also to consider a booster shot to maintain immunity in the coming months. Further studies are needed to optimize the vaccination strategy in this patient population.

Taku Kobayashi, MD, PhD, is associate professor and vice director of the Center for Advanced IBD Research and Treatment and codirector of the department of gastroenterology, Kitasato University Kitasato Institute Hospital, Tokyo. He has received lecture and advisory fees from Janssen, Pfizer, and Takeda.


FROM GASTROENTEROLOGY


Human CRP protects against acetaminophen-induced liver injury in mice

Could CRP replace N-acetylcysteine?

While often linked to deleterious outcomes in certain disease states, the hepatocyte-produced inflammatory marker C-reactive protein (CRP) may be a checkpoint that protects against acetaminophen-induced acute liver injury, according to research findings.

Based on the study findings, researchers believe long-term suppression of CRP function or expression may increase an individual’s susceptibility to acetaminophen-induced liver injury. In contrast, CRP “could be exploited as a promising therapeutic approach to treat hepatotoxicity caused by drug overdose,” wrote study authors Hai-Yun Li, MD, of Xi’an Jiaotong University in Shaanxi, China, and colleagues in Cellular and Molecular Gastroenterology and Hepatology.

According to Dr. Li and colleagues, acetaminophen-induced liver injury is a major cause of acute liver failure, yet very few treatment options for this condition exist; the only approved treatment is N-acetylcysteine (NAC).

Although CRP represents a marker for inflammation following tissue injury, a study from 2020 and one from 2018 suggest the protein regulates complement activation and may modulate responses of immune cells. The authors of the current study noted that few studies have explored what roles complement activation and modulated immune cell responses via CRP play in acetaminophen-induced acute liver injury.

To further elucidate the role of CRP in this setting, Dr. Li and colleagues assessed the mechanisms of CRP action in vitro and in CRP knockout mice, as well as in mice lacking Fcγ receptor 2B, the receptor through which CRP is thought to modulate immune cell responses. The investigators also assessed CRP action in mice with C3 knockout, given previous studies suggesting that C3 knockout may alleviate acetaminophen-induced liver injury in mice, and they examined hepatic expression of CRP mutants defective in complement interaction. Finally, to gauge the therapeutic potential of the inflammatory marker, they administered human CRP intraperitoneally at 2 or 6 hours after induction of acetaminophen-induced acute liver injury in wild-type mice.

Injection of 300 mg/kg of acetaminophen led to overt liver injury in wild-type mice within 24 hours, characterized by increased circulating levels of alanine aminotransferase and aspartate aminotransferase as well as massive hepatocyte necrosis. The researchers noted that these manifestations were significantly exacerbated in the CRP knockout mice.

The intravenous administration of human CRP in the mice with the drug-induced liver injury rescued defects caused by mouse CRP knockout. Additionally, human CRP administration alleviated acetaminophen-induced acute liver injury in the wild-type mice. The researchers wrote that these findings demonstrate that endogenous and human CRP “are both protective,” at least in mouse models of acetaminophen-induced liver injury.

In a second experiment, the researchers examined the mechanisms of CRP protection in the early phases of drug-induced liver injury. They found that knockout of Fcγ receptor 2B, the inhibitory receptor that mediates the anti-inflammatory activities of CRP, had only “marginal effects” on CRP’s protection against acetaminophen-induced liver injury. Overall, the investigators suggested that the inflammatory marker does not likely act via the cellular Fcγ receptor 2B to inhibit the early phases of acetaminophen-induced hepatocyte injury. Rather, they explained, CRP may act via factor H, which CRP recruits to regulate complement activation, to inhibit complement overactivation on injured hepatocytes. Ultimately, this suppresses the late-phase amplification of inflammation mediated by neutrophils’ C3a-dependent actions.

Finally, the researchers found that intraperitoneal administration of human CRP at 2.5 mg/kg in wild-type mice at 2 hours following induction of acetaminophen-induced liver injury led to “markedly reduced liver injury,” with an efficacy that was similar to that of 500 mg/kg N-acetylcysteine, the only available treatment approved for acetaminophen-induced liver injury.

The researchers noted that N-acetylcysteine is only effective during the early phases of the acetaminophen-induced liver injury and loses effectiveness at 6 hours following injury. In contrast, human CRP in this study was still highly effective at this time point. “Given that people can tolerate high levels of circulating CRP, the administration of this protein might be a promising option to treat [acetaminophen-induced liver injury] with minimal side effects,” the researchers wrote.

The study was funded by the National Natural Science Foundation of China. The researchers reported no conflicts of interest with any pharmaceutical companies.

This article was updated on Sep. 20, 2022.

Could CRP replace N-acetylcysteine?

Acetaminophen is one of the most widely used pain relievers in the world. Acetaminophen use is considered safe at therapeutic doses; however, it is a dose-dependent hepatotoxin, and acetaminophen overdose is one of the leading causes of acute liver failure (ALF) in industrialized countries. Despite intensive efforts, the mechanisms involved in acetaminophen hepatotoxicity are not fully understood, which has hampered the availability of effective therapy for acetaminophen hepatotoxicity.

In Cellular and Molecular Gastroenterology and Hepatology, Li et al. uncovered a crucial role of C-reactive protein in acetaminophen-mediated ALF. Beyond its well-recognized role as an acute-phase protein in inflammation, CRP also regulates complement activation, and hence the modulation of immune cell responses and the generation of anaphylatoxins, via specific receptors. Using models of genetic deletion of CRP in rats and mice, Li et al. demonstrate a protective role for CRP in acetaminophen-induced ALF: CRP regulates the late phase of liver failure, which is driven by complement overactivation, through antagonism of C3aR that prevents neutrophil recruitment.

From a clinically relevant perspective, CRP was more protective than the current therapeutic approach of giving N-acetylcysteine (NAC) after acetaminophen hepatotoxicity. The superiority of CRP over NAC relates to the limited window for NAC administration after acetaminophen overdose: CRP was effective even when given several hours after acetaminophen dosing, consistent with its ability to target the late-phase events of acetaminophen hepatotoxicity. These findings therefore identify CRP as a promising approach to acetaminophen hepatotoxicity with a significant therapeutic advantage over NAC, one that may change the paradigm of management of acetaminophen-induced liver failure.

Jose C. Fernandez-Checa, PhD, is a professor at the Spanish National Research Council at the Institute of Biomedical Research of Barcelona, investigator of the Institute of Biomedical Research August Pi i Sunyer, group leader of the Center for Biomedical Network Research on Hepatic and Digestive Diseases, and visiting professor in the department of medicine, University of Southern California, Los Angeles. He has no relevant conflicts of interest.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY


AGA Clinical Care Pathway: Screening, diagnosis, and treatment of NAFLD and NASH


The American Gastroenterological Association recently published a Clinical Care Pathway for screening, diagnosis, and treatment of patients with nonalcoholic fatty liver disease (NAFLD).

Recommendations are intended for a spectrum of clinical settings, including primary care, obesity medicine, gastroenterology, hepatology, and endocrinology practices, reported lead author Fasiha Kanwal, MD, of Baylor College of Medicine, Houston, and colleagues.

“Most patients with NAFLD and NASH [nonalcoholic steatohepatitis] are seen in primary care or endocrine clinics,” the authors wrote in Gastroenterology. “Although not all patients with NAFLD/NASH require secondary (i.e., hepatology) care, not knowing which patients might benefit from such care and when to refer them results in inconsistent care processes and possibly poor outcomes. Clinical Care Pathways, with careful explication of each step in screening, diagnosis, and treatment, have been shown to improve the quality of health care delivery in other areas of medicine, [and] are crucial to addressing the often inconsistent care processes characterizing current approaches to NAFLD/NASH.”

The guidance was drafted by a group of 15 multidisciplinary experts from around the world representing the AGA, the American Diabetes Association, the American Osteopathic Association, the Obesity Society, and the Endocrine Society. Recommendations were based on available literature and clinical experience.

The authors recommended a four-step screening process for NAFLD/NASH: Check for risk factors predicting clinically relevant fibrosis (stage F2 or higher), review history and perform relevant laboratory tests, conduct noninvasive liver fibrosis testing, and measure liver stiffness.

Patients at greatest risk for clinically significant fibrosis include those with two or more metabolic risk factors, those with type 2 diabetes, and those with incidentally detected steatosis and/or elevated aminotransferases.

“A recent retrospective cohort study found that patients with hepatic steatosis and elevated alanine aminotransferase had a significantly higher risk of progression to cirrhosis or hepatocellular carcinoma than patients with hepatic steatosis and persistently normal alanine aminotransferase,” the authors noted.

When any of the above risk factors are present, the authors recommended checking the patient’s history for excessive alcohol intake, conducting a complete blood count and liver function tests, and screening for other hepatic and biliary diseases, such as chronic hepatitis C virus infection and liver mass lesions.

If other liver diseases have been ruled out, the first step in liver fibrosis risk stratification involves noninvasive testing, with the authors favoring the Fibrosis-4 (FIB-4) score “because it has been shown to have the best diagnostic accuracy for advanced fibrosis, compared with other noninvasive markers of fibrosis in patients with NAFLD.”

The next step in risk stratification involves liver stiffness measurement (LSM) with FibroScan (vibration-controlled transient elastography [VCTE]), or newer modalities, such as bidimensional shear wave elastography or point shear wave elastography, which offer “diagnostic performances at least as good as VCTE.”

According to the publication, patients with NAFLD at low risk of advanced fibrosis (FIB-4 less than 1.3 or LSM less than 8 kPa or liver biopsy F0-F1) can be managed by one provider, such as a primary care provider or endocrinologist, whereas indeterminate-risk patients (FIB-4 of 1.3-2.67 and/or LSM 8-12 kPa and liver biopsy unavailable) and high-risk patients (FIB-4 greater than 2.67 or LSM greater than 12 kPa or liver biopsy F2-F4) should be managed by a multidisciplinary team led by a hepatologist.
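To make these cutoffs concrete, the sketch below encodes the triage logic in Python. The FIB-4 formula used here — age × AST divided by platelet count × √ALT — is the standard published index rather than one restated in the pathway article, and the function names and example values are our own; this is an illustration of the thresholds above, not a clinical tool:

# Illustrative triage helper based on the cutoffs described in the pathway.
import math
from typing import Optional

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """Standard FIB-4 index: (age x AST) / (platelets [10^9/L] x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def nafld_risk_tier(fib4_score: float, lsm_kpa: Optional[float] = None) -> str:
    """Map FIB-4 (and, if available, liver stiffness) to the pathway's tiers."""
    if fib4_score > 2.67 or (lsm_kpa is not None and lsm_kpa > 12):
        return "high risk: hepatologist-led multidisciplinary care"
    if fib4_score >= 1.3 or (lsm_kpa is not None and lsm_kpa >= 8):
        return "indeterminate risk: hepatologist-led multidisciplinary care"
    return "low risk: single-provider management (e.g., primary care)"

# Example: a 58-year-old with AST 48 U/L, ALT 60 U/L, platelets 180 x 10^9/L,
# and an LSM of 9.5 kPa falls in the indeterminate tier.
score = fib4(58, 48, 60, 180)
print(f"FIB-4 = {score:.2f} -> {nafld_risk_tier(score, lsm_kpa=9.5)}")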

Lifestyle intervention, weight loss (if overweight or obese), and cardiovascular disease risk reduction are advised for patients of all risk categories.

“There are no large, long-term behavioral modification or pharmacotherapy studies regarding weight loss in individuals with NAFLD,” the authors wrote. “However, weight loss of any magnitude should be encouraged as beneficial.”

For patients with indeterminate and high risk, NASH pharmacotherapy is recommended, and if needed, diabetes care should involve medications with efficacy in NASH, such as pioglitazone.

“Although we recognize that knowledge is continuing to evolve and that recommendations may change accordingly over time, we believe this Pathway provides accessible, standardized, evidence-based, timely, and testable recommendations that will allow clinicians to care for a rapidly growing population of patients, most of whom are managed in primary care or endocrine clinics,” the authors concluded.

The article was supported by the American Gastroenterological Association, Intercept Pharmaceuticals, Pfizer, and others. The authors disclosed relationships with Novo Nordisk, Eli Lilly, Sanofi, and others.

Publications
Topics
Sections

The American Gastroenterological Association recently published a Clinical Care Pathway for screening, diagnosis, and treatment of patients with nonalcoholic fatty liver disease (NAFLD).

Recommendations are intended for a spectrum of clinical settings, including primary care, obesity medicine, gastroenterology, hepatology, and endocrinology practices, reported lead author Fasiha Kanwal, MD, of Baylor College of Medicine, Houston, and colleagues.

“Most patients with NAFLD and NASH [nonalcoholic steatohepatitis] are seen in primary care or endocrine clinics,” the authors wrote in Gastroenterology. “Although not all patients with NAFLD/NASH require secondary (i.e., hepatology) care, not knowing which patients might benefit from such care and when to refer them results in inconsistent care processes and possibly poor outcomes. Clinical Care Pathways, with careful explication of each step in screening, diagnosis, and treatment, have been shown to improve the quality of health care delivery in other areas of medicine, [and] are crucial to addressing the often inconsistent care processes characterizing current approaches to NAFLD/NASH.”

The guidance was drafted by a group of 15 multidisciplinary experts from around the world representing the AGA, the American Diabetes Association, the American Osteopathic Association, the Obesity Society, and the Endocrine Society. Recommendations were based on available literature and clinical experience.

The authors recommended a four-step screening process for NAFLD/NASH: check for risk factors predicting clinically significant fibrosis (stage F2 or higher), review history and perform relevant laboratory tests, conduct noninvasive liver fibrosis testing, and measure liver stiffness.

Patients at greatest risk for clinically significant fibrosis include those with two or more metabolic risk factors, those with type 2 diabetes, and those with incidentally detected steatosis and/or elevated aminotransferases.

“A recent retrospective cohort study found that patients with hepatic steatosis and elevated alanine aminotransferase had a significantly higher risk of progression to cirrhosis or hepatocellular carcinoma than patients with hepatic steatosis and persistently normal alanine aminotransferase,” the authors noted.

When any of the above risk factors are present, the authors recommended checking the patient’s history for excessive alcohol intake, conducting a complete blood count and liver function tests, and screening for other hepatic and biliary diseases, such as chronic hepatitis C virus infection and liver mass lesions.

If other liver diseases have been ruled out, the first step in liver fibrosis risk stratification involves noninvasive testing, with the authors favoring the Fibrosis-4 (FIB-4) score “because it has been shown to have the best diagnostic accuracy for advanced fibrosis, compared with other noninvasive markers of fibrosis in patients with NAFLD.”

The next step in risk stratification involves liver stiffness measurement (LSM) with FibroScan (vibration-controlled transient elastography [VCTE]) or newer modalities, such as bidimensional shear wave elastography or point shear wave elastography, which offer “diagnostic performances at least as good as VCTE.”

According to the publication, patients with NAFLD at low risk of advanced fibrosis (FIB-4 less than 1.3, LSM less than 8 kPa, or liver biopsy F0-F1) can be managed by one provider, such as a primary care provider or endocrinologist. In contrast, indeterminate-risk patients (FIB-4 of 1.3-2.67 and/or LSM 8-12 kPa, with liver biopsy unavailable) and high-risk patients (FIB-4 greater than 2.67, LSM greater than 12 kPa, or liver biopsy F2-F4) should be managed by a multidisciplinary team led by a hepatologist.
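
The FIB-4 arithmetic and the triage cutoffs above are simple enough to express in a few lines of code. The following is a minimal sketch in Python, assuming the standard published FIB-4 formula – age (years) × AST (U/L) divided by the product of platelet count (10^9/L) and the square root of ALT (U/L) – and the cutoffs quoted above. The function names and the worked example are illustrative only, and the Pathway itself resolves discordant FIB-4, LSM, and biopsy results with more nuance than any single rule.

    import math

    def fib4_score(age_years, ast_u_l, alt_u_l, platelets_1e9_l):
        """Standard FIB-4 formula: (age x AST) / (platelets x sqrt(ALT))."""
        return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

    def triage_by_fib4(fib4):
        """Risk tiers quoted above; LSM or biopsy can reclassify a patient."""
        if fib4 < 1.3:
            return "low risk: single provider (primary care or endocrinology)"
        if fib4 > 2.67:
            return "high risk: hepatologist-led multidisciplinary team"
        return "indeterminate risk: proceed to liver stiffness measurement"

    # Hypothetical example: 58 years old, AST 44 U/L, ALT 50 U/L,
    # platelets 180 x 10^9/L -> FIB-4 of about 2.0, an indeterminate result.
    score = fib4_score(58, 44, 50, 180)
    print(f"FIB-4 = {score:.2f} -> {triage_by_fib4(score)}")

In this hypothetical case, the indeterminate FIB-4 result would send the patient on to liver stiffness measurement, the next step in the Pathway.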

Lifestyle intervention, weight loss (if overweight or obese), and cardiovascular disease risk reduction are advised for patients of all risk categories.

“There are no large, long-term behavioral modification or pharmacotherapy studies regarding weight loss in individuals with NAFLD,” the authors wrote. “However, weight loss of any magnitude should be encouraged as beneficial.”

For patients at indeterminate or high risk, NASH pharmacotherapy is recommended, and, if needed, diabetes care should involve medications with efficacy in NASH, such as pioglitazone.

“Although we recognize that knowledge is continuing to evolve and that recommendations may change accordingly over time, we believe this Pathway provides accessible, standardized, evidence-based, timely, and testable recommendations that will allow clinicians to care for a rapidly growing population of patients, most of whom are managed in primary care or endocrine clinics,” the authors concluded.

The article was supported by the American Gastroenterological Association, Intercept Pharmaceuticals, Pfizer, and others. The authors disclosed relationships with Novo Nordisk, Eli Lilly, Sanofi, and others.

Novel blood-based panel highly effective for early-stage HCC

A blood-based biomarker panel that includes DNA and protein markers showed 71% sensitivity at 90% specificity for the detection of early-stage hepatocellular carcinoma (HCC), outperforming the GALAD (gender, age, alpha-fetoprotein [AFP], Lens culinaris agglutinin-reactive AFP [AFP-L3], and des-gamma-carboxy-prothrombin [DCP]) score and AFP alone, according to research findings. The panel reportedly performed well across subgroups based on sex, presence of cirrhosis, and liver disease etiology.

The study, which included patients with HCC and controls with underlying liver disease but no HCC, suggests the panel could be used to detect early-stage disease in patients with well-established risk factors for HCC. Ultimately, this may lead to earlier treatment initiation and potentially improved clinical outcomes.

“A blood-based marker panel that detects early-stage HCC with higher sensitivity than current biomarker-based approaches could substantially benefit patients undergoing HCC surveillance,” wrote study authors Naga Chalasani, MD, of Indiana University, Indianapolis, and colleagues. Their report is in Clinical Gastroenterology and Hepatology.

HCC, which accounts for most primary liver cancers, generally occurs in patients with several established risk factors, including alcoholic liver disease or nonalcoholic fatty liver disease as well as chronic hepatitis B virus or hepatitis C virus infection. Current guidelines, such as those from the European Association for the Study of the Liver and those from the American Association for the Study of Liver Diseases, recommend surveillance of at-risk patients every 6 months by ultrasound with or without AFP measurement. When caught early, HCC is typically treatable and is associated with a higher rate of survival compared with late-stage disease. According to Dr. Chalasani and colleagues, however, the effectiveness of current recommended surveillance for very early stage or early stage HCC is poor, characterized by a 45% sensitivity for ultrasound and a 63% sensitivity for ultrasound coupled with AFP measurement.

The investigators of the multicenter, case-control study collected blood specimens from 135 patients with HCC as well as 302 age-matched controls with underlying liver disease but no HCC. Very early or early-stage disease was seen in 56.3% of patients with HCC, and intermediate, advanced, or terminal-stage disease in the remaining 43.7%.

To predict cases of HCC, the researchers used a logistic regression algorithm to analyze 10 methylated DNA markers (MDMs) associated with HCC, methylated B3GALT6 (a reference DNA marker), and 3 candidate proteins. The researchers then compared the accuracy of the resulting blood-based biomarker panel with that of other blood-based biomarkers – including the GALAD score, AFP, AFP-L3, and DCP – for the detection of HCC.
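
The article does not give the fitted model itself, but performance figures of the form “sensitivity at 90% specificity” come from choosing an operating point on a fitted model’s score scale. For readers curious about the mechanics, here is a minimal, hypothetical sketch of that general approach in Python: fit a logistic regression over marker values, take the score threshold that holds specificity at 90% among controls, and read off sensitivity among cases. The synthetic data, feature count, and variable names below are illustrative assumptions, not the study’s actual markers or coefficients.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the marker matrix: 302 controls and 135 cases,
    # mirroring the study's group sizes; the 6 columns play the role of
    # methylation/protein marker values (purely illustrative).
    X_controls = rng.normal(0.0, 1.0, size=(302, 6))
    X_cases = rng.normal(0.8, 1.0, size=(135, 6))  # cases shifted upward
    X = np.vstack([X_controls, X_cases])
    y = np.array([0] * 302 + [1] * 135)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    scores = model.predict_proba(X)[:, 1]

    # Operating point: the 90th percentile of control scores is the
    # threshold that yields 90% specificity; sensitivity is then the
    # fraction of cases scoring at or above that threshold.
    threshold = np.percentile(scores[y == 0], 90)
    sensitivity = (scores[y == 1] >= threshold).mean()
    print(f"sensitivity at 90% specificity: {sensitivity:.0%}")

In practice, sensitivity would be estimated on held-out data rather than the training set; the in-sample figure above is only meant to show where a 90% specificity operating point comes from.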

The multitarget HCC panel included 3 MDMs – HOXA1, EMX1, and TSPYL5. In addition, the panel included methylation reference marker B3GALT6 and the protein markers AFP and AFP-L3. The biomarker panel featured a higher sensitivity (71%; 95% confidence interval, 60-81) at 90% specificity for the detection of early stage HCC compared with the GALAD score (41%; 95% CI, 30-53) or AFP ≥ 7.32 ng/mL (45%; 95% CI, 33-57). The area under the curve for the novel HCC panel for the detection of any stage HCC was 0.92 vs. 0.87 for the GALAD and 0.81 for the AFP measurement alone. The researchers found that the performance of the test was similar between men and women in terms of sensitivity (79% and 84%, respectively). Moreover, the panel performed similarly well among subgroups based on presence of cirrhosis and liver disease etiology.

A potential limitation of this study was the inclusion of controls who were largely confirmed HCC negative by ultrasound, a technique that lacks sensitivity for detecting early-stage HCC, the researchers noted. Given this limitation and the cross-sectional design of the study, some control participants may have had underlying HCC that went undetected at initial screening.

Despite the limitations of the study, the researchers reported that the novel, blood-based marker panel’s sensitivity for detecting early stage HCC likely supports its use “among at-risk patients to enhance HCC surveillance and improve early cancer detection.”

The study was funded by the Exact Sciences Corporation. The researchers reported conflicts of interest with several pharmaceutical companies.

Tremendous benefits to come

Hepatocellular carcinoma (HCC) is frequently diagnosed at late stages, leading to a high mortality rate given the limited treatment options. One of the major barriers to early diagnosis of HCC is the suboptimal sensitivity of the current diagnostic modality. Most recently, liquid biopsy has been used to diagnose and prognosticate various tumors, including HCC.

In this study, Dr. Chalasani and colleagues developed a biomarker panel consisting of three methylated DNA markers, methylated B3GALT6 (a reference DNA marker), and two proteins (AFP and AFP-L3) to diagnose HCC. This panel demonstrated higher sensitivity (71%) at 90% specificity for early-stage HCC than the GALAD score (41%) or AFP (45%). This is exciting news for clinicians, since this novel blood-based test could identify patients who qualify for curative HCC treatment without the limitations of image-based tests, such as body habitus or renal function. Although the cohort is relatively small, performance was equally good in subgroups of patients based on liver disease etiology, presence of cirrhosis, or sex.

We look forward to validation of this biomarker panel in larger independent cohorts and to studies comparing it with abdominal ultrasound, the most commonly used tool for HCC surveillance. Hopefully, the sensitivity of biomarker-based tests can be further increased, and their costs lowered, with more studies in this field. A powerful and cost-effective biomarker-based test that can either replace or enhance current HCC surveillance tools will bring tremendous benefits to our patients.

Howard T. Lee, MD, is with the department of hepatology at Baylor College of Medicine, Houston. He has no conflicts.

Forming specialized immune cell structures could combat pancreatic cancer

In a new study, researchers stimulated immune cells to assemble into tertiary lymphoid structures that improved the efficacy of chemotherapy in a preclinical model of pancreatic cancer.

Overall, the evidence generated by the study supports the notion that induction of tertiary lymphoid structures may potentiate chemotherapy’s antitumor activity, at least in a murine model of pancreatic ductal adenocarcinoma (PDAC). A more detailed understanding of tertiary lymphoid structure “kinetics and their induction, owing to multiple host and tumor factors, may help design personalized therapies harnessing the potential of immuno-oncology,” Francesca Delvecchio of Queen Mary University of London and colleagues wrote in Cellular and Molecular Gastroenterology and Hepatology.

While the immune system can play a role in combating cancer, a dense stroma surrounds pancreatic cancer cells, often preventing certain immune cells, such as T cells, from accessing the tumor. As shown by Young and colleagues, this helps explain why immunotherapies have had very little success in the management of most pancreatic cancers, despite their efficacy in other types of cancer.

In a proportion of patients with pancreatic cancer, clusters of immune cells assemble into tertiary lymphoid structures within the stroma surrounding the tumor, and these structures are associated with improved survival in PDAC. In the study, Delvecchio and colleagues sought to further elucidate the role of tertiary lymphoid structures in PDAC, particularly their antitumor potential.

The investigators analyzed donated tissue samples from patients to identify the presence of the structures within chemotherapy-naive human pancreatic cancer. Tertiary lymphoid structures were defined by the presence of tissue zones rich in T cells, B cells, and dendritic cells. Staining techniques used to visualize the various cell types revealed tertiary lymphoid structures in approximately 30% of tissue microarrays and 42% of full sections.

Multicolor immunofluorescence and immunohistochemistry were also used to characterize tertiary lymphoid structures in murine models of pancreatic cancer. Additionally, the investigators developed an orthotopic murine model to assess the development of the structures and their role in improving the therapeutic effects of chemotherapy. While tertiary lymphoid structures were not initially present in this preclinical model, B cells and T cells infiltrated the tumor site following injection of lymphoid chemokines and then assembled into tertiary lymphoid structures.

In addition, the researchers combined the chemotherapy agent gemcitabine with intratumoral lymphoid chemokines and injected this combination into orthotopic tumors. Following injection, the researchers observed “altered immune cell infiltration,” which facilitated the induction of tertiary lymphoid structures and potentiated the antitumor activity of the chemotherapy. As a result, there was a significant reduction in tumor burden, an effect the researchers did not find with either treatment alone.

According to the investigators, the antitumor activity observed following induction of the tertiary lymphoid structures within the cancer is associated with B cell–mediated activation of dendritic cells, a key cell type involved in initiating an immune response.

Based on the findings, the researchers concluded that the combination of chemotherapy and lymphoid chemokines might be a viable strategy for promoting an antitumor immune response in pancreatic cancer. In turn, the researchers suggest this strategy may result in better clinical outcomes for patients with the disease. Additionally, the researchers wrote that mature tertiary lymphoid structures in PDAC prior to an immune treatment could “be used as a biomarker to define inclusion criteria of patients in immunotherapy protocols, with the aim to boost the ongoing antitumor immune response.”

Because the study relied on a mouse model, it remains unclear whether the findings will generalize to humans. In the context of PDAC, the researchers wrote that further investigation of how tertiary lymphoid structures form may support the development of tailored treatments, including those that harness the body’s immune system, to combat cancer and improve patient outcomes.

The researchers reported no conflicts of interest with the pharmaceutical industry. No funding was reported for the study.

Tertiary lymphoid structures step up to the plate

Pancreatic ductal adenocarcinoma (PDAC) is known for its remarkable resistance to immunotherapy. This observation is largely attributed to the microenvironment that surrounds PDAC and its undisputed role in suppressing and excluding T cells – key mediators of productive cancer immune surveillance. This study by Delvecchio and colleagues examines the formation and maturation of tertiary lymphoid structures (TLS) – highly organized immune cell communities – that can be found within murine and human PDAC tumors and that correlate with a favorable prognosis after surgical resection. Intriguingly, the authors show that intratumoral injection of lymphoid chemokines (CXCL13/CCL21) can trigger TLS formation in murine PDAC models and potentiate the activity of chemotherapy.

Notably, in other solid cancers, the presence of mature TLS has been associated with response to immunotherapy, raising the possibility that inciting TLS formation and maturation in PDAC may be a first step toward overcoming immune resistance in this lethal cancer. Still, much work is needed to understand the mechanisms by which TLS influence PDAC biology and how to deliver TLS-stimulating drugs effectively beyond intratumoral injection, which is less practical given the highly metastatic proclivity of PDAC. Nonetheless, TLS hold promise as a therapeutic target in PDAC and may even serve as a novel biomarker of treatment response.

Gregory L. Beatty, MD, PhD, is director of the Clinical and Translational Research Program for Pancreas Cancer at the Abramson Cancer Center of the University of Pennsylvania, Philadelphia, and associate professor in the department of medicine in the division of hematology/oncology at the University of Pennsylvania. He reports involvement with many pharmaceutical companies, as well as being the inventor of certain intellectual property and receiving royalties related to CAR T cells.

AGA Clinical Practice Update: Managing pain in gut-brain interaction disorders

An American Gastroenterological Association clinical practice update for gastrointestinal pain in disorders of gut-brain interaction (DGBI), published in Clinical Gastroenterology and Hepatology, emphasizes patient-physician collaboration and improvement of patient understanding of the pathways and mechanisms of pain sensations. It is aimed at management of patients in whom pain persists after first-line therapies fail to resolve visceral causes of pain.

DGBIs include irritable bowel syndrome, functional dyspepsia, and centrally mediated abdominal pain syndrome, according to Laurie Keefer, PhD, AGAF, of the division of gastroenterology at Icahn School of Medicine at Mount Sinai, New York, and colleagues. Initial treatment usually focuses on visceral triggers of pain such as food and bowel movements, but this approach is ineffective for many.

Cognitive, affective, and behavioral factors can impact the treatment of these patients, making it a complex clinical problem that calls for a collaborative approach between the patient and clinician. Opioids and other drugs that could be misused should be avoided, according to the authors. Both pharmacologic and nonpharmacologic approaches can be considered, but the update did not address use of marijuana or other complementary or alternative therapies.

Effective management requires empathy and collaboration. The patient has often seen various other clinicians with suboptimal results, which has left them dissatisfied with their care. Cultural sensitivity is crucial because the understanding and interpretation of pain, and preferred management approaches, vary across cultures.

The first step is a nonjudgmental patient history using open-ended questions. Examples include: “How do your symptoms interfere with your ability to do what you want in your daily life?” or “How are these symptoms impacting your life the most?” These types of questions may identify patients who could benefit from behavioral health interventions.

Questions about symptom-related anxiety can improve understanding of patient concerns and offer an opportunity to address fears. Additional understanding of the patient’s perspective can come from questions like: “What do you think is causing your symptoms?” “Why are you coming to see me now?” and “What are you most concerned about with your symptoms?”

The initial assessment should ideally result in shared goals and expectations for pain management.

Providers should educate the patient about the pathogenesis of pain and how it can be modified. Pain signals can result from innocuous signals from the gut that are misinterpreted by the vigilant brain as it scans for injury or illness. That model might explain why some patients with similar diagnoses have widely differing pain experiences, and it offers hope that a change in how one approaches pain might lead to improvements. Patients should be encouraged to avoid focusing too much on the cause of, or a cure for, their pain, because such focus can interfere with acceptance of pain and, when needed, with treatment.

Opioids should not be prescribed for these patients, and if they are already taking them on referral, it’s important to manage them within a multidisciplinary framework until the opioids can be discontinued. Long-term use of opioids can lead to narcotic bowel syndrome, which results in chronic and often heightened abdominal pain even with escalating opioid doses. Opioid stoppage often must be accompanied by behavioral and psychiatric therapies to ensure success.

Nonpharmacological therapies such as brain-gut psychotherapies should be brought up as potential options early in treatment, even though many patients won’t require this type of care. Early mention is likely to keep the patient more open to trying them because they’re less likely to think of it as a sign of failure or a “last-ditch” approach. Cognitive-behavioral therapy works to improve pain management skills and bolster skill deficits, with attention to pain catastrophizing, pain hypervigilance, and visceral anxiety through different techniques.

Gut-directed hypnotherapy deals with somatic awareness and the use of imagery and suggestion to reduce pain sensations. Mindfulness-based stress reduction has been shown to be effective in inflammatory bowel disease and musculoskeletal pain syndromes. The provider should be familiar with these available methods, but should leave choice of interventions to partner mental health providers.

It’s important to distinguish between gastrointestinal pain with visceral causes and centrally mediated pain. Central sensitization can cause intermittent pain to become persistent even in the absence of ongoing peripheral causes of pain.

Peripherally acting agents can relieve gastrointestinal pain; a network meta-analysis identified the top three drugs for pain relief in irritable bowel syndrome as tricyclic antidepressants, antispasmodics, and peppermint oil.

Neuromodulator drugs are an option for DGBI pain because the gut nervous system shares embryonic developmental pathways with the brain and spinal cord, which helps explain some of the benefits of low-dose antidepressants, now termed gut-brain neuromodulators. These drugs should be started at a low dose and gradually titrated according to symptom response and tolerability.

The authors have financial relationships with various pharmaceutical companies.

Publications
Topics
Sections

An American Gastroenterological Association clinical practice update for gastrointestinal pain in disorders of gut-brain interaction (DGBI), published in Clinical Gastroenterology and Hepatology, emphasizes patient-physician collaboration and improvement of patient understanding of the pathways and mechanisms of pain sensations. It is aimed at management of patients in whom pain persists after first-line therapies fail to resolve visceral causes of pain.

DGBIs include irritable bowel syndrome, functional dyspepsia, and centrally mediated abdominal pain syndrome, according to Laurie Keefer, PhD, AGAF, of the division of gastroenterology at Icahn School of Medicine at Mount Sinai, New York, and colleagues. Initial treatment usually focuses on visceral triggers of pain such as food and bowel movements, but this approach is ineffective for many.

Cognitive, affective, and behavioral factors can impact the treatment of these patients, making it a complex clinical problem that calls for a collaborative approach between the patient and clinician. Opioids and other drugs that could be misused should be avoided, according to the authors. Both pharmacologic and nonpharmacologic approaches can be considered, but the update did not address use of marijuana or other complementary or alternative therapies.

Effective management requires empathy and collaboration. The patient has often seen various other clinicians with suboptimal results, which has left them dissatisfied with their care. Cultural sensitivity is crucial because the understanding and interpretation of pain, and preferred management approaches, vary across cultures.

The first step is a nonjudgmental patient history using open-ended questions. Examples include: “How do your symptoms interfere with your ability to do what you want in your daily life?” or “How are these symptoms impacting your life the most?” These types of questions may identify patients who could benefit from behavioral health interventions.

Questions about symptom-related anxiety can improve understanding of patient concerns and offer an opportunity to address fears. Additional understanding of the patient’s perspective can come from questions like: “What do you think is causing your symptoms?” “Why are you coming to see me now?” and “What are you most concerned about with your symptoms?”

The initial assessment should ideally result in shared goals and expectations for pain management.

Providers should educate the patient about the pathogenesis of pain and how it can be modified. Pain signals can arise from innocuous signals from the gut that the vigilant brain, scanning for injury or illness, misinterprets. That model might explain why some patients with similar diagnoses have widely differing pain experiences, and it offers hope that changing how one approaches pain can lead to improvement. Patients should be encouraged not to focus too much on finding a cause of or solution to the pain, because that focus can interfere with accepting the pain and, when needed, with treatment.

Opioids should not be prescribed for these patients; for patients already taking opioids at referral, it’s important to provide management within a multidisciplinary framework until the opioids can be discontinued. Long-term opioid use can lead to narcotic bowel syndrome, in which abdominal pain becomes chronic and often heightened even with escalating opioid doses. Opioid cessation often must be accompanied by behavioral and psychiatric therapies to succeed.

Nonpharmacological therapies such as brain-gut psychotherapies should be brought up as potential options early in treatment, even though many patients won’t require this type of care. Early mention is likely to keep the patient more open to trying them because they’re less likely to think of it as a sign of failure or a “last-ditch” approach. Cognitive-behavioral therapy works to improve pain management skills and bolster skill deficits, with attention to pain catastrophizing, pain hypervigilance, and visceral anxiety through different techniques.

Gut-directed hypnotherapy deals with somatic awareness and the use of imagery and suggestion to reduce pain sensations. Mindfulness-based stress reduction has been shown to be effective in inflammatory bowel disease and musculoskeletal pain syndromes. The provider should be familiar with these available methods, but should leave choice of interventions to partner mental health providers.

It’s important to distinguish between gastrointestinal pain with visceral causes and centrally mediated pain. Central sensitization can cause intermittent pain to become persistent even in the absence of ongoing peripheral causes of pain.

Peripherally acting agents can relieve gastrointestinal pain; a network meta-analysis identified tricyclic antidepressants, antispasmodics, and peppermint oil as the top three drugs for pain relief in irritable bowel syndrome.

Neuromodulator drugs are an option for DGBI pain because the gut nervous system shares embryonic developmental pathways with the brain and spinal cord, which helps explain some of the benefits of low-dose antidepressants, now termed gut-brain neuromodulators. These drugs should be started at a low dose and gradually titrated according to symptom response and tolerability.

The authors have financial relationships with various pharmaceutical companies.


FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Updated MELD score adds serum albumin, female sex


A newly updated version of the Model for End-Stage Liver Disease (MELD) score effectively predicted short-term mortality in patients with end-stage liver disease and captured important determinants of wait-list outcomes not addressed in previous versions, according to findings from a recent study. The new model, termed MELD 3.0, adds female sex and serum albumin as variables and updates the creatinine cutoffs.


“We believe that the new model represents an opportunity to lower wait list mortality in the United States and propose it to be considered to replace the current version of MELD in determining allocation priorities in liver transplantation,” wrote study authors W. Ray Kim, MD, of Stanford (Calif.) University and colleagues in Gastroenterology.

In patients with end-stage liver disease, the MELD score was shown to be a reliable predictor of short-term survival, according to the researchers. The original version of MELD consists of international normalized ratio of prothrombin time and serum concentrations of bilirubin and creatinine; MELDNa consists of the same with the addition of serum concentrations of total sodium. Since 2016, MELDNa has been utilized in the United States to allocate livers for transplant.
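For readers who want the arithmetic behind those definitions, a minimal Python sketch follows, using the familiar UNOS-style coefficients and bounds. It is illustrative only: the rounding, dialysis rules, and other conventions applied in actual allocation are omitted.

```python
import math

def meld(creatinine_mg_dl: float, bilirubin_mg_dl: float, inr: float) -> float:
    """Original MELD with UNOS-style bounds: labs below 1.0 are floored
    at 1.0 and creatinine is capped at 4.0 mg/dL. (Allocation also rounds
    the result and applies dialysis rules, omitted here.)"""
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    return 10.0 * (0.957 * math.log(cr)
                   + 0.378 * math.log(bili)
                   + 1.120 * math.log(inr)
                   + 0.643)

def meld_na(creatinine_mg_dl: float, bilirubin_mg_dl: float,
            inr: float, sodium_mmol_l: float) -> float:
    """MELDNa: adds a sodium term, with sodium clamped to 125-137 mmol/L;
    by convention the adjustment applies only when MELD exceeds 11."""
    m = meld(creatinine_mg_dl, bilirubin_mg_dl, inr)
    if m <= 11:
        return m
    na = min(max(sodium_mmol_l, 125.0), 137.0)
    return m + 1.32 * (137 - na) - 0.033 * m * (137 - na)
```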

Despite the utility of the current MELD score, questions have been raised about the accuracy of its mortality predictions, including in a study by Sumeet K. Asrani, MD, MSc, and colleagues. Changes in liver disease epidemiology, the introduction of newer therapies that alter prognosis, and the increasing age and comorbidity burden of transplant-eligible patients are several drivers of these concerns, according to Dr. Kim and colleagues. There is also increasing concern that women are disadvantaged in the current system: At least one study has suggested that serum creatinine may overestimate renal function, and consequently underestimate mortality risk, in female patients compared with men with the same creatinine level.

Dr. Kim and colleagues sought to further optimize the fit of the current MELD score by considering alternative interactions and including other variables relevant to predicting short-term mortality in patients awaiting liver transplant. The study included patients in the Organ Procurement and Transplantation Network Standard Transplant Analysis and Research files who were newly wait-listed from 2016 through 2018. The full cohort was divided 70:30 into a development set (n = 20,587) and a validation set (n = 8,823); there were no significant differences between the sets with respect to age, sex, race, or liver disease severity.

The investigators used univariable and multivariable regression models to predict 90-day survival following wait list registration. The 90-day Kaplan-Meier survival rate in the development set was 91.3%. Additionally, model fit was tested, and the investigators used the Liver Simulated Allocation Model to estimate the impact of replacing the current version of the MELD with MELD 3.0.

In the final MELD 3.0 model, the researchers included several additional variables, notably female sex and serum albumin. The final model also includes interactions between bilirubin and sodium and between albumin and creatinine, and it lowers the upper bound for creatinine from 4.0 mg/dL in the current MELD to 3.0 mg/dL.
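Putting those pieces together, the sketch below shows the shape of the resulting equation: the female-sex term, the albumin terms, the two interactions, and the lowered creatinine cap. The coefficients are those reported for the published MELD 3.0 model, but the code is an illustration, not an allocation-grade implementation, and should be checked against the published paper before any real use.

```python
import math

def meld_3_0(female: bool, bilirubin_mg_dl: float, sodium_mmol_l: float,
             inr: float, creatinine_mg_dl: float, albumin_g_dl: float) -> float:
    """MELD 3.0 as reported by Kim and colleagues (illustrative sketch).

    Bounds: bilirubin, INR, and creatinine floored at 1.0; creatinine
    capped at 3.0 mg/dL; sodium clamped to 125-137 mmol/L; albumin
    clamped to 1.5-3.5 g/dL."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = min(max(creatinine_mg_dl, 1.0), 3.0)
    na = min(max(sodium_mmol_l, 125.0), 137.0)
    alb = min(max(albumin_g_dl, 1.5), 3.5)
    return (1.33 * (1 if female else 0)
            + 4.56 * math.log(bili)
            + 0.82 * (137 - na)
            - 0.24 * (137 - na) * math.log(bili)   # bilirubin-sodium interaction
            + 9.09 * math.log(inr)
            + 11.14 * math.log(cr)
            + 1.85 * (3.5 - alb)
            - 1.83 * (3.5 - alb) * math.log(cr)    # albumin-creatinine interaction
            + 6.0)
```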

MELD 3.0 showed significantly better discrimination than MELDNa (C-statistic, 0.8693 vs. 0.8622; P < .01). In addition, the researchers wrote that the new MELD 3.0 score “correctly reclassified a net of 8.8% of decedents to a higher MELD tier, affording them a meaningfully higher chance of transplantation, particularly in women.” MELD 3.0 with albumin also led to fewer wait-list deaths than MELDNa in the Liver Simulated Allocation Model analysis (P = .02); the reduction with MELD 3.0 without albumin was not statistically significant.
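The C-statistic cited above measures discrimination: for a binary outcome such as death within 90 days, it is the probability that a randomly chosen decedent received a higher score than a randomly chosen survivor. The toy calculation below illustrates that pairwise comparison; the study’s actual C-statistic came from survival models that also account for censoring.

```python
def c_statistic(scores: list[float], died: list[bool]) -> float:
    """Pairwise concordance for a binary outcome: the fraction of
    (decedent, survivor) pairs in which the decedent scored higher,
    counting ties as half. Ignores censoring for simplicity."""
    case_scores = [s for s, d in zip(scores, died) if d]
    ctrl_scores = [s for s, d in zip(scores, died) if not d]
    pairs = concordant = 0.0
    for cs in case_scores:
        for ss in ctrl_scores:
            pairs += 1
            if cs > ss:
                concordant += 1
            elif cs == ss:
                concordant += 0.5
    return concordant / pairs if pairs else float("nan")

# Toy example: higher scores should track with death within 90 days.
print(c_statistic([24.0, 31.5, 18.2, 27.9], [False, True, False, True]))  # 1.0
```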

According to the investigators, one concern with MELD 3.0 is the addition of albumin, as this variable may be vulnerable to manipulation. In addition, while the researchers observed differences in wait-list mortality and survival based on race/ethnicity, they were unable to identify the root causes of worse outcomes among patients belonging to minority groups. “Thus, inclusion in a risk prediction score without fully understanding the underlying reasons for the racial disparity may have unintended consequences,” they wrote.

“Based on recent data consisting of liver transplant candidates in the United States, we identify additional variables that are meaningfully associated with short-term mortality, including female sex and serum albumin. We also found evidence to support lowering the serum creatinine ceiling to 3 mg/dL,” they wrote. “Based on these data, we created an updated version of the MELD score, which improves mortality prediction compared to the current MELDNa model, including the recognition of female sex as a risk factor for death.”

The researchers reported no conflicts of interest with the pharmaceutical industry. No funding was reported for the study.

This could achieve equitable distribution

Introduction of the Model for End-Stage Liver Disease (MELD) score in 2002, consisting of objective measurements of creatinine, bilirubin, and international normalized ratio, revolutionized liver allocation in the United States. To minimize wait-list mortality and reduce geographic variability, further improvements to the allocation system were implemented, including the National Share for status 1 and the Regional Share for MELD scores greater than 35 in 2013, adoption of the MELDNa score in 2016, and, most recently, the Acuity Circles distribution system. Unfortunately, MELD tends to disadvantage women, whose lower muscle mass translates to lower normal creatinine levels, thereby underestimating the degree of renal dysfunction and wait-list mortality. MELD’s performance characteristics were also shown to be less accurate in patients with alcoholic and nonalcoholic fatty liver disease than in patients with hepatitis C, likely contributing to MELD’s decreasing accuracy in predicting mortality over the years as the patient population has changed.

To address these deficiencies, the study by Kim and colleagues explores a new iteration of the organ prioritization system – MELD 3.0 – which includes adjustments for gender and albumin level and lowers the upper limit of creatinine to 3.0 mg/dL (from 4.0 mg/dL), with validation in a contemporary cohort of listed patients. Undoubtedly, this is a step in the right direction for gender equity in organ allocation as well as more accurate assessment of renal dysfunction. The incorporation of albumin into the model is more controversial. The indications for albumin administration range from large-volume paracentesis to volume expansion for many admitted patients, and administration is more likely in patients with worse liver disease. The risks and benefits of such a volatile component will need to be carefully weighed before implementation. MELD 3.0 holds promise in bringing equity to liver organ allocation as well as improving wait-list mortality, and we are likely to see MELD 3.0 (or a variation thereof) dominate the field in the near future.

Alexandra Shingina, MD, MSc, is an assistant professor of medicine in the division of gastroenterology, hepatology, and nutrition at Vanderbilt University Medical Center, Nashville, Tenn. She has no conflicts.


FROM GASTROENTEROLOGY