AGA Clinical Practice Update: Commentary on noninvasive CRC screening

A new expert commentary from the American Gastroenterological Association focuses on noninvasive screening options for colorectal cancer (CRC), as well as approaches to ensure quality in noninvasive screening programs. The commentary was published in Gastroenterology.

The American Cancer Society reported in its Cancer Facts & Figures 2021 report that lifetime risk of CRC in the United States is 4%, and those with above-average risk are recommended to undergo CRC screening at an earlier age, with colonoscopy as the screening modality. Between 75% and 80% of the U.S. population is considered at average risk, and this is the group covered by the expert commentary. In this group, CRC rates jump from 35.1 to 61.2 cases per 100,000 people between the ages of 45-49 years and 50-54 years. Early-onset CRC (before age 50) accounts for 12% of all cases and 7% of CRC-related deaths.

The authors noted that the U.S. Preventive Services Task Force made a grade B recommendation for individuals to begin screening at age 45, regardless of screening method; the task force's modeling suggests that initiating screening at 45 rather than 50 years increases life-years gained by 6.2% at the cost of a 17% increase in colonoscopies.

According to the commentary authors, a hybrid approach combining annual fecal immunochemical testing (FIT) at ages 45-49, followed by colonoscopy between ages 50 and 70, could result in substantial gains in life-years while prioritizing colonoscopy for older ages, at which the risk of advanced adenomas (AA) and CRC is increased.
 

Exploring options

For stool-based CRC screening, FIT has generally replaced guaiac fecal occult blood testing because of better patient adherence and fewer medication and dietary restrictions. FIT can produce either a quantitative result, measured in micrograms of hemoglobin per gram of feces, or a qualitative result, read as positive above a threshold of 20 mcg of hemoglobin per gram. The multitarget stool DNA (mt-sDNA) test (Cologuard) combines FIT with two DNA methylation markers, KRAS mutation screening, and a measurement of total human DNA, using an algorithm that combines the results to determine positivity. It is approved only for average-risk individuals aged 45-85 years.
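
To make the quantitative/qualitative distinction concrete, the brief sketch below (Python; purely illustrative, with invented function and variable names) applies the 20-mcg/g positivity threshold described above to a quantitative FIT reading:

# Illustrative only: classify a quantitative FIT result against the
# 20 mcg hemoglobin/g positivity threshold cited in the commentary.
FIT_THRESHOLD_MCG_PER_G = 20.0

def fit_is_positive(hemoglobin_mcg_per_g: float) -> bool:
    # A reading above the cutoff is reported as a positive qualitative result.
    return hemoglobin_mcg_per_g > FIT_THRESHOLD_MCG_PER_G

print(fit_is_positive(7.5))   # False: below threshold
print(fit_is_positive(42.0))  # True: would trigger colonoscopy referral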

In cases where mt-sDNA testing is positive but colonoscopy reveals no findings, an aerodigestive cancer could be present. However, this appears to be rare: in one study, 2.4% of patients with discordant results developed an aerodigestive cancer during a median 5.4 years of follow-up, compared with 1.1% of those with both a negative mt-sDNA test and a negative colonoscopy, a difference that was not statistically significant. The commentary authors suggest that no further testing is required after a negative high-quality colonoscopy and that patients can resume screening at normal intervals with any of the recommended tests.

The Septin 9 blood test (Epi proColon) is another screening option, and is FDA approved for average-risk individuals older than 50 years. It detects methylation of the promoter region of the Septin 9 gene. It has a 48% sensitivity and 91.5% specificity for CRC, as well as a sensitivity of 11.2% for AA. One model found that Septin 9 screening every 1 or 2 years could lead to more quality-adjusted life-years gained and prevention of more deaths than annual FIT, but with more colonoscopies. CRC screening guidelines do not endorse Septin 9, but screening studies are in progress to assess its performance.
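
Sensitivity and specificity alone do not convey how such a test behaves at screening prevalence. The sketch below (Python; a minimal illustration, not from the commentary, and the prevalence value is an assumption chosen only for the example) applies Bayes' rule to the Septin 9 figures above:

# Illustrative only: predictive values implied by 48% sensitivity and
# 91.5% specificity at an ASSUMED point prevalence of undetected CRC.
sensitivity = 0.48
specificity = 0.915
prevalence = 0.004  # assumed 0.4%; not a figure from the commentary

tp = sensitivity * prevalence              # true positives
fp = (1 - specificity) * (1 - prevalence)  # false positives
fn = (1 - sensitivity) * prevalence        # false negatives
tn = specificity * (1 - prevalence)        # true negatives

ppv = tp / (tp + fp)  # probability of CRC given a positive test
npv = tn / (tn + fn)  # probability of no CRC given a negative test
print(f"PPV: {ppv:.1%}, NPV: {npv:.2%}")  # roughly 2.2% and 99.8%

At low disease prevalence, even a fairly specific test yields mostly false positives, which is one reason every positive noninvasive screen must be followed by colonoscopy.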
 

Ensuring quality

“The linchpin for effective noninvasive screening programs is adherence, and several measures of adherence are required,” the authors wrote. Ensuring high-quality noninvasive screening requires creating metrics, continuously monitoring compliance, and initiating changes when adherence and outcomes lag. Important metrics include patient compliance, rapid reporting of test results, timely performance of follow-up colonoscopies, and systems to return patients to appropriate CRC screening intervals.

The authors suggested several specific metrics and attainable performance goals. The ratio of tests completed within 1 year to tests ordered should reach 90% or more. Outreach should be conducted to patients who do not complete testing within 1 month of the order. All patients should be contacted within 2 weeks of test results, and those who test negative should be made aware of the appropriate interval for future screening, with the method of contact documented.

At least 80% of patients who receive a positive test should be offered a colonoscopy date within 3 months, and all within 6 months, because delay past that time is associated with greater risk of AA, CRC, and advanced-stage CRC. Within 6 months of a positive noninvasive test, at least 95% of patients should have undergone a colonoscopy, unless they are too ill, have moved, or cannot be reached. “Quality metrics for noninvasive screening programs should be set and program performance should be measured and ideally reported publicly,” the authors summarized. “Poor adherence at any level should trigger review of established protocols and facilitate change to ensure high-quality screening.”
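
As a concrete illustration of how a program might track such metrics, the sketch below (Python) computes two of the suggested measures from hypothetical records; the field names, dates, and 183-day window are invented for the example and do not come from the commentary:

from datetime import date

# Hypothetical screening-program records; all values are invented.
orders = [
    {"ordered": date(2021, 1, 5), "completed": date(2021, 1, 20),
     "positive": True, "colonoscopy": date(2021, 3, 1)},
    {"ordered": date(2021, 2, 1), "completed": None,
     "positive": False, "colonoscopy": None},
    {"ordered": date(2021, 3, 10), "completed": date(2021, 4, 2),
     "positive": False, "colonoscopy": None},
]

# Metric: tests completed within 1 year of the order (goal: 90% or more).
completed = [o for o in orders
             if o["completed"] and (o["completed"] - o["ordered"]).days <= 365]
completion_rate = len(completed) / len(orders)

# Metric: positive tests followed by colonoscopy within 6 months (goal: 95%).
positives = [o for o in orders if o["positive"] and o["completed"]]
followed = [o for o in positives
            if o["colonoscopy"] and (o["colonoscopy"] - o["completed"]).days <= 183]
followup_rate = len(followed) / len(positives) if positives else float("nan")

print(f"Completed within 1 year: {completion_rate:.0%}")
print(f"Colonoscopy within 6 months of a positive test: {followup_rate:.0%}")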

Two authors disclosed relationships with Freenome and/or Check-Cap, but the third disclosed no conflicts.
 


AGA Clinical Practice Guidelines: Systemic HCC therapy

New recommendations from the American Gastroenterological Association focus on choice of systemic therapy in hepatocellular carcinoma (HCC) patients. The guideline authors point out that prognosis and treatment decisions are both heavily dependent on a combination of the severity of underlying disease and biological characteristics of the tumor.

The document includes options for patients who are ineligible for locoregional therapy or resection, patients with metastatic disease and preserved liver function, patients with poor liver function, and patients receiving adjuvant therapy following surgery or locoregional therapy (LRT).

Intermediate or advanced tumor stage is common among HCC patients, and curative options such as surgery and ablation are generally limited to early-stage disease. LRTs – including transarterial chemoembolization (TACE) and transarterial radioembolization (TARE) – as well as systemic therapy may be employed against advanced or metastatic HCC, according to the authors, led by Grace L. Su, MD, of the division of gastroenterology and hepatology at the University of Michigan, Ann Arbor, and the Veterans Affairs Ann Arbor Healthcare System.

In 2007, the Food and Drug Administration approved the multikinase inhibitor sorafenib as the first systemic therapy for HCC. The new guideline addresses the systemic therapeutic options that have arrived in the years since, including molecularly targeted therapies and immunotherapies. The guidance, published in Gastroenterology, includes advice on both first- and second-line therapies.

Certainty of evidence for the recommendations ranges from low to very low, indicating limited or very little confidence in the estimated effects; the true effects may be substantially different from the current best estimates. Accordingly, the recommendations are conditional, and decisions should be made with the values and preferences of the individual patient in mind.

In patients with preserved liver function who are ineligible for LRT or resection, or who have metastatic disease, the authors suggest that first-line treatment should be the combination of atezolizumab and bevacizumab rather than sorafenib. Bevacizumab comes with a bleeding risk, so patients should first be evaluated endoscopically and treated for esophageal varices. For patients who are ineligible for bevacizumab, alternatives are lenvatinib or sorafenib. Patients who are more concerned about disease progression than adverse events may want to consider lenvatinib rather than sorafenib, while those concerned about blood pressure control and who are less concerned about adverse skin reactions may choose sorafenib.

Options for second-line therapy after sorafenib include cabozantinib (median overall survival benefit, 2.2 months) and pembrolizumab (survival benefit, 3.3 months). Patients with alpha-fetoprotein levels higher than 400 ng/mL may be candidates for treatment with ramucirumab (survival benefit, 1.2 months). Another option is regorafenib (survival benefit, 2.8 months). Patients who are more concerned about adverse effects than a potential survival benefit with any of these therapies may reasonably choose no systemic therapy.

For HCC patients with poor liver function who are not eligible for LRT or resection, or who have metastatic disease, the guidelines recommend against routine use of sorafenib.

In the setting of adjuvant therapy following curative surgical resection, curative local ablation, or TACE LRT, the guidelines recommend against the use of sorafenib. The authors also recommended against the use of bevacizumab following TACE LRT.

The authors noted that there is no high-quality comparative evidence in the second-line setting for atezolizumab plus bevacizumab, sorafenib, or lenvatinib. There is a dearth of evidence and few biomarkers to guide personalization of therapies, which places the emphasis on patient preferences, risks, and benefits.

The authors disclosed no conflicts.


Improved follow-up needed to find late-stage pancreatic cancers

A relatively large number of late-stage pancreatic ductal adenocarcinomas (PDACs) are detected during follow-up surveillance, yet no single patient- or protocol-specific factor appears to be significantly associated with detecting late-stage disease during this period, according to a new systematic literature review and meta-analysis.

The researchers, led by Ankit Chhoda, MD, of Yale University, New Haven, Conn., wrote in Gastroenterology that interval progression in high-risk individuals “highlights the need for improved follow-up methodology with higher accuracy to detect prognostically significant and treatable lesions.”

Individuals at high risk for PDAC are encouraged to undergo routine surveillance for the disease because early detection and resection of T1N0M0 PDAC and high-grade precursors may improve survival outcomes. According to Dr. Chhoda and colleagues, challenges of interval progression of cancers during the surveillance period for gastrointestinal malignancies have been well described in the general and at-risk patient populations. Previous studies, the authors explained, have not scrutinized the issues associated with late-stage PDACs detected during follow-up surveillance.

“Late-stage PDACs necessitate critical appraisal of current follow-up strategies to detect successful targets and perform timely resections,” the authors wrote. The researchers added that the diagnosis of late-stage PDACs during follow-up emphasizes the need for implementing “quality measures to avoid preventable causes, including surveillance adherence and diagnostic errors.”

To understand the incidence rates of late-stage PDACs detected during follow-up in high-risk individuals, Dr. Chhoda and colleagues performed a systematic literature review and meta-analysis of studies that included follow-up strategies for early PDAC detection in high-risk populations.

Outcomes of interest for the analysis included the overall diagnosis of advanced neoplasia as well as surveillance-detected/interval late-stage PDACs (T2–4N0M0/metastatic stage PDAC) during follow-up. The investigators defined surveillance-detected and interval late-stage PDACs as late-stage PDACs that were detected during surveillance and as those presenting symptomatically between visits, respectively.

The researchers also performed metaregression of the incidence rates of late-stage PDACs to examine the relationship with clinicoradiologic features in high-risk individuals.

A total of 13 studies of surveillance in 2,169 high-risk individuals were included in the systematic review, and 12 studies were included in the meta-analysis. Across studies, high-risk individuals were followed for a total of 7,302.72 patient-years for the purposes of detecting incident lesions or progression of preexisting pancreatic abnormalities.

Among all high-risk individuals who underwent follow-up, the investigators identified 53 cases of advanced neoplasia: 7 high-grade pancreatic intraepithelial neoplasms, 7 high-grade intraductal papillary mucinous neoplasms, and 39 PDACs. According to the meta-analysis, the cumulative incidence of advanced neoplasia was 3.3 per 1,000 patient-years (95% confidence interval, 0.6-7.4; P < .001). During follow-up, the cumulative incidence of surveillance-detected/interval late-stage PDACs was 1.7 per 1,000 patient-years (95% CI, 0.2-4.0; P = .03).
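
For readers unfamiliar with patient-year rates, the arithmetic behind such figures is simple; the sketch below (Python; illustrative only) computes the crude pooled rate from the totals reported above and shows why it differs from the meta-analytic estimate, which weights individual studies rather than pooling raw counts:

# Crude pooled rate from the reported totals. The paper's 3.3 per 1,000
# patient-years is a meta-analytic estimate that weights studies, so it
# differs from this simple pooled-count calculation.
events = 53              # advanced neoplasia cases across all studies
patient_years = 7302.72  # total reported follow-up

crude_rate_per_1000 = events / patient_years * 1000
print(f"Crude pooled incidence: {crude_rate_per_1000:.1f} per 1,000 patient-years")
# Prints about 7.3, more than double the meta-analytic 3.3.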

In a separate analysis, the investigators sought to identify the relationship between the modality of follow-up imaging and late-stage PDAC incidence. Imaging modalities used during follow-up were mostly cross-sectional imaging, such as computed tomography or magnetic resonance imaging with cholangiopancreatography (n = 4) or endoscopic ultrasound and cross-sectional modalities (n = 8).

The investigators found no significant associations between late-stage PDACs and surveillance imaging, baseline pancreatic morphology, study location, genetic background, gender, or age. Incidence of late-stage PDACs in studies with mostly cross-sectional imaging was 0.7 per 1,000 patient-years (95% CI, 0.0-8.0). This incidence rate was lower than that reported with EUS and cross-sectional modalities (2.5 per 1,000 patient-years; 95% CI, 0.6-5.4), but this difference was not statistically significant (P = .2).

No significant difference was found during follow-up in the incidence of late-stage PDACs between high-risk individuals with baseline pancreatic abnormalities (0.0 per 1,000 patient-years; 95% CI, 0.0-0.3) and those with a normal baseline (0.9 per 1,000 patient-years; 95% CI, 0.0-2.8) (P = .9).

Most studies included in the analysis did not report on diagnostic errors and surveillance adherence, the researchers wrote. Nonadherence to surveillance as well as delays in surveillance accounted for four late-stage PDACs, and surveillance cessation and/or delays were reported in 4 out of 19 high-risk individuals. There was limited information on symptoms, presentation timing, site of lesion, and surveillance adherence, which the investigators indicated prevented a formal meta-analysis.

In their summary, the study authors noted that in clinical practice there is a need for improved quality measures and adherence to surveillance programs to reduce the risk of diagnostic errors. The authors stated that evidence on the impact of these quality measures “on surveillance outcomes will not only improve quality of surveillance practices, but also enrich our communication with patients who undergo surveillance.”

The researchers reported no conflicts of interest with the pharmaceutical industry, and the study did not receive any funding.

Finding factors may make a difference

Surveillance of individuals at increased risk of pancreatic ductal adenocarcinoma (PDAC) offers an opportunity to improve disease mortality through detection of premalignant lesions and earlier stage PDAC. Emerging data suggest that outcomes in surveillance-detected PDAC are superior to those diagnosed after onset of signs and symptoms. This study by Chhoda et al. highlights a potential quality gap in current surveillance programs, namely the diagnosis of interval cancers and late-stage metastatic PDAC.

Investigators report a cumulative incidence of late-stage PDAC of 1.7 per 1,000 patient-years in surveillance, while the incidence of any advanced neoplasia (high-grade PanIN, high-grade IPMN, NET > 1 cm, and any-stage PDAC) was 3.3 per 1,000 patient-years. Importantly, late-stage PDAC was defined as T2-4N0-1M0-1 in this study. This is based on the 2013 International Cancer of the Pancreas Screening (CAPS) Consortium definition of “success of a screening program” as treatment of T1N0M0 PDAC, which was later updated to include any resected PDAC confined to the pancreas. The cumulative incidence of resectable lesions was 2.2 per 1,000 patient-years, while the incidence of unresectable PDAC was 0.6 per 1,000 patient-years in surveillance. Unfortunately, clinical features were unable to predict the onset of these 11 unresectable PDACs.

Given data reporting limitations, it is uncertain how many advanced PDACs were a result of delayed surveillance, diagnostic errors, or other preventable factors. Addressing these contributing factors, as well as identifying clinical indicators that may improve the efficacy of existing regimens (such as new-onset diabetes, worsening glycemic control in a person with diabetes, weight loss, and incorporation of novel biomarkers), will be critical to optimizing PDAC surveillance outcomes in high-risk individuals.

Aimee Lucas, MD, is associate professor of medicine, division of gastroenterology, Icahn School of Medicine at Mount Sinai, New York. She reports receiving research support from, and consulting for, Immunovia, which has developed a blood-based biomarker for early PDAC detection.


A deep dive on tofacitinib’s mode of action


A new study has revealed potential cell-specific effects of the human Janus kinase (JAK) inhibitor tofacitinib, as well as possible targets – such as intestinal inflammation – for future research and even for enhancing the drug’s effects.

The work used both mice and human cell models to explore the drug’s effect in inflammatory bowel disease (IBD). The mouse models suggested that the drug’s pharmacokinetics may be affected by intestinal inflammation. The human cell models seem to identify equilibrative nucleoside transporters as the likely route of cellular uptake of tofacitinib; this mechanism appears to be upregulated during inflammation and could present a therapeutic target to bolster the drug’s effects.

“We identify intestinal inflammation as a decisive modulator of the systemic pharmacokinetics of tofacitinib in mice, which needs to be studied and confirmed in humans. Finally, we decipher an important membrane transport mechanism that regulates cellular uptake of tofacitinib into activated immune cells, suggesting a model that explains a preferred uptake of tofacitinib into activated immune cells and a potential starting point to interfere with and channel such an uptake,” wrote the authors, led by Bernhard Texler and Andreas Zollner, both with the Christian Doppler Laboratory for Mucosal Immunology at the Johannes Kepler University in Linz, Austria, who published the results in Cellular and Molecular Gastroenterology and Hepatology.

IBD-related inflammation likely involves multiple cytokine pathways. The Janus kinase-signal transducer and activator of transcription (JAK-STAT) pathway lies downstream of more than 50 cytokines and growth factors, so JAK-STAT inhibitors like tofacitinib could counter the effects of more than one cytokine at a time.

Tofacitinib received FDA approval for the treatment of ulcerative colitis in 2018, but the details of its mechanism of action against intestinal inflammation remain poorly understood. For example, despite its efficacy against UC, the drug doesn’t work for Crohn’s disease patients. That may be because the drug affects specific cell populations involved only in UC pathogenesis.

To better understand the drug’s pharmacokinetics, the researchers examined the effects of tofacitinib in cells isolated from human peripheral blood, as well as an experimental mouse model of colitis.

The drug inhibited proliferation of both naïve and memory cytotoxic and helper T cells. At higher concentrations, it had strong effects on innate immune cells, including monocytes and macrophages, as well as on human intestinal epithelial organoids: it promoted the anti-inflammatory M2 phenotype among monocytes and macrophages while inhibiting the pro-inflammatory M1 phenotype. The researchers observed similar effects in the mouse model of colitis.

The investigators also identified equilibrative nucleoside transporters (ENTs) as mediators of tofacitinib uptake. These membrane proteins transport nucleosides, nucleobases, and therapeutic analogs like tofacitinib, which mimics the nucleotide adenosine triphosphate (ATP). Targeted inhibitors could potentially influence this process.

The researchers created three-dimensional, in vitro colonic organoids using intestinal epithelial cells from UC patients and healthy human controls. In this model, TNF-alpha can lead to production of pro-inflammatory cytokines, but tofacitinib blocked this effect. That result suggests that intestinal epithelial cells are a previously unidentified tofacitinib target.

Although a large amount of work has been done on the pharmacokinetics of therapeutic antibodies used to treat IBD, the authors point out that little is known about those of tofacitinib. In a mouse model, the serum concentration of the drug increased after exposure to dextran sulfate sodium (DSS), which triggers an IBD-like condition, and the spike was higher with more intense inflammation. The finding was surprising, considering that therapeutic antibodies are typically eliminated through the feces during inflammation. Mice treated with DSS and untreated controls had similar levels of tofacitinib in both urine and feces, suggesting that inflammation may somehow inhibit the enzymes that metabolize the drug.
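
That inference leans on a basic dose-exposure relationship: at a fixed dose, systemic exposure rises as clearance falls. The short Python sketch below illustrates only that relationship; it is not the authors’ pharmacokinetic model, and every number in it is hypothetical.

def exposure_auc(dose_mg, clearance_l_per_h, bioavailability=0.7):
    # AUC = F * dose / CL: the classic dose-exposure relationship.
    # All parameter values here are hypothetical, for illustration only.
    return bioavailability * dose_mg / clearance_l_per_h

healthy = exposure_auc(5, 25.0)   # assumed clearance without inflammation
inflamed = exposure_auc(5, 12.5)  # same dose, clearance halved by inflammation
print(inflamed / healthy)         # -> 2.0: halving clearance doubles exposure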

The researchers also noted that uptake of tofacitinib into leukocytes increased following stimulation with lipopolysaccharide. Given the drug’s structural similarity to ATP, the researchers propose that tofacitinib may enter cells through the adenosine membrane transporters ENT1 and ENT2, and some evidence suggested that this uptake pathway may be upregulated in activated immune cells.

The study received funding from the Christian Doppler Research Association; the Austrian Federal Ministry of Science, Research, and Economy; and the National Foundation for Research, Technology, and Development. One author receives research support from AbbVie and Takeda under the framework of the Christian Doppler Research Society; the remaining authors have no relevant conflicts of interest.

Shedding light on a JAK inhibitor

Growing understanding of the underlying immunopathogenic mechanisms of inflammatory bowel diseases (IBD) has led to the development of targeted therapies that have considerably improved patient outcomes. However, insights into their respective effector mechanisms are still scarce.

This translational study by Texler et al. sheds light on the molecular mechanism of action and pharmacokinetic profile of the JAK inhibitor tofacitinib, which has been approved for the treatment of ulcerative colitis. The research group elegantly elucidated that the severity of intestinal inflammation and circulating tofacitinib levels show a strong positive correlation. They identified inflammation-induced equilibrative nucleoside transporters as central regulators of cellular tofacitinib uptake. The presented findings are exciting, as there has so far been a glaring lack of studies on the pharmacokinetic properties of tofacitinib in intestinal inflammation. It has already been shown that the degree of intestinal inflammation affects the pharmacokinetics of available biological therapies (such as anti–tumor necrosis factor antibodies), which influences not only their therapeutic effectiveness but also their required therapeutic dose.

Pharmacokinetic assessment of serum drug levels has since become an indispensable part of the optimal management of IBD patients. The presented findings on the pharmacokinetics of tofacitinib during inflammation, both on a systemic and on a cellular level, might have comparable therapeutic consequences. Therapeutic modulation of the membrane transport mechanism responsible for the cellular uptake of tofacitinib might lead to enhanced therapeutic efficacy in the future. Further research in humans is needed to confirm the presented findings.

Raja Narayana Atreya, MD, is a professor of medicine, Heisenberg Professor of Translational Immunology in IBD, and head of the IBD Unit and Clinical Study Centre at Erlangen University Hospital, Friedrich-Alexander University of Erlangen-Nürnberg, Erlangen, Germany. He has no conflicts.

REPORTING FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY

AGA clinical practice update: Expert review on managing refractory gastroparesis

Article Type
Changed
Thu, 02/17/2022 - 10:36

Gastroparesis can be tricky to diagnose and treat, in part because its symptoms can be difficult to distinguish from functional dyspepsia. A new clinical practice update from the American Gastroenterological Association aims to help physicians treat medically refractory gastroparesis with practical advice stemming from expert opinion and a literature review.

Although gastroparesis can be caused by known factors such as diabetes and medications, the largest group of cases is idiopathic. The authors define medically refractory gastroparesis as symptoms that are not caused by medication use and that persist despite dietary changes and first-line treatment with metoclopramide.

Although the authors outline several best practice advice statements on symptom identification and management, they acknowledge that much uncertainty still exists. “Our knowledge gap remains vast, and areas for future research include study of pathophysiology and etiology, as well as identification of clinical and investigation-based predictors of response to each management approach,” the authors wrote. Their report is in Clinical Gastroenterology and Hepatology.

They also call for research to identify gastroparesis phenotypes that are most likely to respond to individual management approaches.

Common gastroparesis symptoms include nausea, vomiting, early satiety, bloating, postprandial fullness, abdominal pain, and weight loss. Many of these overlap with functional dyspepsia (FD). In fact, one study found that 42% of patients with gastroparesis could be reclassified as having functional dyspepsia, and 37% of FD patients as having gastroparesis.

About 5 million adults in the United States, and 7.2% of the world population, report gastroparesis-like symptoms. The overlap between the two conditions poses a significant diagnostic challenge. However, a careful history, physical exam, and appropriate diagnostic tests should allow the physician to rule out other conditions that may mimic gastroparesis. Repeating scintigraphy may change the diagnosis from gastroparesis to FD or vice versa, but the authors note that this technique is often performed incorrectly and so should be conducted at centers that closely follow guidelines. They suggest a 4-hour meal-based test of gastric emptying over the wireless motility capsule because it provides a better physiological assessment.
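
For orientation, standardized solid-meal scintigraphy is typically interpreted from the fraction of the test meal still retained in the stomach at fixed time points. The Python sketch below applies the commonly cited cutoffs (retention above roughly 60% at 2 hours or above 10% at 4 hours suggests delayed emptying); it is a simplified illustration, not a clinical decision rule, and the performing center’s interpretation criteria govern in practice.

def delayed_emptying(retention_2h, retention_4h):
    # Retention values are fractions of the test meal still in the stomach.
    # Cutoffs reflect commonly cited scintigraphy thresholds, simplified here.
    return retention_2h > 0.60 or retention_4h > 0.10

print(delayed_emptying(0.45, 0.15))  # True: abnormal 4-hour retention
print(delayed_emptying(0.50, 0.08))  # False: within commonly cited limits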

They also suggest that treatment should focus on the most bothersome symptom, along with reducing the potential for complications such as esophagitis, malnutrition, and weight loss, as well as improving quality of life.

There are medications available for nausea and vomiting, although most have not been studied in large randomized controlled trials. These agents include domperidone, 5-hydroxytryptamine-3 (5-HT3) receptor antagonists, neurokinin receptor antagonists, and phenothiazine antipsychotics.

There are also medications available to increase the rate of gastric emptying. Erythromycin can be used intravenously or orally ahead of meals, while the 5-HT4 receptor agonist velusetrag improved gastric emptying in healthy volunteers with no sign of cardiac side effects. The commonly available 5-HT4 agonist prucalopride has also shown promise in improving gastric emptying.

For visceral pain, the authors suggest avoiding opioids because they may slow gastric emptying and increase pain perception. Neuromodulators such as tricyclic antidepressants (TCAs) and serotonin norepinephrine reuptake inhibitors (SNRIs) are believed to reduce the perception of pain, but high-quality evidence for these therapies is limited. The authors suggest that higher-potency tertiary tricyclic amines such as amitriptyline or imipramine may be effective, particularly in diabetic gastroparesis, since they provide relief in FD.

Nonpharmaceutical options include gastric electrical stimulation (GES), which improves refractory nausea and vomiting in some patients with gastroparesis, but does not accelerate gastric emptying. It may also improve glycemic control, nutritional status, and quality of life. The treatment may be well suited to opioid-free patients with refractory or intractable nausea and vomiting whose predominant symptom is not abdominal pain.

Other therapies focus on the pylorus and its role in gastric emptying, which can be impaired as a result of abnormalities of pyloric tone and pressure. Functional lumen imaging probe (FLIP) can be used to probe pyloric tone and pressure, but it is expensive, invasive, and not widely available.

Outside of clinical trial settings, the authors advise against the use of intrapyloric botulinum toxin injection and transpyloric stent placement. Peroral endoscopic myotomy (POEM) has shown some efficacy at improving symptoms and reducing gastric emptying times, but it has not been studied in sham-controlled trials. The authors call the technique intriguing but say it should not be considered a first-line therapy and should be performed only at tertiary centers by expert motility specialists and endoscopists.

In extreme cases, enteral nutrition may be necessary, with a transjejunal tube or combined gastrojejunostomy tube placed beyond the pylorus. In a retrospective case series, patients experienced weight recovery with acceptable morbidity and mortality, and the tube was removed at an average of 20 months.

The authors have consulted or been on scientific advisory boards for Salix, Ironwood, Allergan, Arena, Allakos, Medtronic, Diversatek, Takeda, Quintiles, and IsoThrive.

This article was updated Feb. 17, 2022.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Single-use duodenoscope is cost effective in ERCP

SUD subsidies may make the difference
Article Type
Changed
Wed, 01/26/2022 - 10:59

The EXALT Model-D single-use duodenoscope may be a cost-effective alternative to high-level disinfection (HLD) of reusable duodenoscopes, according to a new analysis.

The study compared the EXALT Model-D, HLD, culture-and-quarantine (CQ), and ethylene oxide sterilization (ETO). The results came from a simulated cohort of patients undergoing endoscopic retrograde cholangiopancreatography (ERCP) to treat choledocholithiasis.

Although EXALT was the costliest option and HLD the cheapest, EXALT produced the most quality-adjusted life years (QALYs) and allowed the hospital to decrease net costs, and sensitivity analysis showed that it was a better option than HLD over a range of willingness-to-pay values.

“When evaluating technologies based on cost-effectiveness and additionally in the context of TPT [transitional passthrough] or NTAP [new technology add-on payment], the EXALT approach meets typically used cost-effectiveness thresholds compared to all other evaluated strategies and should be considered for standard practice,” wrote the authors, who were led by Ananya Das, MD, of the Arizona Centers for Digestive Health, Gilbert. The study was published in Techniques and Innovations in Gastrointestinal Endoscopy.

Duodenoscope contamination has resulted in outbreaks of various multidrug-resistant organisms in hospital settings, which has led to the publication of various reprocessing guidelines. Although many hospitals have adopted HLD protocols, others use additional or alternative reprocessing methods such as CQ or ETO. Despite these efforts, a recent Food and Drug Administration study found that 1.9%-22% of samples taken from reprocessed duodenoscopes tested positive for bacteria of concern, including disease-causing organisms. Those and other findings have led some to suggest that it would be best to move away from HLD and instead employ sterilizable or disposable endoscopes.

In another study, the EXALT Model-D (Boston Scientific) was shown to be a good alternative to standard reusable duodenoscopes.

The researchers used a Markov model to determine the cost-effectiveness of the EXALT Model-D against the other approaches in a simulated cohort. They found that the EXALT Model-D created the most QALYs (21.9265) at the highest cost ($3,000), and HLD the fewest QALYs (21.8938) at the lowest cost ($962). Compared with HLD, the incremental cost-effectiveness ratio (ICER) of EXALT was $62,185 per QALY, versus $38,461 for ETO gas sterilization. CQ was dominated, indicating that it cost more than HLD without being more effective.
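
The ICER itself is simple arithmetic: the difference in cost between two strategies divided by the difference in QALYs. The quick Python check below uses the rounded figures reported above; the small gap from the published $62,185 presumably reflects unrounded model inputs.

# ICER = (cost_A - cost_B) / (QALY_A - QALY_B), from the reported point estimates.
cost_exalt, qaly_exalt = 3000.0, 21.9265
cost_hld, qaly_hld = 962.0, 21.8938

icer = (cost_exalt - cost_hld) / (qaly_exalt - qaly_hld)
print(round(icer))  # ~62,324 per QALY from the rounded inputs; published: $62,185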

The researchers conducted a subanalysis of Medicare patients undergoing ERCP to consider the recently approved TPT payment and the NTAP, in both hospital outpatient and inpatient settings. With TPT, EXALT had no cost after reimbursement, for a net saving of $962 per patient compared with HLD, plus an increase of 0.033 QALYs (0.15%). The other approaches cost more and were less effective. With NTAP, EXALT had a net cost of $323 versus HLD, with a similar QALY benefit.

A Monte Carlo analysis of EXALT versus HLD found reductions in duodenoscope infection-related ICU admission (relative risk reduction, 0.996; 95% confidence interval, 0.936-1.0; number needed to treat, 79; 95% CI, 67-95) and death (RRR, 0.973; 95% CI, 0.552-0.998; number needed to treat, 556; 95% CI, 350-997).
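
Those point estimates also pin down the baseline risks implicitly, since the absolute risk reduction equals 1/NNT and also equals the baseline risk times the relative risk reduction. The back-calculation below is ours, for orientation only; the study reports only the RRR and NNT values.

# Implied baseline risk, derived from ARR = 1/NNT and ARR = baseline_risk * RRR.
def implied_baseline_risk(rrr, nnt):
    return (1.0 / nnt) / rrr

print(implied_baseline_risk(0.996, 79))   # ~0.013: ICU admission, about 1.3%
print(implied_baseline_risk(0.973, 556))  # ~0.0018: death, about 0.18%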

At willingness-to-pay thresholds ranging from $50,000 to $100,000 per QALY, EXALT was cost effective in 67.28% of trials, with an ICER under $100,000 per QALY.
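
Statements like "cost effective in 67.28% of trials" come from probabilistic sensitivity analysis: the incremental cost and QALY differences are drawn repeatedly from distributions, and each draw is scored by whether its net monetary benefit (willingness to pay times the QALY gain, minus the added cost) is positive. The Python sketch below shows only the mechanics; the distributions are assumptions for illustration, not the study’s inputs.

import random

# Minimal probabilistic sensitivity analysis sketch. The spreads are invented
# for illustration; only the point estimates come from the article.
random.seed(0)
WTP = 100_000  # dollars per QALY

def cost_effective_draw():
    d_cost = random.gauss(2038, 500)      # assumed spread around the cost difference
    d_qaly = random.gauss(0.0327, 0.015)  # assumed spread around the QALY difference
    return WTP * d_qaly - d_cost > 0      # positive net monetary benefit?

trials = 100_000
share = sum(cost_effective_draw() for _ in range(trials)) / trials
print(share)  # fraction of simulated trials in which EXALT is cost effective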

The study did not consider medicolegal costs, which could lead to an underestimation of EXALT’s cost-effectiveness. It also relied on available published information to estimate the per-patient cost of hospital outbreaks in the United States and Europe since 2012; because of inconsistencies in the literature, the authors did not include the costs of administrative sanctions, litigation, and poor publicity.

“While more research is needed to understand and quantify the determinants of the natural history after exposure to contaminated duodenoscopes, such as the risk of transmission and the subsequent development of serious clinical infections, this economic analysis demonstrates an approach using EXALT Model-D is cost effective in the U.S. health care system when compared to the currently utilized strategies of duodenoscope reprocessing,” the researchers concluded.

The study did not receive any funding. One of the authors is an employee and stockholder of Boston Scientific, which manufactures and markets EXALT. The other two authors have consulted for Boston Scientific.

SUD subsidies may make the difference

Consider for a moment: The single-use duodenoscope (SUD) represents a revolutionary approach to duodenoscope infection control. Who, even 10 years ago, would have imagined that a disposable duodenoscope would even be technically achievable, much less economically feasible? Even so, determining how to incorporate such a revolutionary new technology and its associated capital and recurring costs can be every bit as complex and challenging as conceiving and developing the SUD. The authors provide insights into answering these questions through Markov modeling, comparing the cost-effectiveness of SUDs with that of traditional duodenoscopes (TDs) using available data on TD and SUD performance and extrapolating from nonendoscopic infection management data.

This analysis is helpful because it demonstrates that, despite an SUD cost approaching $3,000, Centers for Medicare & Medicaid Services inpatient and outpatient cost-defrayment payments may make SUDs cost effective within the limits and assumptions the study incorporates. This information is also timely, because these CMS subsidies are guaranteed only through mid-2022 for Medicare inpatients and 2023 for Medicare outpatients.

Though useful and timely, this study does make assumptions that narrow its applicability to real-world endoscopic retrograde cholangiopancreatography (ERCP). Clinically, it considers only patients with uncomplicated common bile duct stones. While choledocholithiasis is the indication for ERCP in the majority of patients, over 40% of ERCPs in the United States are performed for other, often more complex indications. And while most procedures in the referenced studies were performed by high-volume ERCP experts, a substantial share of ERCPs, including many of the straightforward ERCPs for uncomplicated choledocholithiasis, are performed by lower-volume proceduralists.

This study also focuses on the cost implications of CMS Transitional Pass-through (TPT) and New Technology Add-On Payment (NTAP) subsidies, which are available only for Medicare inpatients and outpatients, respectively. These reimbursements are set to expire in 2022 (inpatients) and 2023 (outpatients). What will happen after that? Also, the amounts of TPT and NTAP cost defrayments are institution dependent, because the cost-to-charge ratio (CCR), an important factor in calculating these subsidies, varies substantially between institutions and regions. Looking to the future, how will the cost of SUDs be incorporated into the hospital business model when TPT and NTAP are over?

SUDs are a technological marvel and a remarkable advance in endoscopic infection control. But innovations in medical technology inevitably bring new operational challenges: how to incorporate them into day-to-day practice and how to develop a business model that makes valuable new resources available to patients. Such operational challenges require as much heavy lifting as the technological innovation needed to produce devices like SUDs. The authors’ vision and effort in ideating and executing this study give us a head start on this path by helping us to imagine what is possible.

John A. Martin, MD, is associate professor and consultant at the Mayo Clinic, Rochester, Minn. He is a former member of the editorial board for GI & Hepatology News, but has no relevant conflicts to disclose.

Publications
Topics
Sections
Body

Consider for a moment: The single-use duodenoscope (SUD) represents a revolutionary approach to duodenoscope infection control. Who, even 10 years ago, would have imagined that a disposable duodenoscope would even be technically achievable, much less economically feasible? Notwithstanding, determining how to incorporate such a revolutionary new technology and its associated capital and recurring costs can be every bit as complex and challenging as conceiving and developing the SUD. The authors provide insights into answering these questions through Markov modeling, comparing cost-effectiveness of SUDs to traditional duodenoscopes (TD) using available data on TD and SUD performance, and extrapolating from nonendoscopic infection management data.

This analysis is helpful because it demonstrates that, despite SUD cost approaching $3,000, Centers for Medicare & Medicaid Services inpatient and outpatient cost-defrayment payments may result in SUDs being cost-effective within limits and assumptions the study incorporates. This information is also timely, because these CMS subsidies are guaranteed only through mid-2022 for Medicare inpatients and 2023 for Medicare outpatients.

Though useful and timely, this study does make assumptions that narrow its applicability to real-world endoscopic retrograde cholangiopancreatography (ERCP). Clinically, it considers only patients with uncomplicated common bile duct stones. While choledocholithiasis is the indication for ERCP in the majority of patients, over 40% of ERCPs in the United States are performed for other, often more complex applications. While most procedures in the referenced studies were performed by high-volume ERCP experts, a substantial proportion of ERCPs are performed by lower-volume ERCP proceduralists, who actually perform a substantial proportion of straight-forward ERCPs addressing uncomplicated choledocholithiasis.

This study focuses on cost implications on CMS Transitional Pass-through (TPT) and New Technology Add-On Payment (NTAP) subsidies available only for Medicare inpatients and outpatients, respectively. These reimbursements are set to expire in 2022 (inpatients) and 2023 (outpatients). What will happen after that? Also, the amount of TPT and NTAP cost defrayments are institution-dependent, because cost-to-charge ratio (CCR), an important factor in calculating these subsidies, varies substantially between institutions and regions. Looking to the future, how will the cost of SUDs be incorporated into the hospital business model when TPT and NTAP are over?

SUDs are a technological marvel and a remarkable advance in endoscopic infection control. But innovations in medical technology are expectedly accompanied by new operational challenges: How to incorporate them into day-to-day practice and develop a business model that avails valuable new resources to patients. Such operational challenges require as much heavy lifting as the technological innovation needed to produce innovative devices like SUDs. The authors’ vision and effort in ideating and executing this study give us a head-start on this path by helping us to imagine what is possible.

John A. Martin, MD, is associate professor and consultant at the Mayo Clinic, Rochester, Minn. He is a former member of the editorial board for GI & Hepatology News, but has no relevant conflicts to disclose.

Body

Consider for a moment: The single-use duodenoscope (SUD) represents a revolutionary approach to duodenoscope infection control. Who, even 10 years ago, would have imagined that a disposable duodenoscope would even be technically achievable, much less economically feasible? Notwithstanding, determining how to incorporate such a revolutionary new technology and its associated capital and recurring costs can be every bit as complex and challenging as conceiving and developing the SUD. The authors provide insights into answering these questions through Markov modeling, comparing cost-effectiveness of SUDs to traditional duodenoscopes (TD) using available data on TD and SUD performance, and extrapolating from nonendoscopic infection management data.

This analysis is helpful because it demonstrates that, despite SUD cost approaching $3,000, Centers for Medicare & Medicaid Services inpatient and outpatient cost-defrayment payments may result in SUDs being cost-effective within limits and assumptions the study incorporates. This information is also timely, because these CMS subsidies are guaranteed only through mid-2022 for Medicare inpatients and 2023 for Medicare outpatients.

Though useful and timely, this study does make assumptions that narrow its applicability to real-world endoscopic retrograde cholangiopancreatography (ERCP). Clinically, it considers only patients with uncomplicated common bile duct stones. While choledocholithiasis is the indication for ERCP in the majority of patients, over 40% of ERCPs in the United States are performed for other, often more complex applications. While most procedures in the referenced studies were performed by high-volume ERCP experts, a substantial proportion of ERCPs are performed by lower-volume ERCP proceduralists, who actually perform a substantial proportion of straight-forward ERCPs addressing uncomplicated choledocholithiasis.

This study focuses on cost implications on CMS Transitional Pass-through (TPT) and New Technology Add-On Payment (NTAP) subsidies available only for Medicare inpatients and outpatients, respectively. These reimbursements are set to expire in 2022 (inpatients) and 2023 (outpatients). What will happen after that? Also, the amount of TPT and NTAP cost defrayments are institution-dependent, because cost-to-charge ratio (CCR), an important factor in calculating these subsidies, varies substantially between institutions and regions. Looking to the future, how will the cost of SUDs be incorporated into the hospital business model when TPT and NTAP are over?

SUDs are a technological marvel and a remarkable advance in endoscopic infection control. But innovations in medical technology are expectedly accompanied by new operational challenges: How to incorporate them into day-to-day practice and develop a business model that avails valuable new resources to patients. Such operational challenges require as much heavy lifting as the technological innovation needed to produce innovative devices like SUDs. The authors’ vision and effort in ideating and executing this study give us a head-start on this path by helping us to imagine what is possible.

John A. Martin, MD, is associate professor and consultant at the Mayo Clinic, Rochester, Minn. He is a former member of the editorial board for GI & Hepatology News, but has no relevant conflicts to disclose.

Title
SUD subsidies may make the difference
SUD subsidies may make the difference

The EXALT Model-D single-use duodenoscope may be a cost-effective alternative to high-level disinfection (HLD) of reusable duodenoscopes, according to a new analysis.

The study compared the EXALT Model-D, HLD, culture-and-quarantine (CQ), and ethylene oxide sterilization (ETO). The results came from a simulated cohort of patients undergoing endoscopic retrograde cholangiopancreatography (ERCP) to treat choledocholithiasis.

Although EXALT was the costliest option and HLD the cheapest, EXALT produced the most quality-adjusted life years (QALYs) and allowed the hospital to decrease net costs, and sensitivity analysis showed that it was a better option than HLD over a range of willingness-to-pay values.

“When evaluating technologies based on cost-effectiveness and additionally in the context of TPT [transitional passthrough] or NTAP [new technology add-on payment], the EXALT approach meets typically used cost-effectiveness thresholds compared to all other evaluated strategies and should be considered for standard practice,” wrote the authors, who were led by Ananya Das, MD, of the Arizona Centers for Digestive Health, Gilbert. The study was published in Techniques and Innovations in Gastrointestinal Endoscopy.

Duodenoscope contamination has resulted in outbreaks of various multidrug-resistant organisms in hospital settings, which has led to the publication of various reprocessing guidelines. Although many hospitals have adopted HLD protocols, others use additional or alternative reprocessing methods such as CQ or ETO. Despite these efforts, a recent Food and Drug Administration study found that 1.9%-22% of samples taken from duodenoscopes tested positive for bacteria of concern, such as pathogens. Those and other findings have led some to suggest that it would be best to move away from HLD, and instead employ sterilizable or disposable endoscopes.

In another study, The EXALT Model-D (Boston Scientific) had been shown to be a good alternative to standard reusable duodenoscopes.

The researchers used a Markov-model to determine the cost-effectiveness of EXALT Model-D against other approaches in a simulated cohort. They found that EXALT Model-D created the most QALYs (21.9265) at the highest cost ($3,000), and HLD the fewest QALYs (21.8938) at the lowest cost ($962). Compared with HLD, the incremental cost-effectiveness ratio (ICER) of EXALT was $62,185, and $38,461 for ETO gas sterilization. CQ was dominated, indicating that it had a higher cost but was not more effective than HLD.

The researchers conducted a subanalysis of ERCP and Medicare patients to consider the recently approved TPT payment and the NTAP, in both hospital outpatient and inpatient settings. With TPT, EXALT had no cost after reimbursement, with a net saving of $962 per patient when compared with HLD, plus an increase in 0.033 QALYs (0.15%). The other procedures cost more and were less effective. With NTAP, EXALT had a net cost of $323 versus HLD, with a similar QALY benefit.

A Monte Carlo analysis of EXALT versus HLD found reductions in duodenoscope infection-related ICU admission (relative risk reduction, 0.996; 95% confidence interval, 0.936-1.0; number needed to treat, 79; 95% CI, 67-95) and death (RRR, 0.973; 95% CI, 0.552-0.998; number needed to treat, 556; 95% CI, 350-997).

In willingness-to-pay estimates from $50,000 to $100,000, EXALT was cost effective in 67.28% of trials with ICER under $100,000 per QALY.

The study did not consider medicolegal costs, which could lead to an underestimation of EXALT’s cost-effectiveness. The study also relied on available published information to determine cost per patient of hospital outbreaks in the United States and Europe since 2012, but the authors did not include costs of administrative sanctions, litigation, and poor publicity due to inconsistencies in the literature.

“While more research is needed to understand and quantify the determinants of the natural history after exposure to contaminated duodenoscopes, such as the risk of transmission and the subsequent development of serious clinical infections, this economic analysis demonstrates an approach using EXALT Model-D is cost effective in the U.S. health care system when compared to the currently utilized strategies of duodenoscope reprocessing,” the researchers concluded.

The study did not receive any funding. One of the authors is an employee and stockholder of Boston Scientific, which manufactures and markets EXALT. The other two authors have consulted for Boston Scientific.

The EXALT Model-D single-use duodenoscope may be a cost-effective alternative to high-level disinfection (HLD) of reusable duodenoscopes, according to a new analysis.

The study compared the EXALT Model-D, HLD, culture-and-quarantine (CQ), and ethylene oxide sterilization (ETO). The results came from a simulated cohort of patients undergoing endoscopic retrograde cholangiopancreatography (ERCP) to treat choledocholithiasis.

Although EXALT was the costliest option and HLD the cheapest, EXALT produced the most quality-adjusted life years (QALYs) and allowed the hospital to decrease net costs, and sensitivity analysis showed that it was a better option than HLD over a range of willingness-to-pay values.

“When evaluating technologies based on cost-effectiveness and additionally in the context of TPT [transitional passthrough] or NTAP [new technology add-on payment], the EXALT approach meets typically used cost-effectiveness thresholds compared to all other evaluated strategies and should be considered for standard practice,” wrote the authors, who were led by Ananya Das, MD, of the Arizona Centers for Digestive Health, Gilbert. The study was published in Techniques and Innovations in Gastrointestinal Endoscopy.

Duodenoscope contamination has resulted in outbreaks of various multidrug-resistant organisms in hospital settings, which has led to the publication of various reprocessing guidelines. Although many hospitals have adopted HLD protocols, others use additional or alternative reprocessing methods such as CQ or ETO. Despite these efforts, a recent Food and Drug Administration study found that 1.9%-22% of samples taken from duodenoscopes tested positive for bacteria of concern, such as pathogens. Those and other findings have led some to suggest that it would be best to move away from HLD, and instead employ sterilizable or disposable endoscopes.

In an earlier study, the EXALT Model-D (Boston Scientific) was shown to be a viable alternative to standard reusable duodenoscopes.

The researchers used a Markov model to determine the cost-effectiveness of the EXALT Model-D against the other approaches in the simulated cohort. EXALT yielded the most QALYs (21.9265) at the highest cost ($3,000), and HLD the fewest QALYs (21.8938) at the lowest cost ($962). Compared with HLD, the incremental cost-effectiveness ratio (ICER) was $62,185 per QALY for EXALT and $38,461 per QALY for ETO gas sterilization. CQ was dominated, meaning it cost more than HLD without being more effective.
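
The arithmetic behind an ICER is much simpler than the Markov model that produces its inputs: It is the incremental cost divided by the incremental effectiveness. As a minimal sketch (not the authors’ actual model), recomputing from the rounded per-patient figures above lands near, though not exactly on, the published value, because the QALY totals are rounded:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    of the new strategy relative to the reference strategy."""
    delta_cost = cost_new - cost_ref
    delta_qaly = qaly_new - qaly_ref
    if delta_qaly <= 0 <= delta_cost:
        raise ValueError("dominated: costlier but no more effective")
    return delta_cost / delta_qaly

# EXALT vs. HLD, using the study's rounded per-patient figures.
print(icer(cost_new=3_000, qaly_new=21.9265,
           cost_ref=962, qaly_ref=21.8938))  # ~62,300 per QALY
```

A strategy that costs more without adding QALYs, as CQ did relative to HLD, never enters the ratio at all; it is simply dominated and dropped from consideration.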

The researchers conducted a subanalysis of Medicare patients undergoing ERCP to account for the recently approved TPT payment and the NTAP, in both hospital outpatient and inpatient settings. With TPT, EXALT had no cost after reimbursement, yielding a net saving of $962 per patient compared with HLD plus an increase of 0.033 QALYs (0.15%). The other strategies cost more and were less effective. With NTAP, EXALT had a net cost of $323 versus HLD, with a similar QALY benefit.

In a Monte Carlo analysis comparing EXALT with HLD, EXALT reduced duodenoscope infection-related ICU admissions (relative risk reduction, 0.996; 95% confidence interval, 0.936-1.0; number needed to treat, 79; 95% CI, 67-95) and deaths (RRR, 0.973; 95% CI, 0.552-0.998; NNT, 556; 95% CI, 350-997).
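
A number needed to treat follows directly from the absolute risk reduction, which is the baseline event risk multiplied by the relative risk reduction. A minimal sketch; the baseline risks below are back-calculated from the published NNTs (the article does not report them), so treat them as illustrative only:

```python
def nnt(baseline_risk, rrr):
    """Number needed to treat = 1 / absolute risk reduction,
    where ARR = baseline event risk * relative risk reduction."""
    return 1 / (baseline_risk * rrr)

# Illustrative, back-calculated baseline risks (not from the paper):
# ~1.27% ICU-admission risk and ~0.185% death risk reproduce the
# published NNTs of about 79 and 556.
print(round(nnt(0.0127, 0.996)))    # 79
print(round(nnt(0.00185, 0.973)))   # 556
```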

Across willingness-to-pay thresholds from $50,000 to $100,000 per QALY, EXALT was cost effective in 67.28% of simulation trials, with an ICER under $100,000 per QALY.
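
Figures such as that 67.28% typically come from probabilistic sensitivity analysis: The model is rerun many times with inputs drawn from uncertainty distributions, and the share of runs whose ICER falls under the willingness-to-pay threshold is tallied. Here is a minimal sketch of the tallying step, assuming hypothetical normal distributions in place of the study’s actual parameter uncertainty:

```python
import random

def acceptability(n_trials, wtp, seed=0):
    """Share of simulated trials in which the incremental
    cost-effectiveness ratio falls under the willingness-to-pay
    threshold. The distributions are hypothetical placeholders."""
    rng = random.Random(seed)
    favorable = 0
    for _ in range(n_trials):
        d_cost = rng.gauss(2038, 600)      # incremental cost vs. HLD ($)
        d_qaly = rng.gauss(0.0327, 0.015)  # incremental QALYs vs. HLD
        if d_qaly > 0 and d_cost / d_qaly < wtp:
            favorable += 1
    return favorable / n_trials

print(acceptability(100_000, wtp=100_000))
```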

The study did not consider medicolegal costs, which could lead to an underestimation of EXALT’s cost-effectiveness. The study also relied on available published information to determine cost per patient of hospital outbreaks in the United States and Europe since 2012, but the authors did not include costs of administrative sanctions, litigation, and poor publicity due to inconsistencies in the literature.

“While more research is needed to understand and quantify the determinants of the natural history after exposure to contaminated duodenoscopes, such as the risk of transmission and the subsequent development of serious clinical infections, this economic analysis demonstrates an approach using EXALT Model-D is cost effective in the U.S. health care system when compared to the currently utilized strategies of duodenoscope reprocessing,” the researchers concluded.

The study did not receive any funding. One of the authors is an employee and stockholder of Boston Scientific, which manufactures and markets EXALT. The other two authors have consulted for Boston Scientific.


FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY


GERD: Upper endoscopy may reduce GI cancer mortality

Potentially revolutionizing results
Article Type
Changed
Tue, 01/11/2022 - 09:17

Among individuals with gastroesophageal reflux disease (GERD), a negative upper endoscopy is associated with decreased incidence of and mortality from upper gastrointestinal cancer. The benefit persisted for 5-10 years after the procedure.

The finding is similar to the survival benefit seen with colonoscopies and colorectal cancer, and may be attributable to endoscopic treatment of premalignant lesions.

“The relatively high incidence rate of upper gastrointestinal cancer in patients with GERD indicates that a one-time upper endoscopy may be beneficial,” wrote the authors, who were led by Dag Holmberg, MD, PhD, of the department of molecular medicine and surgery at the Karolinska Institutet and Karolinska University Hospital, both in Stockholm. The study was published in Gastroenterology.

GERD is the most frequent reason patients undergo an upper endoscopy, but the results are often negative. It is generally a benign condition, but it can lead to Barrett’s esophagus as well as esophageal and gastric cardia adenocarcinoma. Upper endoscopy can also identify other upper gastrointestinal cancers, such as gastric noncardia cancer and duodenal cancer, which may cause dyspepsia or GERD-like symptoms.

To determine the potential benefit of upper endoscopy, the researchers conducted a population-based, four-nation cohort study that included 1,062,740 individuals with GERD in Denmark, Finland, Norway, and Sweden. The data were gathered from national patient registries, cancer registries, and cause of death registries. The study encompassed data from 1979 through the end of 2018.

The median age was 58 years, and 52% of participants were women.

The researchers defined a negative endoscopy as no diagnosis of gastrointestinal cancer within 6 months of the procedure; 69.3% of procedures were negative.

During the follow-up period, 0.34% of participants developed and 0.27% died of upper gastrointestinal cancer. Among those with negative endoscopies, 0.23% developed and 0.22% died from upper gastrointestinal cancer.

Participants with a negative endoscopy had a lower risk of being diagnosed with upper gastrointestinal cancer during the follow-up period (adjusted hazard ratio, 0.45; 95% confidence interval, 0.43-0.48). The reduction in risk was similar across sexes and age groups, but among procedures performed after 2008, the risk reduction was even greater (aHR, 0.34; P < .001).
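
Adjusted hazard ratios of this kind are typically estimated with Cox proportional hazards regression. The sketch below, using the lifelines library on synthetic data, is purely illustrative: The cohort size, the single age covariate, and the built-in effect of 0.45 are our assumptions, standing in for the registry data and fuller covariate adjustment the authors used.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 20_000
negative_endo = rng.integers(0, 2, n)   # 1 = had a negative upper endoscopy
age = rng.normal(58, 10, n)             # covariate to adjust for

# Exponential event times whose rate depends on age and exposure;
# by construction the exposure multiplies the hazard by 0.45.
rate = 0.002 * np.exp(0.03 * (age - 58)) * np.where(negative_endo == 1, 0.45, 1.0)
time = rng.exponential(1 / rate)

df = pd.DataFrame({
    "T": np.minimum(time, 20.0),        # administrative censoring at 20 years
    "E": (time < 20.0).astype(int),     # 1 = cancer diagnosis observed
    "negative_endo": negative_endo,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
print(cph.hazard_ratios_["negative_endo"])  # ~0.45, the built-in effect
```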

The effect was strongest in the first year after the procedure, but it persisted out to 5 years before returning to baseline risk levels.

A negative endoscopy was also associated with decreased mortality risk from upper gastrointestinal cancer versus those who hadn’t had an endoscopy (aHR, 0.39; 95% CI, 0.37-0.42). The protective value continued for at least 10 years.

Esophageal adenocarcinoma developed in 0.12% of participants, and 0.10% died of the disease. Among those with a negative endoscopy, 0.09% developed adenocarcinoma, and 0.07% died (aHR vs. no upper endoscopy, 0.33; 95% CI, 0.30-0.37).

The rapid return to baseline risk was notable, and different from what occurs after negative colonoscopies. However, new tumors can readily form within one year, and the risk may reflect early malignant or premalignant lesions that were missed during the procedure.

In fact, a meta-analysis found that 11.3% of upper gastrointestinal cancers had escaped detection during an endoscopy in the 3 years before diagnosis, and case reviews of patients diagnosed with gastrointestinal cancer soon after an upper endoscopy usually reveal suspicious or indeterminate findings that the endoscopist or pathologist missed.

Quality indicators for upper endoscopy include procedure time, rate of targeted biopsies, and computer-aided detection, but it isn’t clear what impact these measures have on outcomes. However, the greater risk reduction found with endoscopies performed more recently suggests that newer quality indicators and technological improvements may be improving outcomes.

The relatively low incidence of esophageal and gastric cancer in Western countries has discouraged widespread adoption of endoscopic screening, but the researchers point out that the risk of gastrointestinal cancer among individuals with GERD is similar to the risk of colorectal cancer in the 60-69 age group in the United States, for whom colonoscopy is recommended.

“The present study indicates that upper endoscopy may be beneficial for patients with GERD, but to make upper endoscopy screening more cost beneficial at its initiation, the target group may be limited to include patients at highest risk of cancer. Such previous cost-effectiveness studies have indicated that endoscopy is cost effective in men at aged 50 years or older with chronic GERD,” the authors wrote.

The study was funded by the Swedish Research Council and the Swedish Cancer Society. The authors disclosed no relevant conflicts of interest.


This study from Holmberg and colleagues has the potential to revolutionize future clinical guidelines determining endoscopic investigations for GERD patients.

The cohort for analysis is staggering in magnitude: The authors analyzed real-world data from over 1 million participants with GERD in four Scandinavian databases. The results show strong and precise reductions in both risk and mortality from upper gastrointestinal cancer in the whole cohort. This reduction was consistent across all subgroup and sensitivity analyses.

These findings are important because GERD alone does not necessarily warrant an upper endoscopy in current practice. This study provides strong evidence that a one-off endoscopic investigation in patients with GERD could bring meaningful opportunities for early detection of esophageal and gastric cancers, and in turn lead to fewer patients dying from these tumors. The immediacy of the return on investment is also impressive, with the risk reduction being strongest in the first few years of follow-up.

The elusive next step, as highlighted by the authors, is to ensure implementation of endoscopic screening can be done in a cost-effective manner. This is even more important because many health care systems across the world struggle with endoscopy capacity during the COVID-19 pandemic.

Helen Coleman, PhD, BSc(Hons), is a professor of cancer epidemiology at Queen’s University Belfast (Northern Ireland); joint deputy director of the Northern Ireland Cancer Registry; a Cancer Research UK Fellow; and a visiting scientist with the Fitzgerald Lab at the University of Cambridge (England). She has no conflicts.



FROM GASTROENTEROLOGY


Smoking and alcohol raise risk of second cancer in squamous cell carcinoma

Tobacco and alcohol cessation decrease risk
Article Type
Changed
Mon, 01/10/2022 - 09:08

Field cancerization and subsequent second cancers in patients with squamous cell carcinoma (SCC) were significantly associated with cigarette and alcohol use, based on data from more than 300 individuals.

Cigarette and alcohol use are established risk factors for SCCs of the esophagus, head, and neck, wrote Manabu Moto, MD, of Kyoto University, and colleagues. “In addition, squamous cell carcinoma and squamous dysplastic epithelium develop multifocally in these organs,” a phenomenon known as field cancerization, but whether cessation of cigarette and alcohol use reduces the risk of SCC arising from this multifocal dysplastic epithelium has not been well studied.

In a study published in Gastro Hep Advances, the researchers identified 331 adults with newly diagnosed superficial esophageal SCC who underwent endoscopic resection, and 1,022 healthy controls. Field cancerization was based on the number of Lugol-voiding lesions (LVLs) per endoscopic view according to three groups: grade A, 0 LVLs; grade B, 1-9; or grade C, at least 10. The primary study outcome was a measure of risk factors for the development of LVLs.
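
The grading scheme translates directly into code. A minimal sketch (the function is ours; the cutoffs are the study’s):

```python
def lvl_grade(num_lvls):
    """Grade field cancerization from the number of Lugol-voiding
    lesions (LVLs) per endoscopic view, per the study's three groups."""
    if num_lvls == 0:
        return "A"      # grade A: 0 LVLs
    if num_lvls <= 9:
        return "B"      # grade B: 1-9 LVLs
    return "C"          # grade C: at least 10 LVLs
```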

“Multiple LVLs are closely associated with inactive aldehyde dehydrogenase 2 (ALDH2) and field cancerization,” the researchers wrote. Before assessing their human subjects, they used a mouse model to investigate whether alcohol intake and abstinence would affect acetaldehyde-induced DNA damage to the esophageal epithelium among individuals with ALDH2 dysfunction.

The researchers found that DNA damage, measured by acetaldehyde-derived DNA adduct levels (via N2-ethylidene-dG), accumulated with alcohol consumption over time, but decreased with alcohol cessation in the mouse model.

For the human part of the study, participants completed a lifestyle survey at entry, with questions about alcohol consumption history, alcohol flushing response, smoking, consumption of high-temperature foods, and consumption of green and yellow vegetables and fruit. Drinking status was divided into five groups: never/rarely (less than 1 unit/week), light (1-8.9 units/week), moderate (9-17.9 units/week), heavy (18 or more units/week), and ex-drinker, with 1 unit defined as 22 g of ethanol. Smoking was divided into three groups: never (0 pack-years), light (less than 30 pack-years), and heavy (30 or more pack-years). Patients were given educational materials at study entry about the importance of alcohol and smoking cessation, as well as verbal advice to cease these behaviors.
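
These survey categories are likewise mechanical to apply. A short sketch (the function names are ours; the cutoffs and the 22-g unit are the study’s):

```python
def drinking_category(units_per_week, ex_drinker=False):
    """Drinking status per the lifestyle survey; 1 unit = 22 g of ethanol."""
    if ex_drinker:
        return "ex-drinker"
    if units_per_week < 1:
        return "never/rarely"
    if units_per_week < 9:
        return "light"
    if units_per_week < 18:
        return "moderate"
    return "heavy"

def smoking_category(pack_years):
    """Smoking status by cumulative pack-years."""
    if pack_years == 0:
        return "never"
    return "light" if pack_years < 30 else "heavy"
```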

Participants underwent endoscopic surveillance at 3-month intervals for up to 6 months following endoscopic resection.

Overall, increased alcohol consumption was associated with an increased risk of developing LVLs across all LVL grades; higher LVL grades were positively associated with high-intensity alcohol consumption, smoking, flushing, and high-temperature foods, and negatively associated with eating vegetables and fruit.

The risk of LVL grade progression was most strongly associated with increased alcohol consumption and with reported flushing. “The greatest risk was observed in the patients with flushing reactions who consumed an average of 30 units per week in grade C LVL,” with an odds ratio of 534, compared with healthy controls. “Since flushing reaction is caused by accumulation of acetaldehyde due to ALDH2 deficient, our result also means that acetaldehyde is a strong carcinogen in field cancerization.”

Secondary outcomes included the incidence of second primary esophageal SCC and head/neck SCC; these were significantly more prevalent in patients with grade C LVL (cumulative 5-year incidence of 47.1% for ESCC and 13.3% for head and neck SCC). However, alcohol and smoking cessation significantly reduced the development of second primary esophageal SCC (adjusted hazard ratios, 0.47 for alcohol and 0.49 for smoking).

The study findings were limited by several factors including the lack of randomization to noncessation and cessation groups and the inclusion of cancer patients, but not long-term cancer survivors, the researchers noted.

“We believe that our data will be useful to establish a prevention and surveillance strategy for cancer survivors, because the overall prognosis of esophageal cancer and head and neck cancer is still poor,” with a 5-year survival rate of less than 20%, and the results highlight the need to educate cancer survivors on the value of smoking and alcohol cessation, they added.

The study was supported by the National Cancer Center Research and Development Fund of the Ministry of Health, Labour, and Welfare of Japan. The researchers had no financial conflicts to disclose.


In this large, prospective, multicenter Japanese study published in the December 2021 issue of Gastro Hep Advances, alcohol and/or smoking cessation for 5 or more years was found to reduce the risk of field cancerization in patients with superficial esophageal squamous cell carcinoma (ESCC). Multiple lesions identified by lack of staining of the esophageal squamous epithelium with Lugol iodine (Lugol-voiding lesions) are known as the field cancerization effect. The investigators found that, following endoscopic resection of a first primary ESCC (n = 331), alcohol cessation (adjusted hazard ratio, 0.47; 95% confidence interval, 0.26-0.85) and cigarette smoking cessation (aHR, 0.49; 95% CI, 0.26-0.91) reduced the rate of development of second primary ESCC.

This study highlights the magnitude of impact that known environmental exposures can have on the development and prognosis of ESCC. The investigators found that heavy drinking was almost 6.6 times as prevalent, and heavy smoking 2.1 times as prevalent, in individuals with high-grade esophageal epithelial dysplasia identified on Lugol iodine staining. In a mouse model, they showed that acetaldehyde, an established carcinogen produced during ethanol metabolism and also found in cigarette smoke, induces DNA damage in the esophageal epithelium. According to this study, individuals with superficial ESCC and an inactive aldehyde dehydrogenase 2 enzyme are at higher risk for expansion and progression of esophageal dysplastic epithelium. A flushing reaction following ethanol ingestion is a marker of inactive aldehyde dehydrogenase in humans.

The take-home message from this study is that alcohol and tobacco cessation for 5 years can significantly reduce the risk of second primary ESCC. Practitioners should be vigilant in counseling patients, particularly those with Lugol-voiding lesions grades B or C or those who have a flushing reaction.

Anand Jain, MD, is with the division of digestive diseases at Emory University, Atlanta. Ravinder Mittal, MD, is with the division of digestive diseases at University of California, San Diego. They declared having no relevant conflicts of interest.



FROM GASTRO HEP ADVANCES


Flexible sigmoidoscopy ADR linked to long-term survival

Quality matters in flexible sigmoidoscopy
Article Type
Changed
Fri, 01/07/2022 - 15:47

Gastroenterology centers with higher adenoma detection rates (ADRs) on flexible sigmoidoscopy (FS) had lower long-term colorectal cancer (CRC) incidence and CRC mortality among their patients, according to a new study.

Detection and removal of polyps during colonoscopy screening is vital to the prevention of CRC, and previous research has shown that centers with higher detection rates are associated with lower rates of CRC diagnosis within 3-5 years after a negative screen.

In Clinical Gastroenterology and Hepatology, researchers led by Amanda J. Cross, PhD, a professor of cancer epidemiology at Imperial College London, published an analysis of the UK Flexible Sigmoidoscopy Screening Trial, which found that FS screening between the ages of 55 and 64 led to a 35% reduction in CRC incidence and a 41% reduction in CRC mortality over a mean follow-up of 17.1 years. The screening program had no apparent effect on the incidence and mortality of proximal cancers; the researchers speculated that this was because few patients underwent proximal examination during follow-up colonoscopy.

“Considering only 5% of participants were referred for follow-up colonoscopy and 4% were referred for surveillance, we conclude that the improved detection of adenomas at FS has a measurable impact on long-term distal CRC outcomes, even when there is infrequent colonoscopy use. It is possible that high detectors also were more adept at polypectomy than intermediate or low detectors, and achieved more complete resection of detected lesions,” the authors wrote.

The researchers analyzed data from 38,550 patients who underwent screening at 14 U.K. hospitals between 1994 and 1999. A single endoscopist was responsible for nearly all FS screens performed at each participating hospital.

The mean patient age was 60 years, and 49% were male. The researchers calculated ADRs for each center using the percentage of patients who had at least one adenoma detected during screening, which included any distal adenomas discovered during follow-up colonoscopy.

The ADR overall was 12%. The researchers used multivariate logistic regression to rank individual centers as having high (15%; five centers), intermediate (12%; four centers), or low (9%; four centers) detection rates.
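
As defined here, a center’s ADR is simply the share of its screened patients with at least one adenoma found, counting distal adenomas picked up at follow-up colonoscopy. A minimal sketch over a hypothetical data structure:

```python
def center_adr(patients):
    """ADR = fraction of screened patients with >= 1 adenoma detected,
    including distal adenomas found at follow-up colonoscopy."""
    detected = sum(1 for p in patients if p["adenomas_detected"] >= 1)
    return detected / len(patients)

# Hypothetical center: 120 of 1,000 screened patients had an adenoma.
patients = [{"adenomas_detected": 1}] * 120 + [{"adenomas_detected": 0}] * 880
print(f"{center_adr(patients):.0%}")  # 12%, the trial's overall ADR
```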

There was a strong association between detection rates of small adenomas and a center’s overall ADR (P < .001), but not for large or advanced adenomas. In the high-detector group, 6.2% of screened patients were referred for colonoscopy versus 4.5% in both the intermediate and low groups. About half of colonoscopies were conducted by the same endoscopist who performed the FS.

During follow-up, the distal CRC incidence was 1.5% in the high ADR group, 1.4% in the intermediate group, and 1.7% in the low group, and mortality rates were 0.4%, 0.4%, and 0.5%, respectively.

Compared with unscreened controls, risk of distal CRC was lowest among individuals who underwent screening in the high ADR group (hazard ratio, 0.34; 95% confidence interval, 0.27-0.42), followed by the intermediate group (HR, 0.46; 95% CI, 0.36-0.59), and the low ADR group (HR, 0.55; 95% CI, 0.44-0.68; P < .05 for all).

Compared with unscreened controls, CRC mortality was lower among individuals who underwent screening in the high ADR group (HR, 0.22; 95% CI, 0.13-0.37), followed by the intermediate group (HR, 0.30; 95% CI, 0.17-0.55), and the low ADR group (HR, 0.54; 95% CI, 0.34-0.86; P < .05 for between group differences).

All-site CRC incidence followed similar trends, with the lowest risk in the high ADR group (HR, 0.58; 95% CI, 0.50-0.67), followed by the intermediate ADR (HR, 0.65; 95% CI, 0.55-0.77) and low ADR groups (HR, 0.72; 95% CI, 0.61-0.85; between-group differences not statistically significant).

All-site CRC mortality was lowest in the high ADR group (HR, 0.52; 95% CI, 0.39-0.69), followed by the intermediate group (HR, 0.53; 95% CI, 0.38-0.73), and the low ADR group (HR, 0.68; 95% CI, 0.51-0.92; between-group differences not statistically significant).

The number needed to screen (NNS) to prevent one CRC diagnosis was 78 in the high ADR group (95% CI, 61-106), 103 in the intermediate group (95% CI, 74-171), and 125 in the low ADR group (95% CI, 82-256). The NNS to prevent one CRC death was 226 (95% CI, 159-387), 247 (95% CI, 165-490), and 349 (95% CI, 192-1,904), respectively.
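
NNS follows the same arithmetic as number needed to treat: one over the absolute risk reduction between unscreened and screened groups. In the sketch below the incidence figures are placeholders, since the trial derived its NNS values from its own incidence curves rather than from numbers quoted here:

```python
def nns(risk_unscreened, risk_screened):
    """Number needed to screen to prevent one event:
    1 / absolute risk reduction."""
    return 1 / (risk_unscreened - risk_screened)

# Placeholder risks: 2.78% CRC risk over follow-up if unscreened vs. 1.50%
# if screened gives an NNS of about 78, matching the high ADR group.
print(round(nns(0.0278, 0.0150)))  # 78
```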

However, the researchers also pointed out that efforts to increase ADR could result in more complications, such as perforations or gastrointestinal bleeding, as well as more frequent diagnosis and recommended surveillance for diminutive adenomas.

The study was limited by the fact that the endoscopists were either gastroenterologists or surgeons and that the study population comprised individuals who desired screening.

The UK Flexible Sigmoidoscopy Screening Trial was funded by the UK Medical Research Council and the National Institute for Health Research. The authors disclosed no conflicts of interest.


Adenoma detection rate (ADR) is an important quality indicator for colonoscopy; a higher ADR is associated with a lower risk of postcolonoscopy colorectal cancer (CRC). Flexible sigmoidoscopy (FS) is an evidence-based CRC screening modality, supported by multiple randomized trials reporting long-term reductions in CRC incidence and mortality. However, the impact of the ADR of endoscopists performing FS on long-term outcomes was not known.


In this post hoc analysis of the UK Flexible Sigmoidoscopy Screening Trial, the authors stratified the 13 endoscopy centers performing screening FS on 40,085 average-risk individuals aged 55-64 years into high, intermediate, and low groups by their ADRs (15%, 12%, and 9%, respectively) and compared the relative reduction in CRC incidence and mortality with 113,195 controls over a median of 17 years. The authors reported greater reductions in both CRC incidence and mortality between high and low detectors (relative reductions of 42% versus 28% for CRC incidence and 48% versus 32% for CRC mortality, respectively). Differences by ADR for distal CRC were more pronounced between high and low ADR centers (66% versus 45% for incidence and 78% versus 46% for mortality); however, the test for interaction was not statistically significant, suggesting the three ADR groups cannot be differentiated from each other on these outcomes.

While FS is rarely used for screening in the United States, and U.K. guidelines have also recently moved away from FS, the study illustrates that the quality of FS matters and that ADR can be a valid quality indicator for flexible sigmoidoscopy.

Aasma Shaukat, MD MPH AGAF, is Robert M. and Mary H. Glickman Professor of Medicine and Population Health and director of GI outcomes research at New York University. She reported having no relevant conflicts of interest.


Gastroenterology centers with higher adenoma detection rates (ADR) with the use of flexible sigmoidoscopy (FS) had a lower long-term colorectal cancer incidence and lower CRC mortality among its patients, according to a new study.

Detection and removal of polyps during colonoscopy screening is vital to the prevention of CRC, and previous research has shown that centers with higher detection rates are associated with lower rates of CRC diagnosis within 3-5 years after a negative screen.

In Clinical Gastroenterology and Hepatology, researchers led by Amanda J. Cross, PhD, a professor of cancer epidemiology at Imperial College London, published an analysis of the UK Flexible Sigmoidoscopy Screening Trial, which found that FS screening between the ages 55 and 64 led to a 35% reduction of CRC incidence and a 41% reduction in CRC over a mean follow-up 17.1 years. The screening program had no apparent effect on incidence and mortality of proximal cancers. The researchers speculated that this was because few patients underwent proximal examination during follow-up colonoscopy.

“Considering only 5% of participants were referred for follow-up colonoscopy and 4% were referred for surveillance, we conclude that the improved detection of adenomas at FS has a measurable impact on long-term distal CRC outcomes, even when there is infrequent colonoscopy use. It is possible that high detectors also were more adept at polypectomy than intermediate or low detectors, and achieved more complete resection of detected lesions,” the authors wrote.

The researchers analyzed data from 38,550 patients who underwent screening at 14 U.K. hospitals between 1994 and 1999. A single endoscopist was responsible for nearly all FS screens performed at each participating hospital.

The mean patient age was 60 years, and 49% were male. The researchers calculated ADRs for each center using the percentage of patients who had at least one adenoma detected during screening, which included any distal adenomas discovered during follow-up colonoscopy.

The ADR overall was 12%. The researchers used multivariate logistic regression to rank individual centers as having high (15%; five centers), intermediate (12%; four centers), or low (9%; four centers) detection rates.
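For readers who want the arithmetic behind these rankings, the sketch below shows how a center-level ADR is computed under the study's definition: the percentage of screened patients with at least one adenoma detected. The counts are hypothetical, chosen only to reproduce the 15%/12%/9% cutoffs; they are not trial data.

```python
# Minimal sketch of a center-level ADR calculation (hypothetical counts, not trial data).
# ADR = patients with at least one adenoma detected / patients screened, as a percentage.

def adenoma_detection_rate(patients_with_adenoma: int, patients_screened: int) -> float:
    """Return the adenoma detection rate as a percentage."""
    return 100.0 * patients_with_adenoma / patients_screened

# Hypothetical counts chosen to illustrate the study's high/intermediate/low cutoffs.
centers = {
    "Center A (high)": (450, 3_000),          # 15%
    "Center B (intermediate)": (360, 3_000),  # 12%
    "Center C (low)": (270, 3_000),           # 9%
}

for name, (with_adenoma, screened) in centers.items():
    print(f"{name}: ADR = {adenoma_detection_rate(with_adenoma, screened):.0f}%")
```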

There was a strong association between detection rates of small adenomas and a center’s ADR (P < .001), but not for large or advanced adenomas. In the high-detector group, 6.2% of patients screened were referred to colonoscopy versus 4.5% in the intermediate group and 4.5% in the low group. About half of colonoscopies were conducted by the same endoscopist who performed FS.

During follow-up, the distal CRC incidence was 1.5% in the high ADR group, 1.4% in the intermediate group, and 1.7% in the low group, and mortality rates were 0.4%, 0.4%, and 0.5%, respectively.

Compared with unscreened controls, risk of distal CRC was lowest among individuals who underwent screening in the high ADR group (hazard ratio, 0.34; 95% confidence interval, 0.27-0.42), followed by the intermediate group (HR, 0.46; 95% CI, 0.36-0.59), and the low ADR group (HR, 0.55; 95% CI, 0.44-0.68; P < .05 for all).

Compared with unscreened controls, CRC mortality was lower among individuals who underwent screening in the high ADR group (HR, 0.22; 95% CI, 0.13-0.37), followed by the intermediate group (HR, 0.30; 95% CI, 0.17-0.55), and the low ADR group (HR, 0.54; 95% CI, 0.34-0.86; P < .05 for between-group differences).

All-site CRC incidence followed similar trends, with the lowest risks in the high ADR group (HR, 0.58; 95% CI, 0.50-0.67), followed by intermediate ADR (HR, 0.65; 95% CI, 0.55-0.77) and low ADR groups (HR, 0.72; 95% CI, 0.61-0.85; between group differences not statistically significant).

All-site CRC mortality was lowest in the high ADR group (HR, 0.52; 95% CI, 0.39-0.69), followed by the intermediate group (HR, 0.53; 95% CI, 0.38-0.73), and the low ADR group (HR, 0.68; 95% CI, 0.51-0.92; between-group differences not statistically significant).
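As a rough reading aid, a hazard ratio maps to an approximate relative risk reduction of 1 − HR, which is how the all-site point estimates above translate into the percentage reductions quoted in commentary on this trial. A minimal sketch, using only the point estimates reported in the paragraphs above:

```python
# Reading hazard ratios as approximate relative reductions (relative reduction ≈ 1 - HR).
# Point estimates taken from the article above; shown for orientation only.

hazard_ratios = {
    "All-site CRC incidence, high ADR": 0.58,
    "All-site CRC incidence, low ADR": 0.72,
    "All-site CRC mortality, high ADR": 0.52,
    "All-site CRC mortality, low ADR": 0.68,
}

for label, hr in hazard_ratios.items():
    reduction = (1 - hr) * 100
    print(f"{label}: HR {hr:.2f} -> ~{reduction:.0f}% relative reduction vs. unscreened controls")
```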

The number needed to screen (NNS) to prevent one CRC diagnosis was 78 in the high ADR group (95% CI, 61-106), 103 in the intermediate group (95% CI, 74-171), and 125 in the low ADR group (95% CI, 82-256). The NNS to prevent one CRC death was 226 (95% CI, 159-387), 247 (95% CI, 165-490), and 349 (95% CI, 192-1,904), respectively.
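The NNS figures follow from standard arithmetic: NNS is the reciprocal of the absolute risk reduction between unscreened and screened groups. The sketch below uses hypothetical risks, not figures reported in the trial, chosen only to show how an NNS near the reported 78 can arise.

```python
# Sketch of the number-needed-to-screen arithmetic: NNS = 1 / absolute risk reduction.
# The risks below are hypothetical placeholders, not trial figures; they are chosen
# only to show how an NNS near the reported value of 78 can arise.

def number_needed_to_screen(risk_unscreened: float, risk_screened: float) -> float:
    """NNS to prevent one event, given absolute long-term event risks per group."""
    return 1.0 / (risk_unscreened - risk_screened)

nns = number_needed_to_screen(risk_unscreened=0.026, risk_screened=0.013)
print(f"NNS = {nns:.0f}")  # ~77, the same order as the reported 78
```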

However, the researchers also pointed out that efforts to increase ADR could result in more complications, such as perforations or gastrointestinal bleeding, as well as more frequent diagnosis and recommended surveillance for diminutive adenomas.

The study's generalizability is limited in that the endoscopists were either gastroenterologists or surgeons and the study population consisted of individuals who had expressed interest in screening.

The UK Flexible Sigmoidoscopy Screening Trial was funded by the UK Medical Research Council and the National Institute for Health Research. The authors disclosed no conflicts of interest.



FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


High GI spending reveals research, public health need

Article Type
Changed
Wed, 01/05/2022 - 11:04

 

GI, liver, and pancreatic diseases cost the U.S. health care system about $120 billion per year and account for approximately 250,000 deaths annually, according to a “conservative” estimate from a recent analysis.

These figures emphasize the need for more research funding in the area, along with additional clinical and public health initiatives, reported lead author Anne F. Peery, MD, of the University of North Carolina School of Medicine, Chapel Hill, and colleagues.

“Reports detailing the burden of GI diseases are necessary for clinical research, decision making, and priority setting,” the investigators wrote in Gastroenterology. “Our aim was to describe health care use, expenditures, and research funding across GI, liver, and pancreatic diseases in the United States.”

Dr. Peery and colleagues analyzed data from 14 sources, including the National Institutes of Health; the Centers for Disease Control and Prevention; the National Ambulatory Medical Care Survey; and others. GI-specific outcomes included mortality, readmissions, hospitalizations, office-based visits, and emergency department visits. The investigators also characterized trends in cancers, organ transplants, and GI endoscopy, as well as GI-specific health care costs and NIH research funding. Annual findings were presented for various periods.

Total GI health care spending was $119.6 billion in 2018, down from $135.9 billion in 2015. The top five most costly conditions were biliary tract diseases ($16.9 billion), esophageal disorders ($12.1 billion), abdominal pain ($9.5 billion), abdominal hernias ($9.0 billion), and diverticular disease ($9.0 billion). The investigators noted that medication costs were particularly high for two categories: inflammatory bowel diseases and esophageal disorders, which had prescription drug costs relative to total expenditures of 71% and 53%, respectively.

“This conservative estimate [of $119.6 billion] did not include most GI cancers and likely underestimated the costs associated with some GI conditions,” the investigators noted. “For example, the Medical Expenditure Panel Survey estimate associated with GI bleeding was $300 million. In comparison, the aggregate cost of GI bleeding was more realistically $3.7 billion, as estimated using inpatient data from the National Inpatient Sample.”

In 2016, the most common GI-related diagnosis in the U.S. was abdominal pain (15.7 million annual visits), followed by nausea and vomiting (5.0 million visits), gastroesophageal reflux disorder and reflux esophagitis (4.7 million visits), constipation (3.1 million visits), and abdominal wall/inguinal hernia (2.8 million visits).

The top three most common GI-related hospital admissions in 2018 were GI bleeding (1.3 million admissions), followed by cholelithiasis and cholecystitis (741,060 admissions), then pancreatitis (685,880 admissions). GI bleeding was also the leading cause of 30-day readmission in 2018 (84,533 readmissions).

“We found substantial numbers of GI conditions and symptoms listed in secondary positions on the discharge record,” the investigators wrote. “For example, liver disease accounted for 280,645 discharges with a primary diagnosis; however, there were 13-fold as many discharges (3.6 million in 2018) with liver disease as a secondary diagnosis. Including all diagnoses captures a burden of GI disease not previously reported.”
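The “13-fold” figure in that quote can be checked directly against the two discharge counts reported; a quick sketch of the ratio:

```python
# Quick check of the "13-fold" figure quoted above, using only the reported counts.
primary_dx = 280_645      # discharges with liver disease as the primary diagnosis
secondary_dx = 3_600_000  # "3.6 million" discharges with liver disease as a secondary diagnosis

print(f"{secondary_dx / primary_dx:.1f}-fold")  # ~12.8, i.e., roughly 13-fold
```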

In 2018 and 2019, GI diseases and cancers caused 255,407 annual deaths. The most common noncancer causes of death were alcohol-associated liver disease (24,110 deaths), hepatic fibrosis/cirrhosis (20,184 deaths), and GI bleeding (9,548 deaths). Among GI cancer–related deaths, colorectal cancer (CRC) caused the most (52,163 deaths), followed by pancreatic cancer (44,914 deaths) and hepatic/biliary cancer (44,914 deaths). The investigators noted that CRC was disproportionately common among non-Hispanic Black individuals, whereas gastric cancer was relatively common among Hispanic individuals.

“GI cancers account for a large number of diagnoses and deaths annually, with persistent disparities in incidence and mortality rates by race/ethnicity,” the investigators wrote. “Racial, ethnic, and regional disparities in access to most GI endoscopy procedures exist, which suggests an unmet need for GI procedures across the United States.”

A total of 22.2 million endoscopies were performed in 2019, most commonly colonoscopy (13.8 million procedures), followed by upper endoscopy (7.5 million procedures) and flexible sigmoidoscopy (379,883 procedures).

In 2020, the NIH spent $3.1 billion, or approximately 7.5% of its budget, on GI disease research. Digestive diseases captured the bulk of this spending, with $2.3 billion. In the same year, the NIH spent 10.5% of its cancer research budget on GI cancers, with the greatest proportion ($325 million) awarded to CRC research.
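The shares quoted here can be back-solved into implied totals as a sanity check. The arithmetic below uses only the numbers reported in the paragraph above; the implied total NIH budget is an inference from the stated share, not a reported figure.

```python
# Back-of-envelope check on the 2020 NIH figures quoted above. The implied total
# budget is back-solved from the reported share; it is an inference, not a reported number.

gi_research_spend = 3.1e9    # $3.1 billion on GI disease research
gi_share_of_budget = 0.075   # ~7.5% of the total NIH budget

implied_nih_budget = gi_research_spend / gi_share_of_budget
print(f"Implied total NIH budget: ${implied_nih_budget / 1e9:.1f} billion")    # ~$41.3 billion

digestive_spend = 2.3e9      # $2.3 billion of the GI total went to digestive diseases
print(f"Digestive diseases' share: {digestive_spend / gi_research_spend:.0%}")  # ~74%
```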

“Carefully examining the data in this report can help generate areas for future investigation, prioritize research funding, identify areas of unmet need or disparities, and provide an important overview of the impact of digestive and liver conditions,” the investigators concluded. “We hope that others will use this report as motivation to take a deeper dive into individual diseases. There is much to learn from carefully studying existing data sources.”

The study was supported by the National Center for Advancing Translational Sciences, National Institutes of Health. The investigators disclosed no conflicts of interest.



FROM GASTROENTEROLOGY
