AMA delegates decry ICD-10, EHRs
CHICAGO – Coding and computers were among key concerns for physician leaders at the American Medical Association’s annual House of Delegates meeting.
Resolutions from several delegations aimed to delay or scuttle the transition to the newest incarnation of the International Classification of Diseases, ICD-10.
Delegates from the American College of Rheumatology (ACR) introduced a resolution urging the association to keep up its campaign to stop ICD-10 implementation, specifically via federal legislation.
Without a statement supporting delay, there is a "perception out there that the AMA has essentially caved on the issue of ICD-10," said ACR delegate Dr. Gary Bryant. "Now that’s not my perception, but I believe it’s the perception, to some degree, among American physicians."
The House adopted instead a resolution calling for the AMA to support federal legislation to delay ICD-10 implementation for 2 years. During that time, payers would not be allowed to deny payment based on the specificity of the diagnosis, but they would be required to provide feedback in the case of an incorrect diagnosis. The resolution was brought by the Colorado delegation.
Dr. Reid Blackwelder, president-elect of the American Academy of Family Physicians, spoke in favor of the resolution.
"It’s not a question of whether we’re moving from ICD-9; we are," he said. Instead, the resolution "allows our members to have a period of time to get used to the sticker shock."
Another issue is that "ICD-10 initially came into use in 1994 and was never designed to be computer-savvy. ICD-11 is due in 2015, and will be designed to be easily coded by computer software," said Dr. Peter Kaufman, the AGA’s delegate to the AMA. "If we go to ICD-10 in 2014, or even 2016, when will we be able to go to the newer, more appropriate 11th Revision?"
The AMA has estimated that the cost of implementing ICD-10 could range from $83,290 to more than $2.7 million per practice, depending on practice size.
Delegates cited major problems with electronic health record interoperability, and some also sought to slow the adoption of electronic health records.
Karthik Sarmah, a medical student alternate delegate in the California delegation, cited interoperability as a major concern.
"The lack of interoperability is the primary driver of why so many people in this room hate their EHR system," he said, adding that interoperability standards exist, but that there are no incentives for vendors to create ways to allow physicians to share their patient data with each other.
Dr. Melissa Garretson, a delegate from the American Academy of Pediatrics, agreed.
"I can’t tell you the number of times I have to repeat labs" and CT scans because data can’t be accessed from other physicians, Dr. Garretson said. She called the lack of interoperability an unfunded mandate on physicians because the vendors aren’t making it possible. "If we force them to do this through legislation, it will finally happen."
Kaufman testified, "There are strong interoperability standards already out there. They may only cover limited amounts of data, but they work between programs well. The problem is that while they were required when EHRs were certified by CCHIT, with the advent of Meaningful Use, that requirement to use the same specific standard was no longer mandatory." Kaufman went on to state that the standards committees were woefully short of practicing physicians, and called for doctors to join the process so the standards could be completed and be workable for clinicians.
Other delegates were skeptical.
"I have been waiting now for about 12 years for this interoperability to occur and I think I’ll either be retired or dead before it finally does," said Dr. Arthur E. Palamara, a vascular surgeon with the Florida delegation.
The House approved a resolution "seeking legislation or regulation to require all EHR vendors to utilize standard and interoperable software technology to enable cost efficient use of electronic health records across all health care delivery systems including institutional and community based settings of care delivery."
On Twitter @aliciaault
AT THE AMA HOUSE OF DELEGATES
FIT sensitivity varies by lesion type, location
Fecal immunochemical testing has lower sensitivity for advanced neoplasms of earlier stage, nonpolypoid morphology, and proximal location.
The finding "highlights the importance of future studies exploring whether this limitation really leads to less protection against proximal colon cancer," wrote Dr. Han-Mo Chiu in the July issue of Clinical Gastroenterology and Hepatology.
In the study, which was also published online Jan. 31, Dr. Chiu of the National Taiwan University Hospital, in Taipei, looked at 18,296 asymptomatic, prospectively enrolled, consecutive patients aged 50 years or older who underwent screening colonoscopy at the Health Management Center at the University Hospital between September 2005 and September 2010.
All participants collected their own fecal samples at home using the fecal immunochemical test (FIT), which detects human hemoglobin in the feces, 1 day prior to colonoscopy.
Overall, 59.2% of patients were male, and the mean age was 59.8 years.
The researchers found that the FIT was positive in 1,330 subjects (7.3%).
Colonoscopy, however, revealed nonadvanced adenomas in 3,385 patients (18.5%), advanced adenomas in 632 (3.5%), cancer in 28 (0.15%), and invasive cancer in 23 (0.13%) of the subjects.
That amounted to a sensitivity of FIT for nonadvanced adenomas, advanced adenomas, and cancer of 10.6%, 28.0%, and 78.6%, respectively, wrote the authors.
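Sensitivity here is simply the share of patients with a given lesion whose FIT result was positive. A minimal Python sketch of that arithmetic; the per-lesion detected counts below are back-calculated from the published rates rather than reported in the study, so treat them as approximations:

```python
# Back-of-the-envelope check of the reported FIT sensitivities.
# (lesion count on colonoscopy, reported FIT sensitivity)
lesions = {
    "nonadvanced adenoma": (3385, 0.106),
    "advanced adenoma": (632, 0.280),
    "cancer": (28, 0.786),
}

for name, (n_with_lesion, sensitivity) in lesions.items():
    # Implied number of lesion-bearing patients flagged by FIT
    detected = round(n_with_lesion * sensitivity)
    print(f"{name}: ~{detected} of {n_with_lesion} detected by FIT")
```

The low implied yield for adenomas, and the near-complete capture of cancers, is the stage dependence the authors describe in the next paragraph.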
Looking at neoplasms classified according to World Health Organization criteria, FIT sensitivity also proved highly stage dependent. For example, sensitivity was 28.0% for advanced adenomas, but climbed to 66.7% for carcinoma in situ plus T1 cancers, and 100% for T2 to T4 cancers (P for trend less than .001).
The location of the lesion also affected the reliability of the test. Excluding patients who were found to have both proximal and distal lesions, Dr. Chiu calculated that the sensitivity of FIT was significantly lower for proximal than for distal advanced neoplasms (24.1% vs. 34.3%; P less than .013).
Finally, the researchers looked at the reliability of FIT in relation to the lesion morphology.
In this case, they determined that FIT sensitivity also was significantly lower for proximal or distal nonpolypoid advanced neoplasms, compared with polypoid lesions at a corresponding location (P less than .001 for both).
Regarding this last finding, "It is still not well understood why these lesions are less sensitive to detection by the FIT," wrote Dr. Chiu.
He added: "We speculated that their smaller surface areas in contact with feces, the sparser vasculature in the mucosa, and the more advanced degree of hemoglobin degradation during bowel passage for proximally located lesions provide partial explanations, and this issue is worth ... further investigation."
Nevertheless, early detection of these lesions, which "carry a higher risk for malignant transformation and invasiveness at a relatively smaller size when compared with their polypoid counterparts ... may help prevent interval cancer and improve the effectiveness of screening programs."
The authors disclosed no individual conflicts of interest. They reported that this study was partially supported by a research grant from the Department of Health of Taiwan.
In an editorial accompanying the article, Callum G. Fraser, Ph.D., Dr. James E. Allison, Dr. Graeme P. Young, and Stephen P. Halloran questioned whether the results of the present study could safely be generalized to all FIT tests that offer qualitative, positive or negative results.
"It is vital for users to recognize that all FITs are not the same and that, using the same specimens, different FITs yield markedly different positivity rates, sensitivities, and specificities," they cautioned.
They also proposed FIT testing that uses different hemoglobin cut-off rates for different populations, although they concede that this would "undoubtedly [be] difficult to institute in practice."
Nevertheless, they wrote, "It has become well recognized that fecal hemoglobin concentration is affected by age, with older people having a higher concentration than younger people, and by sex, with men having a higher concentration than women."
Therefore, "it is likely that programs would benefit considerably if different cut-off fecal hemoglobin concentrations were used as criteria for the initiation of further investigation, usually colonoscopy, for different groups."
Finally, the editorialists questioned the "often-quoted take-away message" that FIT testing, with its high false-negative rate for small, early cancers, lacks much utility.
"This conclusion does not take into consideration that the reported sensitivities are examples of test application sensitivity (test once only) and not test programmatic sensitivity (test repeatedly performed in a program of repeated screening episodes over time), as per recommendations for population screening with FITs," they wrote.
Indeed, as pointed out by Dr. Chiu and colleagues, "good programmatic sensitivity allows for missed advanced adenomas and early cancers to be detected in subsequent screens before they become fatal cancers."
Dr. Fraser is at the University of Dundee, Scotland; Dr. Allison is affiliated with the University of California, San Francisco; Dr. Young is from Flinders University, in Adelaide, South Australia; and Mr. Halloran is a pathologist from the University of Surrey, England. Dr. Fraser disclosed financial relationships with several companies marketing diagnostic equipment, including the makers of fecal blood testing kits.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Fecal immunochemical testing of a sample of asymptomatic adults demonstrated sensitivity rates for nonadvanced adenomas, advanced adenomas, and cancer of 10.6%, 28.0%, and 78.6%, respectively.
Data source: A prospective cohort of 18,296 Chinese subjects who underwent screening colonoscopy and FIT during a 5-year period.
Disclosures: The authors disclosed no individual conflicts of interest. They reported that this study was partially supported by a research grant from the Department of Health of Taiwan.
IBD gains foothold in Asia
In what they called the first comparative epidemiologic study of inflammatory bowel disease across the Asia-Pacific region, researchers calculated an incidence of 1.37 cases/100,000 people in Asia, and 23.67 cases/100,000 people in Australia.
And although these rates are dwarfed by those of Western nations, "these figures still represent a clinically important disease burden, considering that 20 years ago, IBD [inflammatory bowel disease] was rare or almost nonexistent in Asia," wrote Siew C. Ng, Ph.D., in the July issue of Gastroenterology (doi: 10.1053/j.gastro.2013.04.007).
Source: American Gastroenterological Association
In the report, Dr. Ng of the Chinese University of Hong Kong and her colleagues looked at data from the Asia-Pacific Crohn’s and Colitis Epidemiology (ACCESS) study.
That database included information from 21 centers in 12 cities in nine countries, including China, Australia, Hong Kong, and Thailand.
Specifically, the researchers focused on incident IBD cases diagnosed between April 1, 2011, and March 31, 2012, all of which were confirmed clinically as well as endoscopically, histologically, and radiographically.
The researchers found that during the 1-year period, there were 419 new IBD cases, including 232 (55.4%) classified as ulcerative colitis (UC), 166 (39.6%) as Crohn’s disease (CD), and 21 (5.0%) undetermined.
That translated to a crude annual overall incidence of IBD of 1.37/100,000 people (95% CI: 1.25-1.51) in Asia, and 23.67 (95% CI: 18.46-29.85) in Australia.
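A crude annual incidence is just new cases divided by the population at risk, scaled to 100,000. A brief sketch of the arithmetic; the catchment population below is hypothetical, chosen only to reproduce the reported Asian rate (the study's actual denominators are not given here):

```python
def crude_incidence_per_100k(new_cases: int, population: int) -> float:
    """Crude annual incidence per 100,000 people:
    (new cases during the year / population at risk) x 100,000."""
    return new_cases / population * 100_000

# Hypothetical illustration: 137 new cases among a catchment of
# 10 million people yields the 1.37/100,000 rate reported for Asia.
print(crude_incidence_per_100k(137, 10_000_000))
```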
The authors then looked at individual patient demographics.
Overall, the mean age at the time of diagnosis was 39 years, with a median number of months from symptom onset to diagnosis of 5 months in Asia and 6 months in Australia.
They also found that while inflammatory disease phenotype was reported in 66% of patients in Asia, it was found in 88% of Australian patients (P = .005).
In addition, 3% of patients in Asia reported a family history of IBD, compared with 17% of patients in Australia (P less than .001).
Severity of disease was also catalogued. Penetrating disease characterized 19% and 2% of cases in Asia and Australia, respectively (P = .012).
Meanwhile, stricturing disease was diagnosed in 17% of Asian cases and 10% of Australian cases (P = .277), and perianal disease in 18% and 12% of Asians and Australians, respectively (P = .356).
Finally, the researchers compared treatment patterns between Asia and Australia and found that while use of antibiotics, immunosuppressives, and biologic therapy was similar, mesalazine (79% vs. 62%; P < .012) and corticosteroids (62% vs. 28%; P < .0001) were more commonly prescribed in Australia than in Asia.
According to Dr. Ng, in a region that is host to more than 4.2 billion people, "The emerging incidence in Asia offers a unique opportunity to study etiologic factors, particularly factors associated with Western lifestyle, including improved home amenities, refrigeration, consumption of protein- and carbohydrate-rich diets, widespread use of antibiotics, vaccination, and industrial pollution."
She added: "The complex disease behavior for CD in Asia has major implications for local health care planning and resource allocation."
The authors disclosed no conflicts of interest. The study was supported by Ferring Pharmaceuticals, maker of mesalazine, which is used in IBD.
FROM GASTROENTEROLOGY
Major finding: In Asia, the crude annual overall incidence of IBD is 1.37/100,000 people (95% CI: 1.25-1.51); in Australia it is 23.67 (95% CI: 18.46-29.85).
Data source: The Asia-Pacific Crohn’s and Colitis Epidemiology (ACCESS) study.
Disclosures: The authors disclosed no conflicts of interest. The study was supported by Ferring Pharmaceuticals, maker of mesalazine, which is used in IBD.
Teduglutide cut parenteral intake at 52 weeks
Two-thirds of patients with short bowel intestinal failure who took teduglutide reduced their parenteral nutrition intake by 20% or more, for an average reduction of 5 L of parenteral nutrition per week.
Moreover, once-daily dosing was safe and effective, reported Dr. Stephen J.D. O’Keefe and his coauthors in the July issue of Clinical Gastroenterology and Hepatology.
According to the authors, teduglutide, a glucagon-like peptide-2 analogue, produces villous hypertrophy, retards gastric secretion and emptying, and increases mucosal blood flow and absorption.
Source: American Gastroenterological Association
In a 28-week extension study of an initial 24-week, double-blind, randomized controlled trial of teduglutide, Dr. O’Keefe, of the University of Pittsburgh, looked at adult patients with small bowel intestinal failure caused by intestinal resection.
All patients were dependent on parenteral nutrition (PN) or fluid and electrolytes at least 3 times per week, for at least 12 months before the start of the study.
Antimotility and antidiarrheal agents were permitted, but only if their use was stable for 4 or more weeks prior to study baseline.
At the start of the study, published online in January (doi: 10.1016/j.cgh.2012.12.029), all patients’ fluid intakes were optimized to maintain a urine output of between 1 and 2 L daily, with stable blood creatinine concentrations and a urine sodium content of at least 20 mEq/L.
Patients were then randomized to receive subcutaneous, once-daily injections of teduglutide (dosed at either 0.05 or 0.10 mg/kg per day) or placebo.
The 56 patients who received the drug in the initial 24-week study (Gut 2011;60:902-14) were then invited to participate in a 28-week extension trial. Of these, 52 agreed (25 patients taking 0.05 mg/kg per day and 27 patients taking 0.10 mg/kg per day).
By 52 weeks, 68% of patients in the teduglutide 0.05-mg/kg per day group and 52% of patients in the 0.10-mg/kg per day group were responders, defined as a reduction of 20% or more from baseline PN volume, according to the authors.
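The responder endpoint is a simple relative-change rule on weekly PN volume. A minimal sketch of that classification, with illustrative values only (not individual patient data from the study):

```python
def is_responder(baseline_pn_l_per_wk, week52_pn_l_per_wk, threshold=0.20):
    """Responder per the trial endpoint: a reduction of 20% or more
    from baseline weekly parenteral-nutrition (PN) volume."""
    reduction = (baseline_pn_l_per_wk - week52_pn_l_per_wk) / baseline_pn_l_per_wk
    return reduction >= threshold

# Hypothetical patients: a ~52% reduction qualifies; a 10% reduction does not.
is_responder(9.4, 4.5)    # True
is_responder(10.0, 9.0)   # False
```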
Indeed, the authors reported that patients on teduglutide 0.05 mg/kg per day decreased their PN volume by 4.9 L/week (52%), while patients taking teduglutide 0.10 mg/kg per day decreased their PN volume by 3.3 L/week (26%).
Four subjects in the study were completely weaned from PN. "Three patients in the teduglutide 0.05-mg/kg per day treatment group became completely independent of PN after 25, 2, and 6.5 years [of parenteral support], receiving 5.4, 3.5, and 12.0 L/d parenteral support per week at baseline, respectively," wrote the authors. "Another patient receiving the teduglutide 0.10-mg/kg per day dose had been receiving parenteral support for 3.7 years and received 4.5 L/day parenteral support at baseline."
Looking at safety, although no "clinically meaningful differences" were observed through the study period in terms of patient vital signs, electrocardiograms, body weight, and physical examinations, the authors did report that 50 of the 52 patients reported at least one treatment-emergent adverse event, most commonly headache (35%), nausea (31%), or abdominal pain (25%).
The authors conceded that while they assume that reductions in PN requirements would translate to improvements in quality of life, the study was not sufficiently powered to address that question.
"It remains to be seen what the overall applicability of this product will be in clinical practice," they added.
The study was sponsored by NPS Pharmaceuticals, maker of teduglutide. Dr. O’Keefe and all his coauthors disclosed financial ties to NPS.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: By 52 weeks, 68% of small bowel intestinal failure patients taking teduglutide 0.05 mg/kg per day achieved reductions of 20% or greater in their required parenteral nutrition volume.
Data source: A 24-week, double-blind, randomized controlled study of teduglutide, plus a 28-week double-blind extension study, for a total 52 weeks.
Disclosures: The study was sponsored by NPS Pharmaceuticals, maker of teduglutide. Dr. O’Keefe and all his coauthors disclosed financial ties to NPS.
Dutch study confirms RFA for Barrett's durable at 5 years
At 5 years after radiofrequency ablation for Barrett’s esophagus, 93% of patients remained in complete, sustained remission.
That’s the finding from a Netherlands cohort with the "longest duration of follow-up of patients undergoing RFA for BE containing high-grade intraepithelial neoplasia and/or early-stage cancer," wrote Dr. K. Nadine Phoa and coauthors. The study, published online April 1, appears in the July issue of Gastroenterology.
The investigators cautioned, however, that "both cancer recurrences occurred after almost 5 years of follow-up," demonstrating the need for long-term monitoring in this population (doi: 10.1053/j.gastro.2013.03.046).
Dr. Phoa, of the Academic Medical Center in Amsterdam, and her colleagues looked at data from four distinct consecutive cohort studies: AMC-I, which was the first pilot study of circumferential RFA using the HALO360 ablation device; AMC-II, the second, prospective study of the device; EURO-I, the first European multicenter RFA trial; and AMC-IV, a prospective, randomized, multicenter trial of the device.
Of the 55 patients enrolled in one of the included trials and treated at Dr. Phoa’s institution (45 men; mean age, 65 years), 72% underwent endoscopic resection, either piecemeal or en bloc, before the first RFA treatment.
After RFA, complete remission of neoplasia and/or complete remission of intestinal metaplasia was achieved in 54 of 55 patients; of the 54 patients, 8 withdrew during follow-up due to unrelated death, comorbidity, or emigration, leaving 46 patients with a median follow-up of 61 months and six endoscopies for analysis.
The investigators found that among these, sustained complete remission of neoplasia and complete remission of intestinal metaplasia were maintained in 43 of 46 patients (93%; 95% confidence interval, 82.5-97.8).
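The 93% figure is a binomial proportion (43 of 46), and the reported interval is consistent with a Wilson score interval, a standard choice for proportions at small n. A stdlib-only sketch (the paper's actual CI method is not stated in this summary):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Remission maintained in 43 of 46 patients.
lo, hi = wilson_ci(43, 46)
```

This reproduces the reported bounds of roughly 82.5% and 97.8%.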
Among the 3 patients who did have recurrence, one was 71 years old and initially presented with early-stage cancer and multifocal high-grade intraepithelial neoplasia. At the 5-year visit, a "small area with columnar mucosa with low-grade intraepithelial neoplasia was discovered," wrote the authors, and "18 months after argon plasma coagulation, no endoscopic or histological evidence of residual BE was found."
The second case was an 81-year-old patient with baseline early-stage cancer and residual BE with high-grade intraepithelial neoplasia. During the patient’s fifth endoscopy, at 52 months post RFA, "a 6-mm lesion was seen and radically removed en bloc by endoscopic resection-cap technique," the authors wrote.
"Histological evaluation showed a radically resected mucosal cancer without evidence of lymph-vascular invasion," they added, and at 3 and 9 months post resection, no endoscopic or histologic evidence of neoplasia was found.
Finally, the third case of recurrence was seen in a 62-year-old patient with baseline early-stage cancer and high-grade intraepithelial neoplasia, who, at the 5-year visit, had an elevated Barrett’s island containing carcinoma 2 cm above the neo-squamocolumnar junction.
"The lesion was radically removed en bloc by endoscopic resection-cap technique," wrote the authors, and as in the case of the prior two lesions, 3 months later, "no endoscopic or histological evidence of neoplasia or intestinal metaplasia was found."
Accounting for these three cases, the authors performed a Kaplan-Meier analysis and found a recurrence-free proportion of 90% at 5 years.
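The Kaplan-Meier estimator handles exactly this situation, where some patients recur and others leave follow-up (are censored) at different times. A minimal, self-contained sketch of the product-limit calculation; this is illustrative only, not the authors' analysis code:

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier (product-limit) estimator.
    `times`: follow-up times; `events`: 1 = recurrence, 0 = censored.
    Returns (time, survival probability) at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        c = sum(1 for tt, _ in data if tt == t)   # all leaving risk set at t
        if d:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve
```

For example, with follow-up times [1, 2, 3] and events [1, 0, 1], the estimate drops to 2/3 at the first event and the censored subject at time 2 simply leaves the risk set without changing the curve.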
According to the authors, even though remission was reestablished in all 3 patients who recurred, "this study also demonstrates how small and subtle recurrences can be."
Indeed, "even a minimal area of residual Barrett’s might be at risk for malignant progression, and this emphasizes the importance of a dedicated treatment protocol and careful endoscopic inspection to ensure complete eradication of all Barrett’s epithelium."
One of Dr. Phoa’s coauthors disclosed financial relationships with BÂRRX Medical, maker of the HALO device, and other pharmaceutical and device makers. The researchers wrote that BÂRRX Medical also supported this study.
FROM GASTROENTEROLOGY
Major finding: Sustained complete remission of neoplasia and complete remission of intestinal metaplasia were maintained at 5 years in 43 of 46 patients with Barrett’s esophagus treated with radiofrequency ablation (93%; 95% confidence interval, 82.5-97.8).
Data source: Four cohorts comprising 46 patients treated at Amsterdam’s Academic Medical Center.
Disclosures: One of Dr. Phoa’s coauthors disclosed financial relationships with BÂRRX Medical, maker of the HALO device, and other pharmaceutical and device makers. The researchers wrote that BÂRRX Medical also supported this study.
U.K. data confirm efficacy of RFA for Barrett's esophagus
Data from a U.K. registry reveal that 86% of patients with Barrett’s esophagus achieved complete remission of high-grade dysplasia after radiofrequency ablation, with only 9% experiencing adverse events.
"The safety and short-term efficacy of radiofrequency ablation as a minimally invasive intervention for premalignant dysplastic Barrett’s esophagus are now beyond debate," wrote Dr. Rehan J. Haidry and his coauthors in the July issue of Gastroenterology (doi: 10.1053/j.gastro.2013.03.045).
Indeed, radiofrequency ablation (RFA) "has provided clinicians in specialist centers a valuable adjunct to more established surgical treatments."
Dr. Haidry, of the National Medical Laser Centre at University College London, and his colleagues analyzed data from the U.K. National Halo RFA Registry, founded in 2008 to audit outcomes of all patients undergoing ablation using the Halo device for high-grade dysplasia and early cancer in Barrett’s esophagus.
The cohort included 335 patients (mean age, 68.1 years); 81% were men and 100% were white.
Dr. Haidry calculated that the median number of RFA treatments was 2 (range, 1-6), and they took place over a median period of 11.6 months. The mean length of Barrett’s esophagus to be ablated was 5.8 cm. In all, 72% of patients were classed as having high-grade dysplasia, 24% as having low-grade dysplasia, and 4% as having intramucosal cancer.
About half the patients (49%) underwent endoscopic resection before starting RFA.
The researchers then combed the records for evidence of the primary outcomes of this analysis: complete reversal of high-grade dysplasia and complete reversal of all dysplasia 12 months from the index RFA treatment.
They found that at 1 year post RFA, 86% of patients had achieved complete reversal of high-grade dysplasia, and 81% showed complete reversal of all dysplasia.
Nevertheless, the authors also found 12 cases of recurrent neoplasia after "apparently successful" ablation, occurring at a mean 8 months after completion of the protocol.
Additionally, "5.1% have progressed to invasive disease at their most recent follow-up biopsy," the authors wrote.
Looking at the safety profile of RFA, the investigators uncovered one case of perforation in a patient who was treated with a 34-mm balloon (since removed from the market, according to the researchers), and 30 cases (9%) of stricturing, all of which were managed endoscopically.
The authors conceded several limitations to this "real world" analysis of what they called "the largest series to date of patients with early Barrett’s neoplasia undergoing RFA."
For example, "the proposed protocol states that all patients should undergo ablation every 3 months until the 12-month period, at which point the end-of-protocol biopsy defines treatment success or failure," wrote Dr. Haidry.
"Within the confines of varied practice nationwide and diversity in resources, these strict timelines were difficult to achieve."
Moreover, "although the durability of favorable response appears promising, our median follow-up time of 19 months is short," added the authors, noting that two late relapses occurred more than 4 years after "apparently successful" ablation.
"This highlights the importance of long-term follow-up of these patients."
The researchers disclosed that one of their coauthors has financial ties to the makers of the Halo device. The work was supported by the Cancer Research UK (CRUK) University College London Early Cancer Medicine Centre, and was conducted at a facility sponsored by the U.K. Department of Health.
Ablation may be a “game changer” for high-risk patients with Barrett’s esophagus: It offers the potential to change the management of this precancerous condition from surveillance for early cancer to prevention. We need to confirm several factors, however, to determine if ablation is both effective and practical, especially as compared with surveillance. In particular, does it decrease the risk of important endpoints such as cancer or cancer death; is the decrease in cancer risk durable; and can it decrease the need for alternative strategies (e.g., repeated endoscopies for surveillance)?
The two articles recently published in Gastroenterology ("Dutch study confirms RFA for Barrett's durable at 5 years" and "One-third recur after RFA in Barrett's esophagus") provide insights into the durability of ablation after the initial successful elimination of Barrett’s esophagus. Reported from three different multicenter groups, from three different countries, the somewhat disparate results provide a greater understanding of ablation outcomes. Together, the studies indicate that:
- Most patients have durable eradication of their dysplasia; thus, ablation is a reasonable long-term strategy for high-risk patients (e.g., patients with high-grade dysplasia).
- There is a low but significant risk of recurrent metaplasia and dysplasia. In a study from the United Kingdom, among 208 patients who achieved remission, only 9% had recurrent intestinal metaplasia by the end of follow-up; of those with recurrence, however, 47% had recurrent dysplasia. The study from the Netherlands found recurrent esophageal metaplasia in only 6% of their 54 patients; among these patients, one had low-grade dysplasia, one had high-grade dysplasia, and one had a small cancer. In contrast, the U.S. Multicenter Consortium found that, among patients who were in remission by 24 months, 20% developed recurrent metaplasia within 1 year and 33% had metaplasia after 2 years. These findings suggest that continued endoscopic surveillance is required until better predictors of risk are available.
- The differences in results between the centers suggest we can learn more about how to optimize outcomes. Comparative trials are needed to confirm the optimum methods for both ablation and postablation acid suppression, which differed somewhat between the centers.
Douglas A. Corley, M.D., Ph.D., is in the division of research, Kaiser Permanente Northern California, Oakland, and Kaiser Permanente San Francisco Medical Center. He reported no conflicts of interest.
FROM GASTROENTEROLOGY
Major finding: At 1 year post radiofrequency ablation for Barrett’s esophagus, 86% of patients had achieved complete reversal of high-grade dysplasia, and 81% complete reversal of all dysplasia.
Data source: The U.K. National Halo RFA Registry.
Disclosures: The researchers disclosed that one of their coauthors has financial ties to the makers of the Halo device. The work was supported by the Cancer Research UK (CRUK) University College London Early Cancer Medicine Centre, and was conducted at a facility sponsored by the U.K. Department of Health.
Test-and-treat approach best in Crohn's disease
In Crohn’s disease that is no longer responsive to infliximab, a testing-based strategy for regaining treatment efficacy is more cost-effective than is empiric dose escalation, wrote Dr. Fernando S. Velayos and colleagues. The study was published in the June issue of Clinical Gastroenterology and Hepatology.
In a decision analytical model conducted from the perspective of a third-party payer, Dr. Velayos of the University of California, San Francisco, looked at two cohorts of Crohn’s patients with loss of response to infliximab.
"The model included only infliximab and not other TNF antagonists given that, first, infliximab drug and antibody concentrations are the only commercial tests currently available in the United States, and, second, sufficient published data exist for infliximab to populate the necessary efficacy and cost inputs," he noted.
Quality-adjusted life years (QALYs) and costs were calculated based on a 1-year time horizon; costs included treatment and health state costs, but not indirect costs such as time missed from work, wrote the authors.
The base case in this model was a 35-year-old man weighing 70 kg with moderate to severe active ileocolonic Crohn’s disease who had achieved remission after infliximab induction, but experienced Crohn’s-like symptoms during maintenance. Additionally, the model assumed that nonbiologic therapies had already been exhausted, that the patient did not have Clostridium difficile, and that surgery was not warranted.
Each cohort was then subjected to one of two strategies: testing-based treatment or empiric treatment.
In the former strategy, all patients received testing to detect antibodies to infliximab and to quantify the serum infliximab concentration.
Patients who had antibodies to infliximab were then switched to a different TNF antagonist, adalimumab.
Patients who lacked infliximab antibodies but had a therapeutic serum concentration of the drug, on the other hand, underwent computed tomography enterography and/or colonoscopy.
A finding of active inflammation meant surgery, and a finding of no active inflammation meant continuation of infliximab "for symptoms presumed caused by some other disease process that does not clearly justify immediate surgery or change in Crohn’s therapy."
The empiric strategy, meanwhile, followed guidance from the World Congress of Gastroenterology, whereby nonresponders first underwent an increase in the infliximab dose; if they still failed, they were switched to adalimumab.
Further failure meant an increased adalimumab dose, and, finally, if Crohn’s symptoms were still present, the patient proceeded to surgery.
Dr. Velayos found that the testing strategy yielded QALYs similar to those of the empiric strategy (0.801 vs. 0.800). However, the testing approach was less expensive, at $31,870, compared with $37,266 for the empiric strategy.
The researchers also found that although the proportion of patients with response and in remission was similar at 1 year for both the testing and empiric cohort, "this was achieved differently."
Specifically, patients in the testing group underwent more surgery (48% versus 34% in the empiric group) and had substantially lower use of high-dose biological therapy (41% versus 54%).
In addition, more patients in the testing strategy were managed without biological therapy (34% versus 27%, respectively).
Moreover, "Even in situations in which the empiric strategy was not dominated by the testing strategy and thus an incremental cost-efficiency ratio was calculated, the estimates ranged between $500,000 to more than $5 million per QALY gained, well in excess of the $50,000 to $100,000 per QALY considered to be a reasonable cost-effectiveness threshold."
Several authors, including Dr. Velayos, disclosed ties with pharmaceutical companies. The researchers said the study was funded by Prometheus Laboratories, maker of diagnostic laboratory tests for use in Crohn’s disease and other conditions.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: A test-and-treat approach to refractory Crohn’s disease was less expensive, at $31,870, compared with $37,266 for an empiric approach.
Data source: A decision analysis comparing the cost effectiveness of a testing-based algorithm with an empiric algorithm.
Disclosures: Several authors, including Dr. Velayos, disclosed ties with pharmaceutical companies. The researchers said the study was funded by Prometheus Laboratories, maker of diagnostic laboratory tests for use in Crohn’s disease and other conditions.
Universal screening doesn't pay in celiac disease
Serologic screening for celiac disease in symptomatic and high-risk children is more cost effective than universal screening, at least with respect to future bone disease.
Indeed, the current standard of practice of selective screening is also associated with greater quality-adjusted life year (QALY) gains over universal screening, reported Dr. K.T. Park in the June issue of Clinical Gastroenterology and Hepatology.
In light of "ongoing clinical concern that current practice of celiac disease screening misses a considerable proportion of asymptomatic celiac disease patients," Dr. Park of Stanford (Calif.) University and colleagues developed a decision analytic Markov model of 12-year-old cohorts (1,000 male and 1,000 female) with population-based prevalence of celiac disease in North America (Clin. Gastroenterol. Hepatol. 2013 June [doi:10.1016/j.cgh.2012.12.037]).
They used hip bone and vertebral fractures as clinical endpoints to assess the cost effectiveness of either universal serologic screening for celiac disease or selective screening in only symptomatic or high-risk children.
"Suboptimal bone health in the form of nontraumatic fractures is an established risk factor for celiac disease patients who are nonadherent to a gluten-free diet, or have undiagnosed subclinical disease," they explained.
Selective screening – the current standard of care – included screening of high-risk children, such as those with type 1 diabetes mellitus; Down, Turner, and Williams syndromes; IgA deficiency; systemic lupus; autoimmune thyroiditis; and those with a first-degree and/or second-degree relative with celiac disease.
Selective screening also included screening of symptomatic children exhibiting diarrhea, abdominal pain, bloating, and other irritable bowel–like symptoms, as well as poor growth, wasting, failure to thrive, or anemia.
In the model, "Any positive serologic screens required diagnostic confirmation via endoscopic duodenal mucosal biopsies," wrote the authors.
"Once a celiac disease patient started lifelong therapy by maintaining a strict gluten-free diet, patients were subject to natural adherence and nonadherence rates reported in the literature."
Additionally, the investigators’ model assumed the development of deteriorating bone disease among celiac patients, and calculated this at the same rate as the non–celiac population with comparable bone demineralization found in the literature.
They specifically focused on the development of hip and vertebral fractures, which "carry the highest morbidity rate in terms of progressing to long-term disability."
Cost estimates for procedures were derived from the Centers for Medicare and Medicaid Services for 2012.
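The mechanics of a cohort Markov model like the one described above can be sketched briefly. Every number in this sketch (transition probabilities, costs, utilities, horizon) is an illustrative placeholder, not a value from Park et al.; the published model is far more detailed.

```python
def run_markov(p_fracture, p_die, cost_year, cost_fracture_year,
               u_well, u_fracture, years=60, discount=0.03):
    """Minimal three-state (well / post-fracture / dead) cohort Markov sketch.

    Each cycle is one year: the cohort accrues discounted costs and
    quality-adjusted life years for its current state mix, then transitions.
    Returns discounted lifetime cost and QALYs per person.
    """
    well, fx = 1.0, 0.0          # entire cohort starts in the well state
    cost = qaly = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + discount) ** t          # discount factor for year t
        cost += d * (well * cost_year + fx * (cost_year + cost_fracture_year))
        qaly += d * (well * u_well + fx * u_fracture)
        # annual transitions: well -> post-fracture; any alive state -> dead
        well, fx = (well * (1 - p_fracture - p_die),
                    fx * (1 - p_die) + well * p_fracture)
    return cost, qaly

# Hypothetical inputs only, to show the mechanics
c, q = run_markov(p_fracture=0.002, p_die=0.01, cost_year=150,
                  cost_fracture_year=2_000, u_well=0.95, u_fracture=0.80)
```

Running two such models with identical parameters except for the fracture rate achievable under each screening strategy, then comparing the resulting cost and QALY pairs, is essentially the comparison the investigators report.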
For males, the investigators found, universal screening accrued a lifetime average cost of $8,532, with 25.511 QALYs gained.
In contrast, selective screening cost less, at $8,472, and yielded slightly more QALYs (25.515).
Similarly, for females, universal screening carried a lifetime average cost of $11,383, with 25.74 QALYs gained; selective screening was cheaper, at $11,328, and yielded slightly more QALYs (25.75).
"Adopting the universal serologic screening strategy, where virtually every preadolescent child would be screened for celiac disease as part of his/her routine blood work in the primary care setting, would be more expensive and fails to increase the long-term quality of life of the population as a whole" in terms of bone health, wrote the investigators.
Indeed, "the universal serologic screening strategy introduces potential harm from unnecessary endoscopic evaluations of healthy individuals if serologic screening is falsely positive."
On the other hand, they cautioned that cost and quality of life assessments that use endpoints other than fracture, including anemia, infertility, or malignancy, "could change the cost effectiveness of universal screening for celiac disease."
The authors disclosed that Dr. Park was supported by a grant from the National Institutes of Health. They stated that they have no conflicts of interest.
Cost-effective screening for any disease requires the ability to detect disease at a stage when a low-cost intervention can be initiated that avoids undesirable outcomes. In this carefully constructed decision analysis model from Park et al., two strategies were studied: screening all 12-year-old children regardless of symptoms, and selective screening of those who were symptomatic or at high risk. In general, evaluation of a symptomatic patient is not usually considered screening, although in this model the symptoms are those of celiac disease and not those of the study endpoint, complications of osteoporosis.
The inference from this study is that population-based screening is not cost effective, whereas a strategy that investigates at-risk persons – those with hereditary conditions such as Down, Turner, and Williams syndromes, those with first-degree relatives with celiac disease, or those with symptoms of celiac disease – is cost effective. There are important considerations not taken into account by this model; for instance, patients who are tissue transglutaminase (tTG) positive but biopsy negative are at higher risk of eventually developing small bowel abnormalities, though this might affect at most 10% of such patients.
In addition, the costs of potential complications of untreated celiac disease other than bone disease, such as lymphoma, seizures, or peripheral neuropathy, are not accounted for in this analysis because of the difficulty of quantifying them, but they would likely increase the cost effectiveness of screening. Finally, because the course of osteoporosis in untreated celiac disease is unknown, it is not possible to model the benefits of earlier treatment. Ultimately, the only way to truly test these two strategies is a carefully constructed clinical trial, but this model certainly suggests that current practices may turn out to be the appropriate ones.
Kenneth K. Wang, M.D., is the Van Cleve Professor of Gastroenterology Research at the Mayo Clinic, Rochester, Minn. He had no relevant disclosures.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Major finding: Universal screening for celiac disease among adolescents is not cost effective, nor is it associated with any gain in quality of life.
Data source: A Markov model of 1,000 12-year old males and 1,000 12-year old females.
Disclosures: The authors disclosed that Dr. Park was supported by a grant from the National Institutes of Health. They stated that they have no conflicts of interest.
High mortality seen in acute-on-chronic liver failure
An attempt to define and classify acute-on-chronic liver failure showed that the syndrome carries a 28-day mortality rate that is 15 times greater than in cirrhosis patients who do not have the syndrome.
Moreover, the syndrome is extremely common, and may be found in nearly one-third of acutely decompensated cirrhosis patients, wrote Dr. Richard Moreau and colleagues. The study was published in the June issue of Gastroenterology.
Source: American Gastroenterological Association
"A universally accepted and used definition of acute-on-chronic liver failure (ACLF) is still lacking," said Dr. Moreau of Université Paris Diderot, Paris.
"Defining ACLF is not only a matter of nosology, but also is of great importance because it would allow early identification of patients at high risk for end-organ failure–related death, requiring specific treatments and/or intensive management," he added.
To that end, Dr. Moreau looked at 1,343 adult patients hospitalized for at least 1 day who had an acute decompensation of cirrhosis as defined by the acute development of large ascites, hepatic encephalopathy, gastrointestinal hemorrhage, bacterial infection, or any combination of the above.
Patients who were admitted for a scheduled procedure or treatment were excluded from the analysis, as were patients with severe chronic extrahepatic disease and patients with HIV infection.
The study subjects were then divided into four groups. The first group, which was judged not to have ACLF, had no organ failure, a serum creatinine less than 1.5 mg/dL, and no hepatic encephalopathy. This group made up 1,040 of the 1,343 enrolled patients (77.4%), and had 28-day and 90-day mortality rates of 4.7% and 14%, respectively.
The next cohort, called ACLF grade 1, comprised patients with a single coagulation, circulatory or respiratory failure; a serum creatinine between 1.5 and 1.9 mg/dL; and/or mild to moderate hepatic encephalopathy. The 148 patients in this class (11.0%) had 28- and 90-day mortality rates of 22.1% and 40.7%, respectively.
ACLF grade 2 was more severe, with two organ failures; 108 patients (8%) had this at enrollment, and exhibited 28- and 90-day mortality rates of 32.0% and 40.7%, respectively.
The most severely ill patients were classed as having ACLF grade 3, with three organ failures or more. A total of 47 patients (3.5%) fell into this category, and they had 28- and 90-day mortality rates of 76.7% and 79.1%, respectively.
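The four-group classification described above can be expressed as a short decision rule. This is a deliberate simplification for illustration; the study's actual criteria are organ-specific (CLIF-SOFA based) and considerably more detailed than this sketch.

```python
def aclf_grade(organ_failures, creatinine_mg_dl, encephalopathy):
    """Classify a patient along the grouping described in the article.

    organ_failures: number of organ failures (coagulation, circulatory,
    respiratory, etc.); encephalopathy: 'none', 'mild-moderate', or 'severe'.
    Simplified for illustration; not the study's full CLIF-SOFA criteria.
    """
    if organ_failures >= 3:
        return "ACLF grade 3"                 # three or more organ failures
    if organ_failures == 2:
        return "ACLF grade 2"                 # two organ failures
    if (organ_failures == 1
            or 1.5 <= creatinine_mg_dl <= 1.9
            or encephalopathy == "mild-moderate"):
        return "ACLF grade 1"                 # single failure or marker
    return "no ACLF"

print(aclf_grade(0, 1.0, "none"))   # → no ACLF
print(aclf_grade(0, 1.7, "none"))   # → ACLF grade 1
print(aclf_grade(2, 2.4, "none"))   # → ACLF grade 2
```

Mapped onto the cohort, these four return values correspond to the 77.4%, 11.0%, 8%, and 3.5% groups, with 28-day mortality rising steeply from group to group.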
Overall, according to the investigators, patients with ACLF were younger (mean age 56 years versus 58 years in patients without ACLF; P = .02), had a lower mean arterial blood pressure on admittance to the hospital (79 mm Hg versus 85 mm Hg in non-ACLF patients; P less than .001) and more frequently were actively alcoholic.
They also found that patients with ACLF had a significantly higher white cell count (9.7 x 10^9/L compared with 6.6 x 10^9/L; P less than .001) and plasma C-reactive protein level (40.3 versus 24.9 mg/L; P less than .001) than did the group without ACLF.
And finally, in what they called an "outstanding observation," the authors determined that up to 43.6% of patients with ACLF had no precipitating event leading to their acute decompensation, including gastrointestinal hemorrhage, bacterial infection, or active alcoholism.
The authors concluded that their novel diagnostic criteria show that ACLF is "distinct from ‘mere’ AD" (acute decompensation).
They conceded that their study was not designed to assess ideal management for these patients. "Whether patients with ACLF should be admitted or not to the intensive care unit is controversial," they wrote. "Nevertheless, our results can serve as a resource for designing studies aimed to investigate the appropriate site of hospitalization for patients with ACLF."
The authors disclosed that pharmaceutical companies provided funding for a chronic liver failure consortium, which provided the initiative for this study; several other investigators also disclosed ties with pharmaceutical companies.
The study by Moreau and colleagues represents a culmination of efforts to bring a continent’s worth of research and patient-care experience to bear on an entity that has long struggled to find a definition: acute-on-chronic liver failure, or ACLF. This important study crucially separates ACLF from mere decompensation of cirrhosis, a distinction that has long confused the picture for clinicians. The definition of organ failure, the basis for ACLF, was determined a priori by expert opinion. The team found a significantly poorer prognosis in patients with more than two organ failures, especially renal failure.
An intriguing finding was the better prognosis of previously decompensated cirrhotic patients compared with those without prior decompensation, which should form the basis for future investigation into the relationship of ACLF with potential immune tolerance. Other interesting points raised, such as the absence of precipitating factors in almost half of the ACLF cases and the correlation of outcomes with leukocyte count and C-reactive protein, need further corroboration, since a large proportion of patients had an alcoholic etiology. In a large study, it is not always possible to investigate precipitating factors uniformly. Importantly, these results have also been corroborated by the initial reports on ACLF using simpler criteria for organ failure from the North American Consortium for the Study of End-Stage Liver Disease (NACSELD).
As the cirrhotic population in Western countries ages amid a scarcity of donor organs and the emergence of resistant pathogens, knowledge of ACLF will become increasingly relevant, and further large, collaborative studies along the lines of this paper are needed to tackle this growing issue.
Dr. Jasmohan Bajaj is in the division of gastroenterology, hepatology, and nutrition at Virginia Commonwealth University and at McGuire VA Medical Center, Richmond. His institution has received grants from, and he has been a consultant to, Grifols, Salix, and Otsuka. He has received an honorarium from Merz.
"A universally accepted and used definition of acute-on-chronic liver failure (ACLF) is still lacking," said Dr. Moreau of Université Paris Diderot, Paris.
"Defining ACLF is not only a matter of nosology, but also is of great importance because it would allow early identification of patients at high risk for end-organ failure–related death, requiring specific treatments and/or intensive management," he added.
To that end, Dr. Moreau looked at 1,343 adult patients hospitalized for at least 1 day who had an acute decompensation of cirrhosis as defined by the acute development of large ascites, hepatic encephalopathy, gastrointestinal hemorrhage, bacterial infection, or any combination of the above.
Patients who were admitted for a scheduled procedure or treatment were excluded from the analysis, as were patients with severe chronic extrahepatic disease and patients with HIV infection.
The study subjects were then divided into four groups. The first group, which was judged not to have ACLF, had no organ failure, a serum creatinine less than 1.5 mg/dL, and no hepatic encephalopathy. This group made up 1,040 of the 1,343 enrolled patients (77.4%), and had 28-day and 90-day mortality rates of 4.7% and 14%, respectively.
The next cohort, called ACLF grade 1, comprised patients with a single coagulation, circulatory, or respiratory failure; a serum creatinine between 1.5 and 1.9 mg/dL; and/or mild to moderate hepatic encephalopathy. The 148 patients in this class (11.0%) had 28- and 90-day mortality rates of 22.1% and 40.7%, respectively.
ACLF grade 2 was more severe, with two organ failures; 108 patients (8%) had this at enrollment, and exhibited 28- and 90-day mortality rates of 32.0% and 40.7%, respectively.
The most severely ill patients were classed as having ACLF grade 3, with three organ failures or more. A total of 47 patients (3.5%) fell into this category, and they had 28- and 90-day mortality rates of 76.7% and 79.1%, respectively.
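The four-group scheme described above amounts to a small set of rules. A minimal sketch in Python (the `aclf_grade` helper is hypothetical and simplified relative to the full published criteria, which distinguish organ-failure types) might look like this:

```python
def aclf_grade(organ_failures: int, creatinine_mg_dl: float,
               hepatic_encephalopathy: str) -> str:
    """Simplified sketch of the study's four-group ACLF classification.

    `hepatic_encephalopathy` is "none", "mild", "moderate", or "severe".
    Note: in the study, a creatinine of 2.0 mg/dL or more itself counts
    as kidney failure and should be passed as an organ failure here.
    Illustrative only -- not a clinical tool.
    """
    if organ_failures >= 3:
        return "ACLF grade 3"
    if organ_failures == 2:
        return "ACLF grade 2"
    if (organ_failures == 1
            or 1.5 <= creatinine_mg_dl <= 1.9
            or hepatic_encephalopathy in ("mild", "moderate")):
        return "ACLF grade 1"
    # No organ failure, creatinine < 1.5 mg/dL, no hepatic encephalopathy
    return "no ACLF"
```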
Overall, according to the investigators, patients with ACLF were younger (mean age, 56 years versus 58 years in patients without ACLF; P = .02), had a lower mean arterial blood pressure on admission to the hospital (79 mm Hg versus 85 mm Hg in non-ACLF patients; P < .001), and more frequently were actively alcoholic.
They also found that patients with ACLF had a significantly higher white cell count (9.7 × 10⁹/L compared with 6.6 × 10⁹/L; P < .001) and plasma C-reactive protein level (40.3 mg/L versus 24.9 mg/L; P < .001) than the group without ACLF.
And finally, in what they called an "outstanding observation," the authors determined that up to 43.6% of patients with ACLF had no identifiable precipitating event, such as gastrointestinal hemorrhage, bacterial infection, or active alcoholism, leading to their acute decompensation.
The authors concluded that their novel diagnostic criteria show that ACLF is "distinct from ‘mere’" acute decompensation (AD).
They conceded that their study was not designed to assess ideal management for these patients. "Whether patients with ACLF should be admitted or not to the intensive care unit is controversial," they wrote. "Nevertheless, our results can serve as a resource for designing studies aimed to investigate the appropriate site of hospitalization for patients with ACLF."
The authors disclosed that pharmaceutical companies provided funding for a chronic liver failure consortium, which provided the initiative for this study; several other investigators also disclosed ties with pharmaceutical companies.
FROM GASTROENTEROLOGY
Major finding: Acute-on-chronic liver failure syndrome can be divided into three grades and is distinct from mere acute decompensation of cirrhosis.
Data source: Data from 1,343 hospitalized patients with cirrhosis and acute decompensation from February to September 2011 at 29 liver units in eight European countries.
Disclosures: The authors disclosed that pharmaceutical companies provided funding for a chronic liver failure consortium, which provided the initiative for this study; several other investigators also disclosed ties with pharmaceutical companies.
Augmentin implicated in drug-induced liver injury
The crude incidence of drug-induced liver injury is roughly 19.1 cases per 100,000 inhabitants, with amoxicillin-clavulanate the most commonly implicated agent.
That’s according to the second published population-based cohort study of drug-induced liver injury (DILI), wrote Dr. Einar S. Björnsson. The study was published in the June issue of Gastroenterology.
Dr. Björnsson, of the University of Iceland, and colleagues looked at all patients older than 15 years who were hospitalized for liver disease with suspected DILI, plus outpatients at the National University Hospital of Iceland and all those seen in private practice, between March 1, 2010, and Feb. 29, 2012.
According to the authors, "In Iceland, every citizen is issued a specific personal identification number that is, among other things, connected to a nationwide pharmaceutical database on outpatient prescriptions."
Therefore, "The study examined the Icelandic Medicines Registry records of prescriptions for all drugs associated with DILI that had at least a possible causal relationship" according to the Roussel Uclaf Causality Assessment Method.
DILI was defined as aspartate aminotransferase (AST) or alanine aminotransferase (ALT) levels greater than three times the upper limit of normal, and/or alkaline phosphatase (ALP) levels greater than two times the upper limit of normal.
Acetaminophen toxicity cases were excluded, though patients with preexisting chronic liver injury were not, if they were considered to have developed superimposed DILI on top of their baseline liver enzyme values.
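The biochemical threshold above reduces to a one-line check. A sketch (the `meets_dili_lab_criteria` helper is hypothetical; each argument is an enzyme level expressed as a multiple of its upper limit of normal) might be:

```python
def meets_dili_lab_criteria(ast_x_uln: float, alt_x_uln: float,
                            alp_x_uln: float) -> bool:
    """Check the study's biochemical DILI threshold.

    Each argument is the enzyme expressed as a multiple of its upper
    limit of normal (ULN); e.g., alt_x_uln=3.5 means ALT is 3.5x ULN.
    Per the definition above: AST or ALT > 3x ULN, and/or ALP > 2x ULN.
    Illustrative only; causality assessment (e.g., RUCAM) is still needed.
    """
    return ast_x_uln > 3 or alt_x_uln > 3 or alp_x_uln > 2
```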
The authors found that over the study period there were 96 cases eligible for inclusion, including 49 cases in the first year and 47 in the second. That translated into a crude annual incidence during the study period of 19.1 cases per 100,000 inhabitants.
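As a quick arithmetic check, the 19.1 figure follows from the case counts and the study population of roughly 251,000 given in the data-source note:

```python
# Back-of-the-envelope check of the reported crude annual incidence.
cases = 49 + 47          # eligible DILI cases in years 1 and 2
years = 2
population = 251_000     # approximate Icelandic population studied

annual_incidence_per_100k = (cases / years) / population * 100_000
print(round(annual_incidence_per_100k, 1))  # -> 19.1
```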
Roughly half were female (56%), and the median age was 55 years (range, 16-91 years).
Looking at the clinical characteristics of the cohort, the authors calculated that only 27% of patients developed jaundice, while 10% of patients complained of rash and 6% of fever. Four of the patients had preexisting liver disease.
Overall, liver injury was judged to be due to a single prescription medication in 75% of cases, most commonly amoxicillin-clavulanate (22%), followed by diclofenac (6%), azathioprine (4%), infliximab (4%), and nitrofurantoin (4%).
By linking cases to Iceland’s prescription drug database, the investigators calculated an outpatient DILI incidence of 1 per 133 filled azathioprine prescriptions and 1 per 2,350 amoxicillin-clavulanate users; among inpatients, the incidence of injury attributed to amoxicillin-clavulanate was 1 per 729 patients.
By drug class, after antibiotics, immunosuppressants were the agents most commonly associated with DILI (10%), followed by psychotropic drugs, which accounted for 7% of cases, and then nonsteroidal anti-inflammatory drugs, at 6%, "with diclofenac as the only agent."
Single-drug antineoplastic agents were the causes of DILI in 5% of the cohort, and lipid-lowering agents were the cause in just 3.1% of patients (atorvastatin, n = 2; simvastatin, n = 1).
Beyond injuries due to a single prescription agent, dietary supplements were assumed to be the culprit in 16% of cases, and the use of multiple agents was implicated in 9% of cases.
Looking at outcomes, the researchers reported that DILI was mild in 35 patients (36%), moderate in 55 patients (58%), and severe in 5 patients (5%); there was 1 death, in an 82-year-old patient.
Finally, the median duration from diagnosis of DILI to the normalization of liver enzymes was 64 days, and 7% still had abnormal liver tests 6 months after DILI diagnosis.
According to the authors, the only previously published population-based study, done in France, found an annual crude incidence rate of 13.9 cases per 100,000 inhabitants per year (Hepatology 2002;36:451-5).
They conceded that their rate is somewhat higher; however, "the French study provided no information about the patients at risk for DILI because information about drug consumption was not available," they wrote.
The authors stated that the study was funded by a grant from the National University Hospital of Iceland Research Fund; they disclosed no individual financial conflicts of interest.
FROM GASTROENTEROLOGY
Major finding: Drug-induced liver injury has an incidence of 19.1 per 100,000 persons, with the incidence per outpatient users of amoxicillin-clavulanate at 1 per 2,350.
Data source: A population-based cohort study of 251,000 Icelanders.
Disclosures: The authors stated that the study was funded by a grant from the National University Hospital of Iceland Research Fund; they disclosed no individual conflicts.