Cilofexor passes phase 2 for primary biliary cholangitis

Article Type
Changed
Tue, 11/26/2019 - 14:10

– Cilofexor, a nonsteroidal farnesoid X receptor (FXR) agonist, can improve disease biomarkers in patients with primary biliary cholangitis (PBC), based on results of a phase 2 trial.

Will Pass/MDedge News
Dr. Kris V. Kowdley

Compared with placebo, patients treated with cilofexor had significant reductions in serum alkaline phosphatase (ALP), gamma-glutamyltransferase (GGT), C-reactive protein (CRP), and primary bile acids, reported lead author Kris V. Kowdley, MD, of Swedish Medical Center in Seattle, and colleagues.

Dr. Kowdley, who presented findings at the annual meeting of the American Association for the Study of Liver Diseases, began by offering some context for the trial.

“There’s a strong rationale for FXR agonist therapy in PBC,” he said. “FXR is the key regulator of bile acid homeostasis, and FXR agonists have shown favorable effects on fibrosis, inflammatory activity, bile acid export and synthesis, as well as possibly effects on the microbiome and downstream in the gut.” He went on to explain that cilofexor may benefit patients with PBC, primary sclerosing cholangitis, or nonalcoholic steatohepatitis (NASH), noting preclinical data that have demonstrated reductions in bile acids, inflammation, fibrosis, and portal pressure.

The present trial involved 71 patients with PBC who lacked cirrhosis and had a serum ALP level that was at least 1.67 times greater than the upper limit of normal, and an elevated serum total bilirubin that was less than 2 times the upper limit of normal. Patients were randomized to receive either cilofexor 30 mg, cilofexor 100 mg, or placebo, once daily for 12 weeks. Stratification was based on use of ursodeoxycholic acid, which was stable for at least the preceding year. Safety and efficacy were evaluated, with the latter based on liver biochemistry, serum C4, bile acids, and serum fibrosis markers.
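The laboratory entry criteria described above can be expressed as a simple check. This is an illustrative sketch only, not taken from the study protocol; the function name and the example upper-limit-of-normal values are assumptions chosen to match typical lab reference ranges.

```python
# Illustrative check of the trial's main laboratory entry criteria as
# described in the report. The function and ULN values are sketches,
# not from the study protocol.
def meets_lab_criteria(alp, alp_uln, bilirubin, bili_uln):
    """True if serum ALP >= 1.67 x ULN and total bilirubin < 2 x ULN."""
    return alp >= 1.67 * alp_uln and bilirubin < 2 * bili_uln

# Example using the cohort medians from the report (ALP 286 U/L,
# bilirubin 0.6 mg/dL) and typical ULNs (~120 U/L ALP, ~1.2 mg/dL bilirubin).
eligible = meets_lab_criteria(alp=286, alp_uln=120, bilirubin=0.6, bili_uln=1.2)
```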

Across the entire population, baseline median serum bilirubin was 0.6 mg/dL and median serum ALP was 286 U/L. After 12 weeks, compared with placebo, patients treated with cilofexor, particularly those who received the 100-mg dose, showed significant improvements across multiple measures of liver health. Specifically, patients in the 100-mg group achieved median reductions in ALP (–13.8%; P = .005), GGT (–47.7%; P less than .001), CRP (–33.6%; P = .03), and primary bile acids (–30.5%; P = .008). These patients also exhibited trends toward reduced aspartate aminotransferase and aminoterminal propeptide of type III procollagen; Dr. Kowdley attributed the lack of statistical significance to insufficient population size.

Highlighting the magnitude of the ALP improvement, Dr. Kowdley noted that reductions in ALP greater than 25% were observed in 17% and 18% of patients in the 100-mg and 30-mg cilofexor groups, respectively, versus 0% of patients in the placebo group.

Although the 100-mg dose of cilofexor appeared more effective, the higher dose did come with some trade-offs in tolerability; grade 2 or 3 pruritus was more common in patients treated with the higher dose than in those who received the 30-mg dose (39% vs. 10%). As such, 7% of patients in the 100-mg group discontinued therapy because of the pruritus, compared with no patients in the 30-mg or placebo group.

Responding to a question from a conference attendee, Dr. Kowdley said that ALP reductions to below the 1.67-fold threshold were achieved by 9% and 14% of patients who received the 30-mg dose and 100-mg dose of cilofexor, respectively.

“We believe these data support further evaluation of cilofexor for the treatment of cholestatic liver disorders,” Dr. Kowdley concluded.

The study was funded by Gilead. The investigators disclosed additional relationships with Allergan, Novartis, GlaxoSmithKline, and others.

SOURCE: Kowdley KV et al. The Liver Meeting 2019. Abstract 45.

REPORTING FROM THE LIVER MEETING 2019


Machine-learning model predicts NASH based on common clinical and lab values

Article Type
Changed
Tue, 11/12/2019 - 15:58

 

– A machine-learning model based on standard clinical and laboratory values is able to predict nonalcoholic steatohepatitis (NASH) with a sensitivity of 72%-81%, an investigator reported at the annual meeting of the American Association for the Study of Liver Diseases.

Dr. Jörn M. Schattenberg

The tool, dubbed NASHmap, could serve as an initial screening tool to identify potentially undiagnosed patients with NASH, according to Jörn M. Schattenberg, MD, of the metabolic liver research program in the department of medicine at University Medical Centre Mainz (Germany).

While not intended to replace current scoring systems, NASHmap potentially could be applied as a clinical decision support tool within electronic medical record systems, allowing greater numbers of patients with suspected NASH to be evaluated and referred to specialists for further testing, according to Dr. Schattenberg.

“To me, this is an at-risk population,” Dr. Schattenberg said in an interview. “There’s a lot of talk of increasing numbers of end-stage liver disease, and these are the cases that are waiting to happen. I think if we identify them, we can manage them better.”

“I’m not saying they should all get drugs – I’m saying they have to be informed about their condition because they may not have a clue they have liver disease,” he continued.

The machine-learning approach described here by Dr. Schattenberg included an exploratory analysis based on 704 patients with NASH or non-NASH nonalcoholic fatty liver disease (NAFLD) in the NAFLD Adult Database from the National Institute of Diabetes and Digestive and Kidney Diseases.

The best-performing model they identified included 14 variables. Ranked by contribution to predictive power, those variables included hemoglobin A1c, aspartate aminotransferase, alanine aminotransferase, total protein, AST/ALT ratio, body mass index, triglycerides, height, platelets, white blood cells, hematocrit, albumin, hypertension, and sex.

That 14-variable model had good performance when tested on the Optum Humedica electronic medical record database, according to Dr. Schattenberg and colleagues.

In the final evaluation analysis, which included 1,016 patients with histologically confirmed NASH, the area under the curve was 0.76, they reported.

A simplified, five-variable model including just hemoglobin A1c, AST, ALT, total protein, and AST/ALT ratio also had good performance, with an area under the curve of 0.74, they said in their report.
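To illustrate how a five-variable screening model of this kind can be applied, here is a minimal logistic-scoring sketch over the same five inputs (hemoglobin A1c, AST, ALT, total protein, and the AST/ALT ratio, which is derived from the first two labs). The weights and intercept are illustrative placeholders, not the actual NASHmap coefficients, which are not given in this report.

```python
import math

# Hedged sketch: a logistic model over the five simplified NASHmap inputs.
# The weights and intercept below are illustrative placeholders only; the
# actual NASHmap coefficients are not reported here.
ILLUSTRATIVE_WEIGHTS = {
    "hba1c": 0.50,           # %
    "ast": 0.02,             # U/L
    "alt": 0.01,             # U/L
    "total_protein": -0.10,  # g/dL
    "ast_alt_ratio": 0.80,   # unitless, derived from AST and ALT
}
INTERCEPT = -6.0  # placeholder

def nash_risk_score(hba1c, ast, alt, total_protein):
    """Return an illustrative NASH probability from the five-variable inputs."""
    features = {
        "hba1c": hba1c,
        "ast": ast,
        "alt": alt,
        "total_protein": total_protein,
        "ast_alt_ratio": ast / alt,  # the model reuses the same two labs
    }
    z = INTERCEPT + sum(ILLUSTRATIVE_WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability in (0, 1)

p = nash_risk_score(hba1c=7.2, ast=60, alt=45, total_protein=7.0)
```

Because all five inputs are routine labs, a score like this could run automatically against electronic medical record data, which is the deployment path the investigators describe.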

Dr. Schattenberg provided disclosures related to AbbVie, Novartis, MSD, Pfizer, Boehringer Ingelheim, BMS, Intercept Pharmaceuticals, Genfit, Gilead, and Echosens. Several study coauthors reported employment with Novartis and stock ownership.

REPORTING FROM THE LIVER MEETING 2019


Storytelling tool can assist elderly in the ICU

Article Type
Changed
Wed, 04/07/2021 - 16:02

– A “Best Case/Worst Case” (BCWC) framework tool has been adapted for use with geriatric trauma patients in the ICU, where it can help track a patient’s progress and enable better communication with patients and loved ones. The tool relies on a combination of graphics and text that surgeons update daily during rounds, and creates a longitudinal view of a patient’s trajectory during their stay in the ICU.

Andrei Malov/Thinkstock

The aim is to have surgeons incorporate storytelling into care by updating the best-case scenario in light of new clinical developments – for example, after a complication has arisen.

“Each day during rounds, the ICU team records important events on the graphic aid that change the patient’s course. The team draws a star to represent the best case, and a line to represent prognostic uncertainty. The attending trauma surgeon then uses the geriatric trauma outcome score, their knowledge of the health state of the patient, and their own clinical experience to tell a story about treatments, recovery, and outcomes if everything goes as well as we might hope. This story is written down in the best-case scenario box,” Christopher Zimmerman, MD, a general surgery resident at the University of Wisconsin–Madison, said during a presentation about the BCWC tool at the annual clinical congress of the American College of Surgeons.

“We often like to talk to patients and their families [about best- and worst-case scenarios] anyway, but [the research team] have tried to formalize it,” said Tam Pham, MD, professor of surgery at the University of Washington, in an interview. Dr. Pham comoderated the session where the research was presented.

“When we’re able to communicate where the uncertainty is and where the boundaries are around the course of care and possible outcomes, we can build an alliance with patients and families that will be helpful when there is a big decision to make, say about a laparotomy for a perforated viscus,” said Dr. Zimmerman.

Dr. Zimmerman gave an example of a patient who came into the ICU after suffering multiple fractures from falling down a set of stairs. The team created an initial BCWC with a hoped-for best-case scenario. Later, the patient developed hypoxemic respiratory failure and had to be intubated overnight. “This event is recorded on the graphic, and her star representing the best case has changed position, the line representing uncertainty has shortened, and the contents of her best-case scenario has changed. Each day in rounds, this process is repeated,” said Dr. Zimmerman.

Palliative care physicians, education experts, and surgeons at the University of Wisconsin–Madison developed the tool in an effort to reduce unwanted care at the end of life, in the context of high-risk surgeries. The researchers adapted the tool to the trauma setting by gathering six focus groups of trauma practitioners at the University of Wisconsin; University of Texas, Dallas; and Oregon Health & Science University, Portland. They modified the tool after incorporating comments, and then iteratively refined it through tasks carried out in the ICU as part of a quality improvement initiative at the University of Wisconsin–Madison. They generated a change to the tool, implemented it in the ICU during subsequent rounds, collected observations and field notes, then revised and repeated the process, streamlining it to fit into the ICU environment, according to Dr. Zimmerman.

The back side of the tool is available for family members to write important details about their loved ones, lending insight into the patient’s personality and desires, such as favorite music or affection for a family pet.

The work was supported by the National Institutes of Health. Dr. Zimmerman and Dr. Pham have no relevant financial disclosures.

SOURCE: Zimmerman C et al. Clinical Congress 2019, Abstract.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

– A “Best Case/Worst Case” (BCWC) framework tool has been adapted for use with geriatric trauma patients in the ICU, where it can help track a patient’s progress and enable better communication with patients and loved ones. The tool relies on a combination of graphics and text that surgeons update daily during rounds, and creates a longitudinal view of a patient’s trajectory during their stay in the ICU.

Andrei Malov/Thinkstock

The aim is to have surgeons incorporate story telling into care, by updating the best-case scenario in light of new clinical developments – for example, after a complication has arisen.

“Each day during rounds, the ICU team records important events on the graphic aid that change the patient’s course. The team draws a star to represent the best case, and a line to represent prognostic uncertainty. The attending trauma surgeon then uses the geriatric trauma outcome score, their knowledge of the health state of the patient, and their own clinical experience to tell a story about treatments, recovery, and outcomes if everything goes as well as we might hope. This story is written down in the best-case scenario box,” Christopher Zimmerman, MD, a general surgery resident at the University of Wisconsin–Madison, said during a presentation about the BCWC tool at the annual clinical congress of the American College of Surgeons

“We often like to talk to patients and their families [about best- and worst-case scenarios] anyway, but [the research team] have tried to formalize it,” said Tam Pham, MD, professor of surgery at the University of Washington, in an interview. Dr. Pham comoderated the session where the research was presented.

“When we’re able to communicate where the uncertainty is and where the boundaries are around the course of care and possible outcomes, we can build an alliance with patients and families that will be helpful when there is a big decision to make, say about a laparotomy for a perforated viscus,” said Dr. Zimmerman.

Dr. Zimmerman gave an example of a patient who came into the ICU after suffering multiple fractures from falling down a set of stairs. The team created an initial BCWC with a hoped-for best-case scenario. Later, the patient developed hypoxemic respiratory failure and had to be intubated overnight. “This event is recorded on the graphic, and her star representing the best case has changed position, the line representing uncertainty has shortened, and the contents of her best-case scenario has changed. Each day in rounds, this process is repeated,” said Dr. Zimmerman.

Palliative care physicians, education experts, and surgeons at the University of Wisconsin–Madison developed the tool in an effort to reduce unwanted care at the end of life, in the context of high-risk surgeries. The researchers adapted the tool to the trauma setting by gathering six focus groups of trauma practitioners at the University of Wisconsin; University of Texas, Dallas; and Oregon Health & Science University, Portland. They modified the tool after incorporating comments, and then iteratively modified it through tasks carried out in the ICU as part of a qualitative improvement initiative at the University of Wisconsin–Madison. They generated a change to the tool, implemented it in the ICU during subsequent rounds, then collected observations and field notes, then revised and repeated the process, streamlining it to fit into the ICU environment, according to Dr. Zimmerman.

The back side of the tool is available for family members to write important details about their loved ones, leading insight into the patient’s personality and desires, such as favorite music or affection for a family pet.

The work was supported by the National Institutes of Health. Dr. Zimmerman and Dr. Pham have no relevant financial disclosures.

SOURCE: Zimmerman C et al. Clinical Congress 2019, Abstract.

– A “Best Case/Worst Case” (BCWC) framework tool has been adapted for use with geriatric trauma patients in the ICU, where it can help track a patient’s progress and enable better communication with patients and loved ones. The tool relies on a combination of graphics and text that surgeons update daily during rounds, and creates a longitudinal view of a patient’s trajectory during their stay in the ICU.

Andrei Malov/Thinkstock

The aim is to have surgeons incorporate story telling into care, by updating the best-case scenario in light of new clinical developments – for example, after a complication has arisen.

“Each day during rounds, the ICU team records important events on the graphic aid that change the patient’s course. The team draws a star to represent the best case, and a line to represent prognostic uncertainty. The attending trauma surgeon then uses the geriatric trauma outcome score, their knowledge of the health state of the patient, and their own clinical experience to tell a story about treatments, recovery, and outcomes if everything goes as well as we might hope. This story is written down in the best-case scenario box,” Christopher Zimmerman, MD, a general surgery resident at the University of Wisconsin–Madison, said during a presentation about the BCWC tool at the annual clinical congress of the American College of Surgeons

“We often like to talk to patients and their families [about best- and worst-case scenarios] anyway, but [the research team] have tried to formalize it,” said Tam Pham, MD, professor of surgery at the University of Washington, in an interview. Dr. Pham comoderated the session where the research was presented.

“When we’re able to communicate where the uncertainty is and where the boundaries are around the course of care and possible outcomes, we can build an alliance with patients and families that will be helpful when there is a big decision to make, say about a laparotomy for a perforated viscus,” said Dr. Zimmerman.

Dr. Zimmerman gave an example of a patient who came into the ICU after suffering multiple fractures from falling down a set of stairs. The team created an initial BCWC with a hoped-for best-case scenario. Later, the patient developed hypoxemic respiratory failure and had to be intubated overnight. “This event is recorded on the graphic, and her star representing the best case has changed position, the line representing uncertainty has shortened, and the contents of her best-case scenario has changed. Each day in rounds, this process is repeated,” said Dr. Zimmerman.

Palliative care physicians, education experts, and surgeons at the University of Wisconsin–Madison developed the tool in an effort to reduce unwanted care at the end of life, in the context of high-risk surgeries. The researchers adapted the tool to the trauma setting by gathering six focus groups of trauma practitioners at the University of Wisconsin; University of Texas, Dallas; and Oregon Health & Science University, Portland. They modified the tool after incorporating comments, and then iteratively modified it through tasks carried out in the ICU as part of a qualitative improvement initiative at the University of Wisconsin–Madison. They generated a change to the tool, implemented it in the ICU during subsequent rounds, then collected observations and field notes, then revised and repeated the process, streamlining it to fit into the ICU environment, according to Dr. Zimmerman.

The back side of the tool gives family members space to write important details about their loved ones, offering insight into the patient’s personality and desires, such as favorite music or affection for a family pet.

The work was supported by the National Institutes of Health. Dr. Zimmerman and Dr. Pham have no relevant financial disclosures.

SOURCE: Zimmerman C et al. Clinical Congress 2019, Abstract.

REPORTING FROM CLINICAL CONGRESS 2019


Asymptomatic hypopigmented macules and patches

Article Type
Changed
Wed, 12/04/2019 - 10:18

Tinea versicolor (TV), or pityriasis versicolor, is a noninflammatory, superficial cutaneous fungal infection caused by Malassezia furfur, also known as Pityrosporum orbiculare or P. ovale. In its hyphal form, it produces skin lesions that appear as scaly, round or oval, hypopigmented, hyperpigmented, or pink macules or patches. Lesions are asymptomatic. The condition is more commonly seen in warm climates or during the summer months. Malassezia requires an oily environment for growth. Typically, TV appears in sebum-producing areas on the trunk, but other sites, such as the scalp, groin, and flexural areas, may also be affected. Infants may have facial lesions. Hypopigmentation takes time to resolve and may persist for months, even after lesions are treated.

The differential diagnosis of hypopigmented lesions of tinea versicolor includes vitiligo, hypopigmented mycosis fungoides, progressive macular hypomelanosis (PMH), secondary syphilis, and pityriasis alba. A potassium hydroxide (KOH) preparation can be performed in the office; in TV it reveals short, thick fungal hyphae with multiple spores, often referred to as “spaghetti and meatballs.” Use of a Wood’s light may aid in diagnosis. In TV, lesions may fluoresce yellow-green in adjacent follicles, unlike PMH, which characteristically shows orange-red follicular fluorescence. A skin biopsy is necessary to rule out hypopigmented mycosis fungoides or syphilis. Histologically in TV, hyphae and spores are present in the stratum corneum or in hair follicles and are readily seen with PAS (periodic acid–Schiff) or GMS (Grocott methenamine silver) stains. There is usually no inflammation, and the skin appears “normal.” A biopsy performed in this patient revealed PAS-positive hyphae.

Treatment for TV can be topical or systemic. Antifungal azole shampoo or creams, selenium sulfide shampoo, sulfur preparations, and allylamine creams have all been reported as useful treatments. Oral itraconazole or fluconazole are often given as systemic treatments. Monthly or weekly topical therapy may help prevent relapse.

Dr. Bilu Martin


This case and the photos were provided by Dr. Bilu Martin.

Dr. Bilu Martin is a board-certified dermatologist in private practice at Premier Dermatology, MD, in Aventura, Fla. More diagnostic cases are available at mdedge.com/dermatology. To submit a case for possible publication, send an email to [email protected].




A 30-year-old woman with no significant past medical history presented with asymptomatic hypopigmented macules and patches on her back, bilateral shoulders, and inner thighs. The lesions have been present for months.


Risk stratification of syncope patients can help determine duration of telemetry monitoring

Article Type
Changed
Tue, 11/12/2019 - 15:09

Background: About half of ED patients with syncope of unknown etiology are admitted for telemetry monitoring. No consensus exists regarding the optimal duration of telemetry monitoring in these patients to detect underlying arrhythmia.



Study design: Prospective cohort study.

Setting: Six EDs in Canada during September 2010–March 2015.

Synopsis: Using the Canadian Syncope Risk Score, 5,581 adults who presented to the ED within 24 hours of a syncopal event were risk-stratified as low, medium, or high risk for serious adverse events (arrhythmic vs. nonarrhythmic) and then followed for 30 days. Approximately half of arrhythmias were identified within 2 hours of telemetry monitoring among low-risk patients and within 6 hours of monitoring among medium- and high-risk patients. In the low-risk group, none experienced death or ventricular arrhythmia within 30 days. In the medium- and high-risk groups, 91.7% of underlying arrhythmias were identified within 15 days. The study was limited by the lack of a standardized approach to outpatient cardiac rhythm monitoring, which may have resulted in arrhythmia underdetection.

Bottom line: Among ED patients with syncope of unknown etiology, approximately 47% of arrhythmias were detected within 2-6 hours of telemetry monitoring. Among medium- and high-risk patients, the majority of serious arrhythmias were identified within 15 days. Based on these results, the authors recommend 15 days of outpatient cardiac monitoring for medium- and high-risk patients.

Citation: Thiruganasambandamoorthy V et al. Duration of electrocardiographic monitoring of emergency department patients with syncope. Circulation. 2019 Mar 12;139(11):1396-406.

Dr. Roy is a hospitalist at Beth Israel Deaconess Medical Center and instructor in medicine at Harvard Medical School.




Direct-acting antiviral treatment linked to lower mortality in patients with HCC history

Article Type
Changed
Wed, 05/26/2021 - 13:46

– For patients with a complete response following treatment for hepatitis C virus (HCV)–related hepatocellular carcinoma (HCC), treatment with direct-acting antiviral therapy was linked to significantly reduced mortality compared with no such treatment, according to results of a large cohort study.

UT Southwestern Medical Center
Dr. Amit Singal

The mortality benefit associated with direct-acting antiviral (DAA) therapy was consistent across most subgroups, suggesting that the association was driven by achieving sustained virological response (SVR), according to Amit G. Singal, MD, associate professor of medicine at UT Southwestern Medical Center, Dallas.

Those results suggest that DAA therapy in patients with a history of HCC is not only safe, but is also beneficial, Dr. Singal said at the annual meeting of the American Association for the Study of Liver Diseases.

“To be slightly controversial, I think that this changes the paradigm in this subgroup of patients from ‘can be treated’ for their hepatitis C to ‘should be treated,’ ” he concluded in an oral presentation of the results.

While DAA treatment is proven to reduce risk of incident HCC in patients with cirrhosis, the risk-benefit ratio is “less clear” in patients with a history of HCC following complete response, according to Dr. Singal.

Moreover, concerns were raised about the safety of DAA therapy in patients with an HCC history, after early data suggested a potentially higher recurrence risk, he added.

In the current multicenter, retrospective North American cohort study, Dr. Singal and colleagues reviewed data for 797 patients with HCV–related HCC who achieved complete response following ablation, resection, radiotherapy, transarterial chemoembolization, or transarterial radioembolization.

Treatment with DAA therapy was associated with improved overall survival, according to results of multivariable analysis, with a hazard ratio of 0.54 (95% confidence interval, 0.33-0.90). Median time from HCC complete response to death was 25.7 months for the DAA treatment group, versus 11.5 months for the untreated group.

The association between DAA treatment and death was apparently driven by SVR, as reduced mortality was seen in the DAA-treated patients who did achieve SVR, but not in those without SVR, Dr. Singal reported.

While these findings together suggest that DAA treatment is linked to reduced mortality after HCC complete response, prospective studies are needed to confirm this association, Dr. Singal said.

Dr. Singal reported disclosures related to AbbVie, Bayer, BMS, Eisai, Exact Sciences, Exelixis, Genentech, Gilead FOCUS, Glycotest, GRAIL, Merck, Roche, TARGET Pharmasolutions, and Wako Diagnostics.
 

SOURCE: Singal AG et al. The Liver Meeting 2019, Abstract 199.



REPORTING FROM THE LIVER MEETING 2019


Kratom: Botanical with opiate-like effects increasingly blamed for liver injury

Article Type
Changed
Tue, 11/12/2019 - 16:00

 

– Kratom, a botanical product with opioid-like activity, is increasingly responsible for cases of liver injury in the United States, according to investigators.

Will Pass/MDedge News
Dr. Victor J. Navarro

Kratom-associated liver damage involves a mixed pattern of hepatocellular and cholestatic injury that typically occurs after about 2-6 weeks of use, reported lead author Victor J. Navarro, MD, division head of gastroenterology at Einstein Healthcare Network in Philadelphia, and colleagues.

“I think it’s important for clinicians to have heightened awareness of the abuse potential [of kratom], because it is an opioid agonist and [because of] its capacity to cause liver injury,” Dr. Navarro said.

Kratom acts as a stimulant at low doses, while higher doses have sedating and narcotic properties. These effects are attributed to several alkaloids found in kratom’s source plant, Mitragyna speciosa, of which mitragynine, a suspected opioid agonist, is the most common.

Presenting at the annual meeting of the American Association for the Study of Liver Diseases, Dr. Navarro cited figures from the National Poison Data System that suggest an upward trend in kratom usage in the United States, from very little use in 2011 to 1 exposure per million people in 2014 and more recently to slightly more than 2.5 exposures per million people in 2017, predominantly among individuals aged 20 years and older. According to the Centers for Disease Control and Prevention, more than 90 kratom-associated deaths occurred between July 2016 and December 2017. Because of growing concerns, the Food and Drug Administration has issued multiple public warnings about kratom, ranging from products contaminated with Salmonella and heavy metals, to adverse effects such as seizures and liver toxicity.

The present study aimed to characterize kratom-associated liver injury through a case series analysis. First, the investigators reviewed 404 cases of herbal and dietary supplement-associated liver injury from the Drug-Induced Liver Injury Network prospective study. They found 11 suspected cases of kratom-related liver injury, with an upward trend in recent years. At this time, seven of the cases have been adjudicated by an expert panel and confirmed to be highly likely or probably associated with kratom.

Of these seven cases, all patients were hospitalized, although all recovered without need for liver transplant. Patients presented after a median of 15 days of kratom use, with a 28-day symptom latency period. However, Dr. Navarro noted that some cases presented after just 5 days of use. The most common presenting symptom was itching (86%), followed by jaundice (71%), abdominal pain (71%), nausea (57%), and fever (43%). Blood work revealed a mixed hepatocellular and cholestatic pattern. Median peak ALT was 362 U/L, peak alkaline phosphatase was 294 U/L, and peak total bilirubin was 20.1 mg/dL. Despite these changes, patients did not have significant liver dysfunction, such as coagulopathy.

Following this clinical characterization, Dr. Navarro reviewed existing toxicity data. Rat studies suggest that kratom is safe at doses of 1-10 mg/kg, while toxicity occurs after prolonged exposure to more than 100 mg/kg. A cross-sectional human study reported that kratom was safe at doses up to 75 mg/day. However, in the present case series, some patients presented after ingesting as little as 0.66 mg/day, and Dr. Navarro pointed out wide variations in product concentrations of mitragynine.

“Certainly, we need more human toxicity studies to determine what a safe dose really is, because this product is not going away,” Dr. Navarro said.

The investigators disclosed relationships with Gilead, Bristol-Myers Squibb, Sanofi, and others.

SOURCE: Navarro VJ et al. The Liver Meeting 2019, Abstract 212.


 

– Kratom, a botanical product with opioid-like activity, is increasingly responsible for cases of liver injury in the United States, according to investigators.

Will Pass/MDedge News
Dr. Victor J. Navarro

Kratom-associated liver damage involves a mixed pattern of hepatocellular and cholestatic injury that typically occurs after about 2-6 weeks of use, reported lead author Victor J. Navarro, MD, division head of gastroenterology at Einstein Healthcare Network in Philadelphia, and colleagues.

“I think it’s important for clinicians to have heightened awareness of the abuse potential [of kratom], because it is an opioid agonist and [because of] its capacity to cause liver injury,” Dr. Navarro said.

Kratom acts as a stimulant at low doses, while higher doses have sedating and narcotic properties. These effects are attributed to several alkaloids found in kratom’s source plant, Mitragyna speciose, of which mitragynine, a suspected opioid agonist, is most common.

Presenting at the annual meeting of the American Association for the Study of Liver Diseases, Dr. Navarro cited figures from the National Poison Data System that suggest an upward trend in kratom usage in the United States, from very little use in 2011 to 1 exposure per million people in 2014 and more recently to slightly more than 2.5 exposures per million people in 2017, predominantly among individuals aged 20 years and older. According to the Centers for Disease Control and Prevention, more than 90 kratom-associated deaths occurred between July 2016 and December 2017. Because of growing concerns, the Food and Drug Administration has issued multiple public warnings about kratom, ranging from products contaminated with Salmonella and heavy metals, to adverse effects such as seizures and liver toxicity.

The present study aimed to characterize kratom-associated liver injury through a case series analysis. First, the investigators reviewed 404 cases of herbal and dietary supplement-associated liver injury from the Drug-Induced Liver Injury Network prospective study. They found 11 suspected cases of kratom-related liver injury, with an upward trend in recent years. At this time, seven of the cases have been adjudicated by an expert panel and confirmed to be highly likely or probably associated with kratom.

Of these seven cases, all patients were hospitalized, although all recovered without need for liver transplant. Patients presented after a median of 15 days of kratom use, with a 28-day symptom latency period. However, Dr. Navarro noted that some cases presented after just 5 days of use. The most common presenting symptom was itching (86%), followed by jaundice (71%), abdominal pain (71%), nausea (57%), and fever (43%). Blood work revealed a mixed hepatocellular and cholestatic pattern. Median peak ALT was 362 U/L, peak alkaline phosphatase was 294 U/L, and peak total bilirubin was 20.1 mg/dL. Despite these changes, patients did not have significant liver dysfunction, such as coagulopathy.

Following this clinical characterization, Dr. Navarro reviewed existing toxicity data. Rat studies suggest that kratom is safe at doses between 1-10 mg/kg, while toxicity occurs after prolonged exposure to more than 100 mg/kg. A cross-sectional human study reported that kratom was safe at doses up to 75 mg/day. However, in the present case series, some patients presented after ingesting as little as 0.66 mg/day, and Dr. Navarro pointed out wide variations in product concentrations of mitragynine.

“Certainly, we need more human toxicity studies to determine what a safe dose really is, because this product is not going away,” Dr. Navarro said.

The investigators disclosed relationships with Gilead, Bristol-Myers Squibb, Sanofi, and others.

SOURCE: Navarro VJ et al. The Liver Meeting 2019, Abstract 212.

 

– Kratom, a botanical product with opioid-like activity, is increasingly responsible for cases of liver injury in the United States, according to investigators.

Will Pass/MDedge News
Dr. Victor J. Navarro

Kratom-associated liver damage involves a mixed pattern of hepatocellular and cholestatic injury that typically occurs after about 2-6 weeks of use, reported lead author Victor J. Navarro, MD, division head of gastroenterology at Einstein Healthcare Network in Philadelphia, and colleagues.

“I think it’s important for clinicians to have heightened awareness of the abuse potential [of kratom], because it is an opioid agonist and [because of] its capacity to cause liver injury,” Dr. Navarro said.

Kratom acts as a stimulant at low doses, while higher doses have sedating and narcotic properties. These effects are attributed to several alkaloids found in kratom’s source plant, Mitragyna speciose, of which mitragynine, a suspected opioid agonist, is most common.

Presenting at the annual meeting of the American Association for the Study of Liver Diseases, Dr. Navarro cited figures from the National Poison Data System that suggest an upward trend in kratom usage in the United States, from very little use in 2011 to 1 exposure per million people in 2014, and more recently to slightly more than 2.5 exposures per million people in 2017, predominantly among individuals aged 20 years and older. According to the Centers for Disease Control and Prevention, more than 90 kratom-associated deaths occurred between July 2016 and December 2017. Because of growing concerns, the Food and Drug Administration has issued multiple public warnings about kratom, citing issues ranging from products contaminated with Salmonella and heavy metals to adverse effects such as seizures and liver toxicity.

The present study aimed to characterize kratom-associated liver injury through a case series analysis. First, the investigators reviewed 404 cases of herbal and dietary supplement-associated liver injury from the Drug-Induced Liver Injury Network prospective study. They found 11 suspected cases of kratom-related liver injury, with an upward trend in recent years. At this time, seven of the cases have been adjudicated by an expert panel and confirmed to be highly likely or probably associated with kratom.

Of these seven cases, all patients were hospitalized, although all recovered without need for liver transplant. Patients presented after a median of 15 days of kratom use, with a 28-day symptom latency period. However, Dr. Navarro noted that some cases presented after just 5 days of use. The most common presenting symptom was itching (86%), followed by jaundice (71%), abdominal pain (71%), nausea (57%), and fever (43%). Blood work revealed a mixed hepatocellular and cholestatic pattern. Median peak ALT was 362 U/L, peak alkaline phosphatase was 294 U/L, and peak total bilirubin was 20.1 mg/dL. Despite these changes, patients did not have significant liver dysfunction, such as coagulopathy.

Following this clinical characterization, Dr. Navarro reviewed existing toxicity data. Rat studies suggest that kratom is safe at doses of 1-10 mg/kg, with toxicity occurring after prolonged exposure to more than 100 mg/kg. A cross-sectional human study reported that kratom was safe at doses up to 75 mg/day. However, in the present case series, some patients presented after ingesting as little as 0.66 mg/day, and Dr. Navarro pointed out wide variations in the mitragynine concentration of kratom products.
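The dose figures above mix units (mg/kg in rats, mg/day in humans). A rough sketch can put them on a common basis; the 70-kg body weight is an assumption for illustration only, and simple weight scaling ignores allometric differences between rats and humans:

```python
# Illustrative only: scale the reported rat-safe dose range (mg/kg) to a
# hypothetical 70-kg adult. This ignores allometric scaling between species.
body_weight_kg = 70                  # assumed adult weight (not from the study)
rat_safe_low_mg_per_kg = 1           # reported lower bound of safe rat dose
rat_safe_high_mg_per_kg = 10         # reported upper bound of safe rat dose

scaled_low = rat_safe_low_mg_per_kg * body_weight_kg    # 70 mg
scaled_high = rat_safe_high_mg_per_kg * body_weight_kg  # 700 mg

print(f"Rat-safe range scaled to 70 kg: {scaled_low}-{scaled_high} mg")
```

Set against the 75 mg/day reported as safe in the human study and the 0.66 mg/day some injured patients reported ingesting, the spread illustrates why Dr. Navarro called for more human toxicity data.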

“Certainly, we need more human toxicity studies to determine what a safe dose really is, because this product is not going away,” Dr. Navarro said.

The investigators disclosed relationships with Gilead, Bristol-Myers Squibb, Sanofi, and others.

SOURCE: Navarro VJ et al. The Liver Meeting 2019, Abstract 212.


REPORTING FROM THE LIVER MEETING 2019


Fentanyl-related deaths show strong regional pattern


Fentanyl was involved in more overdose deaths than any other drug in 2017, and the death rate in New England was 15 times higher than in regions of the Midwest and West, according to the National Center for Health Statistics.

Nationally, fentanyl was involved in 39% of all drug overdose deaths and had an age-adjusted death rate of 8.7/100,000 standard population in 2017. In 2016, when fentanyl also was the most involved drug in the United States, the corresponding figures were 29% and 5.9/100,000, the agency said in a recent report.

Fentanyl was the most involved drug in overdose deaths for 6 of the country’s 10 public health regions in 2017, with a clear pattern of death rates decreasing from east to west. The highest death rate (22.5/100,000) occurred in Region 1 (New England) and the lowest rates (1.5/100,000) came in Region 6 (Arkansas, Louisiana, New Mexico, Oklahoma, and Texas) and Region 9 (Arizona, California, Hawaii, and Nevada), the researchers said.
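A quick arithmetic check of the regional spread, using only the two rates quoted above:

```python
# Verify the "15 times higher" claim from the two reported rates
# (age-adjusted fentanyl deaths per 100,000 standard population, 2017).
region_1_rate = 22.5   # Region 1 (New England), highest
lowest_rate = 1.5      # Regions 6 and 9, lowest

ratio = region_1_rate / lowest_rate
print(f"Region 1 rate is {ratio:.0f}x the lowest regional rate")  # 15x
```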

A somewhat similar pattern was seen for heroin, which was second nationally on the list of drugs most frequently involved in overdose deaths (23%), except that New England was somewhat below three other regions in the East and upper Midwest. The highest heroin death rate (8.6/100,000) was seen in Region 2 (New Jersey and New York) and the lowest (2.2) occurred in Region 9, they said, based on data from the National Vital Statistics System’s mortality files.

The fentanyl pattern was even more closely repeated with cocaine, third in involvement nationally at 21% of overdose deaths in 2017. The highest rate (9.5/100,000) again came in Region 1, and the lowest (1.3) in Region 9, along with Region 7 (Iowa, Kansas, Missouri, and Nebraska) and Region 10 (Alaska, Idaho, Oregon, and Washington), the report showed.

The regional pattern of overdose deaths for methamphetamine, which was fourth nationally in involvement (13.3%), was basically the reverse of the other three drugs: highest in the West and lowest in the Northeast. Region 9 had the highest death rate (5.2/100,000) and Region 2 the lowest (0.4), with Region 1 just ahead at 0.6.

“Understanding these regional differences in the drugs most frequently involved in drug overdose deaths may help inform prevention and policy efforts,” the investigators wrote.


MS-related disability may be decreasing


The long-term prognosis of multiple sclerosis has improved markedly around the world, according to an overview provided at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis. Data consistently indicate that the time that elapses before a patient requires a cane for ambulation has increased, and survival has likewise improved. “Some of the improvement can be attributed confidently to treatment effect,” said Ilya Kister, MD, associate professor of neurology at NYU Langone Health in New York. “We hope to see an even greater change with newer therapies.”

Dr. Ilya Kister

At the same time, neurologists appear to be diagnosing more cases of MS than they previously did, said Dr. Kister, which suggests that milder cases are increasingly being identified. The overall societal burden of MS remains high.

The relative prevalence of mild disability has increased

About 25 years have elapsed since the first disease-modifying treatment (DMT) for MS became available, and treatment has become widespread during that time. Dr. Kister and colleagues sought to determine whether the current clinical population of patients with MS, who for the most part receive DMTs, has less disability than untreated patients or patients from natural history studies do. They identified the MS Severity Score (MSSS) as a measure with which to compare populations. The MSSS assigns a patient a ranking according to his or her level of disability, using a reference population of patients with the same disease duration for comparison. “MSSS can be conceptualized as rate of disability accumulation,” said Dr. Kister. “Lower MSSS corresponds to relatively slower disability accumulation, and higher MSSS to higher disability accumulation.”

The MSSS was developed using the Expanded Disability Status Scale (EDSS) score as a measure of disability. Because many neurologists do not routinely obtain EDSS scores for their patients, Dr. Kister and colleagues used the Patient-Determined Disease Steps (PDDS) to measure disability. As its name implies, the PDDS is a patient-reported outcome measure that mainly measures ambulation. It correlates strongly with EDSS, said Dr. Kister. He and colleagues used the PDDS to develop a reference table of MS disability, which they called the Patient-Derived MSSS.

The investigators examined a large sample of patients at NYU MS Center and Barnabas MS Center in Livingston, N.J. They grouped patients into sextiles according to their Patient-Derived MSSS. Dr. Kister and colleagues found that, rather than arriving at sextiles that contained equal numbers of patients, as would be expected if disability were distributed as in the reference population, they had significantly more patients in the two lowest sextiles and significantly fewer patients in the two highest sextiles. “This [result] suggests that the disability curve has indeed shifted toward the more benign end of the spectrum in the contemporary clinic population,” said Dr. Kister.
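The sextile comparison can be sketched as a goodness-of-fit test: under the reference distribution, each Patient-Derived MSSS sextile should contain one-sixth of the clinic sample. The counts below are hypothetical (the presentation did not report raw numbers); only the logic of the comparison is illustrated:

```python
# Hypothetical patients per MSSS sextile, mildest (1st) to most severe (6th).
observed = [290, 240, 180, 160, 120, 110]

n = sum(observed)
expected = n / 6                     # equal sextiles under the null hypothesis

# Pearson chi-square goodness-of-fit statistic (5 degrees of freedom)
chi_sq = sum((o - expected) ** 2 / expected for o in observed)
print(f"chi-square = {chi_sq:.1f} on 5 df")  # 133.8, far above the 5% critical value (11.07)
```

An excess in the lowest sextiles, as in this sketch, is the pattern Dr. Kister described: the clinic population is shifted toward the benign end of the reference distribution.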

Other researchers have observed a similar phenomenon. George et al. published the results of a large, international collaboration in Neurology Genetics in 2016. After examining more than 7,000 patients, the investigators noted a similar overrepresentation of patients with milder severity scores and underrepresentation of patients with higher severity scores. These results support the hypothesis of a shift toward milder disability, said Dr. Kister.

 

 

Trend toward milder disability

The investigators next examined whether the rate of accumulation of disability among patients with MS had changed from year to year since DMTs were introduced. They conducted a univariate analysis of MSSS for 6,238 patients who were enrolled in the N.Y. State MS Consortium during 1996-2007. They found that patients who were enrolled in more recent years had significantly lower MSSS than patients who were enrolled in earlier years, regardless of disease duration. When Dr. Kister and colleagues replicated their analysis using EDSS, they found significantly lower levels of disability for patients enrolled in more recent years, except for patients with disease duration of 26-30 years. A multivariate analysis showed that the median MSSS of enrollees into the N.Y. State MS Consortium decreased from 5.04 in 1996 to 3.78 in 2006.

In a subsequent study, Dr. Kister and colleagues examined the age at which patients in the MSBase registry reached various disability milestones (e.g., an EDSS of 6, which indicates the need for a cane to walk outdoors), according to their year of enrollment in the registry. They found a significant increase in age at milestone achievement with each subsequent calendar year. For example, for every consecutive year of enrollment, the age at which patients attained an EDSS of 6 increased by 0.38 years. These analyses were confirmed for the subgroups of patients diagnosed according to the Poser and McDonald criteria. The increase in age “is probably not just related to the shift in diagnostic criteria,” said Dr. Kister. When the researchers calculated the net average gain over the 13-year follow-up period, they found that patients who entered at the end of the enrollment period were 4.9 years older when they reached an EDSS of 6 than were patients who reached that milestone after entering at the beginning of the enrollment period.
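The net gain quoted above follows directly from the per-year trend:

```python
# Reproduce the reported net gain: age at EDSS 6 rose by 0.38 years per
# calendar year of enrollment, over a 13-year enrollment window.
gain_per_enrollment_year = 0.38   # years of age at EDSS 6, per enrollment year
window_years = 13                 # length of the enrollment/follow-up period

net_gain = gain_per_enrollment_year * window_years
print(f"net gain: {net_gain:.1f} years")  # 4.9 years, matching the report
```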

International data show similar trends

Research conducted around the world shows similar trends, said Dr. Kister. In 2009, Veugelers et al. published the results of a study that included 1,752 patients with MS in Nova Scotia. Before the 1998 introduction of a drug insurance program that provides DMTs, the time to an EDSS of 6 was 14.4 years. After the introduction of this program, the time to EDSS of 6 was 18.6 years.

More recently, Capra et al. examined 1,324 patients with MS who attended an MS center in Brescia, Italy, during 1980-2010. They found that the age at which 50% of patients reached an EDSS of 6 was approximately 55 years in 1990. By 2010, the age at achieving this milestone had increased to approximately 63 years.

In a prospective study, Cree et al. examined the evolution of disability in 448 actively treated patients with relapsing-remitting MS and 69 patients with progressive MS. Approximately 45% of patients had no disability worsening during a 10-year follow-up period. Furthermore, a comparatively low 11% of patients had reached an EDSS of 6 at 10 years. The average disease duration of the cohort at that time was 17 years, said Dr. Kister. The results indicated that about 50% of patients would be expected to reach an EDSS of 6 after a disease duration of approximately 38 years, “which is much longer than in the natural history studies,” he added.

In 2019, Beiki et al. found that among patients with relapsing-remitting MS, the risk of reaching an EDSS of 6 decreased by 7% with each subsequent calendar year of diagnosis. The researchers did not observe a similar trend among patients with progressive MS. Their population-based, retrospective study included 7,331 patients in Sweden.

Two additional studies in Scandinavian populations add to the evidence of decreasing disability. In their examination of Swedish patients with MS who received a diagnosis of MS during 1968-2012, Burkill et al. found that the risk of death decreased over time. The hazard ratio of mortality for patients with MS, compared with a non-MS comparator group, decreased from 6.52 among those diagnosed during 1968-1980 to 2.08 for patients diagnosed during 2001-2012. The decrease in the risk of mortality was greater among patients with MS than in a matched comparator population. Similarly, in a nationwide, population-based study, Koch-Henriksen et al. found that all-cause excess mortality in Danish patients with MS decreased from 1950 through 1999.

 

 

The role of DMTs

The evidence suggests that DMTs are affecting the long-term progression of MS, said Dr. Kister. Palace et al. compared patients with MS in the United Kingdom who received interferon-beta with a modeled untreated cohort of patients in British Columbia. They found that treated patients reached an EDSS of 6 four years later than did untreated patients.

Furthermore, an analysis by Brown et al. showed that the time to conversion to secondary progressive MS was longer among treated patients, compared with untreated patients. The risk of conversion was lower for patients treated with newer, more effective therapies (i.e., fingolimod, alemtuzumab, or natalizumab) than for those treated with glatiramer acetate or interferon beta.

Finally, Kingwell and colleagues examined the effect of treatment with interferon-beta on survival using an international cohort of approximately 6,000 patients with relapsing-remitting MS. They found that exposure to interferon-beta for more than 3 years was associated with a 32% reduction in the risk of mortality. They observed no similar risk reduction among patients exposed to interferon-beta for 6 months to 3 years.

Although these data are encouraging, other evidence indicates that the prevalence of MS in the United States has increased considerably in the past 40 years. Researchers estimate that 1 million Americans have MS, which “suggests that we are diagnosing many more mild cases,” said Dr. Kister. The burden of the disease remains high, he concluded.

Dr. Kister reported receiving consulting fees or research grants from Biogen, Roche, Genzyme, and Genentech.

SOURCE: Kister I et al. ECTRIMS 2019. Abstract 281754.

Neurology Reviews - 27(12)

The long-term prognosis of multiple sclerosis has improved markedly around the world,, according to an overview provided at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis. Data consistently indicate that the time that elapses before a patient requires a cane for ambulation has increased, and survival has likewise improved. “Some of the improvement can be attributed confidently to treatment effect,” said Ilya Kister, MD, associate professor of neurology at NYU Langone Health in New York. “We hope to see an even greater change with newer therapies.”

Dr. Ilya Kister

At the same time, neurologists appear to be diagnosing more cases of MS than they previously did, said Dr. Kister, which suggests that neurologists probably are diagnosing milder cases. The overall societal burden of MS remains high.

The relative prevalence of mild disability has increased

About 25 years have elapsed since the first disease-modifying treatment (DMT) for MS became available, and treatment has become widespread during that time. Dr. Kister and colleagues sought to determine whether the current clinical population of patients with MS, who for the most part receive DMTs, has less disability than do untreated patients or patients from natural history studies do. They identified the MS Severity Score (MSSS) as a measure with which to compare populations. The MSSS assigns a patient a ranking according to his or her level of disability, using a reference population of patients with the same disease duration for comparison. “MSSS can be conceptualized as rate of disability accumulation,” said Dr. Kister. “Lower MSSS corresponds to relatively slower disability accumulation, and higher MSSS to higher disability accumulation.”

The MSSS was developed using the Expanded Disability Status Scale (EDSS) score as a measure of disability. Because many neurologists do not routinely obtain EDSS scores for their patients, Dr. Kister and colleagues used the Patient-Determined Disease Steps (PDDS) to measure disability. As its name implies, the PDDS is a patient-reported outcome measure that mainly measures ambulation. It correlates strongly with EDSS, said Dr. Kister. He and colleagues used the PDDS to develop a reference table of MS disability, which they called the Patient-Derived MSSS.

The investigators examined a large sample of patients at NYU MS Center and Barnabas MS Center in Livingston, N.J. They grouped patients into sextiles according to their Patient-Derived MSSS. Dr. Kister and colleagues found that, rather than arriving at sextiles that contained equal numbers of patients, as would be expected if disability were distributed as in the reference population, they had significantly more patients in the two lowest sextiles and significantly fewer patients in the two highest sextiles. “This [result] suggests that the disability curve has indeed shifted toward the more benign end of the spectrum in the contemporary clinic population,” said Dr. Kister.

Other researchers have observed a similar phenomenon. George et al. published the results of a large, international collaboration in Neurology Genetics in 2016. After examining more than 7,000 patients, the investigators noted a similar overrepresentation of patients with milder severity scores and underrepresentation of patients with higher severity scores. These results support the hypothesis of a shift toward milder disability, said Dr. Kister.

 

 

Trend toward milder disability

The investigators next examined whether the rate of accumulation of disability among patients with MS had changed from year to year since DMTs were introduced. They conducted a univariate analysis of MSSS for 6,238 patients who were enrolled in the N.Y. State MS Consortium during 1996-2007. They found that patients who were enrolled in more recent years had significantly lower MSSS than patients who were enrolled in earlier years, regardless of disease duration. When Dr. Kister and colleagues replicated their analysis using EDSS, they found significantly lower levels of disability for patients enrolled in more recent years, except for patients with disease duration of 26-30 years. A multivariate analysis showed that the median MSSS of enrollees into the N.Y. State MS Consortium decreased from 5.04 in 1996 to 3.78 in 2006.

In a subsequent study, Dr. Kister and colleagues examined the age at which patients in the MSBase registry reached various disability milestones (e.g., EDSS of 6, which indicates the need of a cane to walk outdoors), according to their year of enrollment in the registry. They found a significant increase in age at milestone achievement with each subsequent calendar year. For example, for every consecutive year of enrollment, the age at which patients attained an EDSS of 6 increased by 0.38 years. These analyses were confirmed for the subgroups of patients diagnosed according to the Poser and McDonald criteria. The increase in age “is probably not just related to the shift in diagnostic criteria,” said Dr. Kister. When the researchers calculated the net average gains in years over the 13-year follow-up period, they found that patients who entered at the end of the enrollment period were 4.9 years older when they reached an EDSS of 6, compared with patients with an EDSS of 6 who entered at the beginning of the enrollment period.

International data show similar trends

Research conducted around the world shows similar trends, said Dr. Kister. In 2009, Veugelers et al. published the results of a study that included 1,752 patients with MS in Nova Scotia. Before the 1998 introduction of a drug insurance program that provides DMTs, the time to an EDSS of 6 was 14.4 years. After the introduction of this program, the time to EDSS of 6 was 18.6 years.

More recently, Capra et al. examined 1,324 patients with MS who attended an MS center in Brescia, Italy, during 1980-2010. They found that the age at which 50% of patients reached an EDSS of 6 was approximately 55 years in 1990. By 2010, the age at achieving this milestone had increased to approximately 63 years.

In a prospective study, Cree et al. examined the evolution of disability in 448 actively treated patients with relapsing-remitting MS and 69 patients with progressive MS. Approximately 45% of patients had no disability worsening during a 10-year follow-up period. Furthermore, a comparatively low 11% of patients had reached an EDSS of 6 at 10 years. The average disease duration of the cohort at that time was 17 years, said Dr. Kister. The results indicated that about 50% of patients would be expected to reach an EDSS of 6 after a disease duration of approximately 38 years, “which is much longer than in the natural history studies,” he added.

In 2019, Beiki et al. found that among patients with relapsing-remitting MS, the risk of reaching an EDSS of 6 decreased by 7% with each subsequent calendar year of diagnosis. The researchers did not observe a similar trend among patients with progressive MS. Their population-based, retrospective study included 7,331 patients in Sweden.

Two additional studies in Scandinavian populations add to the evidence of decreasing disability. In their examination of Swedish patients with MS who received a diagnosis of MS during 1968-2012, Burkill et al. found that the risk of death decreased over time. The hazard ratio of mortality for patients with MS, compared with a non-MS comparator group, decreased from 6.52 among those diagnosed during 1968-1980 to 2.08 for patients diagnosed during 2001-2012. The decrease in the risk of mortality was greater among patients with MS than in a matched comparator population. Similarly, in a nationwide, population-based study, Koch-Henriksen et al. found that all-cause excess mortality in Danish patients with MS decreased from 1950 through 1999.

 

 

The role of DMTs

The evidence suggests that DMTs are affecting the long-term progression of MS, said Dr. Kister. Palace et al. compared patients with MS in the UK who received treatment with interferon-beta with a modeled untreated cohort of patients in British Columbia. They found that treated patients reached an EDSS of 6 4 years later than did untreated patients.

Furthermore, an analysis by Brown et al. showed that the time to conversion to secondary progressive MS was longer among treated patients, compared with untreated patients. The risk of conversion was lower for patients treated with newer, more effective therapies (i.e., fingolimod, alemtuzumab, or natalizumab) than for those treated with glatiramer acetate or interferon beta.

Finally, Kingwell and colleagues examined the effect of treatment with interferon-beta on survival using an international cohort of approximately 6,000 patients with relapsing-remitting MS. They found that exposure to interferon-beta for more than 3 years was associated with a 32% reduction in the risk of mortality. They observed no similar risk reduction among patients exposed to interferon-beta for 6 months to 3 years.

Although these data are encouraging, other evidence indicates that the prevalence of MS in the United States has increased considerably in the past 40 years. Researchers estimate that 1 million Americans have MS, which “suggests that we are diagnosing many more mild cases,” said Dr. Kister. The burden of the disease remains high, he concluded.

Dr. Kister reported receiving consulting fees or research grants from Biogen, Roche, Genzyme and Genentech.

SOURCE: Kister I et al. ECTRIMS 2019. Abstract 281754.

The long-term prognosis of multiple sclerosis has improved markedly around the world,, according to an overview provided at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis. Data consistently indicate that the time that elapses before a patient requires a cane for ambulation has increased, and survival has likewise improved. “Some of the improvement can be attributed confidently to treatment effect,” said Ilya Kister, MD, associate professor of neurology at NYU Langone Health in New York. “We hope to see an even greater change with newer therapies.”

Dr. Ilya Kister

At the same time, neurologists appear to be diagnosing more cases of MS than they previously did, said Dr. Kister, which suggests that neurologists probably are diagnosing milder cases. The overall societal burden of MS remains high.

The relative prevalence of mild disability has increased

About 25 years have elapsed since the first disease-modifying treatment (DMT) for MS became available, and treatment has become widespread during that time. Dr. Kister and colleagues sought to determine whether the current clinical population of patients with MS, who for the most part receive DMTs, has less disability than do untreated patients or patients from natural history studies do. They identified the MS Severity Score (MSSS) as a measure with which to compare populations. The MSSS assigns a patient a ranking according to his or her level of disability, using a reference population of patients with the same disease duration for comparison. “MSSS can be conceptualized as rate of disability accumulation,” said Dr. Kister. “Lower MSSS corresponds to relatively slower disability accumulation, and higher MSSS to higher disability accumulation.”

The MSSS was developed using the Expanded Disability Status Scale (EDSS) score as a measure of disability. Because many neurologists do not routinely obtain EDSS scores for their patients, Dr. Kister and colleagues used the Patient-Determined Disease Steps (PDDS) to measure disability. As its name implies, the PDDS is a patient-reported outcome measure that mainly measures ambulation. It correlates strongly with EDSS, said Dr. Kister. He and colleagues used the PDDS to develop a reference table of MS disability, which they called the Patient-Derived MSSS.

The investigators examined a large sample of patients at NYU MS Center and Barnabas MS Center in Livingston, N.J. They grouped patients into sextiles according to their Patient-Derived MSSS. Dr. Kister and colleagues found that, rather than arriving at sextiles that contained equal numbers of patients, as would be expected if disability were distributed as in the reference population, they had significantly more patients in the two lowest sextiles and significantly fewer patients in the two highest sextiles. “This [result] suggests that the disability curve has indeed shifted toward the more benign end of the spectrum in the contemporary clinic population,” said Dr. Kister.
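The sextile comparison described above can be sketched as a simple goodness-of-fit check. The counts below are hypothetical, not the study's data; the point is only that, under the reference distribution, each sextile should hold about one-sixth of patients, and an excess in the low sextiles indicates a shift toward milder disability:

```python
# Hypothetical sextile counts, from mildest (sextile 1) to most severe (sextile 6).
observed = [310, 295, 210, 190, 150, 145]

total = sum(observed)
expected = total / 6  # equal counts expected under the reference distribution

# Pearson chi-square goodness-of-fit statistic, df = 6 - 1 = 5
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value for df = 5 at alpha = 0.05 is 11.07
print(f"chi2 = {chi2:.1f}; departs from uniform distribution: {chi2 > 11.07}")
```

In this hypothetical sample, the statistic is far above the df = 5 critical value, and the excess sits in the two lowest sextiles, mirroring the pattern the investigators reported.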

Other researchers have observed a similar phenomenon. George et al. published the results of a large, international collaboration in Neurology Genetics in 2016. After examining more than 7,000 patients, the investigators noted a similar overrepresentation of patients with milder severity scores and underrepresentation of patients with higher severity scores. These results support the hypothesis of a shift toward milder disability, said Dr. Kister.

Trend toward milder disability

The investigators next examined whether the rate of accumulation of disability among patients with MS had changed from year to year since DMTs were introduced. They conducted a univariate analysis of MSSS for 6,238 patients who were enrolled in the N.Y. State MS Consortium during 1996-2007. They found that patients who were enrolled in more recent years had significantly lower MSSS than patients who were enrolled in earlier years, regardless of disease duration. When Dr. Kister and colleagues replicated their analysis using EDSS, they found significantly lower levels of disability for patients enrolled in more recent years, except for patients with disease duration of 26-30 years. A multivariate analysis showed that the median MSSS of enrollees into the N.Y. State MS Consortium decreased from 5.04 in 1996 to 3.78 in 2006.

In a subsequent study, Dr. Kister and colleagues examined the age at which patients in the MSBase registry reached various disability milestones (e.g., an EDSS of 6, which indicates the need for a cane to walk outdoors), according to their year of enrollment in the registry. They found a significant increase in age at milestone achievement with each subsequent calendar year. For example, for every consecutive year of enrollment, the age at which patients attained an EDSS of 6 increased by 0.38 years. These analyses were confirmed for the subgroups of patients diagnosed according to the Poser and McDonald criteria. The increase in age “is probably not just related to the shift in diagnostic criteria,” said Dr. Kister. When the researchers calculated the net average gain in years over the 13-year follow-up period, they found that patients who entered at the end of the enrollment period were 4.9 years older when they reached an EDSS of 6 than patients who entered at the beginning of the enrollment period.

International data show similar trends

Research conducted around the world shows similar trends, said Dr. Kister. In 2009, Veugelers et al. published the results of a study that included 1,752 patients with MS in Nova Scotia. Before the 1998 introduction of a drug insurance program that provides DMTs, the time to an EDSS of 6 was 14.4 years. After the introduction of this program, the time to EDSS of 6 was 18.6 years.

More recently, Capra et al. examined 1,324 patients with MS who attended an MS center in Brescia, Italy, during 1980-2010. They found that the age at which 50% of patients reached an EDSS of 6 was approximately 55 years in 1990. By 2010, the age at achieving this milestone had increased to approximately 63 years.

In a prospective study, Cree et al. examined the evolution of disability in 448 actively treated patients with relapsing-remitting MS and 69 patients with progressive MS. Approximately 45% of patients had no disability worsening during a 10-year follow-up period. Furthermore, a comparatively low 11% of patients had reached an EDSS of 6 at 10 years. The average disease duration of the cohort at that time was 17 years, said Dr. Kister. The results indicated that about 50% of patients would be expected to reach an EDSS of 6 after a disease duration of approximately 38 years, “which is much longer than in the natural history studies,” he added.

In 2019, Beiki et al. found that among patients with relapsing-remitting MS, the risk of reaching an EDSS of 6 decreased by 7% with each subsequent calendar year of diagnosis. The researchers did not observe a similar trend among patients with progressive MS. Their population-based, retrospective study included 7,331 patients in Sweden.

Two additional studies in Scandinavian populations add to the evidence of decreasing disability. In their examination of Swedish patients with MS who received a diagnosis of MS during 1968-2012, Burkill et al. found that the risk of death decreased over time. The hazard ratio of mortality for patients with MS, compared with a non-MS comparator group, decreased from 6.52 among those diagnosed during 1968-1980 to 2.08 for patients diagnosed during 2001-2012. The decrease in the risk of mortality was greater among patients with MS than in a matched comparator population. Similarly, in a nationwide, population-based study, Koch-Henriksen et al. found that all-cause excess mortality in Danish patients with MS decreased from 1950 through 1999.

The role of DMTs

The evidence suggests that DMTs are affecting the long-term progression of MS, said Dr. Kister. Palace et al. compared patients with MS in the UK who received treatment with interferon-beta with a modeled untreated cohort of patients in British Columbia. They found that treated patients reached an EDSS of 6 four years later than untreated patients did.

Furthermore, an analysis by Brown et al. showed that the time to conversion to secondary progressive MS was longer among treated patients, compared with untreated patients. The risk of conversion was lower for patients treated with newer, more effective therapies (i.e., fingolimod, alemtuzumab, or natalizumab) than for those treated with glatiramer acetate or interferon beta.

Finally, Kingwell and colleagues examined the effect of treatment with interferon-beta on survival using an international cohort of approximately 6,000 patients with relapsing-remitting MS. They found that exposure to interferon-beta for more than 3 years was associated with a 32% reduction in the risk of mortality. They observed no similar risk reduction among patients exposed to interferon-beta for 6 months to 3 years.

Although these data are encouraging, other evidence indicates that the prevalence of MS in the United States has increased considerably in the past 40 years. Researchers estimate that 1 million Americans have MS, which “suggests that we are diagnosing many more mild cases,” said Dr. Kister. The burden of the disease remains high, he concluded.

Dr. Kister reported receiving consulting fees or research grants from Biogen, Roche, Genzyme and Genentech.

SOURCE: Kister I et al. ECTRIMS 2019. Abstract 281754.

Issue
Neurology Reviews- 27(12)

EXPERT ANALYSIS FROM ECTRIMS 2019

Publish date: November 12, 2019

Integrating lay navigation programs into cancer care is a challenge

Article Type
Changed
Tue, 11/12/2019 - 14:24

The implementation of a lay navigation program as part of the delivery of cancer care presents a number of challenges, particularly because of the complexities of treating cancer, a new study has found.

Researchers examined a lay navigator program at the University of Alabama, in which nonclinical staff help provide cancer patients with information about their treatments, and assessed how the program was faring a year after its implementation.

“Integrating lay navigators into a complex clinical environment needs careful consideration at an organizational level, because this experience demonstrates that the integration process is not straightforward for clinical teams, patients, or navigators,” wrote Laura M. Holdsworth, PhD, Stanford (Calif.) University, and colleagues. Their report is in the Journal of Oncology Practice.

A key difficulty uncovered by the research is that approximately two-thirds of the concerns patients brought to the lay navigators were clinical in nature, though only about a third (30%) required clinical follow-up and an additional 7% required social work follow-up.

“This seeming misalignment of nonclinical staff handling clinical issues is likely explained by the fact that clinical issues were often a request to repeat information previously delivered by a clinician,” the authors wrote. “The high proportion of clinical concerns brought to navigators is likely a consequence of navigators proactively contacting patients and thus being perceived as an accessible extension of the clinical team with whom to raise concern.”

Researchers found that clinical team members did find navigators useful to the care team “specifically because they brought clinical issues to the attention of the clinical team that might otherwise have been missed.”

But on the other hand, some clinicians believed that “clinical concerns being raised with navigators were a source of concern and felt to be inappropriate, suggesting a lack of compatibility of a lay service layered onto complex clinical care,” Dr. Holdsworth and colleagues stated, adding that nurses “with a negative view of lay navigation seemed to lack knowledge and information about the navigator role, which created trust issues.”

Another potential issue is staff turnover within the navigator program, set against an ever-changing landscape of patient education and support programs. Navigators can be valuable in connecting patients with resources such as support groups or insurers, but knowing, understanding, and keeping up to date with all the nonclinical services available to patients takes significant effort, and high turnover makes that harder.

Overall, though, the researchers note that the key finding “was that it was difficult to implement a lay navigation program outside of the clinical team for the purposes of cancer care coordination. The navigators were not integrated into the clinical teams and as such, the navigator role was treated with some suspicion by clinical team members; there was a sense of mistrust among some clinicians, and mismatched expectations around what navigators could or should be doing.”

SOURCE: Holdsworth L et al. J Oncol Pract, 2019 Nov. 6. doi: 10.1200/JOP.19.00339.


FROM JOURNAL OF ONCOLOGY PRACTICE