Firearm-related deaths show recent increase
After years of relative stability, firearm-related mortality in the United States rose sharply starting in 2015, according to an analysis of a national mortality database.

U.S. firearm mortality was 10.4 per 100,000 person-years during 1999-2014 – the period’s high occurred in 2012, and the rate fell in each of the next 2 years – compared with 11.8 per 100,000 during 2015-2017, an increase of 13.8%, Jason E. Goldstick, PhD, and associates wrote Oct. 8 in Health Affairs.
The majority of the 612,310 firearm deaths over the entire study period were suicides, with the proportion rising slightly from 58.6% in 1999-2014 to 60.0% in 2015-2017. Homicides made up 38.5% of deaths in 1999-2014 and 37.9% in 2015-2017, while the combined share of unintentional and undetermined deaths dropped from 2.9% to 2.1%, the investigators reported.
The increase was geographically widespread, Dr. Goldstick of the University of Michigan, Ann Arbor, noted in a separate written statement.
That geographic breadth is apparent when the change in mortality from 1999-2014 to 2015-2017 is calculated for each locale: 29 states had an increase of more than 20%, while only 3 states (California, New York, and Rhode Island) and the District of Columbia had a decrease of at least 12.5%, they said. The data came from the Centers for Disease Control and Prevention’s Wide-ranging Online Data for Epidemiologic Research (WONDER) tool.
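As a quick arithmetic check of the headline comparison, the sketch below recomputes the percent increase from the rounded rates quoted above; the rounded inputs yield roughly 13.5%, slightly below the published 13.8%, which was presumably calculated from unrounded rates.

```python
# Percent change in U.S. firearm mortality between the two study periods,
# using the rounded rates reported above (deaths per 100,000 person-years).
rate_1999_2014 = 10.4
rate_2015_2017 = 11.8

pct_change = (rate_2015_2017 - rate_1999_2014) / rate_1999_2014 * 100
print(f"Increase: {pct_change:.1f}%")  # ~13.5%; the paper reports 13.8% from unrounded rates
```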
The divergent trends among states and subpopulations make it difficult to craft policy-based interventions. “The epidemiology of firearm violence is complex and varies based on the mechanism of death, demographic group under study, and regionally specific culture, making a one-size-fits-all solution inappropriate,” Dr. Goldstick and associates wrote.
The study was funded mainly by a grant from the National Institute of Child Health and Human Development. The investigators did not provide any information on conflicts of interest.
SOURCE: Goldstick JE et al. Health Aff. 2019;38(10):1646-52.
FROM HEALTH AFFAIRS
Online assessment identifies excess steroid use in IBD patients
An online assessment tool identified excess corticosteroid use among patients with inflammatory bowel disease (IBD), and a quality improvement program reduced steroid excess at participating centers, according to recent research in the journal Alimentary Pharmacology & Therapeutics.
Since excess corticosteroid use can be measured through an online assessment tool in clinical practice and long-term corticosteroid use is associated with adverse outcomes, it may serve as a quality marker for patients with IBD, wrote Christian P. Selinger, MD, from the Leeds (England) Gastroenterology Institute and colleagues. “Such key performance indicators have previously been lacking in IBD, unlike other disease areas such as diabetes and cardiovascular disease.”
Over a period of 3 months, Dr. Selinger and colleagues collected prospective data from 2,385 patients with IBD at 19 centers in England, Wales, and Scotland who had received steroids within the previous year. The researchers divided the centers into groups based on whether they participated in the quality improvement program (7 centers), were new to the process of collecting data on steroid use (11 centers), or did not participate in the program (1 center). The seven centers that participated in the intervention were part of an audit that began in 2017, while the other centers were evaluated over a 3-month period between April and July 2017. Patients were asked about their steroid use, including whether the steroids were prescribed for their IBD, how long each course lasted, how many courses they received, and whether they were able to stop using steroids without their symptoms returning.
The researchers found that 14.8% of patients had steroid excess or dependency. Patients at centers that participated in the quality improvement program had a lower rate of steroid exposure (23.8% vs. 31.0%; P < .001) and a lower rate of steroid excess (11.5% vs. 17.1%; P < .001) than did patients at sites that did not participate. At centers with the improvement program, steroid use also decreased over time, from 30.0% in 2015 to 23.8% in 2017 (P = .003), and steroid excess fell from 13.8% to 11.5% during that time (P = .17). The researchers noted that, in over half of cases (50.7%), the steroid excess was “avoidable.”
In patients with Crohn’s disease, steroid excess was less likely among those treated at an intervention center (odds ratio, 0.72; 95% confidence interval, 0.46-0.97), at a center with a multidisciplinary team (OR, 0.54; 95% CI, 0.20-0.86), or receiving maintenance anti–tumor necrosis factor therapy (OR, 0.61; 95% CI, 0.24-0.95); in contrast, patients who received aminosalicylates were more likely to have steroid excess (OR, 1.72; 95% CI, 1.24-2.09). Among patients with ulcerative colitis (UC), steroid excess was more likely with thiopurine monotherapy (OR, 1.97; 95% CI, 1.19-3.01) and less likely at an intervention center (OR, 0.72; 95% CI, 0.45-0.95).
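The adjusted odds ratios above come from multivariable models; the paper’s exact model specification is not reproduced here, but a generic sketch of how such ORs and 95% CIs are typically derived from patient-level data (file and column names are illustrative assumptions, not the study’s) might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level extract; column names are illustrative only.
df = pd.read_csv("ibd_steroid_audit.csv")  # one row per patient

# Logistic model: steroid excess (0/1) against center- and treatment-level factors.
model = smf.logit(
    "steroid_excess ~ intervention_center + multidisciplinary_team"
    " + maintenance_anti_tnf + aminosalicylate",
    data=df,
).fit()

# Exponentiate coefficients to obtain odds ratios with 95% confidence intervals.
ors = np.exp(model.params).rename("OR")
ci = np.exp(model.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([ors, ci], axis=1))
```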
The researchers said the online assessment cannot establish the reason for steroid excess and does not account for variables such as patient age, sex, IBD phenotype, and disease duration, but called it a “simple, pragmatic tool” that can be used in real time in a clinical setting.
“This advances the case for steroid excess as a potential key performance indicator of quality in an IBD service, although in order for clinicians to benchmark their service and provide targets for improvements, any numerical goal attached to this key performance indicator would require consideration of case mix. Further data, including from national and international contexts, is needed,” concluded Dr. Selinger and colleagues.
The authors reported AbbVie provided the funding to develop the steroid assessment tool, as well as honoraria for invited attendees of the quality improvement plan, which the company also sponsored.
To help your patients better understand their treatment options, share AGA’s IBD patient education, which is online at www.gastro.org/practice-guidance/gi-patient-center/topic/inflammatory-bowel-disease.
SOURCE: Selinger CP et al. Aliment Pharmacol Ther. 2019. doi: 10.1111/apt.15497.
FROM ALIMENTARY PHARMACOLOGY & THERAPEUTICS
Key clinical point: An online assessment tool can be used to identify patients with inflammatory bowel disease (IBD) receiving an excess of steroids, and a quality improvement program lowered excess steroids at centers that implemented the program.
Major finding: Of the patients in the study, 14.8% had steroid excess or dependency, and patients at centers that participated in the quality improvement program had a lower rate of steroid exposure (23.8% vs. 31.0%; P < .001) and a lower rate of steroid excess (11.5% vs. 17.1%; P < .001) than did patients at sites that did not participate in the program.
Study details: A prospective study of 2,385 patients with IBD at 19 centers in England, Wales, and Scotland.
Disclosures: The authors reported AbbVie provided the funding to develop the steroid assessment tool, as well as honoraria for invited attendees of the quality improvement plan, which the company also sponsored.
Source: Selinger CP et al. Aliment Pharmacol Ther. 2019. doi: 10.1111/apt.15497.
Considering the value of productivity bonuses
Connect high-value care with reimbursement
Physician payment models that include productivity bonuses are widespread, says Reshma Gupta, MD, MSHPM.
“These payment models are thought to affect clinician behavior, with productivity bonuses incentivizing clinicians to do more. While new policies aim to reduce total costs of care, little is known about the association between physician payment models and the culture of delivering high-value care,” said Dr. Gupta, the medical director for quality improvement at UCLA Health in Los Angeles.
To find out whether hospitalist reimbursement models are associated with high-value care culture in university, community, and safety-net hospitals, internal medicine hospitalists from 12 hospitals across California completed a cross-sectional survey assessing their perceptions of high-value care culture within their institutions. Dr. Gupta and colleagues summarized the results.
The study found that nearly 30% of hospitalists who were sampled reported payment with productivity bonuses, while only 5% of hospitalists sampled reported quality or value-based bonuses, Dr. Gupta said. “Hospitalists who reported payment with productivity bonuses were more likely to report lower high-value care culture within their programs.”
Hospitalist leaders interested in improving high-value care culture can use the High Value Care Culture Survey (http://www.highvaluecareculturesurvey.com) to quickly assess the culture within their programs, diagnose areas of opportunity, and target improvement efforts.
“They can test new physician payment models within their programs and evaluate their high-value care culture to identify areas of opportunity for improvement,” Dr. Gupta said.
Reference
1. Gupta R et al. Association between hospitalist productivity payments and high-value care culture. J Hosp Med. 2019;14(1):16-21.
Best treatment approach for early stage follicular lymphoma is unclear
Randomized trials are needed to determine the optimal treatment approach for early stage follicular lymphoma (FL), according to researchers.
A retrospective study showed similar outcomes among patients who received radiotherapy, immunochemotherapy, combined modality treatment (CMT), and watchful waiting (WW).
There were some differences in progression-free survival (PFS) according to treatment approach. However, there were no significant differences in overall survival (OS) between any of the active treatments or between patients who received active treatment and those managed with WW.
Joshua W. D. Tobin, MD, of Princess Alexandra Hospital in Brisbane, Queensland, Australia, and colleagues conducted this research and reported the results in Blood Advances.
The researchers analyzed 365 patients with newly diagnosed, stage I/II FL. The patients had a median age of 63 years and more than half were men. They were diagnosed between 2005 and 2017, and the median follow-up was 45 months.
Most patients (n = 280) received active treatment, but 85 were managed with WW. The WW patients were older and had more extranodal involvement.
Types of active treatment included radiotherapy alone (n = 171), immunochemotherapy alone (n = 63), and CMT (n = 46). Compared with the other groups, patients who received radiotherapy alone had less bulk, fewer nodal sites, and fewer B symptoms, and were more likely to have stage I disease. Patients who received CMT had fewer B symptoms and lower FLIPI scores compared with patients who received immunochemotherapy.
The immunochemotherapy regimens used were largely rituximab based. In all, 106 patients received rituximab (alone or in combination) for induction, and 49 received maintenance rituximab (37 in the immunochemotherapy group and 12 in the CMT group).
Results
Response rates were similar among the active treatment groups. The overall response rate was 95% in the radiotherapy group, 96% in the immunochemotherapy group, and 95% in the CMT group (P = .87).
There was a significant difference in PFS between the radiotherapy, immunochemotherapy, and CMT groups (P = .023), but there was no difference in OS between these groups (P = .38).
There was no significant difference in PFS between the immunochemotherapy and CMT groups (hazard ratio [HR], 1.78; P = .24), so the researchers combined these groups into a single group called “systemic therapy.” The patients treated with systemic therapy had PFS (HR, 1.32; P = .96) and OS (HR, 0.46; P = .21) similar to those of patients treated with radiotherapy alone.
Maintenance rituximab was associated with prolonged PFS among patients treated with systemic therapy (HR, 0.24; P = .017). However, there was no significant difference in OS between patients who received maintenance and those who did not (HR, 0.89; P = .90).
Relapse was less common among patients who received maintenance, and there were no cases of transformation in that group. Relapse occurred in 24.6% of the radiotherapy group, 18.3% of the systemic therapy group, and 4.1% of the group that received systemic therapy plus maintenance (P = .006). Transformation was less likely in the systemic therapy group (1.8%) than in the radiotherapy (6.4%) and WW (9.4%) groups (HR, 0.20; P = .034).
Overall, the active treatment group had better PFS than the WW group (HR, 0.52; P = .002), but there was no significant difference in OS between the groups (HR, 0.94; P = .90).
“Based on our comparable OS between WW and actively treated patients, WW could be considered as an initial management strategy in early stage FL,” Dr. Tobin and colleagues wrote. “However, long-term follow-up is required to determine if a survival benefit exists favoring active treatment.”
The researchers reported relationships with many pharmaceutical companies.
SOURCE: Tobin JWD et al. Blood Adv. 2019 Oct 8;3(19):2804-11.
FROM BLOOD ADVANCES
Investigators use ARMSS score to predict future MS-related disability
STOCKHOLM – Investigators have developed measures based on the Age-Related Multiple Sclerosis Severity (ARMSS) score that predict future MS-related disability, according to research presented at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis. The resulting measurement is stable, not highly sensitive to age, and appropriate for research applications. “It could give a clinician an earlier indication of the potential disease course of a patient,” said Ryan Ramanujam, PhD, assistant professor of translational neuroepidemiology at Karolinska Institutet in Stockholm.
Researchers who study MS use various scores to measure disease severity, including the Expanded Disability Status Scale (EDSS) and the MS Severity Scale (MSSS). These scores cannot predict a patient’s future status, however, and they do not remain stable throughout the course of a patient’s disease. Fitting a linear model to a series of scores collected over time can give a misleading impression of a patient’s disease progression. “What we need is a metric to give a holistic overview of disease course, regardless of when it’s measured in a patient’s disease progression,” said Dr. Ramanujam. Such a measurement could aid the search for genes that affect MS severity, he added.
Examining disability by patient age
Dr. Ramanujam and colleagues constructed their measure using the ARMSS score, which ranks EDSS score by age instead of by disease duration. The ARMSS score ranges from 0 to 10, and the median value is 5 for all patients at a given age. Investigators can calculate the score using a previously published global matrix of values for ARMSS and MSSS available in the R package ms.sev.
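The published implementation is the R package ms.sev; as a language-neutral illustration, the minimal Python sketch below reproduces the rank-by-age idea on a local cohort. Everything in it (file name, column names, and the within-cohort ranking standing in for the package’s reference matrix) is an assumption for illustration, not the authors’ code.

```python
import pandas as pd

# Hypothetical cohort extract: one row per visit, with age (years) and EDSS (0-10).
df = pd.read_csv("ms_visits.csv")  # columns: patient_id, age, edss

def armss_from_cohort(df: pd.DataFrame) -> pd.Series:
    """Rank EDSS within each age group and scale the ranks to a 0-10 score.

    This mirrors the ARMSS idea (EDSS ranked by age rather than by disease
    duration). In practice, the precomputed global matrix shipped with the R
    package ms.sev should be preferred, since it reflects a large reference
    population rather than the local cohort.
    """
    def scaled_rank(edss: pd.Series) -> pd.Series:
        # Midranks scaled so the median patient at each age scores about 5.
        return edss.rank(method="average") / (len(edss) + 1) * 10

    return df.groupby(df["age"].round())["edss"].transform(scaled_rank)

df["armss"] = armss_from_cohort(df)
```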
The investigators found that the ARMSS score is slightly superior to the MSSS in detecting small increases in EDSS. One benefit of the ARMSS score, compared with the MSSS, is that it allows investigators to study patients for whom time of disease onset is unknown. The ARMSS score also removes potential systematic bias that might result from a neurologist’s retrospective assignment of date of disease onset, said Dr. Ramanujam.
He and his colleagues used the ARMSS score to compare each patient’s disease course with the expected course (i.e., an ARMSS score that remains stable at 5). They extracted data for 15,831 patients participating in the Swedish MS registry, including age and EDSS score at each neurological visit. Eligible patients had serial EDSS scores spanning 10 years. Dr. Ramanujam and colleagues included 4,514 patients in their analysis.
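A sketch of that eligibility filter follows (hypothetical registry column names; the Swedish MS registry’s actual schema differs), keeping only patients whose serial EDSS records span at least 10 years.

```python
import pandas as pd

# Hypothetical registry extract: one row per neurological visit.
visits = pd.read_csv("ms_registry_visits.csv")  # columns: patient_id, visit_date, age, edss

visits["visit_date"] = pd.to_datetime(visits["visit_date"])

# Follow-up span in years per patient, from first to last recorded EDSS.
span = visits.groupby("patient_id")["visit_date"].agg(
    lambda d: (d.max() - d.min()).days / 365.25
)
eligible_ids = span[span >= 10].index

eligible = visits[visits["patient_id"].isin(eligible_ids)]
print(f"{eligible['patient_id'].nunique()} patients with >=10 years of serial EDSS")
```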
Measures at 2 years correlated with those at 10 years
The researchers created what they called the ARMSS integral by calculating the ARMSS score’s change from 5 at each examination (e.g., −0.5 or 1). “The ARMSS integral can be thought of as the cumulative disability that a patient accrues over his or her disease course, relative to the average patient, who had the disease for the same ages,” said Dr. Ramanujam. At 2 years of follow-up and at 10 years of follow-up, the distribution of ARMSS integrals for the study population followed a normal pattern.
Next, the investigators sought to compare patients by standardizing their follow-up time. To do this, they calculated what they called the ARMSS-rate by dividing each patient’s ARMSS integral by the number of years of follow-up. The ARMSS-rate offers a “snapshot of disease severity and progression,” said Dr. Ramanujam. When the researchers compared ARMSS-rates at 2 years and 10 years for each patient, they found that the measure was “extremely stable over time and strongly correlated with future disability.” The correlation improved slightly when the researchers compared ARMSS-rates at 4 years and 10 years.
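Putting the two definitions together, here is a minimal sketch of both quantities. It assumes a simple per-visit sum of deviations from the expected median of 5; the authors’ exact integration scheme (e.g., whether deviations are weighted by the time between visits) is not specified in this report.

```python
def armss_integral(scores):
    """Cumulative deviation of a patient's ARMSS scores from the expected
    median of 5, summed visit by visit (unweighted; an assumption)."""
    return sum(s - 5.0 for s in scores)

def armss_rate(scores, followup_years):
    """ARMSS integral standardized by length of follow-up."""
    return armss_integral(scores) / followup_years

# Example: yearly examinations over 4 years, slightly above-average severity.
scores = [5.5, 6.0, 5.5, 6.5]
print(armss_integral(scores))   # 3.5
print(armss_rate(scores, 4.0))  # 0.875
```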
The investigators then categorized patients based on their ARMSS-rate at 2 years (e.g., 0 to 1, 1 to 2, 2 to 3). When they compared the values in these categories with the median ARMSS-rates for the same individuals over the subsequent 8 years, they found strong group-level correlations.
To analyze correlations on an individual level, Dr. Ramanujam and colleagues examined the ability of different metrics at the time closest to 2 years of follow-up to predict those measured at 10 years. They assigned the value 1 to the most severe quartile of outcomes and the value 0 to all other quartiles. For predictors and outcomes, the investigators examined ARMSS-rate and the integral of progression index, which they calculated using the integral of EDSS. They also included EDSS at 10 years as an outcome for progression index.
For predicting the subsequent 8 years of ARMSS-rates, ARMSS-rate at 2 years had an area under the curve (AUC) of 0.921. When the investigators performed the same analysis using a cohort of patients with MS from British Columbia, Canada, they obtained an AUC of 0.887. Progression index at 2 years had an AUC of 0.61 for predicting the most severe quartile of the next 8 years. Compared with this result, ARMSS integral up to 2 years was slightly better at predicting EDSS at 10 years, said Dr. Ramanujam. The progression index poorly predicted the most severe quartile of EDSS at 10 years.
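The evaluation just described reduces to a standard AUC computation: label the most severe quartile of the later outcome as 1 and score the earlier metric as a predictor. The sketch below illustrates this on simulated stand-in data (the correlation structure is invented for the example, not taken from the study).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated stand-ins for the 2-year metric and the subsequent 8-year outcome.
rng = np.random.default_rng(0)
armss_rate_2y = rng.normal(size=500)
armss_rate_8y = 0.9 * armss_rate_2y + 0.3 * rng.normal(size=500)

# Binary target: 1 for the most severe quartile of the later outcome, 0 otherwise.
threshold = np.quantile(armss_rate_8y, 0.75)
y = (armss_rate_8y >= threshold).astype(int)

print(f"AUC: {roc_auc_score(y, armss_rate_2y):.3f}")
```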
The main limitation of the ARMSS integral and ARMSS-rate is that they are based on EDSS, he added. The EDSS gives great weight to mobility and largely does not measure cognitive disability. “Future metrics could therefore include additional data such as MRI, Symbol Digit Modalities Test, or neurofilament light levels,” said Dr. Ramanujam. “Also, self-assessment could be one area to improve in the future.”
Dr. Ramanujam had no conflicts of interest to disclose. He receives funding from the MultipleMS Project, which is part of the EU Horizon 2020 Framework.
REPORTING FROM ECTRIMS 2019
HCV+ kidney transplants: Similar outcomes to HCV- regardless of recipient serostatus
Kidneys from donors with hepatitis C virus (HCV) infection function well despite adverse quality assessment and are a valuable resource for transplantation candidates independent of HCV status, according to the findings of a large U.S. registry study.
A total of 260 HCV-viremic kidneys were transplanted in the first quarter of 2019, with 105 additional viremic kidneys being discarded, according to a report in the Journal of the American Society of Nephrology by Vishnu S. Potluri, MD, of the University of Pennsylvania, Philadelphia, and colleagues.
Donor HCV viremia was defined as an HCV nucleic acid test–positive result reported to the Organ Procurement and Transplantation Network (OPTN). Donors who were HCV negative in this test were labeled as HCV nonviremic. Kidney transplantation recipients were defined as either HCV seropositive or seronegative based on HCV antibody testing.
During the first quarter of 2019, 74% of HCV-viremic kidneys were transplanted into seronegative recipients, which is a major change from how HCV-viremic kidneys were allocated a few years ago, according to the researchers. They attributed the change to the results of small trials showing the benefits of such transplantations and to the success of direct-acting antiviral (DAA) therapy in clearing HCV infections.
HCV-viremic kidneys had similar function, compared with HCV-nonviremic kidneys, when matched on the donor elements included in the Kidney Donor Profile Index (KDPI), excluding HCV, they added. In addition, the 12-month estimated glomerular filtration rate (eGFR) was similar between the seropositive and seronegative recipients, at 65.4 and 71.1 mL/min per 1.73 m2, respectively (P = .05), which suggests that recipient HCV serostatus does not negatively affect 1-year graft function using HCV-viremic kidneys in the era of DAA treatments, according to the authors.
Also, among HCV-seropositive recipients of HCV-viremic kidneys, seven (3.4%) died by 1 year post transplantation, while none of the HCV-seronegative recipients of HCV-viremic kidneys experienced graft failure or death.
“These striking results provide important additional evidence that the KDPI, with its current negative weighting for HCV status, does not accurately assess the quality of kidneys from HCV-viremic donors,” the authors wrote.
“HCV-viremic kidneys are a valuable resource for transplantation. Disincentives for accepting these organs should be addressed by the transplantation community,” Dr. Potluri and colleagues concluded.
This work was supported in part by the Health Resources and Services Administration of the U.S. Department of Health & Human Services. The various authors reported grant funding and advisory board participation with a number of pharmaceutical companies.
SOURCE: Potluri VS et al. J Am Soc Nephrol. 2019;30:1939-51.
FROM JOURNAL OF THE AMERICAN SOCIETY OF NEPHROLOGY
Intensive cognitive training may be needed for memory gains in MS
STOCKHOLM – Cognitive rehabilitation to address memory deficits in multiple sclerosis (MS) can take a page from efforts to help those with other conditions, but practitioners and patients should realize that more intensive interventions are likely to be of greater benefit in MS.
High-intensity cognitive training appears to be the most effective intervention for addressing the memory problems frequently seen in MS, Piet Bouman reported at the annual congress of the European Committee for Treatment and Research in Multiple Sclerosis.
Hippocampal pathology can underlie the high-impact memory deficits that are seen frequently in patients with MS, noted Mr. Bouman, a doctoral student at Amsterdam University Medical Centers, and his collaborators. However, they observed, which strategies might best ameliorate hippocampal memory loss for those with MS is an open question.
To address this knowledge gap, Mr. Bouman and his coauthors conducted a systematic review and meta-analysis that aimed to determine which memory interventions in current use most help hippocampal memory functioning. The authors did not limit the review to MS, but included other conditions where hippocampal lesions, atrophy, or changes in connection or functioning may affect memory. These include healthy aging, mild cognitive impairment, and Alzheimer’s disease.
Included in the search for studies were those that used either cognitive or exercise interventions and also evaluated both visuospatial and verbal memory using validated measures, such as the Brief Visuospatial Memory Test or the California Verbal Learning Test.
After reviewing an initial 6,697 articles, the authors used Cochrane criteria to eliminate studies that were at high risk of bias. In the end, 141 studies were selected for the final review, and 82 of these were included in the meta-analysis. Eighteen studies involving 895 individuals addressed healthy aging; 39 studies enrolled 2,256 patients with mild cognitive impairment; 8 studies enrolled 223 patients with Alzheimer’s disease; and 26 studies involving 1,174 patients looked at cognitive impairment in the MS population.
To express the efficacy of the interventions across the various studies, Mr. Bouman and collaborators used the ratio of the difference in mean outcomes between groups and the standard deviation in outcome among participants. This ratio, commonly used to harmonize data in meta-analyses, is termed standardized mean difference.
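The review does not state which variant of the standardized mean difference was computed; a common pooled-standard-deviation form (Cohen's d) is sketched below in Python, with illustrative numbers rather than data from any included study.

```python
import numpy as np

def standardized_mean_difference(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Difference in group means divided by the pooled standard deviation
    (Cohen's d), one common form of the standardized mean difference."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                         (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# Illustrative memory-test scores with and without training (arbitrary units).
trained = np.array([52.0, 55.0, 49.0, 58.0, 61.0])
control = np.array([48.0, 50.0, 47.0, 52.0, 51.0])
print(standardized_mean_difference(trained, control))
```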
Individuals representing the healthy aging population saw the most benefit from interventions to address memory loss, with a standardized mean difference of 0.48. Patients with mild cognitive impairment saw a standardized mean difference of 0.46, followed by patients with Alzheimer’s disease with a standardized mean difference of 0.43. Patients with MS lagged far behind in their response to interventions to improve memory, with a standardized mean difference of 0.34.
Looking at the different kinds of interventions, exercise interventions showed moderate effectiveness, with a standardized mean difference of 0.46. By contrast, high-intensity cognitive training focused on memory strategies was the most effective intervention, said Mr. Bouman and his coauthors: This intervention showed a standardized mean difference of 1.03.
Among the varying conditions associated with hippocampal memory loss, MS-related memory problems saw the least response to intervention, “which might be a result of a more widespread pattern of cognitive decline in MS,” noted Mr. Bouman and coauthors.
“Future studies should work from the realization that memory rehabilitation in MS might require a different approach” than that used in healthy aging, mild cognitive impairment, and Alzheimer’s disease, wrote the authors.
Their review revealed “persistent methodological flaws” in the literature, they noted. These included small sample sizes and selection bias.
Mr. Bouman reported that he had no disclosures. One coauthor reported financial relationships with Sanofi Genzyme, Merck-Serono, and Biogen Idec. Another reported financial relationships with Merck-Serono, Biogen, Novartis, Genzyme, and Teva Pharmaceuticals.
SOURCE: Bouman P et al. ECTRIMS 2019. Abstract P1439.
REPORTING FROM ECTRIMS 2019
In older patients with immune-mediated TTP, atypical features may delay diagnosis
Older patients with immune thrombotic thrombocytopenic purpura (iTTP) more often have an atypical neurological presentation, which could result in a delayed diagnosis, according to authors of a recent retrospective analysis.
“Practitioners should be aware of this in order to shorten the time to treatment, which could improve the prognosis in older iTTP patients,” Paul Coppo, MD, PhD, of Hôpital Saint-Antoine, Paris, and coauthors wrote in Blood.
The older patients also had higher 1-month and 1-year mortality than younger patients, as well as a more than threefold risk of long-term mortality compared with elderly patients without iTTP, according to the study report.
The analysis included 411 patients with iTTP entered into a national registry in France between 2000 and 2016. Seventy-one patients were 60 years of age or older.
Time from hospital admission to diagnosis was 3 days for those older patients, versus just 1 day for patients under 60 years of age (P = .0001), Dr. Coppo and colleagues reported.
Clinical records were available for 67 of the older iTTP patients, of whom 17 had no evidence of delayed diagnosis. The remainder had a “possible diagnostic delay,” according to the report; among those, the iTTP diagnosis was preceded by neurological manifestations in 26 cases and by ischemic stroke or transient ischemic attack, usually leading to a focal deficit or aphasia, in 14 cases. Other features preceding the diagnosis included malaise, behavioral abnormalities, seizure, and dizziness.
Many of these findings are “not specific to a disease, and they are less alarming than in young patients,” the researchers wrote. “In this context, the presence of a thrombocytopenia with anemia should alert physicians to this possible rare diagnosis.”
Older patients also presented with less pronounced cytopenias compared with younger patients, which could have contributed to a late diagnosis, they added.
Older age is a known risk factor for mortality related to iTTP. In the present study, rates of 1-month mortality were 37% for patients aged 60 years and older, and 9% for those younger than age 60 (P less than .0001). The 1-year mortality rates were 49% and 11% for older and younger patients, respectively (P less than .0001).
Compared with older individuals without iTTP from a different study, older iTTP patients had a lower long-term survival rate. iTTP remained an independent risk factor for death even after adjustment for age, sex, and some comorbidities (hazard ratio, 3.44; 95% confidence interval, 2.02-5.87).
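Covariate-adjusted hazard ratios of this kind are typically estimated with a Cox proportional hazards model. The sketch below uses the Python lifelines package on simulated data with hypothetical variable names; it mirrors the general structure of such an analysis, not the registry analysis itself.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200

# Simulated cohort (hypothetical): iTTP exposure, adjustment covariates, and
# survival times in which iTTP roughly triples the hazard, for illustration.
ittp = rng.integers(0, 2, n)
age = rng.normal(70, 6, n)
male = rng.integers(0, 2, n)
hazard = 0.05 * np.exp(np.log(3.4) * ittp + 0.03 * (age - 70))
time = rng.exponential(1.0 / hazard)

df = pd.DataFrame({
    "time": np.minimum(time, 10.0),      # administrative censoring at 10 years
    "death": (time < 10.0).astype(int),  # 1 = died during follow-up
    "ittp": ittp, "age": age, "male": male,
})

# Fit the Cox model; exp(coef) for 'ittp' is the covariate-adjusted hazard
# ratio, the analogue of the reported HR of 3.44 (95% CI, 2.02-5.87).
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary.loc["ittp", ["exp(coef)",
                               "exp(coef) lower 95%",
                               "exp(coef) upper 95%"]])
```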
The study was partly funded by a grant from the French Ministry of Health. Dr. Coppo reported that he is a clinical advisory board member for Alexion, Ablynx (now part of Sanofi), Shire, and Octapharma. Two other co-authors reported participating in advisory boards for Ablynx.
SOURCE: Prevel R et al. Blood. 2019 Sep 17. doi: 10.1182/blood.2019000748.
FROM BLOOD
Poll: New Algorithm for PE
Choose your answer in the poll below. To check the accuracy of your answer, see PURLs: A Better Approach to the Diagnosis of PE.
[polldaddy:10428150]
The correct answer is b.) 14%
To learn more, see this month's PURLs: A Better Approach to the Diagnosis of PE.