Use and costs of CRC end-of-life care differ sharply between U.S., Canada

Patterns of health care use and costs at the end of life among colorectal cancer (CRC) patients differ considerably between the United States and Canada and offer learning opportunities for both countries, suggests a cross-sectional cohort study.

Total costs were one-fourth higher for U.S. patients, who more often received chemotherapy and imaging in the month leading up to death. Canadian patients in the province of Ontario were more likely to be hospitalized and to die in the hospital.

“Our findings add to the growing body of research describing health care utilization and costs among patients in different systems to inform efforts to improve organization and delivery of care,” write the investigators, led by Karen E. Bremner, BSc, a research associate with the Toronto General Hospital Research Institute, University Health Network, and the Toronto Health Economics and Technology Assessment (THETA) Collaborative. “These findings suggest opportunities for reducing chemotherapy and ICU use in the U.S. and hospitalizations in Ontario.”

The investigators used registries to identify patients who received a diagnosis of CRC of any stage during 2007-2013 and died of any cancer during that period at the age of 66 years or older.

Analyses compared health care use and costs between 16,565 patients from the U.S. Surveillance, Epidemiology, and End Results (SEER) cancer registries linked to Medicare claims and 6,587 patients from the Ontario Cancer Registry linked to administrative health data.

Across months, but especially in the month before death, the SEER-Medicare group was more likely than the Ontario group to receive chemotherapy (15.7% vs. 8.0% in the last month of life) and have imaging tests (39.4% vs. 31.1% in the last month of life), according to results reported in the Journal of Oncology Practice.

Ontario patients more often visited the emergency department (14.7% vs. 6.7%) and were hospitalized (62.5% vs. 51.0%) in the month before death; had longer stays (14.1 vs. 10.9 days); and were more likely to die in the hospital (42.0% vs. 24.3%). But once hospitalized, they were less often admitted to the ICU (17.9% vs. 43.2%).

Mean total costs for all health care resources in the last month of life were 25% higher for the SEER-Medicare group compared with the Ontario group ($17,284 vs. $13,849), with the gap widening by stage at diagnosis. Costs were 12% higher for those with stage 0 to II disease, 27% higher for those with stage III disease, and 32% higher for those with stage IV disease.

The SEER-Medicare group had higher hospitalization costs ($11,180 vs. $9,434) with daily hospital costs that were about twice those of Ontario counterparts ($2,004 vs. $1,067).

“[O]ur descriptive study of health care utilization and costs at the end of life in similar groups of older CRC patients, although not supporting a direct comparison of two health systems, generated hypotheses concerning areas for improvement in service delivery and lower costs in both settings,” Ms. Bremner and coinvestigators maintained.

“In Ontario, improving coordination of end-of-life care and reducing hospitalizations and in-hospital deaths could provide savings,” they noted. “Reducing daily hospital costs and intensity of health care services for SEER-Medicare patients, especially those with stage IV disease at diagnosis, could reduce costs to the Medicare program and decrease the financial burden on patients and families.”

Ms. Bremner disclosed that she had no conflicts of interest. The Ontario arm of the study was funded by the Canadian Centre for Applied Research in Cancer Control, which receives core funding from the Canadian Cancer Society Research Institute.

SOURCE: Bremner KE et al. J Oncol Pract. 2019 Oct 24. doi: 10.1200/JOP.19.00061.

Bariatric surgery as safe in adolescents as it is in adults

Adolescent safety data are reassuring

Bariatric surgery in adolescents was about as safe as it was in adults in the largest U.S. database assembled so far for the procedure in this younger age group.

The data from 1,983 patients aged 10-19 years who underwent bariatric surgery at an accredited U.S. center also showed, not unexpectedly, that laparoscopic sleeve gastrectomy was significantly safer during the perioperative and immediate postoperative periods, compared with the other main surgical option, laparoscopic Roux-en-Y gastric bypass.

The incidence of serious adverse events that occurred in adolescents either during surgery or in the 30 days after surgery was 2.9% in the 1,552 patients (78%) who underwent sleeve gastrectomy and 6.5% in the 431 (22%) patients who underwent gastric bypass, Keith J. King, MD, said at a meeting presented by the Obesity Society and the American Society for Metabolic and Bariatric Surgery.

Despite this safety disparity, “the decision to undergo sleeve gastrectomy or Roux-en-Y gastric bypass should be individualized to account for other factors, such as excess weight loss and long-term success,” said Dr. King, a bariatric surgeon at St. Luke’s Hospital, Allentown, Pa. But he acknowledged that having these recent safety data from a relatively large number of adolescents will help families that are trying to decide on treatment for their child.

The data came from records kept by the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program, begun in 2012 by the American College of Surgeons and the American Society for Metabolic and Bariatric Surgery as a registry of every bariatric surgical procedure done at an accredited U.S. program. The database encompassed 840 surgical programs in 2019.



The incidence of perioperative and postoperative complications in the adolescent patients during the first 30 days after surgery did not differ significantly for any measured safety parameter from that in 353,726 adults (at least 20 years old) enrolled in the same database during 2015-2017, except for the average duration of surgery, which was 8 minutes shorter in adolescents, Dr. King reported. Adolescents and adults had roughly similar rates of serious adverse events, organ space infections, and need for reoperation, intervention, or hospital readmission. Clinicians also appeared to apply bariatric surgery to adolescents much as they did to adults: the average body mass index was about 48 kg/m2 in adolescents and about 45 kg/m2 in adults, and in both age groups nearly 80% of patients were women or girls.

In contrast, the comparison of sleeve gastrectomy and gastric bypass surgery in adolescents showed several statistically significant differences in safety and procedural characteristics. In addition to a more than twofold difference in the incidence of serious adverse events that favored the sleeve, the data also showed a twofold difference in the need for reoperation, 1% with the sleeve and 2% with bypass; and a threefold difference in the need for at least one intervention during 30-day follow-up, 1% in the sleeve recipients and 3% in those treated with gastric bypass. Patients required at least one hospital readmission within 30 days in 3% of the sleeve cases and in 6% of the bypass cases. Average hospital length of stay was 2 days in both groups.

An efficacy review from a different, large, U.S. database that included 544 adolescents who underwent bariatric surgery during 2005-2015 showed that at 3 years after surgery, average reductions in body mass index were 29% for patients who underwent gastric bypass and 25% in those treated with sleeve gastrectomy (Surg Obes Relat Dis. 2018;14[9]:1374-86).

The study received no commercial support. Dr. King had no disclosures.

SOURCE: El Chaar M et al. Obesity Week 2019, Abstract A138.

These data are very important because they come from the largest collection of data on adolescents who underwent bariatric surgery at a U.S. center and are nationally representative. When I speak with families about the possibility of performing bariatric surgery on an adolescent, their overriding concern is the procedure’s safety. These numbers on adolescent safety constitute the first safety report for this demographic group from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program. The similarity in the rate of adverse events in adolescents, compared with adults, is reassuring. As the database matures, we will get additional insights into the longer-term outcomes of these patients, information that’s very important for families trying to choose treatment for an obese adolescent child.

The comparison of safety outcomes between sleeve gastrectomy and Roux-en-Y gastric bypass appears to favor using sleeves. In obese adolescents the most common complications we see are nonalcoholic fatty liver disease and obstructive sleep apnea, and prior reports have documented that both often improve following sleeve gastrectomy. That fact, plus these new safety findings, may help push the field toward greater sleeve use in adolescents, although the data also show that sleeve gastrectomy is already used in nearly four-fifths of adolescent cases.

Corrigan McBride, MD, is a professor of surgery and director of bariatric surgery at the University of Nebraska Medical Center in Omaha. She had no disclosures. She made these comments in an interview.

Small nodules, big problems: AI's role in thyroid nodule diagnosis

A new image-analysis algorithm for identifying benign thyroid nodules, which uses a technique similar to facial recognition, showed good sensitivity and specificity, with the potential to reduce biopsies by more than 50%.

The negative predictive value of the ultrasound analysis algorithm was 93.2%, meaning that roughly 7% of nodules it called benign were in fact malignant, a rate approaching the false-negative rate of about 5% seen with fine-needle aspiration of thyroid nodules, said Johnson Thomas, MD, at the annual meeting of the American Thyroid Association.

“Millions of people have thyroid nodules,” many of which are detected incidentally, said Dr. Thomas, an endocrinologist with the Mercy health care system in Springfield, Mo. Fewer than 10% of thyroid nodules turn out to be malignant, but each year, millions of patients undergo biopsies to determine the status of their thyroid nodules.

Faced with evaluating a thyroid nodule, an endocrinologist can currently turn to a risk-stratification scheme, such as those developed by the American College of Radiology and the American Thyroid Association. However, there’s a big subjective component to risk stratification – significant inter- and intraobserver variation has been observed, said Dr. Thomas, and not all nodules are classifiable. The result is a system that still has low specificity and positive predictive value, he said.

Even after a decision to proceed to biopsy, one in seven thyroid nodule biopsies will not produce a definitive diagnosis, he said.

“We are doing millions of thyroid biopsies based on very subjective criteria to find thyroid cancer in a very small percentage of the population, with an invasive technique that may not be diagnostic one out of seven times,” Dr. Thomas said in summing up the current medical situation as he sees it.

Dr. Thomas, who writes his own computer code, said he was searching for a reliable, explainable, noninvasive technique, one without room for subjective error, to address the thyroid nodule problem.

The question was whether an artificial intelligence (AI) algorithm could match radiologist performance in classifying thyroid nodules according to the characteristics of their ultrasound images.

Other algorithms use AI to predict which nodules are malignant, but they function as “black boxes” – a common criticism of AI. The outside observer cannot ordinarily see how the AI algorithm “knows” what it knows. This characteristic of AI poses at least a theoretical problem when such algorithms are used for diagnosis or medical decision making.

Dr. Thomas’s* approach was to use a set of training data to allow the algorithm he constructed to see 2,025 images from a total of 482 nodules. The thyroid nodules used for training had been subjected to biopsy or excised in surgery, so they all had a definitive status of being benign or malignant.

Then, after the algorithm was refined, a set of 103 nodules with known malignancy status was used to test the algorithm’s sensitivity and specificity.

The algorithm, dubbed AiBx, used a convolutional neural network to build a unique image vector for each nodule. The AiBx algorithm then looked at the training database to find the “nearest neighbors,” or the images it found to be the most similar to those of the nodule being examined.

For example, said Dr. Thomas, a test image of a benign nodule would have an output from the AiBx analysis of three similar images from the database – all benign. Hence, rather than making a black-box call of whether a nodule is benign or malignant, the algorithm merely says: “This nodule resembles a benign nodule in our database.” The interpreting physician can then use the algorithm as a decision aid with confidence.
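
The report does not include AiBx’s implementation details, but the general pattern it describes, embedding each image with a convolutional network and then retrieving the most similar labeled images, can be sketched briefly. In the Python sketch below, the choice of network (ResNet50), the cosine distance metric, the image size, and the data layout are illustrative assumptions rather than details from the study.

```python
# Illustrative sketch of an image-similarity lookup like the one described
# for AiBx: a convolutional network turns each ultrasound image into a
# feature vector, and a new image is assessed by inspecting its nearest
# neighbors in a database of labeled (benign/malignant) training vectors.
# Model choice, vector size, and data layout are assumptions, not study details.
import numpy as np
import tensorflow as tf
from sklearn.neighbors import NearestNeighbors

# Pretrained CNN used only as a feature extractor (no classification head).
extractor = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                            input_shape=(224, 224, 3))

def embed(images: np.ndarray) -> np.ndarray:
    """Map a batch of preprocessed ultrasound crops to one feature vector each."""
    x = tf.keras.applications.resnet50.preprocess_input(images.astype("float32"))
    return extractor.predict(x, verbose=0)

def build_index(train_images, train_labels, k=3):
    """Index the labeled training images for nearest-neighbor retrieval."""
    vectors = embed(train_images)
    index = NearestNeighbors(n_neighbors=k, metric="cosine").fit(vectors)
    return index, np.asarray(train_labels)

def most_similar_labels(query_image, index, labels):
    """Return the labels of the training images most similar to the query."""
    vec = embed(query_image[np.newaxis, ...])
    _, idx = index.kneighbors(vec)
    return labels[idx[0]]
```

The output is deliberately a list of labeled look-alike images rather than a hard benign/malignant call, mirroring the “this nodule resembles a benign nodule in our database” behavior described above.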

The overall accuracy of AiBx was 81.5%, sensitivity was 87.8%, and specificity was 78.5%. Positive predictive value was 65.9%.
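
For readers who want to see how such figures are derived, the short sketch below gives the standard definitions of these measures in terms of test-set counts; the actual true- and false-positive and -negative counts for the 103-nodule test set are not given in this report, so the function is definitional only.

```python
# Standard definitions of the reported performance measures in terms of
# test-set counts. The counts behind AiBx's figures are not given in this
# report; this only shows how such metrics are computed.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # malignant nodules correctly flagged
        "specificity": tn / (tn + fp),   # benign nodules correctly cleared
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }
```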

As more images are added to the database, AiBx can easily be retrained and refined, said Dr. Thomas.

“It’s intuitive and explainable,” he added, noting that the algorithm is also a good teaching tool for residents and fellows.

“This AI model can be deployed as an app, integrated with [medical imaging systems] or hosted as a website. By using image-similarity AI models we can eliminate subjectivity and decrease the number of unnecessary biopsies,” he explained in the abstract accompanying the presentation.

However, he said that the algorithm as it currently stands has limitations: It has been tested on only 103 images thus far, and there’s the potential for selection bias.

Dr. Thomas* reported that, although he developed the AiBx algorithm, he has not drawn income or royalties from it. He reported no other relevant conflicts of interest.
 

SOURCE: Thomas* J et al. ATA 2019, Oral Abstract 27.

*Correction, 21/11/2019: An earlier version of this story misstated Dr. Thomas's last name.

 


Oral vs. IV antibiotic therapy in early treatment of complex bone and joint infections

Background: The standard of care for complex bone and joint infections includes the use of IV antibiotics. A prior meta-analysis suggested that the outcomes for bone and joint infections treated with oral and IV antibiotics are similar.

Study design: Randomized, controlled trial.

Setting: Twenty-six U.K. sites during June 2010–October 2015.

Synopsis: The study enrolled 1,054 adults with bone or joint infections who would have been treated with 6 weeks of IV antibiotics; they were then randomized to receive either IV or oral antibiotics. Treatment regimens were selected by infectious disease specialists. The rate of the primary endpoint, definite treatment failure at 1 year after randomization, was 14.6% in the intravenous group and 13.2% in the oral group. The difference in the risk of definite treatment failure between the two groups was –1.4% (95% confidence interval, –5.6 to 2.9), which met the predefined noninferiority criteria. The use of oral antibiotics also was associated with a shorter hospital stay and fewer complications. The conclusions of the trial are limited by the open-label design. An associated editorial advocated for additional research before widespread change to current treatment recommendations.
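
As a rough illustration of the noninferiority logic in the synopsis above, the sketch below recomputes the risk difference and an approximate 95% confidence interval from the reported failure rates and checks the interval’s upper bound against a margin. The per-arm sample sizes (roughly half of the 1,054 enrolled) and the 7.5-percentage-point margin are illustrative assumptions, not figures quoted in this summary.

```python
# Rough reconstruction of the noninferiority comparison: risk difference
# (oral minus IV) with a normal-approximation 95% CI, checked against a
# noninferiority margin. The per-arm denominators and the margin below are
# illustrative assumptions, not values taken from this summary.
from math import sqrt

def noninferiority_check(p_oral, n_oral, p_iv, n_iv, margin):
    diff = p_oral - p_iv                                   # risk difference
    se = sqrt(p_oral * (1 - p_oral) / n_oral + p_iv * (1 - p_iv) / n_iv)
    ci = (diff - 1.96 * se, diff + 1.96 * se)              # 95% CI
    # Oral therapy is noninferior if the CI's upper bound stays below the margin.
    return diff, ci, ci[1] < margin

# With the reported failure rates, ~527 patients per arm, and a 7.5-point margin,
# this yields a CI close to the published -5.6 to 2.9 percentage points.
print(noninferiority_check(0.132, 527, 0.146, 527, 0.075))
```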

Bottom line: Bone and joint infections treated with oral versus IV antibiotics may have similar treatment failure rates.

Citation: Li HK et al. Oral versus intravenous antibiotics for bone and joint infection. N Engl J Med. 2019 Jan 31;380(5):425-36.

Dr. Roy is a hospitalist at Beth Israel Deaconess Medical Center and instructor in medicine at Harvard Medical School.


Transcatheter TR repair tops medical management

Survival after 12 months was more likely with transcatheter repair of tricuspid regurgitation than with guideline-directed medical therapy, and patients were less likely to be rehospitalized with heart failure, in a propensity-matched case-control study presented at the Transcatheter Cardiovascular Therapeutics annual meeting.

Tricuspid regurgitation carries a substantial burden of morbidity and mortality, but there hasn’t been great success with surgical approaches, so several trials are underway assessing transcatheter repair. It’s unclear at the moment whether it will beat medical management, which generally includes diuretics and symptom relief, said lead investigator Maurizio Taramasso, MD, PhD, a cardiac surgeon and interventional cardiologist at the University Hospital of Zürich.

Dr. Taramasso and colleagues wanted to take a look at the issue pending results of the randomized trials. “There’s still a lot of uncertainty in regard to what we can do for the patient by reducing tricuspid regurgitation. [There are] no data showing that reducing tricuspid regurgitation improves survival,” he said at the meeting.

The investigators matched 268 patients from the international Transcatheter Tricuspid Valve Therapies registry treated during 2016-2018 with 268 medical-management patients from the Mayo Clinic in Rochester, Minn., and Leiden (the Netherlands) University, based on age, European System for Cardiac Operative Risk Evaluation II scores, and systolic pulmonary artery pressure, the major predictor of poor outcomes in tricuspid regurgitation.
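
The matching step itself is not described in detail in this report; the Python sketch below shows one generic way a 1:1 nearest-neighbor propensity-score match on the three covariates named above might be set up. The column names, the logistic-regression propensity model, and the caliper are illustrative assumptions, not the investigators’ actual procedure.

```python
# Generic 1:1 nearest-neighbor propensity-score matching on the covariates
# named above (age, EuroSCORE II, systolic pulmonary artery pressure).
# Column names, the propensity model, and the caliper are illustrative;
# the study's exact matching procedure is not detailed in this report.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_one_to_one(df: pd.DataFrame, caliper: float = 0.05) -> pd.DataFrame:
    covariates = ["age", "euroscore2", "spap"]
    # Propensity score: modeled probability of being in the transcatheter group.
    model = LogisticRegression().fit(df[covariates], df["transcatheter"])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["transcatheter"] == 1]
    control = df[df["transcatheter"] == 0]

    # For each transcatheter patient, find the medically managed patient
    # with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    dist, idx = nn.kneighbors(treated[["ps"]])

    pairs = pd.DataFrame({
        "treated_idx": treated.index,
        "control_idx": control.index[idx[:, 0]],
        "distance": dist[:, 0],
    })
    # Keep only pairs whose scores differ by less than the caliper.
    return pairs[pairs["distance"] <= caliper]
```

A real analysis would typically match without replacement, check covariate balance after matching, and only then compare outcomes such as 1-year mortality between the matched groups.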

Even with matching, transcatheter patients were worse off, which is probably why they had valve repair in the first place, Dr. Taramasso said at the meeting sponsored by the Cardiovascular Research Foundation. The baseline burden of right ventricular dysfunction, heart failure, mitral regurgitation, atrial fibrillation, and pacemaker placement were all significantly higher in the transcatheter group.

Even so, transcatheter patients had lower 1-year mortality (23% vs. 36%; P = .001) and fewer heart failure rehospitalizations (32% vs. 49%, P less than .0001). Transcatheter repair was associated with greater survival and freedom from heart failure rehospitalization (HR, 0.60; 95% CI, 0.46-0.79; P = .003), which remained significant after adjusting for sex, New York Heart Association functional class, right ventricular dysfunction, and atrial fibrillation (HR, 0.39; 95% CI, 0.26-0.59; P less than .0001), and after further adjustment for mitral regurgitation and pacemaker/defibrillator placement (HR, 0.35; 95% CI, 0.23-0.54; P less than .0001). Subgroup analyses based on mitral regurgitation severity, pulmonary artery pressure, and other factors all favored repair.

“This is an important set of data to show that, indeed, fixing the tricuspid valve does lead to better outcomes, and perhaps we can do that with a transcatheter approach,” said Robert Bonow, MD, a professor of cardiology at Northwestern University, Chicago, after hearing the presentation.



The fact that transcatheter patients were sicker when they were treated is reassuring, added moderator Ajay Kirtane, MD, an interventional cardiologist and associate professor of medicine at Columbia University, New York.

Procedural success, defined as the patient being alive at the end of the procedure with the device successfully implanted, the delivery system retrieved, and residual tricuspid regurgitation (TR) of less than 3+, was achieved in 86% of cases; 85% of patients were treated with the MitraClip, most with two or three clips. When TR was not significantly reduced, outcomes were similar to, though not worse than, those with medical management.

Operators were highly experienced, there were no emergent conversions to surgery, and patients tolerated the approach “pretty well,” Dr. Taramasso said. The lesson is that “we should really try to reduce TR, but just a little bit is not enough.” Overall, “we probably need better devices and better patient selection. With the data we are collecting, we’ll soon be able to know when late is too late, which patients should not be treated,” he said.

The study didn’t address postprocedure medications, but it’s been noted in the registry that medication use generally declines after a few months. Subjects tended to be aged in their mid-70s, and there were slightly more women than men.

The results were published online concurrently with Dr. Taramasso’s report in the Journal of the American College of Cardiology.

No company funding was reported. Dr. Taramasso is a consultant for Abbott Vascular, Boston Scientific, 4TECH, and CoreMedic; and has received speaker fees from Edwards Lifesciences.

SOURCE: Taramasso M et al. J Am Coll Cardiol. 2019 Sep 24. doi: 10.1016/j.jacc.2019.09.028.

Low LDL-C and blood pressure can reduce lifetime CVD risk by 80%

Mendelian randomization studies show striking effects of a lifetime of low lipids, blood pressure

– Over the course of years and decades, lower LDL cholesterol levels and lower systolic blood pressure can reduce the lifetime risk of cardiovascular disease by up to 80%, according to a new study.

“What we found is that lifetime exposure to the combination of lower LDL and lower systolic blood pressure is associated with independent, additive, and dose-dependent effects on the lifetime risk of cardiovascular disease,” said the study’s senior author, Brian Ference, MD, speaking at the annual congress of the European Society of Cardiology. “The data seem to confirm that most cardiovascular events are preventable, and suggest that most cardiovascular events can be prevented, with prolonged exposure to modestly lower LDL cholesterol and systolic blood pressure.”

Any reduction of LDL-C and systolic blood pressure (SBP), in any combination, was associated with a lower lifetime risk of cardiovascular disease (CVD) in the study, which took advantage of the United Kingdom’s large Biobank to identify individuals with genetically lower LDL-C and blood pressure levels. The relationship was dose-dependent and showed a log-linear relationship to the combined absolute LDL-C and SBP differences, said Dr. Ference, professor and executive director of the Centre for Naturally Randomised Trials at the University of Cambridge, England.

The results validate current guidelines that focus on a lifetime approach to cardiovascular risk reduction and support a focus on therapeutic lifestyle interventions for individuals at all levels of risk for cardiovascular events, said Dr. Ference. He foresees the results shaping new risk-estimating algorithms and informing the next round of prevention guidelines.

Previous studies had suggested that long-term exposure to lower levels of LDL-C and lower systolic blood pressure reduced cardiovascular risk, but the association hadn’t been fully quantified. Ideally, said Dr. Ference, the question would be answered by a long-term randomized controlled trial, but it would be decades before meaningful data would accrue, and such a trial is unlikely to be conducted.

Using data from 438,952 Biobank participants, Dr. Ference and coinvestigators sought to quantify the association between LDL-C, systolic blood pressure, and atherosclerotic CVD. Taking advantage of genetic variants known to be associated with both lower LDL-C and lower systolic blood pressure, the researchers constructed a “natural randomization” trial. This trial design is also known as Mendelian randomization.

First, the entire study population was randomized into those with exome variants associated with higher or lower LDL-C, which resulted in a mean 15-mg/dL difference between the arms. Then, each LDL-C arm was randomized into groups with exome variants associated with higher or lower SBP, resulting in a difference of 2.9-3 mm Hg between the blood pressure arms within each LDL arm. This randomization yielded a reference group, a group with lower LDL-C, a group with lower SBP, and a group with lower LDL-C and SBP.
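For readers who want to see the grouping logic concretely, a minimal sketch in Python (with pandas) is shown below. It is illustrative only: the data frame, the column names ldl_score and sbp_score, and the median split are assumptions made for the example, not the study’s actual variant or score definitions.

import pandas as pd

# Illustrative two-by-two "natural randomization" grouping, assuming a table
# with hypothetical per-participant genetic scores in which a higher score
# means more variants associated with lower LDL-C or lower SBP.
def assign_factorial_groups(cohort: pd.DataFrame) -> pd.DataFrame:
    """Split participants at the median of each genetic score, yielding the
    reference, lower LDL-C, lower SBP, and lower-both groups."""
    out = cohort.copy()
    out["lower_ldl"] = out["ldl_score"] > out["ldl_score"].median()
    out["lower_sbp"] = out["sbp_score"] > out["sbp_score"].median()
    out["group"] = out.apply(
        lambda r: "lower LDL-C and lower SBP" if r.lower_ldl and r.lower_sbp
        else "lower LDL-C" if r.lower_ldl
        else "lower SBP" if r.lower_sbp
        else "reference",
        axis=1,
    )
    return out

# Example usage with made-up scores:
# grouped = assign_factorial_groups(pd.DataFrame({"ldl_score": [...], "sbp_score": [...]}))
# grouped.groupby("group").size()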

For the total population, the mean LDL-C was 138 mg/dL, and the mean SBP was 137.8 mm Hg.

A total of 24,980 participants had coronary revascularization, a nonfatal myocardial infarction (MI), or coronary death – the composite primary outcome measure of major coronary events.



“What we found is that long-term exposure to the combination of 1 mmol/L [about 39 mg/dL] lower LDL and 10 mm Hg lower blood pressure is associated with an 80% lifetime reduction in risk of cardiovascular events, a 75% reduction in the risk of MI, and 68% reduction in the long-term risk of cardiovascular death,” said Dr. Ference.

By breaking participants out into separate quartiles of LDL-C and SBP levels, and examining outcomes for each quartile independently, Dr. Ference and collaborators were able to ascertain that the salutary effects of lower LDL-C and SBP were independent of each other.

Looking at individual cardiovascular outcomes, “The effect of combined exposure to both lower LDL and lower systolic blood pressure appear to be quite similar across multiple composite cardiovascular outcomes,” said Dr. Ference; benefit was seen in risk of MI, stroke, and other vascular events.

Plotting out the amount of risk reduction against the genetic scores for LDL-C and SBP reduction showed a proportional relationship that was logarithmically linear. “These large proportional reductions in risk really suggest that, for LDL, systolic blood pressure, and their combination, the benefit really depends both on the magnitude and the duration of the exposure,” said Dr. Ference. The effect was seen regardless of age, gender, body mass index, and diabetes status; being a smoker slightly attenuated the effects of LDL-C and SBP.
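To make the log-linear idea concrete, the short Python sketch below scales the log hazard ratio in proportion to the size of the exposure difference. The 80% combined reduction quoted above (for 1 mmol/L lower LDL-C plus 10 mm Hg lower SBP) is used only as an anchor point; the intermediate values are extrapolations under this assumed proportional model, not results reported by the investigators.

import math

# Proportional (log-linear) scaling of risk with exposure magnitude.
# reference_hr = 0.20 corresponds to the 80% reduction quoted above and is
# used here purely as an illustrative anchor.
def scaled_hazard_ratio(fraction_of_reference_exposure: float,
                        reference_hr: float = 0.20) -> float:
    return math.exp(fraction_of_reference_exposure * math.log(reference_hr))

for frac in (0.25, 0.5, 1.0):
    hr = scaled_hazard_ratio(frac)
    print(f"{frac:.0%} of the reference exposure -> HR {hr:.2f} "
          f"({1 - hr:.0%} lower lifetime risk)")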

The mean participant age was 65 years, and women made up 54% of the study population. Aside from lipid values and systolic blood pressure, there were no significant between-group differences.

From these findings, what message can clinicians take to their patients? “Benefit is a much greater motivator, rather than the nebulous concept of risk,” said Dr. Ference. “So if we begin to crystallize and give an estimate of how much someone can benefit – either from adhering to a healthy lifestyle, with specific goals for LDL and blood pressure reductions, or from encouraging them to remain compliant with their therapies, achieving those corresponding goals – we can quantify their expected clinical benefit and encourage them to invest in their health over the long term.”

Dr. Ference said that the actual mechanism by which lipids and blood pressure are lowered matters less than the amount and duration of lowering: “These data are really agnostic as to the mechanism by which either blood pressure or LDL – or apo-B–containing lipoproteins generally – and blood pressure are reduced. It really suggests that whatever mechanism by which an individual person can most effectively lower their LDL and blood pressure, that’s the best one for that person, if they can maintain that over time.”

Dr. Ference reported financial relationships, including research contracts, consulting arrangements, receipt of royalties, and being an owner or stockholder of more than a dozen pharmaceutical companies. The study was funded by the United Kingdom’s National Institute for Health Research and Medical Research Council, and by the British Heart Foundation.

SOURCE: Ference B. et al. ESC Congress 2019, Hot Line Session 3.

Jemma Hopewell, PhD, was the assigned discussant for the Mendelian randomization study of LDL-C and SBP’s effects on cardiovascular health. She placed the genetic epidemiological study within the framework of other short- and medium-term studies that have examined the effects of LDL-C and SBP on cardiovascular health.

“Let’s think about this in the context of other studies,” said Dr. Hopewell, asking what the study adds to what’s known about exposure to LDL-C and systolic blood pressure levels. Shorter-term clinical trials that tracked differences in LDL-C over about 5 years have shown a 20%-25% drop in cardiovascular risk, while medium-term observational studies have shown a decrease of about 30%.

Now, she said, Mendelian randomization studies such as this analysis of the UK Biobank data are showing larger effects with the lifelong exposure to lower LDL levels that genetic variants confer. “As you can see, a pattern emerges ... of larger effects on risk than might be anticipated from the short-term clinical trials.”

A similar pattern can be seen with SBP, with shorter-term clinical trials showing smaller reductions in CVD. Observational studies show more reduction in risk when participants are followed for longer periods, and studies such as the present one show the larger effects of a lifetime of lower blood pressure, said Dr. Hopewell.

In terms of the combined effects, “It’s for the first time today that we see these nice results in a Mendelian randomization framework. This is a very well conducted analysis.”

Still, she cited potential limitations that can inform interpretation of the study results. These include the fact that Biobank participants have been followed for just about 10 years at this point, with most participants still alive. “Therefore, it is unclear whether this truly reflects the lifetime risk of coronary events.”

Also, the paucity of ethnic variation in the Biobank cohort means generalization is problematic until studies are conducted across different ethnic groups, she said.

The study design leaves open the possibility for reverse causality given the fact that participant characteristics captured at the time of recruitment may be influenced by prior disease, said Dr. Hopewell.

She also cited the complication of pleiotropy that’s a known limitation of Mendelian randomization studies. Importantly, the study’s reliance on genetic variation means that results may not directly translate to long-term use of lipid-lowering medication and antihypertensives, she said.

Still, the effects seen with the Biobank population bolster the importance of prevention efforts. “This really is quite encouraging,” said Dr. Hopewell. “Small differences over a long period of time have a material impact on risk.”
 

Dr. Hopewell is associate professor and senior scientist in genetic epidemiology and clinical trials at Oxford Cardiovascular Science, University of Oxford, England. She disclosed research contracts from unspecified pharmaceutical companies, and she has a fellowship from the British Heart Foundation.


Glycemic Control eQUIPS yields success at Dignity Health Sequoia Hospital


Glucometrics database aids tracking, trending

In honor of Diabetes Awareness Month, The Hospitalist spoke recently with Stephanie Dizon, PharmD, BCPS, director of pharmacy at Dignity Health Sequoia Hospital in Redwood City, Calif. Dr. Dizon was the project lead for Dignity Health Sequoia’s participation in the Society of Hospital Medicine’s Glycemic Control eQUIPS program. The Northern California hospital was recognized as a top performer in the program.

SHM’s eQUIPS offers a virtual library of resources, including a step-by-step implementation guide, addressing issues that range from subcutaneous insulin protocols to care coordination and hypoglycemia management. In addition, the program offers access to a data center for performance tracking and benchmarking.

Dr. Dizon shared her experience as a participant in the program, and explained its impact on glycemic control at Dignity Health Sequoia Hospital.
 

Could you tell us about your personal involvement with SHM?

I started as the quality lead for glycemic control for Sequoia Hospital in 2017 while serving as the clinical pharmacy manager. Currently, I am the director of pharmacy.

What inspired your institution to enroll in the GC eQUIPS program? What were the challenges it helped you address?

Sequoia Hospital started on this journey to improve overall glycemic control in a collaborative with eight other Dignity Health hospitals in 2011. At Sequoia Hospital, this effort was led by Karen Harrison, RN, MSN, CCRN. At the time, Dignity Health saw variations in insulin management and adverse events, and it inspired this group to review their practices and try to find a better way to standardize them. The hope was that sharing information and making efforts to standardize practices would lead to better glycemic control.

Enrollment in the GC eQUIPS program helped Sequoia Hospital efficiently analyze data that would otherwise be too large to manage. In addition, by tracking and trending these large data sets, it helped us not only see where the hospital’s greatest challenges in glycemic control are but also observe the impact of the changes we make. We were part of a nine-site study that demonstrated the effectiveness of GC eQUIPS and highlighted the collective success across the health system.
 

What did you find most useful in the suite of resources included in eQUIPS?

The benchmarking webinars and informational webinars that have been provided by Greg Maynard, MD, over the years have been especially helpful. They have broadened my understanding of glycemic control. The glucometrics database is especially helpful for tracking and trending – we share these reports on a monthly basis with nursing and provider leadership. In addition, being able to benchmark ourselves with other hospitals pushes us to improve and keep an eye on glycemic control.

Are there any other highlights from your participation – and your institution’s – in the program that you feel would be beneficial to others who may be considering enrollment?

Having access to the tools available in the GC eQUIPS program is very powerful for data analysis and benchmarking. As a result, it allows the people at an institution to focus on the day-to-day tasks, clinical initiatives, and building a culture that can make a program successful instead of focusing on data collection.

For more information on SHM’s Glycemic Control resources or to enroll in eQUIPS, visit hospitalmedicine.org/gc.


Better time data from in-hospital resuscitations


Benefits of an undocumented defibrillator feature

Research and quality improvement (QI) related to in-hospital cardiopulmonary resuscitation attempts (“codes” from here forward) are hampered significantly by the poor quality of data on time intervals from arrest onset to clinical interventions.1

John A. Stewart

In 2000, the American Heart Association’s (AHA) Emergency Cardiac Care Guidelines said that current data were inaccurate and that greater accuracy was “the key to future high-quality research”2 – but since then, the general situation has not improved: Time intervals reported by the national AHA-supported registry Get With the Guidelines–Resuscitation (GWTG-R, 200+ hospitals enrolled) include an overall figure for time to first defibrillation with a median of 1 minute and a first quartile of 0 minutes.3 Such numbers are typical – when they are tracked at all – but they strain credulity; prima facie evidence to the contrary is available at most clinical simulation centers simply by timing simulated defibrillation attempts under realistic conditions, as in “mock codes.”4,5

Taking artificially short time-interval data from GWTG-R or other sources at face value can hide serious delays in response to in-hospital arrests. It can also lead to flawed studies and highly questionable conclusions.6

The key to accuracy of critical time intervals – the intervals from arrest to key interventions – is an accurate time of arrest.7 Codes are typically recorded in handwritten form, though they may later be transcribed or scanned into electronic records. The “start” of the code for unmonitored arrests and most monitored arrests is typically taken to be the time that a human bedside recorder, arriving at an unknown interval after the arrest, writes down the first intervention. Researchers acknowledged the problem of artificially short time intervals in 2005, but they did not propose a remedy.1 Since then, the problem of in-hospital resuscitation delays has received little to no attention in the professional literature.
 

Description of feature

To get better time data from unmonitored resuscitation attempts, it is necessary to use a “surrogate marker” – a stand-in or substitute event – for the time of arrest. This event should occur reliably for each code, and as near as possible to the actual time of arrest. The main early events in a code are starting basic CPR, paging the code, and moving the defibrillator (usually on a code cart) to the scene. Ideally these events occur almost simultaneously, but that is not consistently achieved.

There are significant problems with use of the first two events as surrogate markers: the time of starting CPR cannot be determined accurately, and paging the code is dependent on several intermediate steps that lead to inaccuracy. Furthermore, the times of both markers are recorded using clocks that are typically not synchronized with the clock used for recording the code (defibrillator clock or the human recorder’s timepiece). Reconciliation of these times with the code record, while not particularly difficult,8 is rarely if ever done.

Defibrillator Power On is recorded on the defibrillator timeline and thus does not need to be reconciled with the defibrillator clock, but it is not suitable as a surrogate marker because this time is highly variable: It often does not occur until the time that monitoring pads are placed. Moving the code cart to the scene, which must occur early in the code, is a much more valid surrogate marker, with the added benefit that it can be marked on the defibrillator timeline.

The undocumented feature described here provides that marker. This feature has been a part of the LIFEPAK 20/20e’s design since it was launched in 2002, but it has not been publicized until now and is not documented in the user manual.

Hospital defibrillators are connected to alternating-current (AC) power when not in use. When the defibrillator is moved to the scene of the code, it is obviously necessary to disconnect the defibrillator from the wall outlet, at which time “AC Power Loss” is recorded on the event record generated by the LIFEPAK 20/20e defibrillators. The defibrillator may be powered on up to 10 minutes later while retaining the AC Power Loss marker in the event record. This surrogate marker for the start time will be on the same timeline as other events recorded by the defibrillator, including times of first monitoring and shocks.

Once the event record is acquired, determining time intervals is accomplished by subtracting clock times (see example, Figure 1).

In the example, using AC Power Loss as the start time, time intervals from arrest to first monitoring (Initial Rhythm on the Event Record) and first shock were 3:12 (07:16:34 minus 07:13:22) and 8:52 (07:22:14 minus 07:13:22). Note that if Power On were used as the surrogate time of arrest in the example, the calculated intervals would be artificially shorter, by 2 min 12 sec.
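For sites that want to automate this arithmetic, a minimal Python sketch follows. The clock times are those from the example above; the only added assumption is that all events fall on the same side of midnight.

from datetime import datetime

# Subtract the AC Power Loss clock time (the surrogate time of arrest) from
# later events recorded on the same defibrillator timeline.
def interval(start: str, event: str, fmt: str = "%H:%M:%S") -> str:
    delta = datetime.strptime(event, fmt) - datetime.strptime(start, fmt)
    minutes, seconds = divmod(int(delta.total_seconds()), 60)
    return f"{minutes}:{seconds:02d}"

ac_power_loss = "07:13:22"                    # surrogate time of arrest
print(interval(ac_power_loss, "07:16:34"))    # arrest to first monitoring: 3:12
print(interval(ac_power_loss, "07:22:14"))    # arrest to first shock: 8:52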

Using this undocumented feature, any facility using LIFEPAK 20/20e defibrillators can easily measure critical time intervals during resuscitation attempts with much greater accuracy, including times to first monitoring and first defibrillation. Each defibrillator stores code summaries sufficient for dozens of events and accessing past data is simple. Analysis of the data can provide a much-improved measure of the facility’s speed of response as a baseline for QI.

If desired, the time-interval data thus obtained can also be integrated with the handwritten record. The usual handwritten code sheet records times only in whole minutes, but with one of the more accurate intervals from the defibrillator – to first monitoring or first defibrillation – an adjusted time of arrest can be added to any code record to get other intervals that better approximate real-world response times.9
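A brief sketch of that adjustment is shown below. The whole-minute code sheet entries are hypothetical; only the 8 minutes 52 seconds defibrillator interval is taken from the example above.

from datetime import datetime, timedelta

# Back-calculate an adjusted time of arrest on the handwritten sheet's
# timeline from one event (first shock) whose true interval from arrest is
# known from the defibrillator event record.
def adjusted_arrest(sheet_event_time: str, defib_interval_sec: int) -> datetime:
    return datetime.strptime(sheet_event_time, "%H:%M") - timedelta(seconds=defib_interval_sec)

arrest = adjusted_arrest("07:22", 8 * 60 + 52)        # first shock per sheet; 8:52 per defibrillator
epinephrine = datetime.strptime("07:25", "%H:%M")     # hypothetical sheet entry
print(round((epinephrine - arrest).total_seconds() / 60, 1))  # about 11.9 min from adjusted arrest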


 

 

 

Research prospects

The feature opens multiple avenues for future research. Acquiring data by this method should be simple for any facility using LIFEPAK 20/20e defibrillators as its standard devices. Matching the existing handwritten code records with the time intervals obtained using this surrogate time marker will show how inaccurate the commonly reported data are. This can be done with a retrospective study comparing the time intervals from the archived event records with those from the handwritten records, to provide an example of the inaccuracy of data reported in the medical literature. The more accurate picture of time intervals can provide a much-needed yardstick for future research aimed at shortening response times.

The feature can facilitate aggregation of data across multiple facilities that use the LIFEPAK 20/20e as their standard defibrillator. Also, it is possible that other defibrillator manufacturers will duplicate this feature with their devices – it should produce valid data with any defibrillator – although there may be legal and technical obstacles to adopting it.

Combining data from multiple sites might lead to an important contribution to resuscitation research: a reasonably accurate overall survival curve for in-hospital tachyarrhythmic arrests. A commonly cited but crude guideline is that survival from tachyarrhythmic arrests decreases by 10%-15% per minute as defibrillation is delayed,10 but it seems unlikely that the relationship would be linear: Experience and the literature suggest that survival drops very quickly in the first few minutes, flattening out as elapsed time after arrest increases. Aggregating the much more accurate time-interval data from multiple facilities should produce a survival curve for in-hospital tachyarrhythmic arrests that comes much closer to reality.
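The sketch below illustrates that shape argument only; the starting survival, the per-minute decrement, and the half-life are assumptions chosen for illustration, not registry data.

import math

# Compare the crude linear "10%-15% per minute" rule with an exponential
# decay anchored to the same assumed survival at immediate defibrillation.
S0 = 0.70           # assumed survival with near-immediate defibrillation
linear_drop = 0.12  # assumed absolute drop per minute (within the 10%-15% rule)
half_life = 4.0     # assumed minutes for survival to halve under exponential decay

for minutes in range(0, 13, 2):
    linear = max(S0 - linear_drop * minutes, 0.0)
    exponential = S0 * math.exp(-math.log(2) * minutes / half_life)
    print(f"{minutes:2d} min   linear {linear:4.0%}   exponential {exponential:4.0%}")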
 

Conclusion

It is unknown whether this feature will be used to improve the accuracy of reported code response times. It greatly facilitates acquiring more accurate times, but the task has never been especially difficult – particularly when balanced with the importance of better time data for QI and research.8 One possible impediment may be institutional obstacles to publishing studies with accurate response times due to concerns about public relations or legal exposure: The more accurate times will almost certainly be longer than those generally reported.

As was stated almost 2 decades ago and remains true today, acquiring accurate time-interval data is “the key to future high-quality research.”2 It is also key to improving any hospital’s quality of code response. As described in this article, better time data can easily be acquired. It is time for this important problem to be recognized and remedied.
 

Mr. Stewart has worked as a hospital nurse in Seattle for many years, and has numerous publications to his credit related to resuscitation issues. You can contact him at [email protected].

References

1. Kaye W et al. When minutes count – the fallacy of accurate time documentation during in-hospital resuscitation. Resuscitation. 2005;65(3):285-90.

2. The American Heart Association in collaboration with the International Liaison Committee on Resuscitation. Guidelines 2000 for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care, Part 4: the automated external defibrillator: key link in the chain of survival. Circulation. 2000;102(8 Suppl):I-60-76.

3. Chan PS et al. American Heart Association National Registry of Cardiopulmonary Resuscitation Investigators. Delayed time to defibrillation after in-hospital cardiac arrest. N Engl J Med. 2008 Jan 3;358(1):9-17. doi: 10.1056/NEJMoa0706467.

4. Hunt EA et al. Simulation of in-hospital pediatric medical emergencies and cardiopulmonary arrests: Highlighting the importance of the first 5 minutes. Pediatrics. 2008;121(1):e34-e43. doi: 10.1542/peds.2007-0029.

5. Reeson M et al. Defibrillator design and usability may be impeding timely defibrillation. Jt Comm J Qual Patient Saf. 2018 Sep;44(9):536-544. doi: 10.1016/j.jcjq.2018.01.005.

6. Hunt EA et al. American Heart Association’s Get With The Guidelines – Resuscitation Investigators. Association between time to defibrillation and survival in pediatric in-hospital cardiac arrest with a first documented shockable rhythm. JAMA Netw Open. 2018;1(5):e182643. doi: 10.1001/jamanetworkopen.2018.2643.

7. Cummins RO et al. Recommended guidelines for reviewing, reporting, and conducting research on in-hospital resuscitation: the in-hospital “Utstein” style. Circulation. 1997;95:2213-39.

8. Stewart JA. Determining accurate call-to-shock times is easy. Resuscitation. 2005 Oct;67(1):150-1.

9. In infrequent cases, the code cart and defibrillator may be moved to a deteriorating patient before a full arrest. Such occurrences should be analyzed separately or excluded from analysis.

10. Valenzuela TD et al. Estimating effectiveness of cardiac arrest interventions: a logistic regression survival model. Circulation. 1997;96(10):3308-13. doi: 10.1161/01.cir.96.10.3308.


VRIC Abstract Submission Site Now Open


The Vascular Research Initiatives Conference emphasizes emerging vascular science and encourages interactive participation by attendees. Scheduled the day before the Vascular Discovery Scientific Sessions, VRIC is considered a key event for connecting with vascular researchers. Join us for the 2020 program, "VRIC Chicago 2020: From Discovery to Translation." The SVS is now accepting abstracts for the program; submissions will be accepted through January 7. Submit your abstract now and be a part of this important event for vascular researchers.


CVD risk in black SLE patients 18 times higher than in whites


 

Black race was the single greatest predictor of cardiovascular disease (CVD) events in systemic lupus erythematosus, with black patients having an 18-fold higher risk than white patients from 2 years before to 8 years after diagnosis, according to a review of 336 patients in the Georgia Lupus Registry that was presented at the annual meeting of the American College of Rheumatology.

The greatest risk was in the first 2 years after diagnosis, a pattern reported previously in white patients but not, until now, in a predominantly (75%) black cohort.

Lupus is known to strike earlier and be more aggressive in black patients, so “we were expecting racial disparities in incident CVD, but” the magnitude of the increased risk “was very surprising. This study [identifies] a population that needs more attention, more targeted CVD prevention. We have to intervene early and be on top of everything,” especially for black patients, said lead investigator Shivani Garg, MD, an assistant professor of rheumatology at the University of Wisconsin–Madison.

Lipids, blood pressure, and the other usual CVD risk factors, as well as lupus itself, have to be optimally controlled; glucocorticoid use has to be limited as much as possible; and adherence to hydroxychloroquine, which has been shown to reduce CVD events in lupus patients, has to be improved, she said in an interview.

The 336 patients, mostly women (87%) from the Atlanta area, were diagnosed during 2002-2004 at a mean age of 40 years. Dr. Garg and associates reviewed CVD events – ischemic heart disease, stroke, transient ischemic attack, and peripheral vascular disease – and death over 16 years, beginning 2 years before diagnosis.

About 22% of subjects had a CVD event, most commonly within 2 years after diagnosis. The risk was 500% higher in black patients overall (adjusted hazard ratio, 6.4; 95% confidence interval, 2.4-17.5; P = .0003), and markedly higher in the first 10 years (aHR, 18; 95% CI, 2.2-141; P < .0001). The findings were not adjusted for socioeconomic factors.

In the first 12 years of the study, the mean age at lupus diagnosis was 46 years and the first CVD event occurred at an average age of 48 years. From 12 to 16 years of follow-up, the mean age at diagnosis was 38 years, and the first CVD event occurred at an average age of 52 years.

Age older than 65 years (aHR, 7.9; 95% CI, 2.2-29) and the presence of disease-associated antibodies (aHR, 2.1; 95% CI, 1.01-4.4) increased CVD risk, which wasn’t surprising, but another predictor – discoid lupus – was unexpected (aHR, 3.2; 95% CI, 1.5-6.8). “A lot of times, we’ve considered discoid rash to be a milder form, but these patients have some kind of chronic, smoldering inflammation that is leading to atherosclerosis,” Dr. Garg said.

At diagnosis, 84% of the subjects had lupus hematologic disorders, 69% immunologic disorders, and 14% a discoid rash. CVD risk factor data were not collected.

There was no external funding, and the investigators reported no disclosures.

SOURCE: Garg S et al. Arthritis Rheumatol. 2019;71(suppl 10), Abstract 805.
