Portfolio of physician-led measures nets better quality of care

Article Type
Changed
Fri, 01/04/2019 - 13:32

 

– A multifaceted portfolio of physician-led measures with feedback and financial incentives can dramatically improve the quality of care provided at cancer centers, suggests the experience of Stanford (Calif.) Health Care.

Physician leaders of 13 disease-specific cancer care programs (CCPs) identified measures of care that were meaningful to their team and patients, spanning the spectrum from new diagnosis through end of life and survivorship care. Quality and analytics teams developed 16 corresponding metrics and performance reports used for feedback. Programs were also given a financial incentive to meet jointly set targets.

After a year, the CCPs had improved on 12 of the metrics and maintained high baseline levels of performance on the other 4 metrics, investigators reported at a symposium on quality care sponsored by the American Society of Clinical Oncology. For example, they got better at entering staging information in a dedicated field in the electronic health record (+50% absolute increase), recording hand and foot pain (+34%), performing hepatitis B testing before rituximab use (+17%), and referring patients with ovarian cancer for genetic counseling (+43%).

Susan London/Frontline Medical News
Ms. Julie Bryar Porter
“This [initiative] was quite resource intensive for the modest number of patients’ lives covered in our measurements,” commented lead investigator Julie Bryar Porter, MSc, administrative director of the Blood and Marrow Transplant Program and the Cancer Quality Program at Stanford Health Care. “However, it was encouraging that all metrics maintained their strong results or improved performance over time to meet their target.”

“The main drivers, I would argue, besides the Hawthorne effect, were a high level of physician engagement in the selection, management, and improvement of the metrics, and these metrics excited the care teams, which also provided some motivation,” she said. “We provided real-time, high-quality feedback of performance. And last but probably not least was a financial incentive for the CCP as a team, not part of any individual compensation.”

The investigators plan to continue measuring the metrics, to expand them to other sites in their network, and to add new metrics that are common across the programs to minimize measurement burden, according to Ms. Porter. “We also plan to build cohorts for value-based care and unplanned care like ED visits and unplanned admissions. Finally, we want to keep momentum going and capitalize upon provider engagement in value measurement and improvement,” she said.

“Based on this work and prior abstracts, … there are many validated metrics to be used. So, to choose those metrics and to choose them through local leadership support, most importantly, engaging frontline staff and having their buy-in of the measures that you are collecting are important,” commented invited discussant Jessica A. Zerillo, MD, MPH, of the Beth Israel Deaconess Medical Center in Boston. “And this can include using incentives that drive such stakeholders, whether they be financial or simply pride with public reporting.”

Susan London/Frontline Medical News
Dr. Jessica A. Zerillo
To take this effort forward, certain issues will need to be addressed, she maintained. First, “how do we sustain data collection and change with the fewer resources that continue to be available to us? How do we integrate quality measurement into overall system metrics so that we can demonstrate to our administrative colleagues that the work that we do in quality has an importance at the system level? And lastly, how do we implement patient-reported and long-term outcomes to enhance these measures?”

Study details

“In the summer of 2015, we were starting to feel a lot of pressure to prepare for evolving reimbursement models,” Ms. Porter said, explaining the initiative’s genesis. “Mainly, how do we define our value, and how can we measure and improve on that value of the care we deliver? One answer, of course, is to measure and reduce unnecessary variation. And we knew, to be successful, we had to increase our physician engagement and leadership in the selection and improvement of our metrics.”

 

 


 

 

Article Source

AT THE QUALITY CARE SYMPOSIUM

Vitals

 

Key clinical point: Implementation of the portfolio of measures selected by physician leaders improved metrics of quality care.

Major finding: Over a 1-year period, the center saw improvements in practices such as completion of staging modules (+50%), recording of hand and foot pain (+34%), hepatitis B testing before rituximab use (+17%), and referral of patients with ovarian cancer for genetic counseling (+43%).

Data source: An initiative targeting 16 quality metrics undertaken by 13 cancer care programs at Stanford Health Care.

Disclosures: Ms. Porter disclosed that she had no relevant conflicts of interest.

Machine learning melanoma

Article Type
Changed
Fri, 01/18/2019 - 16:37

 

What if an app could diagnose melanoma from a photo? That was my idea. In December 2009, Google introduced Google Goggles, an application that recognized images. At the time, I thought, “Wouldn’t it be neat if we could use this with telederm?” I even pitched it to a friend at the search giant. “Great idea!” he wrote back, placating me. For those uninitiated in innovation, “Great idea!” is a euphemism for “Yeah, we thought of that.”

Yes, it isn’t only mine; no doubt, many of you had this same idea: Let’s use amazing image interpretation capabilities from companies like Google or Apple to help us make diagnoses. Sounds simple. It isn’t. This is why most melanoma-finding apps are for entertainment purposes only – they don’t work.

Dr. Jeffrey Benabio
To reliably get this right takes immense experience and intuition, things we do better than computers. Or do we? Since 2009, processors have sped up and machine learning has become exponentially better. Now cars drive themselves and software can ID someone even in a grainy video. The two are related: Both require tremendous processing power and sophisticated algorithms to achieve artificial intelligence (AI). You’ve likely heard about AI or machine learning lately. If you’re unsure what all the fuss is about, read my previous column (Dermatology News, March 2017, p. 30).

So can melanoma be diagnosed from an app? A Stanford University team believes so. They trained a machine learning system to make dermatologic diagnoses from photos of skin lesions. To overcome previous barriers, they used open-source software from Google and awesome processors. For a start, they pretrained the program on over 1.28 million images. Then they fed it 128,450 images of known diagnoses.

Then, just as when Google’s AlphaGo algorithm challenged Lee Sedol, the world Go champion, the Stanford research team challenged 21 dermatologists. They had to choose whether they would biopsy/treat or reassure patients based on photos of benign lesions, keratinocyte carcinomas, clinical melanomas, and dermoscopic melanomas. Guess who won?

In a stunning victory (or defeat, if you’re rooting for our team), the trained algorithm matched or outperformed all the dermatologists when scored on sensitivity-specificity curves. While we dermatologists, of course, use more than just a photo to diagnose skin cancer, many around the globe don’t have access to us. Based on these findings, they might need access only to a smartphone to get potentially life-saving advice.
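The “sensitivity-specificity curves” comparison works like this: each reader (human or algorithm) is a point defined by its sensitivity and specificity, and one reader outperforms another only if it is at least as good on both axes. A minimal sketch, using invented confusion-matrix counts rather than the study’s actual data:

```python
# Hedged illustration of a sensitivity-specificity comparison.
# All counts below are invented for demonstration; they are NOT
# figures from the Stanford study.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of melanomas correctly flagged
    specificity = tn / (tn + fp)  # fraction of benign lesions correctly cleared
    return sensitivity, specificity

def dominates(reader_a, reader_b):
    """True if reader_a is at least as good on both axes (lies above and
    to the left of reader_b on the ROC plane)."""
    return reader_a[0] >= reader_b[0] and reader_a[1] >= reader_b[1]

# Hypothetical reads of 100 melanomas and 100 benign lesions:
algorithm = sensitivity_specificity(tp=91, fn=9, tn=88, fp=12)
dermatologist = sensitivity_specificity(tp=89, fn=11, tn=84, fp=16)

print(algorithm, dermatologist, dominates(algorithm, dermatologist))
```

Varying the decision threshold of the algorithm traces out the full curve; the study’s claim is that the dermatologists’ individual operating points fell on or below the algorithm’s curve.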

But, what does this mean? Will we someday be outsourced to AI? Will a future POTUS promise to “bring back the doctor industry?” Not if we adapt. The future is bright – if we learn to apply machine learning in ways that can have an impact. (Brain + Computer > Brain.) Consider the following: An optimized ophthalmologist who reads retinal scans prediagnosed by a computer. A teledermatologist who uses AI to perform perfectly in diagnosing melanoma.

Patients have always wanted high quality and high touch care. In the history of medicine, we’ve never been better at both than we are today. Until tomorrow, when we’ll be better still.


 

Jeff Benabio, MD, MBA, is director of Healthcare Transformation and chief of dermatology at Kaiser Permanente San Diego. Dr. Benabio is @Dermdoc on Twitter. Write to him at [email protected]. He has no disclosures related to this column.


Auto-HCT patients run high risks for myeloid neoplasms

Article Type
Changed
Fri, 01/04/2019 - 10:01

 

– For post–autologous hematopoietic cell transplant (auto-HCT) patients, the 10-year risk of developing a myeloid neoplasm was as high as 6%, based on a recent review of two large cancer databases.

Older age at transplant, receiving total body irradiation, and receiving multiple lines of chemotherapy before transplant all upped the risk of later cancers, according to a study presented by Shahrukh Hashmi, MD, and his collaborators at the combined annual meetings of the Center for International Blood & Marrow Transplant Research (CIBMTR) and the American Society for Blood and Marrow Transplantation.

“The guidelines for autologous stem cell transplantation for surveillance for AML [acute myeloid leukemia] and MDS [myelodysplastic syndrome] need to be clearly formulated. We are doing 30,000 autologous transplants a year globally and these patients are at risk for the most feared cancer, which is leukemia and MDS, for which outcomes are very poor,” said Dr. Hashmi of the Mayo Clinic in Rochester, Minn.

The researchers examined data from auto-HCT patients with diagnoses of non-Hodgkin lymphoma (NHL), Hodgkin lymphoma, and multiple myeloma to determine the relative risks of developing AML and MDS. The study also explored which patient characteristics and aspects of the conditioning regimen might affect risk for later myeloid neoplasms.

In the dataset of 9,108 patients that Dr. Hashmi and his colleagues obtained from CIBMTR, 3,540 patients had NHL.

“As age progresses, the risk of acquiring myeloid neoplasms increases significantly,” he said, noting that the relative risk (RR) rose to 4.52 for patients aged 55 years and older at the time of transplant (95% confidence interval [CI], 2.63-7.77; P less than .0001).

Patients with NHL who received more than two lines of chemotherapy had approximately double the rate of myeloid cancers (RR, 1.93; 95% CI, 1.34-2.78; P = .0004).

The type of conditioning regimen made a difference for NHL patients as well. With total-body irradiation set as the reference at RR = 1, carmustine-etoposide-cytarabine-melphalan (BEAM) or similar therapies were relatively protective, with an RR of 0.59 (95% CI, 0.40-0.87; P = .0083). Also protective were cyclophosphamide-carmustine-etoposide (CBV) and similar therapies (RR, 0.57; 95% CI, 0.33-0.99; P = .0463).

Age at transplant was a factor among the 4,653 patients with multiple myeloma, with an RR of 2.47 for those transplanted at age 55 years or older (95% CI, 1.55-3.93; P = .0001). Multiple lines of chemotherapy also increased risk, with patients who received more than two lines having an RR of 1.77 for myeloid neoplasms (95% CI, 0.04-2.06; P = .0302). Women had less than half the risk of myeloid neoplasms compared with men (RR, 0.44; 95% CI, 0.28-0.69; P = .0003).

Among the 915 study patients with Hodgkin lymphoma, patients aged 45 years and older at the time of transplant carried an RR of 5.59 for new myeloid neoplasms (95% CI, 2.98-11.70; P less than .0001).

Total-body irradiation was received by 14% of patients with non-Hodgkin lymphoma and by 5% of patients with multiple myeloma and Hodgkin lymphoma. Total-body irradiation was associated with a fourfold increase in neoplasm risk (RR, 4.02; 95% CI, 1.40-11.55; P = .0096).
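The relative risks and 95% confidence intervals quoted throughout follow the standard epidemiologic form: RR is the ratio of event rates in the exposed versus unexposed groups, and the CI is usually obtained from a normal approximation on log(RR). A minimal sketch with invented counts (not the CIBMTR data):

```python
# Hedged sketch of a relative-risk calculation with a 95% CI via the
# usual log-RR normal approximation. Counts are invented for
# illustration; they are NOT from the study.
import math

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """RR and 95% CI for a/n1 events (exposed) vs b/n2 events (unexposed)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
    half_width = z * se
    return rr, rr * math.exp(-half_width), rr * math.exp(half_width)

# Invented example: 30/1,000 exposed vs 15/2,000 unexposed events
# gives RR = 0.030 / 0.0075 = 4.0.
rr, ci_lo, ci_hi = relative_risk_ci(30, 1000, 15, 2000)
print(f"RR {rr:.2f} (95% CI, {ci_lo:.2f}-{ci_hi:.2f})")
```

An RR whose CI excludes 1 (as in the TBI finding above) is conventionally read as statistically significant at the .05 level.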

Dr. Hashmi and his colleagues then examined the incidence rates for myelodysplastic syndrome and acute myelogenous leukemia in the Surveillance, Epidemiology, and End Results (SEER) database, finding that, even at baseline, the rates of myeloid neoplasms were higher for patients with NHL, Hodgkin lymphoma, or multiple myeloma than for the general population of cancer survivors. “Post NHL, Hodgkin lymphoma, and myeloma, the risks are significantly higher to begin with. … We saw a high risk of AML and MDS compared to the SEER controls – risks as high as 100 times greater for auto-transplant patients,” said Dr. Hashmi. “A risk of one hundred times more for MDS was astounding, surprising, unexpected,” he said. The risk of AML, he said, was elevated about 10-50 times in the CIBMTR data.

The cumulative incidence of MDS or AML for NHL was 6% at 10 years post transplant, 4% for Hodgkin lymphoma, and 3% for multiple myeloma.

A limitation of the study, said Dr. Hashmi, was that the investigators did not assess for post-transplant maintenance chemotherapy.

“We have to prospectively assess our transplant patients in a fashion to detect changes early. Or maybe they were present at the time of transplant and we never did sophisticated methods [like] next-generation sequencing” to detect them, he said.

Dr. Hashmi reported no conflicts of interest.
 


The researchers examined data from auto-HCT patients with diagnoses of non-Hodgkin lymphoma (NHL), Hodgkin lymphoma, and multiple myeloma to determine the relative risks of developing AML and MDS. The study also explored which patient characteristics and aspects of the conditioning regimen might affect risk for later myeloid neoplasms.

In the dataset of 9,108 patients that Dr. Hashmi and his colleagues obtained from CIBMTR, 3,540 patients had NHL.

“As age progresses, the risk of acquiring myeloid neoplasms increases significantly,” he said, noting that the relative risk (RR) rose to 4.52 for patients aged 55 years and older at the time of transplant (95% confidence interval [CI], 2.63-7.77; P less than .0001).
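The relative risks quoted throughout this report come from comparing event rates between patient groups. As a rough sketch of the calculation (with hypothetical counts chosen only so the RR lands near 4.52; these are not the study's data), a relative risk and its 95% Wald confidence interval can be computed like this:

```python
import math

# Hypothetical counts for illustration only (not from the CIBMTR dataset):
events_exp, total_exp = 40, 1000      # myeloid neoplasms among older patients
events_unexp, total_unexp = 10, 1130  # myeloid neoplasms among younger patients

risk_exp = events_exp / total_exp
risk_unexp = events_unexp / total_unexp
rr = risk_exp / risk_unexp            # relative risk

# Standard error of log(RR), then a 95% Wald confidence interval
se_log_rr = math.sqrt(
    1 / events_exp - 1 / total_exp + 1 / events_unexp - 1 / total_unexp
)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

The interval the study reports would additionally reflect its adjusted model, so these hypothetical counts reproduce only the point estimate, not the published CI.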

Patients with NHL who received more than two lines of chemotherapy had approximately double the rate of myeloid cancers (RR, 1.93; 95% CI, 1.34-2.78; P = .0004).

The type of conditioning regimen made a difference for NHL patients as well. With total-body irradiation set as the reference at RR = 1, carmustine-etoposide-cytarabine-melphalan (BEAM) or similar therapies were relatively protective, with an RR of 0.59 (95% CI, 0.40-0.87; P = .0083). Also protective were cyclophosphamide-carmustine-etoposide (CBV) and similar therapies (RR, 0.57; 95% CI, 0.33-0.99; P = .0463).

Age at transplant was a factor among the 4,653 patients with multiple myeloma, with an RR of 2.47 for those transplanted at age 55 years or older (95% CI, 1.55-3.93; P = .0001). Multiple lines of chemotherapy also increased risk: patients who received more than two lines had an RR of 1.77 for myeloid neoplasm (95% CI, 0.04-2.06; P = .0302). Women had less than half the risk of men (RR, 0.44; 95% CI, 0.28-0.69; P = .0003).

Among the 915 study patients with Hodgkin lymphoma, patients aged 45 years and older at the time of transplant carried an RR of 5.59 for new myeloid neoplasms (95% CI, 2.98-11.70; P less than .0001).

Total-body irradiation was received by 14% of patients with non-Hodgkin lymphoma and by 5% of patients with multiple myeloma and Hodgkin lymphoma. Total-body irradiation was associated with a fourfold increase in neoplasm risk (RR, 4.02; 95% CI, 1.40-11.55; P = .0096).

Dr. Hashmi and his colleagues then examined the incidence rates for myelodysplastic syndrome and acute myelogenous leukemia in the Surveillance, Epidemiology, and End Results (SEER) database, finding that, even at baseline, the rates of myeloid neoplasms were higher for patients with NHL, Hodgkin lymphoma, or MM than for the general population of cancer survivors. “Post NHL, Hodgkin lymphoma, and myeloma, the risks are significantly higher to begin with. … We saw a high risk of AML and MDS compared to the SEER controls – risks as high as 100 times greater for auto-transplant patients,” said Dr. Hashmi. “A risk of one hundred times more for MDS was astounding, surprising, unexpected.” The risk of AML, he said, was elevated about 10-50 times in the CIBMTR data.
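The "100 times greater" comparison against SEER controls is, in effect, a standardized incidence ratio: observed cases in the transplant cohort divided by the cases expected from the reference population's rate. A minimal sketch, with entirely hypothetical numbers:

```python
# Standardized incidence ratio (SIR) sketch. All values below are
# hypothetical, for illustration only, not from the study.

observed_mds = 50          # MDS cases seen in the transplant cohort
person_years = 20_000      # total cohort follow-up
reference_rate = 2.5e-5    # MDS cases per person-year in SEER-like controls

expected_mds = reference_rate * person_years  # cases expected at baseline rates
sir = observed_mds / expected_mds             # observed / expected

print(f"SIR = {sir:.0f}x the reference rate")
```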

The cumulative incidence of MDS or AML at 10 years post transplant was 6% for NHL, 4% for Hodgkin lymphoma, and 3% for multiple myeloma.

A limitation of the study, said Dr. Hashmi, was that the investigators did not assess for post-transplant maintenance chemotherapy.

“We have to prospectively assess our transplant patients in a fashion to detect changes early. Or maybe they were present at the time of transplant and we never did sophisticated methods [like] next-generation sequencing” to detect them, he said.

Dr. Hashmi reported no conflicts of interest.
 

Article Source

AT THE BMT TANDEM MEETINGS

Vitals

 

Key clinical point: Autologous hematopoietic cell transplant (auto-HCT) patients were at increased risk for later myelodysplastic syndrome and acute myeloid leukemia.

Major finding: The 10-year cumulative risk of MDS or AML for auto-HCT patients with Hodgkin or non-Hodgkin lymphoma or multiple myeloma was as high as 6%.

Data source: Review of 9,108 patients from an international transplant database.

Disclosures: Dr. Hashmi reported no conflicts of interest.

Local Data on Cancer Mortality Reveal Valuable ‘Patterns’ in Changes

Article Type
Changed
Thu, 12/15/2022 - 14:54
Researchers find that different factors contribute to the variation of mortality rates from state to county across the U.S. for certain cancer types.

Cancer death rates in the U.S. declined by 20% between 1980 and 2014, but not everywhere: in 160 counties, mortality rose substantially during the same period, according to University of Washington researchers. And those weren’t the only striking variations they found.

The researchers analyzed data on deaths from 29 cancer types. Deaths dropped from about 240 per 100,000 people in 1980 to 192 per 100,000 in 2014. But the researchers say they found “stark” disparities. In 2014, the county with the highest overall cancer mortality had about 7 times as many cancer deaths per 100,000 residents as the county with the lowest overall cancer mortality. For many cancers there were distinct clusters of counties in different regions with especially high mortality, such as in Kentucky, West Virginia, and Alabama.
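As a quick arithmetic check, the national rates reported above reproduce the stated decline exactly:

```python
# Deaths per 100,000 people, from the figures quoted in the article.
rate_1980 = 240.0
rate_2014 = 192.0

percent_decline = (rate_1980 - rate_2014) / rate_1980 * 100
print(f"{percent_decline:.0f}% decline")  # 20% decline
```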

Related: Major Cancer Death Rates Are Down

The pattern of changes across counties also varied tremendously by cancer type, the researchers say. For breast, cervical, prostate, testicular, and several other cancers, for instance, mortality rates declined in nearly all counties, whereas mortality from liver cancer and mesothelioma increased in nearly all counties.

Previous reports on geographic differences in cancer mortality have focused on variation by state, the researchers say. But the local patterns they found would have been masked by a national or state number. Their innovative approach to aggregating and analyzing the data at the county level has value, they note, because “public health programs and policies are mainly designed and implemented at the local level.”

Related: Demographic and Clinical Characteristics of Patients With Polycythemia Vera (PV) in the U.S. Veterans Population

The policy response from the public health and medical care communities, the researchers add, depends on “parsing these trends into component factors”: trends driven by known risk factors, unexplained trends in incidence, cancers for which screening and early detection can make a major difference, and cancers for which high-quality treatment can make a major difference. Local information, the researchers point out, can be useful for health care practitioners to understand community needs for care and aid in identifying “cancer hot spots” that need more investigation.

In an article for the National Cancer Institute’s newsletter, Eric Durbin, DPh, director of cancer informatics for the Kentucky Cancer Registry at the University of Kentucky Markey Cancer Center, cautioned against basing too many assumptions on local data, especially in rural, sparsely populated areas where small number changes can translate into giant percentages. “We really have no other way to guide cancer prevention and control activities other than using [that] data. Otherwise, you’re just throwing money or resources at a problem without any way to measure the impact,” added Durbin.

Sources:

  1. National Cancer Institute. U.S. cancer mortality rates falling, but some regions left behind, study finds. https://www.cancer.gov/news-events/cancer-currents-blog/2017/cancer-death-disparities. Published February 21, 2017. Accessed March 15, 2017. 
  2. Mokdad AH, Dwyer-Lindgren L, Fitzmaurice C, et al. JAMA. 2017;317(4):388-406. doi:10.1001/jama.2016.20324.

Computerized systems reduce risk of VTE, analysis suggests

Article Type
Changed
Thu, 03/16/2017 - 00:04

Photo by Piotr Bodzek
Team performing surgery

The use of computerized clinical decision support systems can reduce the risk of venous thromboembolism (VTE) among surgical patients, according to new research.

Results of a review and meta-analysis showed that use of these computerized systems was associated with a significant increase in the proportion of surgical patients with adequate VTE prophylaxis and a significant decrease in the patients’ risk of developing VTE.

Zachary M. Borab, of the New York University School of Medicine in New York, New York, and his colleagues reported these findings in JAMA Surgery.

A computerized clinical decision support system is rule- or algorithm-based software that can be integrated into an electronic health record and uses data to present evidence-based knowledge at the individual patient level.

Borab and his colleagues conducted a review and meta-analysis to assess the effect of such systems on increasing adherence to VTE prophylaxis guidelines and decreasing post-operative VTEs, when compared with routine care.

The researchers combed through several databases looking for studies of surgical patients in which investigators compared routine care to computerized clinical decision support systems with VTE risk stratification and assistance in ordering VTE prophylaxis.
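The study does not publish the rules the reviewed systems used, but the general shape of such rule-based logic can be sketched as follows; the risk factors, weights, and thresholds here are simplified placeholders, not a validated score:

```python
# Illustrative sketch of a rule-based VTE decision support step:
# score a surgical patient's risk from EHR fields, then map the score
# to a prophylaxis suggestion. Factors and cutoffs are hypothetical.

def vte_risk_score(age, prior_vte, malignancy, major_surgery):
    """Toy additive risk score from a few EHR-derived fields."""
    score = 0
    if age >= 60:
        score += 1
    if prior_vte:
        score += 3
    if malignancy:
        score += 2
    if major_surgery:
        score += 2
    return score

def prophylaxis_suggestion(score):
    """Map a toy risk score to a suggested prophylaxis order."""
    if score >= 5:
        return "pharmacologic + mechanical prophylaxis"
    if score >= 2:
        return "pharmacologic prophylaxis"
    return "early ambulation only"

score = vte_risk_score(age=67, prior_vte=False, malignancy=True, major_surgery=True)
print(score, prophylaxis_suggestion(score))
```

In a real system this logic would fire at order entry and present the suggestion alongside the supporting guideline evidence.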

The team found 11 studies that were eligible for meta-analysis—9 prospective and 2 retrospective trials. The trials included a total of 156,366 patients—104,241 in the computerized clinical decision support systems group and 52,125 in the control group.

Analysis of these data revealed that using the computerized systems was associated with a significant increase in the rate of appropriate ordering of VTE prophylaxis. The odds ratio was 2.35 (95% CI, 1.78-3.10; P<0.001).

Use of the computerized systems was also associated with a significant decrease in the risk of VTE. The risk ratio was 0.78 (95% CI, 0.72-0.85; P<0.001).
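Pooled estimates like these typically come from inverse-variance weighting of the individual study results on the log scale. A minimal fixed-effect sketch, using hypothetical per-study odds ratios (the published meta-analysis may have used a different model):

```python
import math

# Fixed-effect (inverse-variance) pooling of odds ratios.
# Per-study values below are hypothetical, for illustration only.
studies = [
    # (odds ratio, lower 95% CI bound, upper 95% CI bound)
    (2.1, 1.4, 3.2),
    (2.6, 1.7, 4.0),
    (2.4, 1.3, 4.4),
]

weighted_sum = 0.0
weight_total = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI width
    w = 1 / se ** 2                                  # inverse-variance weight
    weighted_sum += w * log_or
    weight_total += w

pooled_log_or = weighted_sum / weight_total
pooled_se = math.sqrt(1 / weight_total)
pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
      math.exp(pooled_log_or + 1.96 * pooled_se))

print(f"Pooled OR = {pooled_or:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```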

Based on these results, Borab and his colleagues concluded that computerized clinical decision support systems should be used to help clinicians assess the risk of VTE and provide the appropriate prophylaxis in surgical patients.


Team develops paper-based test for blood typing

Article Type
Changed
Thu, 03/16/2017 - 00:02

Photo by Graham Colm
Blood samples

Researchers say they have created a paper-based assay that provides “rapid and reliable” blood typing.

The team used this test to analyze 3550 blood samples and observed an accuracy rate of more than 99.9%.

The test was able to classify samples into the common ABO and Rh blood groups in less than 30 seconds.

With slightly more time (but still in less than 2 minutes), the assay was able to identify multiple rare blood types.

Hong Zhang, of Southwest Hospital, Third Military Medical University in Chongqing, China, and colleagues described this test in Science Translational Medicine.

To create the test, the researchers took advantage of chemical reactions between blood serum proteins and the dye bromocresol green.

The team applied a small sample of whole blood onto a test-strip containing antibodies that recognized different blood group antigens.

The results appeared as visual color changes—teal if a blood group antigen was present in a sample and brown if not.

The researchers also incorporated a separation membrane to isolate plasma from whole blood, which allowed them to simultaneously identify specific blood cell antigens and detect antibodies in plasma based on how the blood cells clumped together (also known as forward and reverse typing), without a centrifuge.
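The forward/reverse logic mentioned above follows the standard ABO interpretation: forward typing tests the patient's cells against anti-A and anti-B reagents, reverse typing tests the plasma against known A and B cells, and the two patterns must agree. A minimal sketch (the encoding and function name are illustrative, not from the paper):

```python
# Standard ABO interpretation of combined forward and reverse typing.
# Each key is (reacts with anti-A, reacts with anti-B,
#              plasma reacts with A cells, plasma reacts with B cells).
ABO_TABLE = {
    (True,  False, False, True ): "A",
    (False, True,  True,  False): "B",
    (True,  True,  False, False): "AB",
    (False, False, True,  True ): "O",
}

def interpret_abo(anti_a, anti_b, a_cells, b_cells):
    """Return the ABO group, or None if forward and reverse typing disagree."""
    return ABO_TABLE.get((anti_a, anti_b, a_cells, b_cells))

print(interpret_abo(True, False, False, True))   # A
print(interpret_abo(False, False, True, True))   # O
```

A discordant pattern (returning None here) is exactly the situation in which a laboratory would investigate further before issuing a type.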

The team said the rapid turnaround time of this test could be ideal for resource-limited situations, such as war zones, remote areas, and during emergencies.


Death risks associated with long-term DAPT

Article Type
Changed
Thu, 03/16/2017 - 00:01

Photo courtesy of CDC
Prescription medications

A new analysis suggests that patients who receive dual antiplatelet therapy (DAPT) for at least 1 year after coronary stenting are more likely to experience ischemic events than bleeding events, but both types of events are associated with a high risk of death.

Researchers performed a secondary analysis of data from the DAPT study and found that 4% of patients had ischemic events and 2% had bleeding events between 12 and 33 months after stenting.

Both types of events carried a serious mortality risk—an 18-fold increase in the risk of death after any bleeding event and a 13-fold increase after any ischemic event.

These findings were published in JAMA Cardiology.

“We know from previous trials that continuing dual antiplatelet therapy longer than 12 months after coronary stenting is associated with both decreased ischemia and increased bleeding risk, so these findings reinforce the need to identify individuals who are likely to experience more benefit than harm from continued dual antiplatelet therapy,” said study author Eric Secemsky, MD, of Massachusetts General Hospital in Boston.

For this study, Dr Secemsky and his colleagues analyzed data collected in the DAPT trial, which was designed to determine the benefits and risks of continuing DAPT for more than a year.

The trial enrolled 25,682 patients who were set to receive a drug-eluting or bare-metal stent. After stent placement, they received DAPT—aspirin plus thienopyridine (clopidogrel or prasugrel)—for at least 12 months.

After 12 months of therapy, patients who were treatment-compliant and event-free (no myocardial infarction, stroke, or moderate or severe bleeding) were randomized to continued DAPT or aspirin alone for an additional 18 months. At month 30, patients discontinued randomized treatment but remained on aspirin and were followed for 3 months.

For the present secondary analysis, Dr Secemsky and his colleagues examined data from all 11,648 randomized patients.

Ischemic events

During the study period, 478 patients (4.1%) had 502 ischemic events, including 306 myocardial infarctions, 113 cases of stent thrombosis, and 83 ischemic strokes.

The death rate among patients with ischemic events was 10.9% (n=52), and 78.8% of these deaths (n=41) were attributable to cardiovascular causes. The death rate was 0.7% among patients without a cardiovascular event (82/11,082, P<0.001).

The cumulative incidence of death after ischemic events was 0.5% (0.3% with myocardial infarction, 0.1% with stent thrombosis, and 0.1% with ischemic stroke) among the more than 11,600 randomized patients.

The unadjusted annualized mortality rate after an ischemic event was 27.2 per 100 person-years.
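A rate "per 100 person-years" is simply events divided by the total follow-up time accrued, scaled to 100 person-years. A sketch using the study's 52 deaths after ischemic events and a hypothetical follow-up total chosen only to reproduce the reported rate:

```python
# Annualized mortality rate per 100 person-years.
deaths = 52           # deaths after ischemic events (reported in the study)
person_years = 191.2  # hypothetical total follow-up after those events

rate_per_100py = deaths / person_years * 100
print(f"{rate_per_100py:.1f} per 100 person-years")
```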

When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, having an ischemic event was associated with a 12.6-fold increased risk of death (hazard ratio=14.6 for stent thrombosis, 13.1 for ischemic stroke, and 9.1 for myocardial infarction).

Deaths after ischemic stroke or stent thrombosis usually occurred soon after the event, but the increased risk of death from a myocardial infarction persisted throughout the study period.

Bleeding events

A total of 232 patients (2.0%) had 235 bleeding events—155 moderate and 80 severe bleeds.

The death rate among patients with bleeding events was 17.7% (n=41), compared to 1.6% among patients without a bleed (181/11,416, P<0.001). However, more than half of the deaths occurring after a bleeding event were attributable to cardiovascular causes (53.7%, n=22).

The cumulative incidence of death after a bleeding event was 0.3% (0.1% with moderate and 0.2% with severe bleeding) in the randomized study population.

The unadjusted annualized mortality rate after a bleeding event was 21.5 per 100 person-years.

When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, a bleeding event was associated with an 18.1-fold increased risk of death (hazard ratio=36.3 for a severe bleed and 8.0 for a moderate bleed).

 

 

Deaths following bleeding events primarily occurred within 30 days of the event.

“Since our analysis found that the development of both ischemic and bleeding events portend a particularly poor overall prognosis, we conclude that we must be thoughtful when prescribing any treatment, such as dual antiplatelet therapy, that may include bleeding risk,” Dr Secemsky said.

“In order to understand the implications of therapies that have potentially conflicting effects—such as decreasing ischemic risk while increasing bleeding risk—we must understand the prognostic factors related to these events. Our efforts now need to be focused on individualizing treatment and identifying those who are at the greatest risk of developing recurrent ischemia and at the lowest risk of developing a bleed.”

In a previous study, Dr Secemsky and his colleagues developed a risk score using DAPT data that can help determine whether or not DAPT should continue past the 1-year mark.

The tool has recently been included in American College of Cardiology(ACC)/American Heart Association guidelines on the duration of DAPT and is available on the ACC website.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

[Image: prescription medications. Photo courtesy of the CDC]

A new analysis suggests that patients who receive dual antiplatelet therapy (DAPT) for at least 1 year after coronary stenting are more likely to experience ischemic events than bleeding events, but both types of events are associated with a high risk of death.

Researchers performed a secondary analysis of data from the DAPT study and found that 4% of patients had ischemic events and 2% had bleeding events between 12 and 33 months after stenting.

Both types of events carried a serious mortality risk: an 18-fold increase in the risk of death after any bleeding event and a 13-fold increase after any ischemic event.

These findings were published in JAMA Cardiology.

“We know from previous trials that continuing dual antiplatelet therapy longer than 12 months after coronary stenting is associated with both decreased ischemia and increased bleeding risk, so these findings reinforce the need to identify individuals who are likely to experience more benefit than harm from continued dual antiplatelet therapy,” said study author Eric Secemsky, MD, of Massachusetts General Hospital in Boston.

For this study, Dr Secemsky and his colleagues analyzed data collected in the DAPT trial, which was designed to determine the benefits and risks of continuing DAPT for more than a year.

The trial enrolled 25,682 patients who were set to receive a drug-eluting or bare-metal stent. After stent placement, they received DAPT—aspirin plus thienopyridine (clopidogrel or prasugrel)—for at least 12 months.

After 12 months of therapy, patients who were treatment-compliant and event-free (no myocardial infarction, stroke, or moderate or severe bleeding) were randomized to continued DAPT or aspirin alone for an additional 18 months. At month 30, patients discontinued randomized treatment but remained on aspirin and were followed for 3 months.

For the present secondary analysis, Dr Secemsky and his colleagues examined data from all 11,648 randomized patients.

Ischemic events

During the study period, 478 patients (4.1%) had 502 ischemic events, including 306 myocardial infarctions, 113 cases of stent thrombosis, and 83 ischemic strokes.

The death rate among patients with ischemic events was 10.9% (n=52), and 78.8% of these deaths (n=41) were attributable to cardiovascular causes. The death rate was 0.7% among patients without a cardiovascular event (82/11,082, P<0.001).

The cumulative incidence of death after ischemic events was 0.5% (0.3% with myocardial infarction, 0.1% with stent thrombosis, and 0.1% with ischemic stroke) among the more than 11,600 randomized patients.

The unadjusted annualized mortality rate after an ischemic event was 27.2 per 100 person-years.
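For readers less familiar with person-time rates, the figure is built from deaths divided by total follow-up time after the event. The article does not report the person-year denominator, so the back-calculation below is illustrative only:

```latex
\text{mortality rate per 100 person-years} = \frac{\text{deaths after event}}{\sum_i t_i} \times 100
```

where $t_i$ is patient $i$'s years of follow-up after the event. For example, 52 deaths over roughly 191 person-years of post-event follow-up would yield $52/191 \times 100 \approx 27.2$.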

When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, having an ischemic event was associated with a 12.6-fold increased risk of death (hazard ratio=14.6 for stent thrombosis, 13.1 for ischemic stroke, and 9.1 for myocardial infarction).

Deaths after ischemic stroke or stent thrombosis usually occurred soon after the event, but the increased risk of death from a myocardial infarction persisted throughout the study period.

Bleeding events

A total of 232 patients (2.0%) had 235 bleeding events—155 moderate and 80 severe bleeds.

The death rate among patients with bleeding events was 17.7% (n=41), compared to 1.6% among patients without a bleed (181/11,416, P<0.001). However, more than half of the deaths occurring after a bleeding event were attributable to cardiovascular causes (53.7%, n=22).

The cumulative incidence of death after a bleeding event was 0.3% (0.1% with moderate and 0.2% with severe bleeding) in the randomized study population.

The unadjusted annualized mortality rate after a bleeding event was 21.5 per 100 person-years.

When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, a bleeding event was associated with an 18.1-fold increased risk of death (hazard ratio=36.3 for a severe bleed and 8.0 for a moderate bleed).

Deaths following bleeding events primarily occurred within 30 days of the event.

“Since our analysis found that the development of both ischemic and bleeding events portend a particularly poor overall prognosis, we conclude that we must be thoughtful when prescribing any treatment, such as dual antiplatelet therapy, that may include bleeding risk,” Dr Secemsky said.

“In order to understand the implications of therapies that have potentially conflicting effects—such as decreasing ischemic risk while increasing bleeding risk—we must understand the prognostic factors related to these events. Our efforts now need to be focused on individualizing treatment and identifying those who are at the greatest risk of developing recurrent ischemia and at the lowest risk of developing a bleed.”

In a previous study, Dr Secemsky and his colleagues developed a risk score using DAPT data that can help determine whether or not DAPT should continue past the 1-year mark.
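For context, that risk score (the DAPT score) sums simple clinical and procedural factors. The sketch below uses the commonly published point values; it is an illustration, not the official calculator, so verify any values against the ACC's online tool before use:

```python
def dapt_score(age, smoker=False, diabetes=False, mi_at_presentation=False,
               prior_pci_or_mi=False, paclitaxel_stent=False,
               stent_lt_3mm=False, chf_or_low_ef=False, vein_graft_pci=False):
    """Illustrative DAPT score (point values as commonly published);
    a score >= 2 favored prolonged DAPT in the original analysis."""
    # Age is the only factor that subtracts points.
    score = -2 if age >= 75 else (-1 if age >= 65 else 0)
    score += sum([smoker, diabetes, mi_at_presentation, prior_pci_or_mi,
                  paclitaxel_stent, stent_lt_3mm])          # 1 point each
    score += 2 * sum([chf_or_low_ef, vein_graft_pci])       # 2 points each
    return score
```

Under this scheme, for example, a 60-year-old smoker with diabetes scores 2, tipping toward continued therapy.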

The tool has recently been included in American College of Cardiology (ACC)/American Heart Association guidelines on the duration of DAPT and is available on the ACC website.



Rash in both axillae

Article Type
Changed
Fri, 01/18/2019 - 08:45

 

The family physician (FP) suspected that the patient had a contact dermatitis to his deodorant. After further questioning, the patient said he had changed his deodorant about one month before the rash started. The FP explained that an ingredient in this new deodorant was likely causing the allergic reaction.

The FP prescribed 0.1% triamcinolone cream to be applied twice daily. He suggested that the patient either go back to his original deodorant or read the ingredients on the new deodorant and choose a deodorant that does not have the same ingredients.

At a follow-up visit one month later, the patient's skin had cleared and he was very happy with the results. He said he’d gone back to using his original deodorant, which didn’t have the same ingredients as the new one.

This is a typical case of contact dermatitis in which the history and physical exam were sufficient to make the diagnosis. No patch testing or referrals to Dermatology were required.

 

Photos and text for Photo Rounds Friday courtesy of Richard P. Usatine, MD. This case was adapted from: Usatine R. Contact dermatitis. In: Usatine R, Smith M, Mayeaux EJ, et al, eds. Color Atlas of Family Medicine. 2nd ed. New York, NY: McGraw-Hill; 2013:591-596.

To learn more about the Color Atlas of Family Medicine, see: www.amazon.com/Color-Family-Medicine-Richard-Usatine/dp/0071769641/

You can now get the second edition of the Color Atlas of Family Medicine as an app by clicking on this link: usatinemedia.com

The Journal of Family Practice - 66(3)

 


2016 Updates to AASLD Guidance Document on gastroesophageal bleeding in decompensated cirrhosis

Article Type
Changed
Fri, 09/14/2018 - 12:00

Clinical question: What is appropriate inpatient management of a cirrhotic patient with acute esophageal or gastric variceal bleeding?

 

Study design: Guidance document developed by an expert panel based on literature review, consensus conferences, and the authors’ clinical experience.

Background: Practice guidelines for the diagnosis and treatment of gastroesophageal hemorrhage were last published in 2007 and endorsed by several major professional societies. Since then, there have been a number of randomized controlled trials (RCTs) and consensus conferences. The American Association for the Study of Liver Diseases (AASLD) published updated practice guidelines in 2016 that encompass pathophysiology, monitoring, diagnosis, and treatment of gastroesophageal hemorrhage in cirrhotic patients. This summary will focus on inpatient management for active gastroesophageal hemorrhage.

Dr. Jinyu Byron Lu

Synopsis of Inpatient Management for Esophageal Variceal Hemorrhage: The authors suggest that all variceal hemorrhage (VH) requires ICU admission, with the goals of acute control of bleeding, prevention of early recurrence, and reduction in 6-week mortality. Imaging to rule out portal vein thrombosis and hepatocellular carcinoma (HCC) should be considered. A hepatic venous pressure gradient (HVPG) greater than 20 mm Hg is the strongest predictor of early rebleeding and death; however, catheter measurement of portal pressure is not available at most centers.

As with any critically ill patient, stabilizing respiratory status and ensuring hemodynamic stability with volume resuscitation are paramount. RCTs evaluating transfusion goals suggest that a restrictive transfusion threshold of hemoglobin 7 g/dL is superior to a liberal threshold of 9 g/dL; the authors hypothesize this may be related to the lower HVPG observed with lower transfusion thresholds. In terms of treating coagulopathy, RCTs evaluating recombinant factor VIIa have not shown clear benefit, and correction of the INR with fresh frozen plasma is similarly not recommended. No recommendations are made regarding the utility of platelet transfusions.

Vasoactive drugs should be administered when VH is suspected, with the goal of decreasing splanchnic blood flow; octreotide is the only such drug available in the United States. RCTs show that prophylactic antibiotics decrease infections, recurrent hemorrhage, and death. Ceftriaxone 1 g daily is the drug of choice in the United States and should be given for a maximum of 7 days; a reasonable strategy is to discontinue prophylaxis concurrently with the vasoactive agent.

After hemodynamic stabilization, patients should proceed to endoscopy no more than 12 hours after presentation, with endoscopic variceal ligation (EVL) if signs of active or recent variceal bleeding are found. After EVL, select patients at high risk of rebleeding (Child-Pugh B with active bleeding seen on endoscopy, or Child-Pugh C) may benefit from transjugular intrahepatic portosystemic shunt (TIPS) placement within 72 hours. If TIPS is done, vasoactive agents can be discontinued; otherwise, they should continue for 2-5 days, with subsequent transition to a nonselective beta-blocker (NSBB) such as nadolol or propranolol. For secondary prophylaxis of esophageal bleeding, combination EVL and NSBB is first-line therapy; if recurrent hemorrhage occurs while on secondary prophylaxis, rescue TIPS is recommended.
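The esophageal pathway described above can be condensed into a decision sketch. This is an illustration of the sequence in the text, not clinical software; the function name and inputs are my own:

```python
def esophageal_vh_plan(hgb_g_dl, child_pugh, active_bleeding_on_endoscopy):
    """Illustrative sketch of the guidance-described pathway for acute
    esophageal variceal hemorrhage; not a clinical decision tool."""
    plan = ["ICU admission", "volume resuscitation"]
    if hgb_g_dl < 7:  # restrictive transfusion threshold
        plan.append("transfuse to Hgb ~7 g/dL")
    plan += ["vasoactive agent (octreotide in the US)",
             "antibiotic prophylaxis (ceftriaxone 1 g daily, max 7 days)",
             "endoscopy within 12 hours; EVL if active/recent bleeding"]
    # High risk of rebleeding: Child-Pugh C, or B with active bleeding.
    high_risk = (child_pugh == "C") or (
        child_pugh == "B" and active_bleeding_on_endoscopy)
    if high_risk:
        plan.append("consider early TIPS within 72 hours")
    else:
        plan.append("continue vasoactive agent 2-5 days, then NSBB; "
                    "EVL + NSBB for secondary prophylaxis")
    return plan
```

For a Child-Pugh C patient presenting with a hemoglobin of 6.5 g/dL, for instance, the sketch returns a plan that includes transfusion and early TIPS consideration.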

Synopsis of Inpatient Management for Gastric Variceal Hemorrhage: Management of gastric variceal hemorrhage is similar to that of esophageal variceal hemorrhage, encompassing volume resuscitation, vasoactive drugs, and antibiotics, with endoscopy shortly thereafter; balloon tamponade can be used as a bridge to endoscopy in massive bleeds. In addition, the anatomic location of gastric varices (GV) affects the choice of intervention. GOV1 varices extend from the gastric cardia to the lesser curvature and represent 75% of GV; if small, they can be managed with EVL, and otherwise with injection of cyanoacrylate glue. GOV2 varices extend from the gastric cardia into the fundus, and isolated GV type 1 (IGV1) varices are located entirely in the fundus and have the highest propensity for bleeding. For these latter two types of cardiofundal varices, TIPS is the preferred intervention to control acute bleeding. Data on the efficacy of secondary prophylaxis for GV bleeding are limited; a combination of NSBB, cyanoacrylate injection, or TIPS can be considered. Balloon-occluded retrograde transvenous obliteration (BRTO) can be considered if fundal varices are associated with a large gastrorenal or splenorenal collateral, although no RCTs have compared BRTO with other strategies. Isolated GV type 2 (IGV2) varices are not localized to the esophageal or cardiofundal region; they are rare in cirrhotic patients but tend to occur in prehepatic portal hypertension, and their management requires multidisciplinary input from endoscopists, hepatologists, interventional radiologists, and surgeons.
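The anatomy-to-intervention pairings above amount to a small lookup table. The sketch below uses the GOV/IGV labels from the text; the mapping strings are paraphrases for illustration, not verbatim guidance:

```python
# Illustrative mapping of gastric varix anatomy to the interventions
# named in the synopsis above.
PREFERRED_INTERVENTION = {
    "GOV1": "EVL if small; otherwise cyanoacrylate injection",
    "GOV2": "TIPS (cardiofundal varices)",
    "IGV1": "TIPS (cardiofundal varices; highest bleeding propensity)",
    "IGV2": "multidisciplinary management (rare in cirrhosis; "
            "consider prehepatic portal hypertension)",
}

def gastric_varix_plan(varix_type: str) -> str:
    """Return the paraphrased preferred intervention for a varix type."""
    return PREFERRED_INTERVENTION[varix_type.upper()]
```

A table like this makes the anatomic dependence explicit: the same presentation (gastric variceal bleeding) routes to different interventions depending on varix location.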

Bottom line: For esophageal variceal bleeding related to cirrhosis, volume resuscitation, antibiotic prophylaxis, and vasoactive agents are the mainstays of therapy to stabilize the patient for endoscopic intervention within 12 hours, followed by early TIPS within 72 hours in high-risk patients.

A similar approach applies to gastric variceal bleeding, but interventional management is dependent on the anatomic location of the varices in question.

Citations: Garcia-Tsao G et al. Portal hypertensive bleeding in cirrhosis: Risk stratification, diagnosis and management – 2016 practice guidance by the American Association for the Study of Liver Diseases. Hepatology 2017 Jan;65[1]:310-35.

 

Dr. Lu is a hospitalist at Cooper University Hospital in Camden, N.J.


Clinical question: What is appropriate inpatient management of a cirrhotic patient with acute esophageal or gastric variceal bleeding?

 

Study design: Guidance document developed by expert panel based on literature review, consensus conferences and authors’ clinical experience.

Background: Practice guidelines for the diagnosis and treatment of gastroesophageal hemorrhage were last published in 2007 and endorsed by several major professional societies. Since then, there have been a number of randomized controlled trials (RCTs) and consensus conferences. The American Association for the Study of Liver Diseases (AASLD) published updated practice guidelines in 2016 that encompass pathophysiology, monitoring, diagnosis, and treatment of gastroesophageal hemorrhage in cirrhotic patients. This summary will focus on inpatient management for active gastroesophageal hemorrhage.

Dr. Jinyu Byron Lu

Synopsis of Inpatient Management for Esophageal Variceal Hemorrhage: The authors suggest that all VH requires ICU admission with the goal of acute control of bleeding, prevention of early recurrence, and reduction in 6-week mortality. Imaging to rule out portal vein thrombosis and HCC should be considered. Hepatic-Venous Pressure Gradient (HVPG) greater than 20 mm Hg is the strongest predictor of early rebleeding and death. However, catheter measurements of portal pressure are not available at most centers. As with any critically ill patient, stabilization of respiratory status and ensuring hemodynamic stability with volume resuscitation is paramount. RCTs evaluating transfusion goals suggest that a restrictive transfusion goal of HgB 7 g/dL is superior to a liberal goal of 9 g/dL. The authors hypothesize this may be related to lower HVPG observed with lower transfusion thresholds. In terms of treating coagulopathy, RCTs evaluating recombinant VIIa have not shown clear benefit. Correction of INR with FFPs similarly not recommended. No recommendations are made regarding utility of platelet transfusions. Vasoactive drugs should be administered when VH is suspected with the goal of decreasing splanchnic blood flow. Octreotide is the only vasoactive drug available in the United States. RCTs show that antibiotics administered prophylactically decrease infections, recurrent hemorrhage, and death. Ceftriaxone 1 g daily is the drug of choice in the United States and should be given up to a maximum of 7 days. A reasonable strategy is discontinuation of prophylaxis concurrently with discontinuation of vasoactive agents. After stabilization of hemodynamics, patients should proceed to endoscopy no more than 12 hours after presentation. Endoscopic Variceal Ligation (EVL) should be done if signs of active or recent variceal bleeding are found. 
After EVL, select patients at high risk of rebleeding (Child-Pugh B with active bleeding seen on endoscopy or Child-Pugh C patients) may benefit from TIPS within 72 hours. If TIPS is done, vasoactive agents can be discontinued. Otherwise, vasoactive agents should continue for 2-5 days with subsequent transition to nonselective beta blockers (NSBB) such as nadolol or propranolol. For secondary prophylaxis of esophageal bleeding, combination EVL and NSBB is first-line therapy. If recurrent hemorrhage occurs while on secondary prophylaxis, rescue TIPS is recommended.

Synopsis of Inpatient Management for Gastric Variceal Hemorrhage: Management of Gastric Variceal Hemorrhage is similar to Esophageal Variceal (EV) Hemorrhage and encompasses volume resuscitation, vasoactive drugs, and antibiotics with endoscopy shortly thereafter. Balloon tamponade can be used as a bridge to endoscopy in massive bleeds. In addition to the above, anatomic location of Gastric Varices (GV) affects choice of intervention. GOV1 varices extend from the gastric cardia to the lesser curvature and represent 75% of GV. If these are small, they can be managed with EVL. Otherwise these can be managed with injection of cyanoacrylate glue. GOV2 varices extend from the gastric cardia into the fundus. Isolated GV type 1 varices (IGV1) are located entirely in the fundus and have the highest propensity for bleeding. For these latter two types of “cardio-fundal varices” TIPS is the preferred intervention to control acute bleeding. Data on the efficacy of secondary prophylaxis for GV bleeding is limited. A combination of NSBB, cyanoacrylate injection, or TIPS can be considered. Balloon Occluded Retrograde Transvenous Obliteration (BRTO) can be considered if fundal varices are associated with a large gastrorenal or splenorenal collateral. However, no RCTs have compared BRTO with other strategies. Isolated GV type 2 (IGV2) varices are not localized to the esophageal or gastric cardio-fundal region and are rare in cirrhotic patients but tend to occur in pre-hepatic portal hypertension. Management requires multidisciplinary input from endoscopists, hepatologists, interventional radiologists, and surgeons.

Bottom line: For esophageal variceal bleeding related to cirrhosis: volume resuscitation, antibiotic prophylaxis, and vasoactive agents are mainstays of therapy to stabilize patient for endoscopic intervention within 12 hours. This should be followed by early TIPS within 72 hours in high risk patients.

A similar approach applies to gastric variceal bleeding, but interventional management is dependent on the anatomic location of the varices in question.

Citations: Garcia-Tsao G et al. Portal hypertensive bleeding in cirrhosis: Risk stratification, diagnosis and management – 2016 practice guidance by the American Association for the Study of Liver Diseases. Hepatology 2017 Jan;65[1]:310-35.

 

Dr. Lu is a hospitalist at Cooper University Hospital in Camden, N.J.

Clinical question: What is appropriate inpatient management of a cirrhotic patient with acute esophageal or gastric variceal bleeding?

 

Study design: Guidance document developed by expert panel based on literature review, consensus conferences and authors’ clinical experience.

Background: Practice guidelines for the diagnosis and treatment of gastroesophageal hemorrhage were last published in 2007 and endorsed by several major professional societies. Since then, there have been a number of randomized controlled trials (RCTs) and consensus conferences. The American Association for the Study of Liver Diseases (AASLD) published updated practice guidelines in 2016 that encompass pathophysiology, monitoring, diagnosis, and treatment of gastroesophageal hemorrhage in cirrhotic patients. This summary will focus on inpatient management for active gastroesophageal hemorrhage.

Dr. Jinyu Byron Lu

Synopsis of Inpatient Management for Esophageal Variceal Hemorrhage: The authors suggest that all VH requires ICU admission with the goal of acute control of bleeding, prevention of early recurrence, and reduction in 6-week mortality. Imaging to rule out portal vein thrombosis and HCC should be considered. Hepatic-Venous Pressure Gradient (HVPG) greater than 20 mm Hg is the strongest predictor of early rebleeding and death. However, catheter measurements of portal pressure are not available at most centers. As with any critically ill patient, stabilization of respiratory status and ensuring hemodynamic stability with volume resuscitation is paramount. RCTs evaluating transfusion goals suggest that a restrictive transfusion goal of HgB 7 g/dL is superior to a liberal goal of 9 g/dL. The authors hypothesize this may be related to lower HVPG observed with lower transfusion thresholds. In terms of treating coagulopathy, RCTs evaluating recombinant VIIa have not shown clear benefit. Correction of INR with FFPs similarly not recommended. No recommendations are made regarding utility of platelet transfusions. Vasoactive drugs should be administered when VH is suspected with the goal of decreasing splanchnic blood flow. Octreotide is the only vasoactive drug available in the United States. RCTs show that antibiotics administered prophylactically decrease infections, recurrent hemorrhage, and death. Ceftriaxone 1 g daily is the drug of choice in the United States and should be given up to a maximum of 7 days. A reasonable strategy is discontinuation of prophylaxis concurrently with discontinuation of vasoactive agents. After stabilization of hemodynamics, patients should proceed to endoscopy no more than 12 hours after presentation. Endoscopic Variceal Ligation (EVL) should be done if signs of active or recent variceal bleeding are found. 
After EVL, select patients at high risk of rebleeding (Child-Pugh B with active bleeding seen on endoscopy, or Child-Pugh C) may benefit from a transjugular intrahepatic portosystemic shunt (TIPS) within 72 hours. If TIPS is performed, vasoactive agents can be discontinued; otherwise, they should be continued for 2-5 days with subsequent transition to a nonselective beta-blocker (NSBB) such as nadolol or propranolol. For secondary prophylaxis of esophageal variceal bleeding, combination EVL and NSBB therapy is first line. If recurrent hemorrhage occurs despite secondary prophylaxis, rescue TIPS is recommended.

Synopsis of Inpatient Management for Gastric Variceal Hemorrhage: Management of gastric variceal hemorrhage is similar to that of esophageal variceal hemorrhage and encompasses volume resuscitation, vasoactive drugs, and antibiotics, with endoscopy shortly thereafter. Balloon tamponade can be used as a bridge to endoscopy in massive bleeds. In addition, the anatomic location of gastric varices (GV) affects the choice of intervention. GOV1 varices extend from the gastric cardia along the lesser curvature and represent 75% of GV; if small, they can be managed with EVL, and otherwise with injection of cyanoacrylate glue. GOV2 varices extend from the gastric cardia into the fundus, and isolated GV type 1 (IGV1) varices are located entirely in the fundus and have the highest propensity to bleed. For these latter two types of "cardiofundal" varices, TIPS is the preferred intervention to control acute bleeding. Data on the efficacy of secondary prophylaxis for GV bleeding are limited; a combination of NSBB therapy, cyanoacrylate injection, or TIPS can be considered. Balloon-occluded retrograde transvenous obliteration (BRTO) can be considered if fundal varices are associated with a large gastrorenal or splenorenal collateral, although no RCTs have compared BRTO with other strategies. Isolated GV type 2 (IGV2) varices lie outside the esophageal and cardiofundal regions; they are rare in cirrhotic patients and tend to occur in prehepatic portal hypertension. Their management requires multidisciplinary input from endoscopists, hepatologists, interventional radiologists, and surgeons.

Bottom line: For esophageal variceal bleeding related to cirrhosis, volume resuscitation, antibiotic prophylaxis, and vasoactive agents are the mainstays of therapy to stabilize the patient for endoscopic intervention within 12 hours. This should be followed by early TIPS within 72 hours in high-risk patients.

A similar approach applies to gastric variceal bleeding, but interventional management is dependent on the anatomic location of the varices in question.

Citation: Garcia-Tsao G et al. Portal hypertensive bleeding in cirrhosis: Risk stratification, diagnosis, and management – 2016 practice guidance by the American Association for the Study of Liver Diseases. Hepatology. 2017 Jan;65(1):310-35.

 

Dr. Lu is a hospitalist at Cooper University Hospital in Camden, N.J.


Infant hepatitis B vaccine protection lingers into adolescence

Adolescents who received hepatitis B virus (HBV) vaccinations as infants still showed protection despite little evidence of residual antibodies, a study showed.

This finding was based on data from a prospective study of 137 children aged 10-11 years and 213 children aged 15-16 years with no history of HBV infection who were vaccinated at 2, 4, and 6 months of age. Michelle Pinto, MD, of the Vaccine Evaluation Center in Vancouver, and her colleagues measured residual immunity to determine whether HBV boosters might be needed in adolescents vaccinated as infants to prolong immunity and reduce disease transmission in adulthood.

Overall, 97% of the younger age group and 91% of the older age group showed reactions to an HBV vaccine challenge. An additional 3 (2%) younger children and 12 (6%) older children responded to a second vaccine challenge after failing to respond to the first.

Limitations of the study included a “limited ability of the challenge vaccine procedure to accurately identify immune memory and anamnestic responses” and the differences between the findings and those from long-term outcome data in similar studies in other countries, Dr. Pinto and her associates wrote.

However, “the fact that substantial differences exist in measures of residual protection among teenagers after infant or adolescent HBV vaccinations warrants close ongoing scrutiny of whether important differences will emerge in long-term protection, with or without booster vaccination,” they said (Pediatr Infect Dis J. 2017. doi: 10.1097/INF.0000000000001543).

FROM THE PEDIATRIC INFECTIOUS DISEASE JOURNAL