Lost time amid COVID-19

Tue, 12/08/2020 - 12:15

At the end of my second year of medical school was what I call “The Lost Month.”

Dr. Allan M. Block, a neurologist in Scottsdale, Arizona.


Between the end of classes and USMLE Step 1, we had 30 days to study for an 800-question, 2-day test that covered the entirety of the first 2 years. If you failed it once, you had to retake it. If you failed it twice, you were out of medical school.

It was understandably stressful and I felt like every minute counted. I stopped shaving for the month to free up a few extra minutes each day. I unplugged my TV and put it in a closet.

Every day was the same. I was up at 7:00, had corn flakes, walked to Creighton, and found an empty library room. I took 30 minutes off at lunch and dinner to get something from the student union to eat outside (the only chance I had to enjoy sunlight), then studied again until 1:00-2:00 in the morning.

The whole month became a blur. Days of the week were meaningless; only the number left until boards mattered. Saturday or Tuesday, my life was the same. I don’t remember many specifics.

That was “The Lost Month.”

Now, somewhere in the middle of my attendinghood, I’ve come to 2020 (and likely beyond), which is “The Lost Year.”

The days of the week have a bit more meaning now, as I still go to my office for a few hours and am home on weekends. But the weeks and months blend together. I’m home most of the time, I busy myself with working, and I have meal breaks with my family. There are no vacations or parties or movies. Even the holidays aren’t that different from the weekends—there isn’t much else to do to pass the time. And the stress is still there (in the early ’90s it was academic; today it’s financial).

At least now I still try to shave regularly.

Thirty years ago I passed the boards and moved on to where I am today. My fear of failing out of medical school never materialized.

Today I try to remain optimistic. Vaccines are coming. We’re getting better at treating COVID-19. Hopefully, The Lost Year will gradually become a memory as life goes on and normalizes.

Like The Lost Month, I have to view 2020 as a bump in the road. If this is the worst crisis I and my loved ones have to go through, I can deal with that. I know we’re fortunate compared with others. I try to remember that every time I pass a Salvation Army kettle or canned food drive, and donate.

In 1990 I had a specific date when The Lost Month would be over, and it was coming up way too fast. In 2020 no such date exists, now or in the immediate future. The best any of us can do is keep hanging on and hoping.

Dr. Block has a solo neurology practice in Scottsdale, Ariz.

New residency matching sets record, says NRMP

Tue, 12/08/2020 - 16:20

The 2020 Medical Specialties Matching Program (MSMP), a division of the National Resident Matching Program, matched a record number of applicants to subspecialty training programs for positions beginning in 2021, the NRMP reported.

“Specifically, the 2020 MSMP included 6,847 applicants submitting certified rank order lists (an 8.9% increase), 2,042 programs submitting certified rank order lists (a 4.3% increase), 5,734 positions (a 2.8% increase), and 5,208 positions filled (a 6.1% increase),” according to a news release.

The MSMP now includes 14 internal medicine subspecialties and four sub-subspecialties. The MSMP offered 5,734 positions this year, and 5,208 (90.8%) were successfully filled. That represents an increase of almost 3 percentage points, compared with last year’s results.

Among those subspecialties that offered 30 positions or more, the most competitive were allergy and immunology, cardiovascular disease, clinical cardiac electrophysiology, gastroenterology, hematology and oncology, and pulmonary/critical care. Each of those filled at least 95% of available slots. More than half of the positions were filled by U.S. MDs.

By contrast, the least competitive subspecialties were geriatric medicine and nephrology. Programs in these two fields filled less than 75% of positions offered. Less than 45% were filled by U.S. MDs.

Of the 6,847 applicants who submitted rank order lists, 5,208 (more than 76%) matched into subspecialty training programs.

The number of U.S. MDs in this category increased nearly 7% over last year, with a total of 2,935. The number of DO graduates increased as well, with a total of 855, which was 9.6% more than the previous year.

More U.S. citizens who graduated from international medical schools matched this year as well; 1,087 placed into subspecialty residency, a 9% increase, compared with last year.

A version of this article originally appeared on Medscape.com.


‘Excellent short-term outcomes’ seen in HCV+ liver transplants to HCV– recipients

Thu, 12/24/2020 - 13:22

Liver transplantation using hepatitis C virus (HCV)-seropositive grafts to HCV-seronegative recipients resulted in “excellent short-term outcomes,” according to the results of a prospective, multicenter study reported in the Journal of Hepatology.

A total of 34 HCV– liver transplantation recipients received grafts from HCV+ donors (20 HCV viremic and 14 nonviremic) from January 2018 to September 2019, according to Bashar Aqel, MD, of the Mayo Clinic, Phoenix, Ariz., and colleagues.

Seven of the grafts were obtained from donation after cardiac death (DCD). Six recipients underwent simultaneous liver/kidney (SLK) transplantation, and four underwent repeat liver transplantation.

Sustained viral response

None of the recipients of an HCV nonviremic graft developed HCV viremia. However, all 20 patients who received HCV viremic grafts had HCV viremia confirmed within 3 days after liver transplant. In these patients, direct-acting antiviral (DAA) treatment was started at a median of 27.5 days.

All 20 patients successfully completed the treatment and achieved a sustained viral response. In addition, the DAA treatment was well tolerated with minimal adverse events, according to the researchers.

However, one patient died after developing HCV-related acute membranous nephropathy that resulted in end-stage kidney disease. In addition, a recipient of an HCV nonviremic graft died of acute myocardial infarction 610 days after liver transplant, the authors reported.

“This multicenter study demonstrated LT [liver transplantation] using HCV-seropositive grafts to HCV-seronegative recipients resulted in acceptable short-term outcomes even with the use of DCD grafts and expansion into SLK or repeat LT. However, a careful ongoing assessment regarding patient and graft selection, complications, and the timing of treatment is required,” the researchers concluded.

The study was funded in part by the McIver Estate Young Investigator Benefactor Award. The authors reported they had no potential conflicts.

SOURCE: Aqel B et al. J Hepatol. 2020 Nov 11. doi: 10.1016/j.jhep.2020.11.005.


Ankylosing Spondylitis Treatment

Tue, 12/08/2020 - 10:29

Rounding to make the hospital go ‘round

Tue, 12/08/2020 - 10:04

Hospitalists and performance incentive measures

No matter how you spin it, hospitalists are key to making the world of the hospital go ’round, which makes their daily work of paramount interest to both hospitals and health systems.

In an effort to drive quality, safety, and efficiency, hospitals most commonly measure hospitalist work and reward it through ties to compensation. There are several trends in performance incentive metrics highlighted by the SHM 2020 State of Hospital Medicine (SoHM) Report. As hospitals support the subsidy required for hospitalist salaries, there is an increasing ask for hospitalist groups to partner with hospital operations to achieve certain goals. The lever of compensation, when appropriately applied to meaningful metrics, is one way of promoting desired behaviors.

Hospitalists are the primary attending physicians for patients in the hospital while also bridging the patient and their needs to the services of other subspecialists, allied health professionals, and when needed, postacute services. In this way, patients are efficiently moved along the acute care experience with multiple process and outcome measures being recorded along the way.

Some of these common performance incentive measures are determined by the Centers for Medicare and Medicaid Services, while others may be of interest to third-party payers. Often, surrogate process metrics (e.g., order set usage for certain diagnoses) are measured and incentivized as a way of directionally measuring small steps that each hospitalist can reliably control toward a presumably associated improvement in, for instance, mortality or readmissions. Still other measures, such as length of stay or timely completion of documentation, have more to do with hospital operations, regulatory governance, and finance.

There are a variety of performance incentive metrics reported in the 2020 SoHM Report. Survey respondents could choose all measures that applied as compensation measures for their group in the past year. The most common metrics reported include patient satisfaction (48.7%), citizenship (45.8%), accuracy or timeliness of documentation (32.8%), and clinical process measures (30.7%).


It is important to acknowledge that most of these metrics are objective measurements and can be tracked down to the individual physician. However, some of the objective measures, such as patient satisfaction data, must rely on agreed-upon methods of attribution, which can mean attributing based on the admitting physician, the discharging attending, or the attending with the greatest number of days (i.e., daily charges) seeing the patient. Because of these challenges, groups may opt for group-level measurement of the compensation metrics where attribution is muddiest.

For performance incentive metrics that may be more subjective, such as citizenship, it is important for hospitalist leaders to consider having a method of determining a person’s contribution with a rubric as well as some shared decision making among a committee of leaders or team members to promote fairness in compensation.

Hospital leaders must also recognize that what is measured will lead to “performance” in that area. The perfect example here is the “early morning discharge time/orders” which is a compensation metric in 27.6% of hospitalist groups. Most agree that having some early discharges, up to maybe 25%-30% of the total number of discharges before noon, can be helpful with hospital throughput. The trick here is that if a patient can be discharged that early, it is likely that some of those patients could have gone home the evening prior. It is important for hospitalist physician leaders and administrators to think about the behaviors that are incentivized in compensation metrics to ensure that the result is indeed helpful.


Other hospitalist compensation metrics, such as readmissions, are most effectively addressed if there are multiple physician teams working toward the same metric. Hospitalist work does affect readmissions within the first 7 days of discharge, based on available evidence.1 Readmissions from days 8-30 after discharge are more amenable to outpatient and home-based interventions. Also, effective readmission work involves collaboration among the emergency physician team, surgeons, primary care, and subspecialty physicians. So while having this as a compensation metric will gain the attention of hospitalist physicians, the work will be most effective when it is shared with other teams.

Overall, performance incentive metrics for hospitalists can be effective in allowing hospitals and hospitalist groups to partner toward achieving important outcomes for patients. Easy and frequent sharing of data on meaningful metrics with hospitalists is important to effect change. Also, hospital leadership can facilitate collaboration among nursing and multiple physician groups to promote a team culture with hospitalists in achieving goals related to performance incentive metrics.

Dr. McNeal is the division director of inpatient medicine at Baylor Scott & White Medical Center in Temple, Tex.
 

Reference

1. Graham et al. Preventability of Early Versus Late Hospital Readmissions in a National Cohort of General Medicine Patients. Ann Intern Med. 2018 Jun 5;168(11):766-74.

Publications
Topics
Sections

Hospitalists and performance incentive measures

Hospitalists and performance incentive measures

No matter how you spin it, hospitalists are key to making the world of the hospital go ‘round, making their daily work of paramount interest to both hospitals and health systems.

In an effort to drive quality, safety, and efficiency, hospitals most commonly measure hospitalist work and reward it through ties to compensation. There are several trends in performance incentive metrics highlighted by the SHM 2020 State of Hospital Medicine (SoHM) Report. As hospitals support the subsidy required for hospitalist salaries, there is an increasing ask for hospitalist groups to partner with hospital operations to achieve certain goals. The lever of compensation, when appropriately applied to meaningful metrics, is one way of promoting desired behaviors.

Hospitalists are the primary attending physicians for patients in the hospital while also bridging the patient and their needs to the services of other subspecialists, allied health professionals, and when needed, postacute services. In this way, patients are efficiently moved along the acute care experience with multiple process and outcome measures being recorded along the way.

Some of these common performance incentive measures are determined by the Centers for Medicare and Medicaid Services while others may be of interest to third party payers. Often surrogate markers of process metrics (i.e. order set usage for certain diagnoses) are measured and incentivized as a way of directionally measuring small steps that each hospitalist can reliably control toward a presumably associated improvement in mortality or readmissions, for instance. Still other measures such as length of stay or timely completion of documentation have more to do with hospital operations, regulatory governance, and finance.

There are a variety of performance incentive metrics reported in the 2020 SoHM Report. Survey respondents could choose all measures that applied as compensation measures for their group in the past year. The most common metrics reported include patient satisfaction (48.7%), citizenship (45.8%), accuracy or timeliness of documentation (32.8%), and clinical process measures (30.7%).

DjelicS/Getty Images

No matter how you spin it, hospitalists are key to making the world of the hospital go ‘round, making their daily work of paramount interest to both hospitals and health systems.

In an effort to drive quality, safety, and efficiency, hospitals most commonly measure hospitalist work and reward it through ties to compensation. The SHM 2020 State of Hospital Medicine (SoHM) Report highlights several trends in performance incentive metrics. Because hospitals underwrite the subsidy required for hospitalist salaries, they increasingly ask hospitalist groups to partner with hospital operations to achieve certain goals. The lever of compensation, when applied to meaningful metrics, is one way of promoting desired behaviors.

Hospitalists are the primary attending physicians for patients in the hospital while also bridging the patient and their needs to the services of subspecialists, allied health professionals, and, when needed, postacute services. In this way, patients move efficiently through the acute care episode, with multiple process and outcome measures recorded along the way.

Some of these common performance incentive measures are determined by the Centers for Medicare and Medicaid Services, while others may be of interest to third-party payers. Often, surrogate process markers (e.g., order set usage for certain diagnoses) are measured and incentivized as small, directional steps that each hospitalist can reliably control and that are presumed to drive improvements in outcomes such as mortality or readmissions. Still other measures, such as length of stay or timely completion of documentation, have more to do with hospital operations, regulatory governance, and finance.

There are a variety of performance incentive metrics reported in the 2020 SoHM Report. Survey respondents could choose all measures that applied as compensation measures for their group in the past year. The most common metrics reported include patient satisfaction (48.7%), citizenship (45.8%), accuracy or timeliness of documentation (32.8%), and clinical process measures (30.7%).


It is important to acknowledge that most of these metrics are objective and can be measured down to the individual physician. Some, however, such as patient satisfaction data, rely on agreed-upon methods of attribution, which can range from the admitting physician to the discharging attending to the attending who billed the greatest number of days of care. Because of these challenges, groups may opt to measure performance at the group level for the compensation metrics where attribution is muddiest.

For performance incentive metrics that may be more subjective, such as citizenship, it is important for hospitalist leaders to consider having a method of determining a person’s contribution with a rubric as well as some shared decision making among a committee of leaders or team members to promote fairness in compensation.

Hospital leaders must also recognize that whatever is measured will drive “performance” in that area. A prime example is early morning discharge time/orders, a compensation metric in 27.6% of hospitalist groups. Most agree that discharging some patients early, perhaps up to 25%-30% of total discharges before noon, helps hospital throughput. The catch is that if a patient can be discharged that early, some of those patients could likely have gone home the evening before. Hospitalist physician leaders and administrators must think through the behaviors a compensation metric incentivizes to ensure that the result is indeed helpful.
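For illustration only (the SoHM report does not prescribe a formula, and the timestamps below are invented), a group's discharge-before-noon rate might be computed from discharge times like this:

```python
from datetime import datetime

# Hypothetical discharge timestamps for a few days of a hospitalist service.
discharges = [
    datetime(2020, 11, 2, 10, 15),
    datetime(2020, 11, 2, 16, 40),
    datetime(2020, 11, 3, 11, 55),
    datetime(2020, 11, 3, 14, 5),
    datetime(2020, 11, 4, 9, 30),
    datetime(2020, 11, 4, 17, 20),
]

# Count discharges whose order time falls before 12:00 noon.
before_noon = sum(1 for d in discharges if d.hour < 12)
rate = before_noon / len(discharges)
print(f"{rate:.0%} of discharges before noon")  # → 50% of discharges before noon
```

A group tracking this metric would compare the rate against its target (e.g., the 25%-30% range discussed above) rather than maximize it outright.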

Dr. Tresa Muir McNeal

Other hospitalist compensation metrics, such as readmissions, are most effectively addressed when multiple physician teams work toward the same metric. Available evidence suggests that hospitalist work does affect readmissions within the first 7 days after discharge.1 Readmissions on days 8-30 after discharge are more amenable to outpatient and home-based interventions. Effective readmission work also involves collaboration among the emergency physician team, surgeons, primary care, and subspecialty physicians. So while a readmission metric will gain the attention of hospitalist physicians, the work is most effective when it is shared with other teams.

Overall, performance incentive metrics for hospitalists can be effective in allowing hospitals and hospitalist groups to partner toward achieving important outcomes for patients. Easy and frequent sharing of data on meaningful metrics with hospitalists is important to effect change. Also, hospital leadership can facilitate collaboration among nursing and multiple physician groups to promote a team culture with hospitalists in achieving goals related to performance incentive metrics.

Dr. McNeal is the division director of inpatient medicine at Baylor Scott & White Medical Center in Temple, Tex.
 

Reference

1. Graham et al. Preventability of early versus late hospital readmissions in a national cohort of general medicine patients. Ann Intern Med. 2018;168(11):766-774.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article

Does XR injectable naltrexone prevent relapse as effectively as daily sublingual buprenorphine-naloxone?

Article Type
Changed
Tue, 01/12/2021 - 14:24

EVIDENCE SUMMARY

Two recent multicenter, open-label RCTs, 1 in the United States and 1 in Norway, compared monthly extended-release injectable naltrexone (XR-NTX) with daily sublingual buprenorphine-naloxone (BUP-NX).1,2 Both studies evaluated effectiveness (defined by either the number of people who relapsed or self-reported opioid use), cravings, and safety (defined as the absence of serious adverse events such as medically complex withdrawal or fatal overdose).

The participant populations were similar in both mean age and mean age of onset of opioid use. Duration of opioid use was reported differently (total duration or years of heavy heroin or other opioid use) and couldn’t be compared directly.

Naltrexone and buprenorphine-naloxone are similarly effective

The US study enrolled 570 opioid-dependent participants in a 24-week comparative effectiveness trial.1 The 8 study sites were community treatment programs, and the participants were recruited during voluntary inpatient detoxification admissions. Some participants were randomized while on methadone or buprenorphine tapers and some after complete detoxification.

The intention-to-treat analysis included 283 patients in the XR-NTX group and 287 in the BUP-NX group. At 24 weeks, the number of participants who’d had a relapse event (self-reported use or positive urine drug test for nonstudy opioids or refusal to provide a urine sample) was 185 (65%) for XR-NTX compared with 163 (57%) for BUP-NX (odds ratio [OR] = 1.44, 95% confidence interval [CI], 1.02 to 2.01; P = .036).
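As a sanity check, the reported odds ratio and confidence interval can be reproduced from these counts with a standard Wald approximation on the log odds ratio (a back-of-the-envelope check, not necessarily the trial's exact method):

```python
import math

# Relapse counts from the intention-to-treat analysis reported above.
relapse_xr, n_xr = 185, 283      # XR-NTX group
relapse_bup, n_bup = 163, 287    # BUP-NX group

a, b = relapse_xr, n_xr - relapse_xr       # 185 relapsed, 98 did not
c, d = relapse_bup, n_bup - relapse_bup    # 163 relapsed, 124 did not

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR), Wald method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI, {lo:.2f} to {hi:.2f})")
# → OR = 1.44 (95% CI, 1.02 to 2.01)
```

The result matches the published estimate, with relapse odds modestly higher in the XR-NTX group in the intention-to-treat analysis.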

The 12-week Norwegian noninferiority trial enrolled 159 participants.2 In contrast to the US study, all participants were required to complete inpatient detoxification before randomization and induction onto the study medication.

Patients on BUP-NX reported 3.6 more days of heroin use within the previous 28 days than patients in the XR-NTX group (95% CI, 1.2 to 6; P = .003). For other illicit opioids, self-reported use was 2.4 days greater in the BUP-NX group (95% CI, −0.1 to 4.9; P = .06). Retention with XR-NTX was noninferior to BUP-NX (mean days in therapy [standard deviation], 69.3 [25.9] and 63.7 [29.9]; P = .33).

Randomizing after complete detox reduces induction failures

Naltrexone, a full opioid antagonist, precipitates withdrawal when a full or partial opioid agonist is engaging the opioid receptor. For this reason, an opioid-free interval of 7 to 10 days is generally recommended before initiating naltrexone, raising the risk for relapse during the induction process.

The Norwegian trial randomized participants after detoxification. The US trial, in which some participants were randomized before completing detoxification, reported 79 (28%) induction failures for XR-NTX and 17 (6%) for BUP-NX.1 As a result, a per protocol analysis was completed with the 204 patients on XR-NTX and 270 patients on BUP-NX who were successfully inducted onto a study medication. The 24-week relapse rate was 52% (106) for XR-NTX and 56% (150) for BUP-NX (OR = 0.87; 95% CI, 0.60 to 1.25; P = .44).

Cravings, adverse events, and cost considerations

Patients reported cravings using a visual analog scale. At 12 weeks in both studies, the XR-NTX groups reported fewer cravings than the BUP-NX groups, although by the end of the 24-week US trial, no statistically significant difference in cravings was found between the 2 groups.1,2

The Norwegian trial found a difference between the XR-NTX and the BUP-NX groups in the percentage of nonserious adverse events such as nausea or chills (60.6% in the XR-NTX group vs 30.6% in the BUP-NX group; P < .001), and the US trial found a difference in total number of overdoses (64% of the total overdoses were in the XR-NTX group). Neither trial, however, reported a statistically significant difference in serious adverse events or fatal overdoses between the 2 groups.1,2

The price for naltrexone is $1665.06 per monthly injection.3 The price for buprenorphine-naloxone varies depending on dose and formulation, with a general range of $527 to $600 per month at 16 mg/d.4
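At those list prices, the annualized difference is substantial. A quick calculation using the cited monthly figures (list prices only; this ignores administration and dispensing fees, which vary by setting):

```python
# Monthly list prices cited above (Lexicomp, accessed Nov 2020); real-world
# net costs vary by payer contract.
xr_ntx_monthly = 1665.06
bup_nx_monthly_range = (527, 600)  # at 16 mg/d, depending on formulation

xr_annual = 12 * xr_ntx_monthly                      # 19980.72
bup_annual = [12 * m for m in bup_nx_monthly_range]  # [6324, 7200]
difference = [xr_annual - b for b in bup_annual]

print(f"XR-NTX: ${xr_annual:,.2f}/yr; BUP-NX: ${bup_annual[0]:,}-${bup_annual[1]:,}/yr")
print(f"Annual difference: ${difference[1]:,.2f} to ${difference[0]:,.2f}")
# → XR-NTX: $19,980.72/yr; BUP-NX: $6,324-$7,200/yr
# → Annual difference: $12,780.72 to $13,656.72
```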

Editor’s takeaway

Two higher-quality RCTs show similar but imperfect effectiveness for both XR-NTX and daily sublingual BUP-NX. Injectable naltrexone’s higher cost may influence medication choice.

References

1. Lee JD, Nunes EV Jr, Novo P, et al. Comparative effectiveness of extended-release naltrexone versus buprenorphine-naloxone for opioid relapse prevention (X:BOT): a multicentre, open-label, randomised controlled trial. Lancet. 2018;391:309-318.

2. Tanum L, Solli KK, Latif ZE, et al. Effectiveness of injectable extended-release naltrexone vs daily buprenorphine-naloxone for opioid dependence: a randomized clinical noninferiority trial. JAMA Psychiatry. 2017;74:1197-1205.

3. Naltrexone: drug information. Lexi-Comp, Inc (Lexi-Drugs). Wolters Kluwer Health, Inc. Riverwoods, IL. http://online.lexi.com. Accessed November 20, 2020.

4. Buprenorphine and naloxone: drug information. Lexi-Comp, Inc (Lexi-Drugs). Wolters Kluwer Health, Inc. Riverwoods, IL. http://online.lexi.com. Accessed November 20, 2020.

Author and Disclosure Information

Matthew Roe, MD
Mountain Area Health Education Center (MAHEC), Asheville, NC

Courtenay Gilmore Wilson, PharmD, BCPS, BCACP, CDE, CPP
Eshelman School of Pharmacy, University of North Carolina Health Sciences at MAHEC, Asheville

Carriedelle Wilson Fusco, FNP-BC
Stephen Hulkower, MD

University of North Carolina Health Sciences at MAHEC, Asheville

Sue Stigleman, MLS
University of North Carolina Health Sciences at MAHEC, Asheville

DEPUTY EDITOR
Rick Guthmann, MD, MPH

Advocate Illinois Masonic Family Medicine Residency, Chicago

Issue
The Journal of Family Practice - 69(10)
Page Number
E14-E15
Evidence-based answers from the Family Physicians Inquiries Network

EVIDENCE-BASED ANSWER: 

Yes. Monthly extended-release injectable naltrexone (XR-NTX) treats opioid use disorder as effectively as daily sublingual buprenorphine-naloxone (BUP-NX), without any increase in serious adverse events or fatal overdoses (strength of recommendation: A, based on 2 good-quality RCTs).


Normal gut colonizer induces serotonergic system, appears to regulate behavior

Gut microbes are needed for normal neurologic function
Article Type
Changed
Tue, 12/08/2020 - 09:56

Together, Bifidobacterium dentium and its acetate metabolite regulate key parts of the serotonergic system and are associated with “a functional change in adult behavior,” according to a report published in Cellular and Molecular Gastroenterology and Hepatology.

Human gut microbiota were known to regulate serotonin (5-hydroxytryptamine, or 5-HT) production by gut cells, but the underlying mechanisms had been unclear. This study showed that a common bacterial colonizer of the healthy adult gut stimulates 5-HT release from enterochromaffin cells in both mice (in vivo) and humans (in vitro), wrote Melinda A. Engevik, PhD, of Baylor College of Medicine, Houston, and associates. “B. dentium modulates the serotonergic system in both the intestine and the brain [, which] likely influences behavior, and suggests that supplementation with a single, carefully selected, bacterial strain may be able to partially rescue behavioral deficits induced by shifts in the intestinal microbiota,” they added.

In a prior study, B. dentium modulated sensory neurons in rats with visceral hypersensitivity. In mammals, serotonin is primarily produced and released by enterochromaffin cells in the gut. To discover whether acetate – a short-chain fatty acid metabolite of B. dentium and some other microbiota – induces this pathway, the researchers first confirmed that B. dentium itself lacks the gene pathway for 5-HT production, and that growth media inoculated with B. dentium do not subsequently contain 5-HT. Next, they treated adult germ-free mice with either sterile media, live B. dentium, heat-killed B. dentium, or live Bacteroides ovatus (another commensal gut microbe). Gram staining and fluorescence in situ hybridization (FISH) confirmed that live B. dentium colonized mouse ileum and colon. Mass spectrometry, immunostaining, and quantitative PCR showed that mice treated with live B. dentium, but not B. ovatus, had greater intestinal concentrations of acetate, 5-HT, 5-HT receptors (2a and 4), serotonin transporter, and the gene that encodes free fatty acid receptor 2 (FFAR2), through which acetate signals. Furthermore, “[i]ncreases in 5-HT were observed in enteroendocrine cells directly above enteric neurons,” the researchers said.

They also performed RNA in situ hybridization of mouse brain tissue, which showed significantly increased expression of the 5-HT 2a receptor in B. dentium–treated mice compared with germ-free controls. To determine whether these changes also modified behavior, the researchers used a marble-burying test. Mice with complete gut microbiota buried an average of 25% of the marbles, B. dentium–monocolonized mice buried 15%, and germ-free mice buried fewer still. Hence, even short-term monocolonization by a bacterium that acts on the serotonergic system might help normalize behavior, even later in life, the researchers said. They noted that B. dentium–treated and germ-free mice performed similarly on both balance beam and footprint tests, suggesting that treatment with B. dentium does not affect motor coordination.

In humans, enterochromaffin cells released more 5-HT when exposed to B. dentium or acetate. Taken together, the findings “highlight the importance of Bifidobacterium species, and specifically B. dentium, in the adult microbiome-gut-brain axis,” the researchers wrote. Probiotic strains such as Lactobacillus and Bifidobacterium species are thought to improve health by means of signaling pathways, including the serotonergic system, they noted. “Our findings support the modulation of the serotonergic system by a model gut microbe, B. dentium, and provide a potential mechanism by which select microbes and their metabolites can promote endogenous, localized 5-HT biosynthesis. We speculate this may be an important bridging signal in the microbiome-gut-brain axis.”

The National Institutes of Health, BioGaia AB, and the RNA In Situ Hybridization Core facility supported the work. Two coinvestigators disclosed ties to BioGaia AB, Seed, Biomica, Plexus Worldwide, Tenza, Mikrovia, Probiotech, and Takeda. Dr. Engevik and the other investigators reported having no conflicts of interest.

SOURCE: Engevik MA et al. Cell Mol Gastroenterol Hepatol. 2021;11:221-48. doi: 10.1016/j.jcmgh.2020.08.002.


“Gut-brain axis” is a widely used term that refers to the idea that the functions of these two organs are linked by bidirectional communication. The gut plays host to a large community of microbes and increasing data suggest that metabolites generated by these microbes can alter nervous system function. Such findings raise the exciting possibility that microbes and/or their metabolites could be used to treat a variety of disorders that involve gut-brain axis dysfunction, from irritable bowel syndrome (IBS) to Parkinson’s disease. To realize this possibility, it will be essential to establish clear mechanistic links between microbes, their products, and effects on host physiology. This study by Engevik and colleagues represents an important advance, demonstrating how a single microbe that commonly colonizes the healthy human intestine, Bifidobacterium dentium, is sufficient to stimulate the gut to make serotonin, a powerful signaling molecule known to influence visceral sensitivity, gut motility, and mood.

Dr. Meenakshi Rao

One key approach to understanding the effects of microbes on host function is to study germ-free mice, which are raised such that they are never exposed to microbes. Germ-free mice have a wide range of immune and neurologic deficits, highlighting how essential microbes are to host function. Previous work has shown that germ-free mice have diminished serotonin levels and abnormal behavior. Exposure to human microbiota could rescue some of these impairments but it was unclear which microbes or signals were essential. This study shows that supplementing germ-free mice with B. dentium is sufficient to stimulate the gut to ramp up serotonin production, alter gene expression in the brain, and rescue some behavioral deficits. Acetate, a short-chain fatty acid produced by B. dentium, was crucial for this phenomenon. This work not only identifies B. dentium as a promising candidate for therapeutic development, it also emphasizes the value of rigorous studies that probe functional interactions between microbes and the nervous system.

Meenakshi Rao, MD, PhD, is a principal investigator at Boston Children’s Hospital, division of gastroenterology, hepatology and nutrition, and assistant professor of pediatrics at Harvard Medical School. She has no conflicts relevant to this study. She receives research support from Boston Pharmaceuticals for unrelated work and has participated on a scientific advisory board for Takeda Pharmaceuticals.

Gut microbes are needed for normal neurologic function

Together, Bifidobacterium dentium and its acetate metabolite regulate key parts of the serotonergic system and are associated with “a functional change in adult behavior,” according to a report published in Cellular and Molecular Gastroenterology and Hepatology.

Human gut microbiota were known to regulate serotonin (5-hydroxytryptamine, or 5-HT) production by gut cells, but the underlying mechanisms had been unclear. This study showed that a common bacterial colonizer of the healthy adult gut stimulates 5-HT release from enterochromaffin cells in both mice (in vivo) and humans (in vitro), wrote Melinda A. Engevik, PhD, of Baylor College of Medicine, Houston, and associates. “B. dentium modulates the serotonergic system in both the intestine and the brain [, which] likely influences behavior, and suggests that supplementation with a single, carefully selected, bacterial strain may be able to partially rescue behavioral deficits induced by shifts in the intestinal microbiota,” they added.

In a prior study, B. dentium modulated sensory neurons in rats with visceral hypersensitivity. In mammals, serotonin is primarily produced and released by enterochromaffin cells in the gut. To discover whether acetate – a short-chain fatty acid metabolite of B. dentium and some other microbiota – induces this pathway, the researchers first confirmed that B. dentium itself lacks the gene pathway for 5-HT production, and that growth media inoculated with B. dentium do not subsequently contain 5-HT. Next, they treated adult germ-free mice with either sterile media, live B. dentium, heat-killed B. dentium, or live Bacteroides ovatus (another commensal gut microbe). Gram staining and fluorescence in situ hybridization (FISH) confirmed that live B. dentium colonized mouse ileum and colon. Mass spectrometry, immunostaining, and quantitative PCR showed that mice treated with live B. dentium, but not B. ovatus, had greater intestinal concentrations of acetate, 5-HT, 5-HT receptors (2a and 4), serotonin transporter, and the gene that encodes free fatty acid receptor 2 (FFAR2), through which acetate signals. Furthermore, “[i]ncreases in 5-HT were observed in enteroendocrine cells directly above enteric neurons,” the researchers said.

They also performed RNA in situ hybridization of mouse brain tissue, which showed significantly increased expression of 5-HT-receptor 2a in B. dentium–treated mice compared with germ-free controls. To determine whether these changes also modified behavior, mice were caged with a specified number of marbles. Those with complete gut microbiota buried an average of 25% of the marbles, B. dentium–monocolonized mice buried 15%, and germ-free mice buried fewer still. Hence, even short-term monocolonization by a bacterium that acts on the serotonergic system might help normalize behavior, even later in life, the researchers said. They noted that B. dentium–treated and germ-free mice performed similarly on both balance beam and footprint tests, suggesting that treatment with B. dentium does not affect motor coordination.

In humans, enterochromaffin cells released more 5-HT when exposed to B. dentium or acetate. Taken together, the findings “highlight the importance of Bifidobacterium species, and specifically B. dentium, in the adult microbiome-gut-brain axis,” the researchers wrote. Probiotic strains such as Lactobacillus and Bifidobacterium species are thought to improve health by means of signaling pathways, including the serotonergic system, they noted. “Our findings support the modulation of the serotonergic system by a model gut microbe, B. dentium, and provide a potential mechanism by which select microbes and their metabolites can promote endogenous, localized 5-HT biosynthesis. We speculate this may be an important bridging signal in the microbiome-gut-brain axis.”

The National Institutes of Health, BioGaia AB, and the RNA In Situ Hybridization Core facility supported the work. Two coinvestigators disclosed ties to BioGaia AB, Seed, Biomica, Plexus Worldwide, Tenza, Mikrovia, Probiotech, and Takeda. Dr. Engevik and the other investigators reported having no conflicts of interest.

SOURCE: Engevik MA et al. Cell Molec Gastro Hepatol. 2021;11:221-48. doi: 10.1016/j.jcmgh.2020.08.002.

COVID-19 fuels surge in overdose-related cardiac arrests

Article Type
Changed
Thu, 08/26/2021 - 15:55

There has been a sharp increase in overdose-related cardiac arrests in the United States during the COVID-19 pandemic, a new analysis shows.

Overall rates in 2020 were elevated above the baseline from 2018 and 2019 by about 50%, the data show.

“Our results suggest that overdoses may be strongly on the rise in 2020, and efforts to combat the COVID-19 pandemic have not been effective at reducing overdoses,” Joseph Friedman, MPH, MD/PhD student, medical scientist training program, University of California, Los Angeles, said in an interview.

“We need to invest heavily in substance use treatment, harm reduction, and the structural drivers of overdose as core elements of the COVID-19 response,” said Mr. Friedman, who coauthored the study with UCLA colleague David Schriger, MD, MPH, and Leo Beletsky, JD, MPH, Northeastern University, Boston.

The study was published as a research letter Dec. 3 in JAMA Psychiatry.
 

Social isolation a key driver

Emergency medical services (EMS) data are available in near real time, providing a novel source of up-to-date information to monitor epidemiological shifts during the COVID-19 pandemic.

For the study, the researchers leveraged data from the National EMS Information System, a large registry of more than 10,000 EMS agencies in 47 states that represent over 80% of all EMS calls nationally in 2020. They used the data to track shifts in overdose-related cardiac arrests observed by EMS.

They found clear evidence of a large-scale uptick in overdose-related deaths during the COVID-19 pandemic.

The overall rate of overdose-related cardiac arrests in 2020 was about 50% higher than the trend observed during 2018 and 2019, peaking at 123% above baseline in early May.

All overdose-related incidents (fatal and nonfatal) were elevated in 2020, by about 17% above baseline. However, fatal overdose-related incidents rose more sharply than incidents overall, which may suggest a rising case fatality rate, the authors noted.

The observed trends line up in time with reductions in mobility (a metric of social interaction), as measured using cell phone data, they wrote.

“Many of the trends predicted by experts at the beginning of the pandemic could cause these shifts. Increases in social isolation likely play an important role, as people using [drugs] alone are less likely to receive help when they need it. Also shifts in the drug supply, and reduced access to healthcare and treatment,” said Mr. Friedman.

“We need to undertake short- and long-term strategies to combat the rising tide of overdose mortality in the United States,” he added.

In the short term, Mr. Friedman suggested reducing financial and logistical barriers for accessing a safe opioid supply. Such measures include allowing pharmacies to dispense methadone, allowing all physicians to prescribe buprenorphine without a special waiver, and releasing emergency funds to make these medications universally affordable.

“In the longer term, we should acknowledge that overdose is a symptom of structural problems in the U.S. We need to invest in making employment, housing, education, and health care accessible to all to address the upstream drivers of overdose,” he added.

The study had no commercial funding. The authors disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.


From cradle to grave, alcohol is bad for the brain

Article Type
Changed
Mon, 01/04/2021 - 12:29

There is “compelling” evidence of the harmful effects of alcohol on the brain. The greatest risk occurs during three periods of life that are marked by dynamic brain changes, say researchers from Australia and the United Kingdom.

alenkadr/Thinkstock

The three periods are:

  • Gestation (conception to birth), which is characterized by extensive production, migration, and differentiation of neurons, as well as substantial apoptosis.
  • Later adolescence (aged 15-19 years), a period marked by synaptic pruning and increased axonal myelination.
  • Older adulthood (aged 65 and beyond), a period associated with brain atrophy. Changes accelerate after age 65, largely driven by decreases in neuron size and reductions in the number of dendritic spines and synapses.

These changes in neurocircuitry could increase sensitivity to the neurotoxic effects of alcohol, Louise Mewton, PhD, of the Center for Healthy Brain Aging, University of New South Wales, Sydney, and colleagues said.

“A life course perspective on brain health supports the formulation of policy and public health interventions to reduce alcohol use and misuse at all ages,” they wrote in an editorial published online Dec. 4 in The BMJ.
 

Worrisome trends

Research has shown that globally about 10% of pregnant women drink alcohol. In European countries, the rates are much higher than the global average.

Heavy drinking during gestation can cause fetal alcohol spectrum disorder, which is associated with widespread reductions in brain volume and cognitive impairment.

Even low or moderate alcohol consumption during pregnancy is significantly associated with poorer psychological and behavioral outcomes in children, the investigators noted.

In adolescence, more than 20% of 15- to 19-year-olds in European and other high-income countries report at least occasional binge drinking, which is linked to reduced brain volume, poorer white matter development, and deficits in a range of cognitive functions, they added.

In a recent study of older adults, alcohol use disorders emerged as one of the strongest modifiable risk factors for dementia (particularly early-onset dementia), compared with other established risk factors such as high blood pressure and smoking.

Alcohol use disorders are relatively rare in older adults, but even moderate drinking during midlife has been linked to “small but significant” brain volume loss, the authors said.

Dr. Mewton and colleagues said demographic trends may compound the effect of alcohol use on brain health.

They noted that women are now just as likely as men to drink alcohol and suffer alcohol-related problems. Global consumption is forecast to increase further in the next decade.

Although the effects of the COVID-19 pandemic on alcohol intake and related harms remain unclear, alcohol use has increased in the long term after other major public health crises, they added.

Given the data, Dr. Mewton and colleagues called for “an integrated approach” to reducing the harms of alcohol intake at all ages.

“Population-based interventions such as guidelines on low-risk drinking, alcohol pricing policies, and lower drink driving limits need to be accompanied by the development of training and care pathways that consider the human brain at risk throughout life,” they concluded.

The authors have disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.

Issue: Neurology Reviews 29(1)

There is “compelling” evidence of the harmful effects of alcohol on the brain. The greatest risk occurs during three periods of life that are marked by dynamic brain changes, say researchers from Australia and the United Kingdom.

alenkadr/Thinkstock

The three periods are:

  • Gestation (conception to birth), which is characterized by extensive production, migration, and differentiation of neurons, as well as substantial apoptosis.
  • Later adolescence (aged 15-19 years), a period marked by synaptic pruning and increased axonal myelination.
  • Older adulthood (aged 65 and beyond), a period associated with brain atrophy. Changes accelerate after age 65, largely driven by decreases in neuron size and reductions in the number of dendritic spines and synapses.

These changes in neurocircuitry could increase sensitivity to the neurotoxic effects of alcohol, Louise Mewton, PhD, of the Center for Healthy Brain Aging, University of New South Wales, Sydney, and colleagues said.

“A life course perspective on brain health supports the formulation of policy and public health interventions to reduce alcohol use and misuse at all ages,” they wrote in an editorial published online Dec. 4 in The BMJ.
 

Worrisome trends

Research has shown that globally about 10% of pregnant women drink alcohol. In European countries, the rates are much higher than the global average.

Heavy drinking during gestation can cause fetal alcohol spectrum disorder, which is associated with widespread reductions in brain volume and cognitive impairment.

Even low or moderate alcohol consumption during pregnancy is significantly associated with poorer psychological and behavioral outcomes in children, the investigators noted.

In adolescence, more than 20% of 15- to 19-year-olds in European and other high-income countries report at least occasional binge drinking, which is linked to reduced brain volume, poorer white matter development, and deficits in a range of cognitive functions, they added.

In a recent study of older adults, alcohol use disorders emerged as one of the strongest modifiable risk factors for dementia (particularly early-onset dementia), compared with other established risk factors such as high blood pressure and smoking.

Alcohol use disorders are relatively rare in older adults, but even moderate drinking during midlife has been linked to “small but significant” brain volume loss, the authors said.

Dr. Mewton and colleagues said demographic trends may compound the effect of alcohol use on brain health.

They noted that women are now just as likely as men to drink alcohol and suffer alcohol-related problems. Global consumption is forecast to increase further in the next decade.

Although the effects of the COVID-19 pandemic on alcohol intake and related harms remain unclear, alcohol use has increased in the long term after other major public health crises, they added.

Given the data, Dr. Mewton and colleagues called for “an integrated approach” to reducing the harms of alcohol intake at all ages.

“Population-based interventions such as guidelines on low-risk drinking, alcohol pricing policies, and lower drink driving limits need to be accompanied by the development of training and care pathways that consider the human brain at risk throughout life,” they concluded.

The authors have disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.

There is “compelling” evidence of the harmful effects of alcohol on the brain. The greatest risk occurs during three periods of life that are marked by dynamic brain changes, say researchers from Australia and the United Kingdom.

The three periods are:

  • Gestation (conception to birth), which is characterized by extensive production, migration, and differentiation of neurons, as well as substantial apoptosis.
  • Later adolescence (aged 15-19 years), a period marked by synaptic pruning and increased axonal myelination.
  • Older adulthood (aged 65 and beyond), a period associated with brain atrophy. Changes accelerate after age 65, largely driven by decreases in neuron size and reductions in the number of dendritic spines and synapses.

These changes in neurocircuitry could increase sensitivity to the neurotoxic effects of alcohol, Louise Mewton, PhD, of the Center for Healthy Brain Aging, University of New South Wales, Sydney, and colleagues said.

“A life course perspective on brain health supports the formulation of policy and public health interventions to reduce alcohol use and misuse at all ages,” they wrote in an editorial published online Dec. 4 in The BMJ.
 

Worrisome trends

Research has shown that globally about 10% of pregnant women drink alcohol. In European countries, the rates are much higher than the global average.

Heavy drinking during gestation can cause fetal alcohol spectrum disorder, which is associated with widespread reductions in brain volume and cognitive impairment.

Even low or moderate alcohol consumption during pregnancy is significantly associated with poorer psychological and behavioral outcomes in children, the investigators noted.

In adolescence, more than 20% of 15- to 19-year-olds in European and other high-income countries report at least occasional binge drinking, which is linked to reduced brain volume, poorer white matter development, and deficits in a range of cognitive functions, they added.

In a recent study of older adults, alcohol use disorders emerged as one of the strongest modifiable risk factors for dementia (particularly early-onset dementia), compared with other established risk factors such as high blood pressure and smoking.

Alcohol use disorders are relatively rare in older adults, but even moderate drinking during midlife has been linked to “small but significant” brain volume loss, the authors said.

Dr. Mewton and colleagues said demographic trends may compound the effect of alcohol use on brain health.

They noted that women are now just as likely as men to drink alcohol and suffer alcohol-related problems. Global consumption is forecast to increase further in the next decade.

Although the effects of the COVID-19 pandemic on alcohol intake and related harms remain unclear, alcohol use has increased in the long term after other major public health crises, they added.

Given the data, Dr. Mewton and colleagues called for “an integrated approach” to reducing the harms of alcohol intake at all ages.

“Population-based interventions such as guidelines on low-risk drinking, alcohol pricing policies, and lower drink driving limits need to be accompanied by the development of training and care pathways that consider the human brain at risk throughout life,” they concluded.

The authors have disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.

Issue
Neurology Reviews- 29(1)
Publish date: December 8, 2020

Umbilicated Keratotic Papule on the Scalp


The Diagnosis: Warty Dyskeratoma 

Warty dyskeratoma (WD) is a benign cutaneous tumor that was first described in 1954 as isolated Darier disease (DD). In 1957, Szymanski1 renamed it warty dyskeratoma to distinguish it from DD. Warty dyskeratoma typically presents as a flesh-colored to brownish, round, well-demarcated, and slightly elevated papule or nodule with a central umbilicated invagination. It most commonly arises on the scalp, face, or neck.2 In contrast to DD, familial occurrence is uncommon. Warty dyskeratoma usually is difficult to distinguish from other conditions such as seborrheic keratosis, verruca vulgaris, or keratoacanthoma on macroscopic features alone; therefore, histopathologic examination is necessary for a precise diagnosis.

In our case, histologic investigation revealed a symmetric cup-shaped invagination filled with acantholytic and dyskeratotic keratinocytes with no atypia or mitotic figures (Figure, A). The bottom of the invagination was occupied with numerous villi covered by a single layer of basal cells (Figure, B). At the edge of the invagination, corps ronds and grains were observed in the granular and cornified layers, respectively (Figure, C).

 

Warty dyskeratoma. A, A symmetric cup-shaped invagination filled with acantholytic and dyskeratotic keratinocytes (H&E, original magnification ×40). B, Numerous villi covered by a single layer of basal cells at the bottom of the invagination (H&E, original magnification ×200). C, At the edge of the invagination there were corps ronds (yellow arrow) in the granular layer and grains (white arrow) in the cornified layer (H&E, original magnification ×200).

The hallmark histopathologic findings are acantholysis and dyskeratosis just above the basal cell layer, termed focal acantholytic dyskeratosis. The differential diagnosis includes other disorders associated with focal acantholytic dyskeratosis, such as DD and acantholytic squamous cell carcinoma.3 Distinguishing WD from DD may be difficult in rare cases with multiple lesions.4 In such cases, an autosomal-dominant inheritance pattern and younger age of onset should prompt clinicians to search for mutations in the ATPase sarcoplasmic/endoplasmic reticulum Ca2+ transporting 2 gene, ATP2A2, to establish the diagnosis of DD.5 Additionally, the absence of atypia and mitotic figures helps rule out malignant disorders such as squamous cell carcinoma.

Although the pathogenesis of WD is not fully understood, most clinicians consider it a follicular adnexal neoplasm because the lesions often are connected to the pilosebaceous unit on microscopic examination.6 Although WD-like lesions arising from the oral mucosa have been reported,7 their etiology may differ from that of WD because the oral mucosa lacks hair follicles.8 The term warty suggests a contribution of human papillomavirus to the pathogenesis of WD, but this has been questioned because polymerase chain reaction analysis failed to detect viral DNA in WD lesions.2 Therefore, the term follicular dyskeratoma has been proposed as an alternative name that reflects the etiology more precisely.2

The efficacy of topical treatment has not been established. Cryosurgery is another therapeutic option, but it sometimes fails.9 As was performed in our patient, excisional biopsy is the most reasonable treatment option because it provides both complete removal and a precise diagnosis.
 

Author and Disclosure Information

Drs. Matsuda, Hino, and Kagami are from the Department of Dermatology, Kanto Central Hospital of the Mutual Aid Association of Public School Teachers, Tokyo, Japan. Dr. Nishio is from Tsurumaki Dermatology, Tokyo.

The authors report no conflict of interest.

Correspondence: Kazuki Mitsuru Matsuda, MD, Department of Dermatology, Kanto Central Hospital of the Mutual Aid Association of Public School Teachers, 6-25-1, Kamiyoga, Setagaya-ku, Tokyo 1588531, Japan ([email protected]).

Issue
Cutis - 106(5)
Page Number
E28-E30


Case Presentation

A 72-year-old man was referred to our dermatology clinic for evaluation of a solitary papule on the scalp measuring 3.2 mm in diameter with a keratotic umbilicated center of 1 year’s duration. His medical history included acute appendicitis. Treatment with fusidic acid ointment 2% was unsuccessful. The papule was hard without tenderness on palpation. An excisional biopsy was performed under local anesthesia.
