Triage MS-related ED visits to reduce unnecessary treatment

NEW ORLEANS – The majority of multiple sclerosis–related emergency department visits in a recent chart review were related to pseudoflares or MS-related complications, rather than to true MS relapse.

The findings suggest that many diagnostic tests, treatments, and hospital admissions are unnecessary, Dr. Hesham Abboud of the Cleveland Clinic and his colleagues reported in a poster at ACTRIMS Forum 2016, the meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis.

Of 97 MS-related visits among 75 patients, 33 were for new neurologic symptoms, 29 were for worsening of preexisting symptoms, and 36 were for MS-related complications. New relapse was diagnosed in only 27 visits (27.8%), and urinary tract infections were found in about one-third of patients presenting with either urinary or neurologic symptoms, the investigators said.

New MRIs were ordered in 37 patients (38.1%). Of the 97 visits, 89 (91.7%) resulted in hospital admission, 40.4% of them related to neurology, and steroid treatment was used in 24 visits (24.7%), 7 of them for worsening of preexisting symptoms, the investigators said.

Of the visits involving new neurologic symptoms, one-third were not from relapse; 59% of the MRIs done in those situations were positive for enhancing or new lesions. Of the visits with worsening preexisting symptoms, only 16.6% were associated with a new relapse, and 28.5% of MRIs done in those situations were positive, the investigators said.
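
The reported figures are internally consistent. As a rough arithmetic illustration (not part of the investigators' analysis), the relapse counts implied by the two symptom categories sum to approximately the 27 relapses reported overall, and the key percentages can be reproduced from the 97-visit denominator:

# Illustrative arithmetic only, using the counts summarized above.
new_symptom_visits = 33        # about one-third NOT due to relapse, so ~2/3 were
worsening_visits = 29          # 16.6% associated with a new relapse

relapses_new = round(new_symptom_visits * 2 / 3)       # ~22
relapses_worsening = round(worsening_visits * 0.166)   # ~5
print(relapses_new + relapses_worsening)               # ~27, matching the report

# Selected percentages recomputed against the 97-visit denominator:
print(f"relapse: {100 * 27 / 97:.1f}%  new MRI: {100 * 37 / 97:.1f}%  "
      f"steroids: {100 * 24 / 97:.1f}%")               # 27.8%, 38.1%, 24.7%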

Although many ED visits among MS patients are driven by neurologic complaints, true relapse is rarely present, and not all patients with true relapse require hospital admission and steroid treatment, the investigators noted. They concluded that developing a care path and triage system for MS patients in the ED could prevent unnecessary MRIs, steroid treatment, and hospital admissions.

The investigators reported having no disclosures.

[email protected]

Vitals

Key clinical point: The majority of multiple sclerosis–related emergency department visits in a recent chart review were related to pseudoflares or MS-related complications, rather than true MS relapse.

Major finding: New relapse was diagnosed in only 27 visits (27.8%).

Data source: A retrospective chart review of 75 patients with 97 MS-related ED visits.

Disclosures: The investigators reported having no disclosures.

For Men, Exercise-related Bone Loading During Adolescence Reaps Benefits Later in Life

Men who continuously participated in high-impact activities, such as jogging and tennis, during adolescence and young adulthood have greater hip and lumbar spine bone mineral density than those who did not take part in such activities, according to a study published in the American Journal of Men’s Health.

In a cross-sectional study, researchers analyzed the physical histories of 203 healthy, physically active males ages 30 to 65. Participants’ sports and exercise histories varied in the type and level of activity and the length of time spent doing various physical activities.

Exercise-associated bone loading scores were calculated based on the biomechanical ground-reaction forces of the patients’ past and current physical activities. Current bone mineral density (BMD) was measured using dual-energy x-ray absorptiometry. In addition, participants were grouped based on current participation in a high-impact activity, resistance training, both, or neither.
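
The scoring algorithm itself is not described in this summary. As a purely hypothetical illustration of the general idea, a ground-reaction-force (GRF)-weighted activity history could be reduced to a single score as in the sketch below; the GRF values and the formula are assumptions for illustration, not the published method.

# Hypothetical bone-loading score: each activity contributes its assumed peak
# ground-reaction force (in multiples of body weight) times the years it was
# performed. Illustrative only; not the scoring algorithm used in the study.
ASSUMED_PEAK_GRF = {
    "jogging": 2.5,
    "tennis": 2.0,
    "basketball": 3.0,
    "swimming": 0.5,   # low-impact comparator
}

def loading_score(activity_years):
    """Sum of (assumed peak GRF) x (years of participation) over all activities."""
    return sum(ASSUMED_PEAK_GRF.get(activity, 1.0) * years
               for activity, years in activity_years.items())

# Example history for one participant during adolescence and young adulthood:
print(loading_score({"jogging": 6, "basketball": 4}))   # 2.5*6 + 3.0*4 = 27.0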

Bone-loading scores during adolescence and young adulthood were significant, positive predictors of whole-body, total hip, and lumbar spine BMD after adjustment for lean body mass and/or age. Individuals who currently participated in a high-impact activity had greater lumbar spine BMD than nonparticipants, and men who continuously participated in a high-impact activity had greater hip and lumbar spine BMD than those who did not.

Suggested Reading
Strope MA, Nigh P, Carter MI, et al. Physical activity–associated bone loading during adolescence and young adulthood is positively associated with adult bone mineral density in men. Am J Mens Health. 2015 Nov. [Epub ahead of print].

New data points to slower course of labor

Only recently has evidence emerged that challenges our long-held understanding of “normal” and “abnormal” labor. We now know there is a much wider range of normal labor progress in women who go on to have good labor outcomes. We have a new labor curve to guide us – one that shows us, for example, that active labor occurs most commonly after 6 cm dilation rather than 4 cm as we’d previously thought.

By appreciating this new labor paradigm, we can potentially have a significant impact on the cesarean rate in the United States. While our use of the older labor curve is not the only reason for the rise in cesarean deliveries over the last 30 years, it very likely has played a role. A 2011 study of more than 32,000 live births at a major academic hospital found that abnormal labor or arrest is one of the most common indications for primary cesarean delivery (Obstet Gynecol. 2011 Jul;118[1]:29-38).

Another study by the Consortium on Safe Labor – an analysis of labor and delivery information from more than 228,000 women across the United States – showed that half of the cesarean deliveries for dystocia in women undergoing labor induction were performed before 6 cm of cervical dilation and relatively soon after the previous cervical examination (Am J Obstet Gynecol. 2010 Oct;203[4]:326.e1-326.e10).

Our new labor paradigm brings to the forefront a host of new issues and questions about how we can best manage labor to optimize outcomes. In a way, recent discoveries about labor progress have highlighted a dearth of evidence and made “old” issues in labor management seem new and urgent.

As we strive to learn more, however, we are challenged to change our practices and behavior at the bedside with the evidence we currently have. By appreciating both the new labor curve and our current understanding of how labor induction, obesity, and other patient characteristics and clinical conditions can affect labor progress, we can expect that many women will simply progress much more slowly than was historically expected.

As long as we have indications of the well-being of the baby and the well-being of the mother, a slower but progressive labor in the first stage should not prompt us to intervene. We should no longer apply the standards of active-phase progress – standards that have traditionally driven our diagnoses of labor dystocia – until the patient has achieved 6 cm of dilation.

The labor curve that had shaped our thinking about normal and abnormal labor progress until recently was developed by Dr. Emanuel Friedman. Based on findings from a prospective cohort study of 500 nulliparous women, Dr. Friedman plotted labor progress with centimeters of cervical dilation on the Y-axis and time on the X-axis, and divided labor into several stages and phases. In this curve, the rate of change of cervical dilation over time started increasing significantly at 4 cm; this period of increasing slope defined the active phase of labor.

Abnormal labor progress in the active phase was then defined, based on the 95th percentile, as cervical dilation of less than 1.2 cm per hour for nulliparous women and less than 1.5 cm per hour for multiparous women. Based on Dr. Friedman’s work, a woman was deemed to be in active-phase arrest when she had no cervical changes for 2 hours or more while having adequate uterine contractions and cervical dilation of at least 4 cm. These concepts came to govern labor management.

The paradigm shifted when the Consortium on Safe Labor reported in 2010 on a retrospective cohort study of more than 62,000 women at 19 U.S. hospitals. The women had a singleton term gestation, spontaneous labor, vertex presentation, vaginal delivery, and a normal perinatal outcome. In their analysis of labor and delivery information, Dr. Jun Zhang of the National Institutes of Health’s Eunice Kennedy Shriver National Institute of Child Health and Human Development and his colleagues accounted for the fact that the exact times of cervical change are unknown.

They used modern statistical methods and analytical tools that took into account the specific nature of cervical dilation data – that cervical measurements are interval-censored (we never know the exact time when a woman’s cervix changes) and that multiple exams of the cervix in the same patient are not independent (Obstet Gynecol. 2010 Dec;116[6]:1281-7).

The methodology used in the Consortium study accounted for both the interval-censored and repeated-measures nature of cervical dilation data. It thus addressed analytical flaws in the previous approach to labor data, which was purely descriptive of the exam findings and did not consider the nature of the data itself.
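
As an illustration of the interval-censoring idea, the sketch below estimates the distribution of the time to progress 1 cm when only the bracketing exam times are observed. It is a minimal example using simulated exam data and a simple parametric model; it is not the Consortium's actual analysis, and it ignores the repeated-measures correlation.

# Minimal sketch of interval-censored estimation (simulated data, simplified model).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate "true" 1-cm traversal times (hours) and exam schedules for 500 women.
true_t = rng.lognormal(mean=np.log(2.5), sigma=0.8, size=500)
exam_gap = rng.uniform(1.0, 3.0, size=500)         # hours between cervical exams
lower = np.floor(true_t / exam_gap) * exam_gap     # last exam before the change
upper = lower + exam_gap                           # first exam after the change

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    cdf = lambda t: stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))
    # Interval-censored likelihood: P(lower < T <= upper) for each woman.
    p = np.clip(cdf(upper) - cdf(lower), 1e-12, None)
    return -np.sum(np.log(p))

fit = optimize.minimize(neg_log_lik, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"estimated median traversal time: {np.exp(mu_hat):.1f} h; "
      f"95th percentile: {np.exp(mu_hat + 1.645 * sigma_hat):.1f} h")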

Under the new analysis and in the larger, contemporary population of patients, the period of increasing slope was found to occur most commonly after 6 cm, not 4 cm. The slowest 5% of nulliparous women had cervical dilation of 0.4 cm per hour (with the median at 1.9 cm per hour), compared with 1.2 cm per hour (with a median of 3.0 cm per hour) as in the Friedman data.

Dr. Zhang’s study showed us that labor may take more than 6 hours to progress from 4 to 5 cm dilation, and more than 3 hours to progress from 5 to 6 cm dilation – a rate of progress that is significantly slower than what Dr. Friedman had described. The new data showed us, moreover, that from 4 cm to 6 cm of dilation, nulliparous and multiparous women progressed similarly slowly. Beyond 6 cm, multiparous women dilated more rapidly, with a steeper acceleration phase than previously described.

A consensus statement published in 2014 by the American College of Obstetricians and Gynecologists (ACOG) and the Society for Maternal-Fetal Medicine (SMFM) on “Safe Prevention of the Primary Cesarean Delivery” encourages use of the Consortium data to revisit the definition of labor dystocia. While the data “do not directly address an optimal duration for the diagnosis of active-phase protraction or labor arrest, [they] do suggest that neither should be diagnosed before 6 cm dilation” (Obstet Gynecol. 2014 Mar;123[3]:693-711).

The ACOG-SMFM statement makes a series of recommendations for managing the first and second stages of labor, based not only on the Consortium data but on a broader literature review. It recommends that if mother and fetus appear well, cesarean delivery for active-phase arrest in the first stage of labor be reserved for women of at least 6 cm of dilation with ruptured membranes who fail to progress despite 4 hours of adequate uterine activity, or at least 6 hours of oxytocin administration with inadequate uterine activity and no cervical change.
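
Expressed schematically, the first-stage arrest criteria summarized above can be written as a simple predicate, as in the sketch below. This is only a simplified illustration of the consensus wording, not a clinical decision tool.

# Schematic of the 2014 ACOG-SMFM first-stage arrest criteria as summarized above.
# Simplified and illustrative only; not a clinical decision tool.
def meets_first_stage_arrest_criteria(dilation_cm, membranes_ruptured,
                                      hours_adequate_contractions,
                                      hours_oxytocin_with_inadequate_contractions,
                                      cervical_change):
    """At least 6 cm with ruptured membranes and no cervical change, despite
    4 h of adequate uterine activity or at least 6 h of oxytocin with
    inadequate uterine activity."""
    if dilation_cm < 6 or not membranes_ruptured or cervical_change:
        return False
    return (hours_adequate_contractions >= 4
            or hours_oxytocin_with_inadequate_contractions >= 6)

# Example: 6 cm, ruptured membranes, 5 h of adequate contractions, no change.
print(meets_first_stage_arrest_criteria(6, True, 5, 0, False))   # True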

Regarding the latent phase of labor, the statement says that most women with a prolonged latent phase ultimately will enter the active phase with expectant management. It advises that a prolonged latent phase (for example, greater than 20 hours in nulliparous women and greater than 14 hours in multiparous women) should not be an isolated indication for cesarean delivery.

The consensus statement also recognizes recent data showing that women who undergo labor induction have an even slower “normal” course of labor, particularly a longer latent phase, than women who labor spontaneously. A retrospective cohort study of more than 5,000 women, for instance, found that before 6 cm, women whose labor is induced can spend up to 10 hours to achieve each 1 cm of dilation (Obstet Gynecol. 2012 Jun;119[6]:1113-8).

As long as maternal and fetal status are reassuring, the statement says, cesarean deliveries for failed induction of labor in the latent phase can be avoided by allowing longer durations of the latent phase (up to 24 hours) and by requiring that oxytocin be administered for 12-18 hours after membrane rupture before deeming induction a failure.

Each of these recommendations was graded in the ACOG-SMFM consensus document as a “strong” recommendation with “moderate quality evidence.”

Examining our standards

Moving forward, we must further develop and define our thresholds for identifying who will most benefit from a cesarean delivery. We have many specific aspects of labor management to address as well, such as the optimal timing of artificial membrane rupture and the safety and efficacy of different oxytocin protocols. We may also want to revisit recommendations for serial cervical assessment, possibly adjusting the intervals given our understanding of the new labor curve.

Under the new labor paradigm, moreover, we must think not only about the clinical decisions we make at the bedside, but about the decisions we make early in the labor management process.

The timing of admission is one such decision. A statement published in 2012 on “Preventing the First Cesarean Delivery” by ACOG, SMFM, and the Eunice Kennedy Shriver National Institute of Child Health and Human Development advises us to avoid admittance of women during the early latent phase of labor (Obstet Gynecol. 2012 Nov;120[5]:1181-93).

It may even be advisable that we consider admittance at higher cervical dilation. A study published this year shows that women admitted at less than 6 cm of dilation had an increased risk of cesarean delivery, compared with women admitted at higher cervical dilation (Am J Perinatol. 2016 Jan;33[2]:188-94). We have more to learn, but certainly, given what we know now about labor progress and the start of active labor, the timing of admission is an important factor to consider.

The second stage of labor, defined as the interval from complete cervical dilation through delivery of the fetus, presents many questions as well. There is a paucity of quality published data concerning what is normal, how long the stage should last, and how we should manage it. Historically, we have been taught to allow 2 hours of pushing for nulliparous women and 1 hour for multiparous women, when epidural anesthesia has not been administered, and to add an additional hour when epidural is used.

The 2014 ACOG-SMFM consensus statement recommends extending each of these limits by an hour, if maternal and fetal conditions permit, so that we allow at least 3 hours of pushing for nulliparous women and at least 2 hours for multiparous women before diagnosing arrest of labor in the second stage. Longer durations may be appropriate with the use of epidural anesthesia and on an individualized basis.

At this time, it is unclear whether there is any absolute maximum length of time beyond which all women in the second stage of labor should undergo cesarean delivery. We also still do not know the optimal technique for managing maternal pushing during the second stage. Should women with an epidural push right away or should they allow for a period of spontaneous descent? Many of the high-quality studies reported thus far that compare delayed and immediate pushing have limited applicability to current practice because they involved now-obsolete midpelvic forceps deliveries. A large multicenter randomized trial currently underway should provide us with some answers.

Dr. Cahill is an associate professor and chief of the division of maternal-fetal medicine in the department of obstetrics and gynecology at Washington University School of Medicine in St. Louis. She reported having no relevant financial disclosures.

Rethinking the management of labor

Over the last 50 years, we have witnessed some incredible advancements that have vastly improved maternal and fetal outcomes, even in the face of the most complex obstetrical dilemmas. As our practice and the research continue to evolve, it is increasingly important that we carefully review our practice standards to ensure that every woman and her baby receive the most up-to-date medical care.

This month’s Master Class highlights a critical area of obstetrics where the convergence of technology, clinical observation, and research stimulated a change in practice guidelines: the use of the labor curve to monitor normal versus abnormal labor. Until quite recently, ob.gyns. had based labor criteria on the “Friedman Curve,” first established in the mid-1950s, and supported by other smaller and less comprehensive studies. This work was adopted by the American College of Obstetricians and Gynecologists.

For more than half a century, we used these parameters to determine if a woman had entered active-phase arrest, and to make the very important decision of whether to perform a cesarean section. However, work in the early 2000s strongly suggested that the old criteria no longer applied to the full course of labor in contemporary patients (Am J Obstet Gynecol. 2002 Oct;187[4]:824-8). A 2010 comprehensive study showed that we needed to consider a new approach to labor management (Am J Obstet Gynecol. 2010 Oct;203[4]:326.e1-326.e10).

It may seem incredible that it took such a long time to update our thinking about what constitutes normal versus abnormal labor progression. However, we must keep in mind that many studies supported the original labor curve, and advanced tools to assess fetal health during labor were just being developed. The first commercially available fetal heart rate monitor would not be produced until 1968, and debates about the utility of these devices would continue into the early 1990s.

Additionally, our patient population has changed. As we have discussed in previous columns, the incidence and severity of other chronic conditions, such as diabetes and obesity, have increased significantly and have deeply affected labor progression.

Just as technology has advanced and our patients’ needs have changed, so, too, must our practice standards. We have invited Dr. Alison G. Cahill, associate professor and chief of the division of maternal-fetal medicine in the department of obstetrics and gynecology at Washington University, St. Louis, to discuss the importance and implications of the new labor curve.

Dr. Reece, who specializes in maternal-fetal medicine, is vice president for medical affairs at the University of Maryland, Baltimore, as well as the John Z. and Akiko K. Bowers Distinguished Professor and dean of the school of medicine. Dr. Reece said he had no relevant financial disclosures. He is the medical editor of this column. Contact him at [email protected].

Smoking affects molecular profile of HPV-positive oropharyngeal cancer

SCOTTSDALE, ARIZ. – The human papillomavirus (HPV)–positive oropharyngeal cancers of heavy smokers and light smokers have distinctly different molecular profiles, which may have implications for treatment, according to a study presented at the Multidisciplinary Head and Neck Cancer Symposium.

The population-based cohort study of 66 patients showed that mutations in certain genes associated with tobacco exposure and poorer survival – for example, NOTCH1, TP53, CDKN2A, and KRAS – occurred almost exclusively in heavy smokers, investigators reported in a session and related press briefing. Also, the number of HPV reads detected in tumors was lower for heavy smokers than for light smokers.

Dr. Jose P. Zevallos

Taken together, the findings suggest that although HPV-positive cancers in heavy smokers may be initiated through virus-related mutations, they go on to acquire tobacco-related mutations and become less dependent on the E6/E7 carcinogenesis mechanisms typically associated with the virus, said first author Dr. Jose P. Zevallos of the University of North Carolina, Chapel Hill.

“We think that this study and future studies based on this work will have important implications for personalizing treatment and decision making in HPV-positive oropharynx cancer, particularly in the era of less aggressive treatments for HPV-positive tumors because of their excellent prognosis,” he said. “As opposed to arbitrarily deciding that 10 pack-years [of smoking] is a number that we use to define more aggressive disease, we are trying to provide a molecular basis for more aggressive disease in order to decide who will benefit from less-aggressive versus more-aggressive treatment.”

Dr. Christine Gourin

Press briefing moderator Dr. Christine Gourin of Johns Hopkins University, Baltimore, said, “This study is so important because we know that the molecular fingerprint of HPV-related oropharyngeal cancer is really different from anything that we have seen before – different patient population, different outcomes than when I was in training.”

“We don’t really understand fully why this fingerprint is so different and why tobacco affects the fingerprint,” she added. “The finding of differences in the molecular phenotypes of light smokers versus heavy smokers is something that we all appreciate clinically and we need to understand better to tailor treatment.”

Introducing the study, Dr. Zevallos noted that the HPV-positive cancers of smokers are known to have a prognosis intermediate between that of the more favorable HPV-positive cancers of never smokers and that of the less favorable HPV-negative cancers. What remains unclear is the molecular basis for these differences.

Patients came from the population-based CHANCE (Carolina Head and Neck Cancer Epidemiology) study conducted during 2001-2006. The investigators performed targeted next-generation DNA sequencing in tumors with an assay for more than 700 genes associated with human cancers.

“We focused our attention on genes that overlap with those in COSMIC [the Catalogue of Somatic Mutations in Cancer] as well as on TCGA [The Cancer Genome Atlas] genes that were demonstrated to be significant in head and neck cancer,” Dr. Zevallos explained.
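
As an illustration of the gene-filtering step Dr. Zevallos described, the minimal Python sketch below restricts a sequencing panel to genes that also appear in COSMIC and in a head and neck cancer gene list. The gene sets shown are small, hypothetical stand-ins, not the study’s actual panel or reference lists.

```python
# Minimal sketch of the gene-filtering step described above: restrict a large
# targeted-sequencing panel to genes that also appear in COSMIC and in a list
# of genes reported as significant in head and neck cancer (e.g., by TCGA).
# These small gene lists are hypothetical stand-ins; the real panel covered
# more than 700 genes, and the real COSMIC/TCGA lists are far larger.

panel_genes = {"NOTCH1", "TP53", "CDKN2A", "KRAS", "PIK3CA", "FAT1", "FGFR3", "HRAS", "BRCA2"}
cosmic_genes = {"NOTCH1", "TP53", "CDKN2A", "KRAS", "PIK3CA", "FAT1", "HRAS"}
tcga_hnscc_genes = {"NOTCH1", "TP53", "CDKN2A", "PIK3CA", "FAT1", "HRAS", "CASP8"}

# Keep only panel genes that overlap both reference lists.
genes_of_interest = panel_genes & cosmic_genes & tcga_hnscc_genes
print(sorted(genes_of_interest))
# ['CDKN2A', 'FAT1', 'HRAS', 'NOTCH1', 'PIK3CA', 'TP53']
```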

All 66 patients studied had HPV-positive tumors according to p16 expression or HPV polymerase chain reaction findings. Overall, 61% were heavy smokers, defined as having a greater than 10 pack-year history of smoking.
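
The 10 pack-year cutoff rests on the standard pack-years calculation (average packs smoked per day multiplied by years of smoking). Below is a brief sketch applying that formula and the study’s cutoff, using an invented smoking history for illustration.

```python
def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Standard pack-years calculation: average packs per day times years of smoking."""
    return packs_per_day * years_smoked

def is_heavy_smoker(packs_per_day: float, years_smoked: float, cutoff: float = 10.0) -> bool:
    """Apply the study's definition: heavy smoker = greater than 10 pack-years."""
    return pack_years(packs_per_day, years_smoked) > cutoff

# Illustrative example: half a pack a day for 25 years = 12.5 pack-years,
# which exceeds the 10 pack-year threshold used in the study.
print(pack_years(0.5, 25), is_heavy_smoker(0.5, 25))  # 12.5 True
```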

In terms of clinical outcome, the 5-year overall survival rate was 60% among the heavy smokers and 82% among the light or never smokers.

Mutations associated with tobacco use were found almost exclusively in the heavy smokers, Dr. Zevallos reported. For example, they had higher prevalences of mutations in NOTCH1 (18% vs. 0%), FAT1 (14% vs. 6%), and FGFR3 (10% vs. 0%), among others. On the other hand, the light and never smokers had a higher prevalence of mutations in PIK3CA (50% vs. 34%). Additionally, KRAS mutations were found only in the heavy smokers (4% vs. 0%), whereas HRAS mutations were found in the light and never smokers only (6% vs. 0%).

A pathway analysis incorporating the new information for HPV-positive heavy smokers confirmed that despite persistence of the HPV-related signature, these tumors also had signaling in several of the pathways typically associated with HPV-negative cancers, according to Dr. Zevallos.

HPV DNA was detected in all of the tumors, and in 95% of cases, the viral type was type 16. However, PCR for HPV was falsely negative in 9%. “This is a very important number as we rely on this as a surrogate for HPV status,” he commented. “p16 was the main inclusion criterion for this particular study, but this should be noted.”

Heavy smokers and patients who had died had a lower number of HPV reads per tumor. “This tells us that there are potentially subclones developing in these patients that are driven by tobacco-associated mutations, and this may explain worse outcomes in this patient population and warrants further exploration,” Dr. Zevallos elaborated.

Vitals

Key clinical point: The molecular profile of HPV-positive oropharyngeal cancer differs distinctly between heavy and light smokers.

Major finding: Heavy smokers were more likely to have mutations of NOTCH1 (18% vs. 0%), TP53 (6% vs. 0%), and KRAS (4% vs. 0%), and they had fewer HPV reads in their tumors.

Data source: A population-based cohort study of 66 patients with HPV-positive oropharyngeal cancer.

Disclosures: Dr. Zevallos disclosed that he had no relevant conflicts of interest.

Infection control is everyone’s responsibility

Big things come in small packages – very small, so small they are invisible to the naked eye. Take, for instance, a huge infection causing multiorgan system failure, disseminated intravascular coagulation, even septic shock refractory to high-dose pressors. This catastrophe may be the end result of exposure to tiny pathogenic microbes that can take down an otherwise healthy 300-pound man, tout de suite!

Microorganisms are everywhere. We can’t live without them, but we can’t live with certain ones either. Unless you live in a bubble, you are going to be exposed to countless bacteria each and every day. They are in the air we breathe, the water we drink, the beds we sleep in. While it is a given that we all will be continuously exposed to bacteria, having a well-considered strategy to curtail the spread of disease can dramatically decrease the risk that we, our families, and our patients are needlessly exposed to potentially life-threatening organisms.

Dr. A. Maria Hester

We all know we are to wash our hands on the way in, and out, of patients’ rooms. This practice is our front line of defense against the spread of numerous potentially lethal diseases. Yet, many clinicians, as well as ancillary hospital personnel, repeatedly fail to abide by this rule, thinking that ‘this one time won’t hurt anything.’ Whether it’s the nurse who rushes into a patient’s room to stop a beeping IV pole or the doctor who eyes a family member in the room and makes a beeline to discuss the discharge plan, all of us have been guilty of entering or leaving a patient’s room without following appropriate infection control standards.

Or, how many times have you followed the protocol meticulously, removed your gown and gloves, and washed your hands on your way out the door, only to have the patient remember another question or ask you to hand him something that leads to more contact with him or his surroundings? You already washed your hands once, so must you really do it again? After all, what is the likelihood that you pick up (or pass along) any germs anyway? Sometimes, more than we realize. Something as simple as handing a patient his nurse call button can expose us to enough C. difficile spores to cause infection in ourselves or in others we unwittingly come into contact with. So wash those hands, and wash them again if you touch anything in a patient’s room, even if it is not the patient himself.

Direct observation (AKA “Secret Santas”) can provide invaluable information about adherence to hand hygiene among health care workers, and providing feedback is key. Observation and feedback can be unit based, group based, or even provider based. Once collected, this information should be used to drive changes in behavior, whether through punitive or positive approaches; each hospital should decide how best to use its data.
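
As a rough sketch of how such observation data might be rolled up for unit- or provider-level feedback, the example below aggregates a handful of hypothetical records; the field names, identifiers, and rates are invented and are not drawn from any real audit.

```python
from collections import defaultdict

# Each record is one covert ("Secret Santa") observation: did the person observed
# perform hand hygiene on room entry or exit? All records here are hypothetical.
observations = [
    {"unit": "4 West", "provider": "RN-102", "compliant": True},
    {"unit": "4 West", "provider": "MD-17",  "compliant": False},
    {"unit": "4 West", "provider": "RN-102", "compliant": True},
    {"unit": "ICU",    "provider": "MD-17",  "compliant": True},
    {"unit": "ICU",    "provider": "RN-233", "compliant": False},
]

def compliance_by(key, records):
    """Return {group: observed compliance rate} for a grouping key ('unit' or 'provider')."""
    counts = defaultdict(lambda: [0, 0])  # group -> [compliant observations, total observations]
    for r in records:
        counts[r[key]][0] += int(r["compliant"])
        counts[r[key]][1] += 1
    return {group: compliant / total for group, (compliant, total) in counts.items()}

print(compliance_by("unit", observations))      # {'4 West': 0.666..., 'ICU': 0.5}
print(compliance_by("provider", observations))  # the same data, reported per provider
```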

Visitor contact is another important issue, and not everyone agrees on how to enforce, or whether to even try to enforce, infection control procedures for visitors. The Society for Healthcare Epidemiology of America (SHEA) has several helpful pocket guidelines that address this and other infection control issues. For instance, the society recommends that hospitals consider adopting guidelines to minimize horizontal transmission by visitors, though these guidelines should be feasible to enforce. Factors such as the specific organism and its potential to cause harm are important to consider when developing these guidelines. For example, the spouse of a patient admitted with influenza has likely already been exposed, and postexposure prophylaxis may be more feasible for her than wearing an uncomfortable mask during an 8-hour hospital visit.

A pharmacy-driven antibiotic stewardship program is another invaluable infection control tool. With this model, a group of pharmacists, under the direction of an infectious disease specialist, reviews culture results daily and makes recommendations to the physician about narrowing antibiotic coverage. I greatly appreciate receiving calls notifying me that final culture results are in long before I would have seen them myself. This allows me to adjust antibiotics in a timely fashion, reducing the risk of promoting drug-resistant organisms or of precipitating an unnecessary case of C. difficile.

In addition, written guidelines should be established for indwelling catheters, both urinary and venous. The indication for continued use should be reassessed daily; a computer alert that requires a response is very helpful, as is a call from the friendly floor nurse asking, “Does this patient really still need his catheter?”
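
A simple daily check of this kind could look something like the sketch below; the record format, dates, and 24-hour interval are hypothetical and meant only to illustrate the idea of an alert that prompts reassessment, not any particular electronic health record’s implementation.

```python
from datetime import datetime, timedelta

# Hypothetical records of indwelling catheters (urinary or venous).
catheters = [
    {"patient": "A", "type": "urinary", "last_indication_review": datetime(2016, 2, 22, 8, 0)},
    {"patient": "B", "type": "central venous", "last_indication_review": datetime(2016, 2, 23, 9, 30)},
]

def needs_reassessment(catheter, now, max_interval=timedelta(hours=24)):
    """Flag catheters whose documented indication has not been reviewed in the last 24 hours."""
    return now - catheter["last_indication_review"] > max_interval

now = datetime(2016, 2, 23, 10, 0)
for c in catheters:
    if needs_reassessment(c, now):
        # In a real system, this would trigger an alert requiring a clinician response.
        print(f"Reassess indication for {c['type']} catheter in patient {c['patient']}")
```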

Infection control is everyone’s responsibility and we all need to work together toward this common goal.

Dr. Hester is a hospitalist at Baltimore-Washington Medical Center in Glen Burnie, Md. She is the creator of the Patient Whiz, a patient-engagement app for iOS. Reach her at [email protected].

Medicare Grants Billing Code for Hospitalists

PHILADELPHIA – The Society of Hospital Medicine (SHM) is pleased to announce the introduction of a dedicated billing code for hospitalists by the Centers for Medicare & Medicaid Services (CMS). This decision comes in response to concerted advocacy efforts by SHM for CMS to recognize the specialty. It is a monumental step for hospital medicine, which continues to be the fastest-growing medical specialty in the United States, with more than 48,000 practitioners identifying as hospitalists, up from approximately 1,000 in the mid-1990s.

“We see each day that hospitalists are driving positive change in healthcare, and this recognition by CMS affirms that hospital medicine is growing both in scope and impact,” notes Laurence Wellikson, MD, MHM, CEO of SHM. “The ability for hospital medicine practitioners to differentiate themselves from providers in other specialties will have a huge impact, particularly for upcoming value-based or pay-for-performance programs.”

Until now, hospitalists could only compare performance to that of practitioners in internal medicine or another related specialty. This new billing code will allow hospitalists to appropriately benchmark and focus improvement efforts with others in the hospital medicine specialty, facilitating more accurate comparisons and fairer assessments of hospitalist performance.

Despite varied training backgrounds, hospitalists have coalesced into their own unique specialty, dedicated to caring for hospitalized patients and working toward high-quality, patient-centered care in the hospital. They have developed institution-based skills that differentiate them from practitioners in other specialties, such as internal and family medicine. Their specialized expertise includes improving both the efficiency and the safety of care for hospitalized patients and managing and innovating in a hospital’s team-based environment.

This momentous decision coincides with the 20th anniversary of the coining of the term “hospitalist” by Robert Wachter, MD, MHM, and Lee Goldman, MD, in the New England Journal of Medicine. In recognition of this anniversary, SHM introduced a year-long celebration, the “Year of the Hospitalist,” to commemorate the specialty’s continued success and bright future.

“We have known who we are for years, and the special role that hospitalists play in the well-being of our patients, communities and health systems,” explains Brian Harte, MD, SFHM, president-elect of SHM. “The hospitalist provider code will provide Medicare and other players in the healthcare system an important new tool to better understand and acknowledge the critical role we play in the care of hospitalized patients nationwide.”

Lisa Zoks is SHM's Vice-President of Communications.

ABOUT SHM

Representing the fastest growing specialty in modern healthcare, SHM is the leading medical society for more than 48,000 hospitalists and their patients. SHM is dedicated to promoting the highest quality care for all hospitalized patients and overall excellence in the practice of hospital medicine through quality improvement, education, advocacy and research. Over the past decade, studies have shown that hospitalists can contribute to decreased patient lengths of stay, reductions in hospital costs and readmission rates, and increased patient satisfaction.

Periop statins don’t prevent acute kidney injury after cardiac surgery

ORLANDO – Statins administered perioperatively offered no protection against acute kidney injury following cardiac surgery, according to new results of a 5-year randomized clinical trial.

The findings held true whether or not patients were naive to statins; in fact, serum creatinine levels increased significantly more in statin-naive patients given atorvastatin than in those given placebo.

The study was stopped early for patients naive to statins because increased acute kidney injury was seen in those patients who had chronic kidney disease (eGFR less than 60 mL/min/1.73 m2), and was subsequently stopped early for futility for all patients.

Dr. Frederic Tremaine Billings

“De novo initiation of daily perioperative atorvastatin treatment did not reduce the incidence of AKI or reduce the increase in serum creatinine concentration associated with cardiac surgery,” wrote Dr. Frederic T. Billings IV, professor of medicine at Vanderbilt University, Nashville, Tenn., and his collaborators. The findings (JAMA 2016 Feb 23. doi: 10.1001/jama.2016.0548) were published concurrently with his presentation at the Critical Care Congress, sponsored by the Society for Critical Care Medicine.

In what Dr. Phil B. Fontanarosa, executive editor of JAMA and comoderator of the late-breaking trials session at the meeting, described as “really an elegant clinical trial,” Dr. Billings and his collaborators enrolled 615 patients over 5 years at Vanderbilt University Medical Center.

Patients undergoing elective coronary artery bypass grafting, valvular heart surgery, or ascending aortic surgery were eligible. Patients were excluded if they had prior statin intolerance, acute coronary syndrome, or liver dysfunction; were taking potent CYP3A4 inhibitors or cyclosporine; were receiving renal replacement therapy or had a kidney transplant; or were pregnant.

Both patients currently on a statin and patients naive to statins were recruited. Statin-naive patients received 80 mg atorvastatin the day before surgery, and then 40 mg of atorvastatin on the day of surgery and daily following surgery, or a matched placebo regimen.

Patients who were already on a statin received the study drug only on days that they would not have received a statin if treated according to the current standard of care. It was deemed unethical to allow those patients to receive placebo during and after surgery, since observational studies suggested that doing so might increase their potential for AKI.

For those patients already on a statin, this meant that they stayed on their usual regimen until the day of surgery, and then were randomized to receive either 80 mg of atorvastatin on the day of surgery and 40 mg of atorvastatin the day after surgery, or a matching placebo regimen.

For both groups, the study drug was given at least 3 hours before surgery on the day of surgery.
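
To keep the two regimens straight, here is a small sketch that restates the dosing described above as a function of study day. It is an illustrative summary of the published description, not trial source code, and it ignores details such as the 3-hour preoperative window.

```python
def atorvastatin_dose(statin_naive: bool, day: int) -> int:
    """Assigned atorvastatin dose in mg by study day, as described in the trial report
    (day -1 = day before surgery, 0 = day of surgery, >= 1 = postoperative days).
    Matching placebo arms followed the same schedules. Illustrative summary only."""
    if statin_naive:
        if day == -1:
            return 80   # 80 mg the day before surgery
        return 40       # 40 mg on the day of surgery and daily thereafter
    # Prior statin users received the study drug only on days they would not
    # have received their usual statin under standard care.
    if day == 0:
        return 80       # 80 mg on the day of surgery
    if day == 1:
        return 40       # 40 mg the day after surgery
    return 0            # otherwise, the usual statin regimen applies

print([atorvastatin_dose(True, d) for d in (-1, 0, 1, 2)])   # [80, 40, 40, 40]
print([atorvastatin_dose(False, d) for d in (-1, 0, 1, 2)])  # [0, 80, 40, 0]
```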

Randomization was stratified by prior statin use, chronic kidney disease, and history of diabetes. The 199 patients naive to statins and the 416 already on a statin were similar in demographic and health characteristics. Median age was 67 years; 188 participants (30.6%) were women, and 202 (32.8%) had diabetes.

The primary outcome measure was a diagnosis of AKI, defined as an increase of at least 0.3 mg/dL in serum creatinine or the initiation of renal replacement therapy within 48 hours of surgery. Baseline serum creatinine was measured no more than 7 days prior to surgery.
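
Expressed as a simple check, the trial’s AKI definition looks like the sketch below; the function and variable names are illustrative, and the 48-hour window is assumed to be handled by whoever supplies the creatinine values.

```python
def meets_aki_criterion(baseline_creatinine_mg_dl: float,
                        postop_creatinine_mg_dl: float,
                        started_renal_replacement: bool) -> bool:
    """AKI as defined in this trial: a rise in serum creatinine of at least
    0.3 mg/dL from baseline, or initiation of renal replacement therapy,
    within 48 hours of surgery. The caller is assumed to supply only values
    measured within that 48-hour window."""
    creatinine_rise = postop_creatinine_mg_dl - baseline_creatinine_mg_dl
    return started_renal_replacement or creatinine_rise >= 0.3

# Example: baseline 1.0 mg/dL, peak 1.4 mg/dL within 48 hours, no dialysis -> AKI.
print(meets_aki_criterion(1.0, 1.4, False))  # True
```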

AKI occurred in 64 of 308 patients (20.8%) in the atorvastatin group, and in 60 of 307 patients (19.5%) receiving placebo overall (P = .75). For those naive to statins, 21.6% of the atorvastatin group and 13.4% of the placebo group developed AKI (P = .15). Overall, 179 enrolled patients had CKD, and the incidence of AKI did not significantly differ in the atorvastatin and the placebo arms of this subgroup.

The subpopulation of participants with CKD who were statin naive (n = 36), however, saw an increased incidence of AKI with atorvastatin compared with placebo. AKI occurred in 9 of 17 patients (52.9%) given atorvastatin and in 3 of 19 (15.8%) given placebo (RR, 3.35 [95% CI, 1.12-10.05]; P = .03). “It should be noted that the number of patients in this subgroup was particularly small, leading to a wide confidence interval and an increased chance of type 1 error,” said Dr. Billings.
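
The headline proportions and the subgroup risk ratio can be reproduced directly from the counts reported above, as in the short sketch below; the published confidence interval and P values come from the trial’s own analysis and are not recomputed here.

```python
# Reproduce the simple arithmetic behind the reported results from the raw counts.
aki_atorvastatin, n_atorvastatin = 64, 308
aki_placebo, n_placebo = 60, 307
print(round(100 * aki_atorvastatin / n_atorvastatin, 1))  # 20.8 (% with AKI, atorvastatin arm)
print(round(100 * aki_placebo / n_placebo, 1))            # 19.5 (% with AKI, placebo arm)

# Statin-naive patients with CKD: 9 of 17 vs. 3 of 19.
risk_atorvastatin = 9 / 17
risk_placebo = 3 / 19
relative_risk = risk_atorvastatin / risk_placebo
print(round(risk_atorvastatin * 100, 1),   # 52.9
      round(risk_placebo * 100, 1),        # 15.8
      round(relative_risk, 2))             # 3.35
# The 95% CI and P values reported in the article come from the trial's analysis
# and are not recalculated in this sketch.
```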

Secondary outcome measures were maximum increase in creatinine concentration from baseline through postop day 2, delirium in the ICU, degree of myocardial injury, and incidence of postoperative pneumonia, atrial fibrillation, or stroke. Perioperative atorvastatin administration did not affect any of these endpoints.

The safety analysis showed no indications of increased risk of skeletal muscle or liver injury with perioperative atorvastatin use.

In the real world, “Most patients presenting for cardiac surgery … are already taking statins, and in the current study there was little evidence that continuation or withdrawal from statin treatment on the day of surgery and postoperative day 1 affects AKI,” wrote Dr. Billings and his coauthors.

Study limitations included its single-center design, and the use of AKI criteria that may not be sensitive to late-developing AKI. Also, for enrolled patients who were already on statins, statin exposure was not reduced in comparison with usual care.

After the presentation, Dr. Billings reported that the researchers also collected information about other biomarkers that may signal AKI, including IgM. He and his collaborators plan later publication of those data after a full analysis.

The National Institutes of Health and the Vanderbilt University Medical Center department of anesthesiology funded the study. Coauthor Dr. Brown reported receiving grants from Shire Pharmaceuticals and New Haven Pharmaceuticals, and personal fees from Novartis Pharmaceuticals and Alnylam Pharmaceuticals. The other authors reported no conflicts of interest.

[email protected]

On Twitter @karioakes

References

Meeting/Event
Author and Disclosure Information

Publications
Topics
Sections
Author and Disclosure Information

Author and Disclosure Information

Meeting/Event
Meeting/Event

ORLANDO – Statins administered perioperatively offered no protection against acute kidney injury following cardiac surgery, according to new results of a 5-year randomized clinical trial.

The findings held true whether or not patients were naive to statins; serum creatinine levels actually increased significantly more for statin-naive patients given atorvastatin than those given placebo.

The study was stopped early for patients naive to statins because increased acute kidney injury was seen in those patients who had chronic kidney disease (eGFR less than 60 mL/min/1.73 m2), and was subsequently stopped early for futility for all patients.

Dr. Frederic Tremaine Billings

“De novo initiation of daily perioperative atorvastatin treatment did not reduce the incidence of AKI or reduce the increase in serum creatinine concentration associated with cardiac surgery,” wrote Dr. Frederic T. Billings IV, professor of medicine at Vanderbilt University, Nashville, Tenn., and his collaborators. The findings (JAMA 2016 Feb 23. doi: 10.1001/jama.2016.0548) were published concurrently with his presentation at the Critical Care Congress, sponsored by the Society for Critical Care Medicine.

In what Dr. Phil B. Fontanarosa, executive editor of JAMA and comoderator of the late-breaking trials session at the meeting, described as “really an elegant clinical trial,” Dr. Billings and his collaborators enrolled 615 patients over 5 years at Vanderbilt University Medical Center.

Patients undergoing elective coronary artery bypass grafting, valvular heart surgery, or ascending aortic surgery were eligible. Patients were excluded if they had prior statin intolerance, acute coronary syndrome, or liver dysfunction; were taking potent CYP3A4 inhibitors or cyclosporine; were receiving renal replacement therapy or had a kidney transplant; or were pregnant.

Both patients currently on a statin and patients naive to statins were recruited. Statin-naive patients received 80 mg atorvastatin the day before surgery, and then 40 mg of atorvastatin on the day of surgery and daily following surgery, or a matched placebo regimen.

Patients who were already on a statin received the study drug only on days that they would not have received a statin if treated according to the current standard of care. It was deemed unethical to allow those patients to receive placebo during and after surgery, since observational studies suggested that doing so might increase their potential for AKI.

For those patients already on a statin, this meant that they stayed on their usual regimen until the day of surgery, and then were randomized to receive either 80 mg of atorvastatin on the day of surgery and 40 mg of atorvastatin the day after surgery, or a matching placebo regimen.

For both groups, the study drug was given at least 3 hours before surgery on the day of surgery.

Randomization was stratified for prior statin use, for chronic kidney disease, and by history of diabetes. The 199 patients naive to statins and the 416 already on a statin were similar in demographic and health characteristics. Median age was 67 years, 188 (30.6%) were women; 202 participants (32.8%) had diabetes.

The primary outcome measure was diagnosis of AKI, defined as an increase of 0.3 mg/dL in serum creatinine, or beginning renal replacement therapy within 48 hours of surgery. Baseline serum creatinine was measured no more than 7 days prior to surgery.

AKI occurred in 64 of 308 patients (20.8%) in the atorvastatin group, and in 60 of 307 patients (19.5%) receiving placebo overall (P = .75). For those naive to statins, 21.6% of the atorvastatin group and 13.4% of the placebo group developed AKI (P = .15). Overall, 179 enrolled patients had CKD, and the incidence of AKI did not significantly differ in the atorvastatin and the placebo arms of this subgroup.

The subpopulation of participants with CKD who were statin naive (n = 36), however, saw an increased incidence of AKI with atorvastatin compared to placebo. AKI occurred in 9 of 17 patients (52.9%) given atorvastatin, and in 3 of 19 (15.8%) given placebo group (RR, 3.35[95% confidence interval 0.12 to 10.05]; P = .03). “It should be noted that the number of patients in this subgroup was particularly small, leading to a wide confidence interval and an increased chance of type 1 error,” said Dr. Billings.

Secondary outcome measures were maximum increase in creatinine concentration from baseline through postop day 2, delirium in the ICU, degree of myocardial injury, and incidence of postoperative pneumonia, atrial fibrillation, or stroke. Perioperative atorvastatin administration did not affect any of these endpoints.

The safety analysis showed no indications of increased risk of skeletal muscle or liver injury with perioperative atorvastatin use.

In the real world, “Most patients presenting for cardiac surgery … are already taking statins, and in the current study there was little evidence that continuation or withdrawal from statin treatment on the day of surgery and postoperative day 1 affects AKI,” wrote Dr. Billings and his coauthors.

 

 

Study limitations included its single-center design, and the use of AKI criteria that may not be sensitive to late-developing AKI. Also, for enrolled patients who were already on statins, statin exposure was not reduced in comparison with usual care.

After the presentation, Dr. Billings reported that the researchers also collected information about other biomarkers that may signal AKI, including IgM. He and his collaborators plan later publication of those data after a full analysis.

The National Institutes of Health and the Vanderbilt University Medical Center department of anesthesiology funded the study. Dr. Brown reported receiving grants from Shire Pharmaceuticals and New Haven Pharmaceuticals, and personal fees from Novartis Pharmaceuticals and Alnylam Pharmaceuticals. The other authors reported no conflicts of interest.

[email protected]

On Twitter @karioakes

ORLANDO – Statins administered perioperatively offered no protection against acute kidney injury following cardiac surgery, according to new results of a 5-year randomized clinical trial.

The findings held true whether or not patients were naive to statins; serum creatinine levels actually increased significantly more for statin-naive patients given atorvastatin than those given placebo.

The study was stopped early for patients naive to statins because increased acute kidney injury was seen in those patients who had chronic kidney disease (eGFR less than 60 mL/min/1.73 m2), and was subsequently stopped early for futility for all patients.

Dr. Frederic Tremaine Billings

“De novo initiation of daily perioperative atorvastatin treatment did not reduce the incidence of AKI or reduce the increase in serum creatinine concentration associated with cardiac surgery,” wrote Dr. Frederic T. Billings IV, professor of medicine at Vanderbilt University, Nashville, Tenn., and his collaborators. The findings (JAMA 2016 Feb 23. doi: 10.1001/jama.2016.0548) were published concurrently with his presentation at the Critical Care Congress, sponsored by the Society for Critical Care Medicine.

In what Dr. Phil B. Fontanarosa, executive editor of JAMA and comoderator of the late-breaking trials session at the meeting, described as “really an elegant clinical trial,” Dr. Billings and his collaborators enrolled 615 patients over 5 years at Vanderbilt University Medical Center.

Patients undergoing elective coronary artery bypass grafting, valvular heart surgery, or ascending aortic surgery were eligible. Patients were excluded if they had prior statin intolerance, acute coronary syndrome, or liver dysfunction; were taking potent CYP3A4 inhibitors or cyclosporine; were receiving renal replacement therapy or had a kidney transplant; or were pregnant.

Both patients currently on a statin and patients naive to statins were recruited. Statin-naive patients received 80 mg atorvastatin the day before surgery, and then 40 mg of atorvastatin on the day of surgery and daily following surgery, or a matched placebo regimen.

Patients who were already on a statin received the study drug only on days when they would not have received a statin under the current standard of care. It was deemed unethical to withhold statin therapy from these patients beyond those days, since observational studies suggested that statin withdrawal might increase their risk of AKI.

For those patients already on a statin, this meant that they stayed on their usual regimen until the day of surgery, and then were randomized to receive either 80 mg of atorvastatin on the day of surgery and 40 mg of atorvastatin the day after surgery, or a matching placebo regimen.

For both groups, the study drug was given at least 3 hours before surgery on the day of surgery.

Randomization was stratified by prior statin use, chronic kidney disease, and history of diabetes. The 199 patients naive to statins and the 416 already on a statin were similar in demographic and health characteristics. Median age was 67 years; 188 participants (30.6%) were women, and 202 (32.8%) had diabetes.

The primary outcome measure was a diagnosis of AKI, defined as an increase of at least 0.3 mg/dL in serum creatinine, or initiation of renal replacement therapy, within 48 hours of surgery. Baseline serum creatinine was measured no more than 7 days prior to surgery.
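
For illustration only, here is a minimal sketch in Python, not the investigators' code, of how the stated primary-outcome criterion can be expressed; the function name and variables are hypothetical:

    # Illustrative sketch of the primary-outcome AKI definition described above:
    # a rise of at least 0.3 mg/dL in serum creatinine from baseline, or initiation
    # of renal replacement therapy, within 48 hours of surgery.
    def meets_aki_definition(baseline_creatinine_mg_dl: float,
                             postop_creatinine_mg_dl: float,
                             started_renal_replacement: bool) -> bool:
        rise = postop_creatinine_mg_dl - baseline_creatinine_mg_dl
        return started_renal_replacement or rise >= 0.3

    # Example: a rise from 1.0 to 1.4 mg/dL within 48 hours would meet the definition.
    print(meets_aki_definition(1.0, 1.4, False))  # True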

AKI occurred in 64 of 308 patients (20.8%) in the atorvastatin group and in 60 of 307 patients (19.5%) receiving placebo overall (P = .75). Among those naive to statins, 21.6% of the atorvastatin group and 13.4% of the placebo group developed AKI (P = .15). Overall, 179 enrolled patients had CKD, and the incidence of AKI did not differ significantly between the atorvastatin and placebo arms of this subgroup.

The subpopulation of participants with CKD who were statin naive (n = 36), however, saw an increased incidence of AKI with atorvastatin compared with placebo. AKI occurred in 9 of 17 patients (52.9%) given atorvastatin and in 3 of 19 (15.8%) given placebo (relative risk, 3.35; 95% confidence interval, 1.12-10.05; P = .03). “It should be noted that the number of patients in this subgroup was particularly small, leading to a wide confidence interval and an increased chance of type 1 error,” said Dr. Billings.
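
As a rough check of the arithmetic, and for illustration only, the subgroup relative risk can be recomputed from the counts above; the confidence-interval method below (the common log-relative-risk approximation) is an assumption, since the trial's exact method is not stated here, so the resulting interval is close to, but not identical to, the published one:

    # Back-of-the-envelope recomputation of the statin-naive CKD subgroup result.
    # Counts are taken from the article; the CI uses the standard log-RR (Katz)
    # approximation, which is an assumption about the method.
    import math

    aki_statin, n_statin = 9, 17      # atorvastatin arm
    aki_placebo, n_placebo = 3, 19    # placebo arm

    rr = (aki_statin / n_statin) / (aki_placebo / n_placebo)   # about 3.35
    se_log_rr = math.sqrt(1/aki_statin - 1/n_statin + 1/aki_placebo - 1/n_placebo)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f}, approximate 95% CI {lo:.2f} to {hi:.2f}")
    # Prints roughly: RR = 3.35, approximate 95% CI 1.08 to 10.39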

Secondary outcome measures were maximum increase in creatinine concentration from baseline through postop day 2, delirium in the ICU, degree of myocardial injury, and incidence of postoperative pneumonia, atrial fibrillation, or stroke. Perioperative atorvastatin administration did not affect any of these endpoints.

The safety analysis showed no indications of increased risk of skeletal muscle or liver injury with perioperative atorvastatin use.

In the real world, “Most patients presenting for cardiac surgery … are already taking statins, and in the current study there was little evidence that continuation or withdrawal from statin treatment on the day of surgery and postoperative day 1 affects AKI,” wrote Dr. Billings and his coauthors.

Study limitations included its single-center design, and the use of AKI criteria that may not be sensitive to late-developing AKI. Also, for enrolled patients who were already on statins, statin exposure was not reduced in comparison with usual care.

After the presentation, Dr. Billings reported that the researchers also collected information about other biomarkers that may signal AKI, including IgM. He and his collaborators plan later publication of those data after a full analysis.

The National Institutes of Health and the Vanderbilt University Medical Center department of anesthesiology funded the study. Dr. Brown reported receiving grants from Shire Pharmaceuticals and New Haven Pharmaceuticals, and personal fees from Novartis Pharmaceuticals and Alnylam Pharmaceuticals. The other authors reported no conflicts of interest.

[email protected]

On Twitter @karioakes

AT THE CRITICAL CARE CONGRESS

Vitals

Key clinical point: Perioperative atorvastatin did not protect against acute kidney injury after cardiac surgery.

Major finding: Acute kidney injury occurred in 64 of 308 patients (20.8%) in the atorvastatin group, and in 60 of 307 patients (19.5%) receiving placebo overall, a nonsignificant difference (P = .75).

Data source: Randomized, double-blinded, placebo-controlled trial of 615 adults who underwent cardiac surgery.

Disclosures: The National Institutes of Health and the Vanderbilt University Medical Center department of anesthesiology funded the study. Dr. Brown reported receiving grants from Shire Pharmaceuticals and New Haven Pharmaceuticals, and personal fees from Novartis Pharmaceuticals and Alnylam Pharmaceuticals. The other authors reported no conflicts of interest.

Fingolimod improved gait impairment in small study

Article Type
Changed
Wed, 01/16/2019 - 15:45
Display Headline
Fingolimod improved gait impairment in small study

NEW ORLEANS – Treatment with fingolimod improved gait impairment in treatment-naive multiple sclerosis (MS) patients and those on a previous first-line therapy in a small, single-center study.

“Fingolimod [Gilenya] is the first disease-modifying treatment shown to improve gait impairment in MS,” commented Dr. Soledad Pérez-Sánchez of Virgen Macarena University Hospital, Seville, Spain, who presented the study as a poster at the meeting held by the Americas Committee for Treatment and Research in Multiple Sclerosis.

©solitude72/iStockphoto

The investigators also found that patients who had been treated unsuccessfully with natalizumab (Tysabri) prior to fingolimod did not improve their gait during the course of the 6-month study. In Spain, both natalizumab and fingolimod are second-line therapies, except in cases of aggressive disease onset, according to the investigators.

Of 36 patients in the study, 24 were treatment-naïve/first-line patients (17 women and 7 men; mean age, 38.25 years), with the remaining 12 (9 women, 3 men; mean age, 44.25 years) having been treated with natalizumab. The mean duration of MS was 11.2 years in the naïve/first-line group and 17.9 years in the patients on natalizumab prior to fingolimod. The mean Expanded Disability Status Scale score was similar in the two groups, at 3.79 and 3.38, respectively.

The investigators measured gait profile changes during fingolimod treatment with the GAITRite electronic system, an instrumented walkway equipped with sensors that record the timing and position of each footfall. The measurement parameters included velocity, ambulation time, and functional ambulation profile (FAP), the time to move unassisted through five common environmental terrains.

All patients completed the walking test prior to treatment and 3 months after starting treatment. At 6 months, 20 naïve/first-line patients and 11 patients previously on natalizumab completed the test. For each group of patients, the results prior to treatment and at 3 months were statistically similar. Significant differences were evident for naïve/first-line patients, however, between the 3- and 6-month measures of velocity (89.10±31.03 to 100.70±23.75 cm/s; P = .01) and FAP (82.81±16.93 to 91.95±9.02 seconds; P = .01). Comparisons of the pretreatment values with those at 6 months trended toward significance for velocity, ambulation time, and FAP (P = .096, .077, and .065, respectively).
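
The poster does not state which statistical test was used; as a hedged sketch of how a paired before-and-after gait comparison of this kind is commonly analyzed, the Python example below runs a paired t test on made-up placeholder velocities, not the study data:

    # Hypothetical example of a paired 3-month vs 6-month velocity comparison.
    # The values are placeholders for illustration, not the trial's measurements.
    import numpy as np
    from scipy.stats import ttest_rel

    velocity_3mo = np.array([85.0, 92.4, 70.1, 110.3, 95.6, 88.2])   # cm/s
    velocity_6mo = np.array([96.2, 101.0, 78.5, 118.7, 104.9, 99.1])  # cm/s

    t_stat, p_value = ttest_rel(velocity_6mo, velocity_3mo)
    print(f"mean change = {np.mean(velocity_6mo - velocity_3mo):.1f} cm/s, p = {p_value:.3f}")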

Patients who had been treated with natalizumab did not display appreciable changes in velocity, ambulation time, or FAP.

“Our study shows that fingolimod improves gait impairment in naïve patients and those switched from first-line therapy. Our data are consistent with other clinical measures published so far which have pointed to better outcomes with fingolimod in naïve/first-line patients than in natalizumab-switched patients,” Dr. Pérez-Sánchez and her colleagues said.

The single-center study design and small number of patients limit any conclusions on the use of fingolimod as a gait-improving therapy in MS until further studies are completed, according to the researchers.

Funding was provided by Novartis. Dr. Pérez-Sánchez had no disclosures.

AT ACTRIMS FORUM 2016

Vitals

Key clinical point: Fingolimod shows signs that it may improve gait in MS patients who are naive to treatment or have only received first-line therapy.

Major finding: Significant differences were evident for naïve/first-line patients between the 3- and 6-month measures of velocity (89.10±31.03 to 100.70±23.75 cm/s; P = .01) and functional ambulation profile (82.81±16.93 to 91.95±9.02 seconds; P = .01).

Data source: Single-center study of 36 patients.

Disclosures: Funding was provided by Novartis. Dr. Pérez-Sánchez had no disclosures.

DNA delivery vehicles may circumvent drug resistance in AML

Article Type
Changed
Fri, 02/26/2016 - 08:00
Display Headline
DNA delivery vehicles may circumvent drug resistance in AML

Drug release in a cancer cell

Image courtesy of PNAS

DNA origami nanostructures may be used to overcome drug resistance in acute myeloid leukemia (AML), according to preclinical research published in the journal Small.

Researchers found they could create these nanostructures in 10 minutes and load them with the anthracycline daunorubicin.

When the team introduced the structures to daunorubicin-resistant AML cells, the drug delivery vehicles entered the cells via endocytosis.

This allowed the drug to bypass defenses in the cell membrane that are effective against the free drug.

Once the nanostructures broke down, daunorubicin flooded the cells and killed them off.

Other research groups have used this delivery technique to overcome drug resistance in solid tumors, but this is the first time researchers have shown the same technique works on drug-resistant leukemia cells.

To create the DNA origami nanostructures, the researchers used the genome of a common bacteriophage and synthetic strands that were designed to fold up the bacteriophage DNA.

Although the folded-up shape performs a function, the DNA itself does not, explained Patrick Halley, a graduate student at The Ohio State University in Columbus.

“[T]he DNA capsule doesn’t do anything except hold a shape,” Halley said. “It’s just a static, rigid structure that carries things. It doesn’t encode any proteins or do anything else that we normally think of DNA as doing.”

The researchers tested the DNA origami nanostructures in AML cell lines that had developed resistance to daunorubicin. When molecules of daunorubicin enter these cells, the cells recognize the drug molecules and eject them through openings in the cell membrane.

“Cancer cells have novel ways of resisting drugs, like these ‘pumps,’ and the exciting part of packaging the drug this way is that we can circumvent those defenses so that the drug accumulates in the cancer cell and causes it to die,” said John Byrd, MD, of The Ohio State University.

“Potentially, we can also tailor these structures to make them deliver drugs selectively to cancer cells and not to other parts of the body where they can cause side effects.”

In tests, the resistant AML cells effectively absorbed molecules of daunorubicin when they were hidden inside the rod-shaped nanostructures.

The researchers tracked the nanostructures inside the cells using fluorescent tags. Each structure measures about 15 nanometers wide and 100 nanometers long, and each has 4 hollow, open-ended interior compartments.

Study author Christopher Lucas, PhD, of The Ohio State University, said the design of the nanostructures maximizes the surface area available to carry the drug.

“The way daunorubicin works is it tucks into the cancer cell’s DNA and prevents it from replicating,” Dr Lucas said. “So we designed a capsule structure that would have lots of accessible DNA base-pairs for it to tuck into. When the capsule breaks down, the drug molecules are freed to flood the cell.”

The researchers said they designed the nanostructures to be strong and stable so they wouldn’t fully disintegrate and release the bulk of the drug until it was too late for the cells to eject them.

And that’s what the team observed with a fluorescence microscope. The cells drew the nanostructures into the organelles that would normally digest them (if they were food).

When the nanostructures broke down, the drug flooded the cells and caused them to disintegrate. Most cells died within the first 15 hours after consuming the nanostructures.

“DNA origami nanostructures have a lot of potential for drug delivery, not just for making effective drug delivery vehicles, but enabling new ways to study drug delivery,” said Carlos Castro, PhD, of The Ohio State University.

“For instance, we can vary the shape or mechanical stiffness of a structure very precisely and see how that affects entry into cells.”

Dr Castro said he hopes to create a streamlined and economically viable process for building DNA origami nanostructures as part of a modular drug delivery system.

Dr Byrd said the technique should work on almost any form of drug-resistant cancer if further research shows it can be translated to animal models.
