Managing psychiatric illness during pregnancy and breastfeeding: Tools for decision making


Increasingly, women with psychiatric illness are undergoing pharmacologic treatment during pregnancy. In the United States, an estimated 8% of pregnant women are prescribed antidepressants, and the number of such cases has risen over the past 15 years.1 Women with a psychiatric diagnosis were once instructed either to discontinue all medication immediately on learning they were pregnant, or to forgo motherhood because their illness might have a negative effect on a child or because avoiding medication during pregnancy might lead to a relapse.

Fortunately, women with depression, anxiety, bipolar disorder, or schizophrenia no longer are being told that they cannot become mothers. For many women, however, stopping medication is not an option. Furthermore, psychiatric illness sometimes is diagnosed initially during pregnancy and requires treatment.

Illustration: Kimberly Martens for OBG Management
Many women with psychiatric illness no longer have to choose between mental health and starting a family.

Pregnant women and their physicians need accurate information about when to taper off medication, when to start or continue, and which medications are safest. Even for clinicians with a solid knowledge base, counseling a woman who needs or may need psychotropic medication during pregnancy and breastfeeding is a daunting task. Some clinicians still recommend no drug treatment as the safest and best option, given the potential risks to the fetus.

In this review we offer a methodologic approach for decision making about pharmacologic treatment during pregnancy. As the scientific literature is constantly being updated, it is imperative to have the most current information on psychotropics and to know how to individualize that information when counseling a pregnant woman and her family. Using this framework for analyzing the risks and benefits for both mother and fetus, clinicians can avoid the unanswerable question of which medication is the “safest.”

A patient’s mental health care provider is a useful resource for information about a woman’s mental health history and current stability, but he or she may not be an expert in, or comfortable with, recommending treatment for a pregnant patient. During pregnancy, a woman’s obstetrician often becomes the “expert” for all treatment decisions.

Psychotropic use during pregnancy: Certain risks in offspring lower than thought, recent data show

Antidepressants. Previous studies may have overestimated the association between prenatal use of antidepressants and attention deficit/hyperactivity disorder (ADHD) in children because they did not control for shared family factors, according to investigators who say that their recent study findings raise the possibility that "confounding by indication" might partially explain the observed association.1

In a population-based cohort study in Hong Kong, Man and colleagues analyzed the records of 190,618 maternal-child pairs.1 A total of 1,252 children were exposed to maternal antidepressant use during pregnancy. Medications included selective serotonin reuptake inhibitors (SSRIs), non-SSRIs, and antipsychotics as monotherapy or in various combination regimens. Overall, 5,659 of the cohort children (3%) were diagnosed with or received treatment for ADHD.

When gestational medication users were compared with nongestational users, the crude hazard ratio (HR) for the association between antidepressant use during pregnancy and ADHD was 2.26 (P<.01). After adjustment for potential confounding factors (such as maternal psychiatric disorders and use of other psychotropic drugs), the HR fell to 1.39 (95% confidence interval [CI], 1.07-1.82; P = .01). Children of mothers with psychiatric disorders had a higher risk of ADHD than did children of mothers without psychiatric disorders (HR, 1.84; 95% CI, 1.54-2.18; P<.01), even if the mothers had never used antidepressants.

While acknowledging the potential for type 2 error in the study analysis, the investigators proposed that the results "further strengthen our hypothesis that confounding by indication may play a major role in the observed positive association between gestational use of antidepressants and ADHD in offspring."
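
To make “confounding by indication” concrete, the brief simulation below is a minimal sketch with invented rates (not the study’s data): a maternal condition that drives both prescribing and offspring ADHD risk produces an elevated crude risk ratio even though the drug has no effect in the model, and stratifying by the condition pulls the estimate back toward 1.

    import random

    random.seed(0)
    N = 200_000
    rows = []
    for _ in range(N):
        illness = random.random() < 0.05        # maternal psychiatric disorder (assumed 5%)
        p_drug = 0.30 if illness else 0.002     # illness drives prescribing (the "indication")
        exposed = random.random() < p_drug
        p_adhd = 0.06 if illness else 0.03      # illness itself raises offspring ADHD risk
        adhd = random.random() < p_adhd         # the drug has no effect in this model
        rows.append((illness, exposed, adhd))

    def risk(group):
        return sum(a for _, _, a in group) / len(group)

    exposed_rows = [r for r in rows if r[1]]
    unexposed_rows = [r for r in rows if not r[1]]
    print("crude risk ratio:", round(risk(exposed_rows) / risk(unexposed_rows), 2))    # well above 1

    for has_illness in (True, False):
        e = [r for r in exposed_rows if r[0] == has_illness]
        u = [r for r in unexposed_rows if r[0] == has_illness]
        print("stratified, illness =", has_illness, ":", round(risk(e) / risk(u), 2))  # close to 1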

Lithium. Similarly, investigators of another recently published study found that the magnitude of the association between prenatal lithium use and increased risk of cardiac malformations in infants was smaller than previously shown.2 This finding may be important clinically because lithium is a first-line treatment for many US women of reproductive age with bipolar disorder.

Most earlier data were derived from a registry database, case reports, and small studies that often had conflicting results. However, Patorno and colleagues conducted a large retrospective cohort study that involved data on 1,325,563 pregnancies in women enrolled in Medicaid.2 Exposure to lithium was defined as at least 1 filled prescription during the first trimester, and the primary reference group included women with no lithium or lamotrigine (another mood stabilizer not associated with congenital malformations) dispensing during the 3 months before the start of pregnancy or during the first trimester.

A total of 663 pregnancies (0.05%) were exposed to lithium and 1,945 (0.15%) were exposed to lamotrigine during the first trimester. The adjusted risk ratios for cardiac malformations among infants exposed to lithium were 1.65 (95% CI, 1.02-2.68) as compared with nonexposed infants and 2.25 (95% CI, 1.17-4.34) as compared with lamotrigine-exposed infants. Notably, all right ventricular outflow tract obstruction defects identified in the infants exposed to lithium occurred with a daily dose of more than 600 mg.

Although the study results suggest an increased risk of cardiac malformations associated with lithium use in early pregnancy (approximately 1 additional case per 100 live births), the magnitude of risk is much lower than originally proposed based on early lithium registry data.
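
As a rough check on that “approximately 1 additional case per 100 live births” figure, the arithmetic below applies the adjusted risk ratio to an assumed baseline prevalence of cardiac malformations of about 1.2 per 100 unexposed live births; the baseline is a round number chosen for illustration, not a value reported in this article.

    baseline_per_100 = 1.2     # assumed baseline prevalence of cardiac malformations (illustrative)
    adjusted_rr = 1.65         # adjusted risk ratio vs nonexposed infants, from the study
    excess = baseline_per_100 * adjusted_rr - baseline_per_100
    print(f"absolute excess: about {excess:.1f} per 100 live births")   # ~0.8, on the order of 1 per 100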

-- Kathy Christie, Senior Editor

References

  1. Man KKC, Chan EW, Ip P, et al. Prenatal antidepressant use and risk of attention-deficit/hyperactivity disorder in offspring: population based cohort study. BMJ. 2017;357:j2350.
  2. Patorno E, Huybrechts KF, Bateman BT, et al. Lithium use in pregnancy and risk of cardiac malformations. N Engl J Med. 2017;376(23):2245–2254.

Analyze risks and benefits of medication versus no medication

The US Food and Drug Administration (FDA) has not approved any psychotropic medication for use during pregnancy. While a clinical study would provide more scientifically rigorous safety data, conducting a double-blinded, placebo-controlled trial in pregnant women with a psychiatric disorder is unethical. Thus, the literature consists mostly of reports on case series, retrospective chart reviews, prospective naturalistic studies, and analyses of large registry databases. Each has benefits and limitations. It is important to understand the limitations when making treatment decisions.

In 1979, the FDA developed a 5-letter system (A, B, C, D, X) for classifying the relative safety of medications used during pregnancy.2 Many clinicians and pregnant women relied on this system to decide which medications were safe. Unfortunately, the information in the system was inadequate for making informed decisions. For example, although a class B medication might have appeared safer than one in class C, the studies of risk in humans might not have been adequate to permit comparisons. Drug safety classifications were seldom changed, despite the availability of additional data.

In June 2015, the FDA changed the requirements for the Pregnancy and Lactation subsections of the labeling for human prescription drugs and biologic products. Drug manufacturers must now include in each subsection a risk summary, clinical considerations supporting patient care decisions and counseling, and detailed data. These subsections provide information on available human and animal studies, known or potential maternal or fetal adverse reactions, and dose adjustments needed during pregnancy and the postpartum period. In addition, the FDA added a subsection: Females and Males of Reproductive Potential.3

These changes acknowledge there is no list of “safe” medications. The safest medication generally is the one that works for a particular patient at the lowest effective dose. As each woman’s history of illness and effective treatment is different, the best medication may differ as well, even among women with the same illness. Therefore, medication should be individualized to the patient. A risk–benefit analysis comparing psychotropic medication treatment with no medication treatment must be performed for each patient according to her personal history and the best available data.

What is the risk of untreated illness during pregnancy?

During pregnancy, women are treated for many medical disorders, including psychiatric illness. One general guideline is that, if a pregnant woman does not need a medication—whether it be for an allergy, hypertension, or another disorder—she should not take it. Conversely, if a medication is required for a patient’s well-being, her physician should continue it or switch to a safer one. This general guideline is the same for women with depression, anxiety, or a psychotic disorder.

Managing hypertension during pregnancy is an example of choosing treatment when the risk of the illness to the mother and the infant outweighs the likely small risk associated with taking a medication. Blood pressure is monitored, and, when it reaches a threshold, an antihypertensive is started promptly to avoid morbidity and mortality.

Psychiatric illness carries risks for both mother and fetus as well, but no data show a clear threshold for initiating pharmacologic treatment. Therefore, in prescribing medication the most important steps are to take a complete history and perform a thorough evaluation. Important information includes the number and severity of previous episodes, prior history of hospitalization or suicidal thoughts or attempts, and any history of psychotic or manic episodes.

Whether to continue or discontinue medication often is decided after asking about previous occasions when a medication was stopped. A patient who in the past stayed well for several years after stopping a medication may be able to taper off a medication and conceive during a window of wellness. Some women who have experienced only one episode of illness and have been stable for at least a year may be able to taper off a medication before conceiving (TABLE 1).

In the risk–benefit analysis, assess the need for pharmacologic treatment by considering the risk that untreated illness poses for both mother and fetus, the benefits of treatment for both, and the risk of medication exposure for the fetus.4

Mother: Risk of untreated illness versus benefit of treatment

A complete history and a current symptom evaluation are needed to assess the risk that nonpharmacologic treatment poses for the mother. Women with functional impairment, including inability to work, to perform activities of daily living, or to take care of other children, likely require treatment. Studies have found that women who discontinue treatment for a psychiatric illness around the time of conception are likely to experience a recurrence of illness during pregnancy, often in the first trimester, and must restart medication.5,6 For some diagnoses, particularly bipolar disorder, symptoms during a relapse can be more severe and more difficult to treat, and they carry a risk for both mother and fetus.7 A longitudinal study of pregnant women who stopped medication for bipolar disorder found a 71% rate of relapse.7 In cases in which there is a history of hospitalization, suicide attempt, or psychosis, discontinuing treatment is not an option; instead, the physician must determine which medication is safest for the particular patient.


Related article:
Does PTSD during pregnancy increase the likelihood of preterm birth?

Fetus: Risk of untreated illness versus benefit of treatment

Mothers with untreated psychiatric illness are at higher risk for poor prenatal care, substance abuse, and inadequate nutrition, all of which increase the risk of negative obstetric and neonatal outcomes.8 Evidence indicates that untreated maternal depression increases the risk of preterm delivery and low birth weight.9 Children born to mothers with depression have more behavioral problems, more psychiatric illness, more visits to pediatricians, lower IQ scores, and attachment issues.10 Some of the long-term negative effects of intrauterine stress, which include hypertension, coronary heart disease, and autoimmune disorders, persist into adulthood.11

Fetus: Risk of medication exposure

With any pharmacologic treatment, the timing of fetal exposure affects resultant risks and therefore must be considered in the management plan.

Before conception. Is there any effect on ovulation or fertilization?

Implantation. Does the exposure impair the blastocyst’s ability to implant in the uterine lining?

First trimester. This is the period of organogenesis. Regardless of drug exposure, there is a 2% to 4% baseline risk of a major malformation during any pregnancy. The risk of a particular malformation must be weighed against this baseline risk.

According to limited data, selective serotonin reuptake inhibitors (SSRIs) may increase the risk of early miscarriage.12 SSRIs also have been implicated in increasing the risk of cardiovascular malformations, although the data are conflicting.13,14

Antiepileptics such as valproate and carbamazepine are used as mood stabilizers in the treatment of bipolar disorder.15 Extensive data have shown an association with teratogenicity. Pregnant women who require either of these medications also should be prescribed folic acid 4 to 5 mg/day. Given the high risk of birth defects and cognitive delay, valproate no longer is recommended for women of reproductive potential.16

Lithium, one of the safest medications used in the treatment of bipolar disorder, is associated with a very small risk of Ebstein anomaly.17
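
To put such a rare defect in perspective against the 2% to 4% baseline risk of any major malformation noted above, the sketch below multiplies an assumed background rate of Ebstein anomaly (roughly 1 in 20,000 live births, a commonly cited figure used here only for illustration) by a range of hypothetical relative risks; even a several-fold relative increase leaves the absolute risk low.

    baseline = 1 / 20_000                    # assumed background rate of Ebstein anomaly (illustrative)
    for relative_risk in (1, 5, 20):         # hypothetical relative risks, not study estimates
        absolute = baseline * relative_risk
        print(f"relative risk {relative_risk:>2}x -> about 1 in {round(1 / absolute):,} live births")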

Lamotrigine is used to treat bipolar depression and appears to have a good safety profile, along with a possible small increased risk of oral clefts.18,19

Atypical antipsychotics (such as aripiprazole, olanzapine, quetiapine, and risperidone) are often used first-line in the treatment of psychotic disorders and bipolar disorder in women who are not pregnant. Although the safety data on use of these drugs during pregnancy are limited, a recent analysis of pregnant Medicaid enrollees found no increased risk of birth defects after controlling for potential confounding factors.20 Common practice is to avoid these newer agents, given their limited data and the time needed for rare malformations to emerge (detecting them requires data from many exposed pregnancies).

Second trimester. This is a period of growth and neural development. A 2006 study suggested that SSRI exposure after pregnancy week 20 increases the risk of persistent pulmonary hypertension of the newborn (PPHN).21 In 2011, however, the FDA removed the PPHN warning label for SSRIs, citing inconsistent data. Whether the PPHN risk is increased with SSRI use is unclear, but the risk is presumed to be smaller than previously suggested.22 Stopping SSRIs before week 20 puts the mother at risk for relapse during pregnancy and increases her risk of developing postpartum depression. If we follow the recommendation to prescribe medication only for women who need it most, then stopping the medication at any time during pregnancy is not an option.

Third trimester. This is a period of continued growth and lung maturation.

Delivery. Is there a potential for impairment in parturition?

Neonatal adaptation. A newborn’s main task is adapting to extrauterine life: regulating temperature and muscle tone and learning to coordinate sucking, swallowing, and breathing. Does medication exposure impair adaptation, or are signs or symptoms of withdrawal or toxicity present? The evidence that in utero SSRI exposure increases the risk of neonatal adaptation syndrome is consistent, but symptoms are mild and self-limited.23 Tapering off SSRIs before delivery currently is not recommended, as doing so increases the mother’s risk for postpartum depression and, according to one study, does not prevent symptoms of neonatal adaptation syndrome from developing.24

Behavioral teratogenicity. What are the long-term developmental outcomes for the child? Are there any differences in IQ, speech and language, or psychiatric illness? One study found an increased risk of autism with in utero exposure to sertraline, but the study had many methodologic flaws and its findings have not been replicated.25 Most studies have not found consistent differences in speech, IQ, or behavior between infants exposed and infants not exposed to antidepressants.26,27 By contrast, in utero exposure to anticonvulsants, particularly valproate, has led to significant developmental problems in children.28 The data on atypical antipsychotics are limited.


Related article:
Do antidepressants really cause autism?
 

None of the medications used to treat depression, bipolar disorder, anxiety, or schizophrenia is considered first-line or safest therapy for the pregnant woman. For any woman who is doing well on a certain medication, but particularly for a pregnant woman, there is no compelling, data-supported reason to switch to another agent. For depression, options include all of the SSRIs, with the possible exception of paroxetine (TABLE 2). The data on paroxetine conflict: in some studies, paroxetine, like the other SSRIs, was not associated with cardiovascular defects.29

One goal in treatment is to use a medication that previously was effective in the remission of symptoms and to use it at the lowest dose possible. Treating simply to maintain a low dose of drug, however, and not to effect symptom remission, exposes the fetus to both the drug and the illness. Again, the lowest effective dose is the best choice.

Treatment during breastfeeding

Women are encouraged to breastfeed because of its physical and psychological health benefits for both themselves and their babies. Many medications are compatible with breastfeeding.30 The amount of drug an infant receives through breast milk is considerably less than the amount received during the mother’s pregnancy. Breastfeeding generally is allowed if the calculated infant dose is less than 10% of the weight-adjusted maternal dose.31

The amount of drug transferred from maternal plasma into milk is highest for drugs with low protein binding and high lipid solubility.32 Drug clearance in infants must be considered as well. Renal clearance is decreased in newborns and does not reach adult levels until 5 or 6 months of age. In addition, liver metabolism is impaired in neonates and even more so in premature infants.33 In these infants, drugs that normally undergo extensive first-pass metabolism may have higher bioavailability, and this factor should be considered.
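
A minimal sketch of the weight-adjusted calculation behind the 10% rule mentioned above: the relative infant dose is the infant’s estimated intake through milk (milk concentration multiplied by an assumed milk intake of about 0.15 L/kg/day) divided by the mother’s weight-adjusted daily dose. All numeric values below are invented for illustration and are not drug-specific data.

    def relative_infant_dose(milk_conc_mg_per_l, milk_intake_l_per_kg_day,
                             maternal_dose_mg_per_day, maternal_weight_kg):
        """Infant dose via milk as a percentage of the weight-adjusted maternal dose."""
        infant_mg_per_kg_day = milk_conc_mg_per_l * milk_intake_l_per_kg_day
        maternal_mg_per_kg_day = maternal_dose_mg_per_day / maternal_weight_kg
        return 100 * infant_mg_per_kg_day / maternal_mg_per_kg_day

    # Hypothetical example: milk level 0.1 mg/L, intake 0.15 L/kg/day,
    # maternal dose 50 mg/day, maternal weight 70 kg.
    rid = relative_infant_dose(0.1, 0.15, 50, 70)
    print(f"relative infant dose: {rid:.1f}%")   # about 2.1%, below the 10% threshold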

Some clinicians recommend pumping and discarding breast milk when the drug in it is at its peak level; although the drug is not eliminated, the infant ingests less of it.34 Most women who are anxious about breastfeeding while on medication “pump and dump” until they are more comfortable nursing and the infants are doing well. Unless the mother prefers this approach, however, most physicians with expertise in reproductive mental health recommend against pumping and discarding milk.

Through breast milk, infants ingest drugs in varying amounts. The amount depends on the qualities of the medication, the timing and duration of breastfeeding, and the characteristics of the infant. Few psychotropic drugs have significant effects on breastfed infants. Even lithium, previously contraindicated, is successfully used, with infant monitoring, during breastfeeding.35 Given breastfeeding’s benefits for both mother and child, many more women on psychotropic medications are choosing to breastfeed.


Related article:
USPSTF Recommendations to Support Breastfeeding

Balance the pros and cons

Deciding to use medication during pregnancy and breastfeeding involves considering the risk of untreated illness versus the benefit of treatment for both mother and fetus, and the risk of medication exposure for the fetus. Mother and fetus are inseparable, and neither can be isolated from the other in treatment decisions. Avoiding psychotropic medication during pregnancy is not always the safest option for mother or fetus. The patient and her clinician and support system must make an informed decision that is based on the best available data and that takes into account the mother’s history of illness and effective treatment. Many women with psychiatric illness no longer have to choose between mental health and starting a family, and their babies will be healthy.

 

Share your thoughts! Send your Letter to the Editor to [email protected]. Please include your name and the city and state in which you practice.

References
  1. Andrade SE, Raebel MA, Brown J, et al. Use of antidepressant medications during pregnancy: a multisite study. Am J Obstet Gynecol. 2008;198(2):194.e1–e5.
  2. Hecht A. Drug safety labeling for doctors. FDA Consum. 1979;13(8):12–13.
  3. Ramoz LL, Patel-Shori NM. Recent changes in pregnancy and lactation labeling: retirement of risk categories. Pharmacotherapy. 2014;34(4):389–395.
  4. Yonkers KA, Wisner KL, Stewart DE, et al. The management of depression during pregnancy: a report from the American Psychiatric Association and the American College of Obstetricians and Gynecologists. Gen Hosp Psychiatry. 2009;31(5):403–413.
  5. Cohen LS, Altshuler LL, Harlow BL, et al. Relapse of major depression during pregnancy in women who maintain or discontinue antidepressant treatment. JAMA. 2006;295(5):499–507.
  6. O’Brien L, Laporte A, Koren G. Estimating the economic costs of antidepressant discontinuation during pregnancy. Can J Psychiatry. 2009;54(6):399–408.
  7. Viguera AC, Whitfield T, Baldessarini RJ, et al. Risk of recurrence in women with bipolar disorder during pregnancy: prospective study of mood stabilizer discontinuation. Am J Psychiatry. 2007;164(12):1817–1824.
  8. Bonari L, Pinto N, Ahn E, Einarson A, Steiner M, Koren G. Perinatal risks of untreated depression during pregnancy. Can J Psychiatry. 2004;49(11):726–735.
  9. Straub H, Adams M, Kim JJ, Silver RK. Antenatal depressive symptoms increase the likelihood of preterm birth. Am J Obstet Gynecol. 2012;207(4):329.e1–e4.
  10. Hayes LJ, Goodman SH, Carlson E. Maternal antenatal depression and infant disorganized attachment at 12 months. Attach Hum Dev. 2013;15(2):133–153.
  11. Field T. Prenatal depression effects on early development: a review. Infant Behav Dev. 2011;34(1):1–14.
  12. Kjaersgaard MI, Parner ET, Vestergaard M, et al. Prenatal antidepressant exposure and risk of spontaneous abortion—a population-based study. PLoS One. 2013;8(8):e72095.
  13. Nordeng H, van Gelder MM, Spigset O, Koren G, Einarson A, Eberhard-Gran M. Pregnancy outcome after exposure to antidepressants and the role of maternal depression: results from the Norwegian Mother and Child Cohort Study. J Clin Psychopharmacol. 2012;32(2):186–194.
  14. Källén BA, Otterblad Olausson P. Maternal use of selective serotonin re-uptake inhibitors in early pregnancy and infant congenital malformations. Birth Defects Res A Clin Mol Teratol. 2007;79(4):301–308.
  15. Tomson T, Battino D. Teratogenic effects of antiepileptic drugs. Lancet Neurol. 2012;11(9):803–813.
  16. Balon R, Riba M. Should women of childbearing potential be prescribed valproate? A call to action. J Clin Psychiatry. 2016;77(4):525–526.
  17. Giles JJ, Bannigan JG. Teratogenic and developmental effects of lithium. Curr Pharm Design. 2006;12(12):1531–1541.
  18. Nguyen HT, Sharma V, McIntyre RS. Teratogenesis associated with antibipolar agents. Adv Ther. 2009;26(3):281–294.
  19. Campbell E, Kennedy F, Irwin B, et al. Malformation risks of antiepileptic drug monotherapies in pregnancy. J Neurol Neurosurg Psychiatry. 2013;84(11):e2.
  20. Huybrechts KF, Hernández-Díaz S, Patorno E, et al. Antipsychotic use in pregnancy and the risk for congenital malformations. JAMA Psychiatry. 2016;73(9):938–946.
  21. Chambers CD, Hernández-Díaz S, Van Marter LJ, et al. Selective serotonin-reuptake inhibitors and risk of persistent pulmonary hypertension of the newborn. N Engl J Med. 2006;354(6):579–587.
  22. ‘t Jong GW, Einarson T, Koren G, Einarson A. Antidepressant use in pregnancy and persistent pulmonary hypertension of the newborn (PPHN): a systematic review. Reprod Toxicol. 2012;34(3):293–297.
  23. Oberlander TF, Misri S, Fitzgerald CE, Kostaras X, Rurak D, Riggs W. Pharmacologic factors associated with transient neonatal symptoms following prenatal psychotropic medication exposure. J Clin Psychiatry. 2004;65(2):230–237.
  24. Warburton W, Hertzman C, Oberlander TF. A register study of the impact of stopping third trimester selective serotonin reuptake inhibitor exposure on neonatal health. Acta Psychiatr Scand. 2010;121(6):471–479.
  25. Croen LA, Grether JK, Yoshida CK, Odouli R, Hendrick V. Antidepressant use during pregnancy and childhood autism spectrum disorders. Arch Gen Psychiatry. 2011;68(11):1104–1112.
  26. Batton B, Batton E, Weigler K, Aylward G, Batton D. In utero antidepressant exposure and neurodevelopment in preterm infants. Am J Perinatol. 2013;30(4):297–301.
  27. Austin MP, Karatas JC, Mishra P, Christl B, Kennedy D, Oei J. Infant neurodevelopment following in utero exposure to antidepressant medication. Acta Paediatr. 2013;102(11):1054–1059.
  28. Bromley RL, Mawer GE, Briggs M, et al. The prevalence of neurodevelopmental disorders in children prenatally exposed to antiepileptic drugs. J Neurol Neurosurg Psychiatry. 2013;84(6):637–643.
  29. Einarson A, Pistelli A, DeSantis M, et al. Evaluation of the risk of congenital cardiovascular defects associated with use of paroxetine during pregnancy. Am J Psychiatry. 2008;165(6):749–752.
  30. Davanzo R, Copertino M, De Cunto A, Minen F, Amaddeo A. Antidepressant drugs and breastfeeding: a review of the literature. Breastfeed Med. 2011;6(2):89–98.
  31. Ito S. Drug therapy for breast-feeding women. N Engl J Med. 2000;343(2):118–126.
  32. Suri RA, Altshuler LL, Burt VK, Hendrick VC. Managing psychiatric medications in the breast-feeding woman. Medscape Womens Health. 1998;3(1):1.
  33. Milsap RL, Jusko WJ. Pharmacokinetics in the infant. Environ Health Perspect. 1994;102(suppl 11):107–110.
  34. Newport DJ, Hostetter A, Arnold A, Stowe ZN. The treatment of postpartum depression: minimizing infant exposures. J Clin Psychiatry. 2002;63(suppl 7):31–44.
  35. Viguera AC, Newport DJ, Ritchie J, et al. Lithium in breast milk and nursing infants: clinical implications. Am J Psychiatry. 2007;164(2):342–345.
Author and Disclosure Information

Dr. Puryear is Associate Professor and Medical Director, Center for Reproductive Psychiatry, Department of Obstetrics and Gynecology and Menninger Department of Psychiatry, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women, Houston.

At the time of this writing, Dr. Hall was Assistant Professor, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women, and is currently an obstetrician-gynecologist with Georgia Perinatal Consultants, Atlanta.


Dr. Monga is Professor and Vice Chair (Clinical), Department of Obstetrics and Gynecology, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women.

Dr. Ramin is Henry and Emma Meyer Chair in Obstetrics and Gynecology, and Professor and Vice Chair for Education, Department of Obstetrics and Gynecology, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women.

Issue
OBG Management - 29(8)
Publications
Topics
Page Number
30-34, 36-38
Sections
Author and Disclosure Information

Dr. Puryear is Associate Professor and Medical Director, Center for Reproductive Psychiatry, Department of Obstetrics and Gynecology and Menninger Department of Psychiatry, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women, Houston.

At the time of this writing, Dr. Hall was Assistant Professor, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women, and is currently an obstetrician-gynecologist with Georgia Perinatal Consultants, Atlanta.


Dr. Monga is Professor and Vice Chair (Clinical), Department of Obstetrics and Gynecology, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women.

Dr. Ramin is Henry and Emma Meyer Chair in Obstetrics and Gynecology, and Professor and Vice Chair for Education, Department of Obstetrics and Gynecology, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women.

Author and Disclosure Information

Dr. Puryear is Associate Professor and Medical Director, Center for Reproductive Psychiatry, Department of Obstetrics and Gynecology and Menninger Department of Psychiatry, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women, Houston.

At the time of this writing, Dr. Hall was Assistant Professor, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women, and is currently an obstetrician-gynecologist with Georgia Perinatal Consultants, Atlanta.


Dr. Monga is Professor and Vice Chair (Clinical), Department of Obstetrics and Gynecology, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women.

Dr. Ramin is Henry and Emma Meyer Chair in Obstetrics and Gynecology, and Professor and Vice Chair for Education, Department of Obstetrics and Gynecology, Baylor College of Medicine, Texas Children’s Hospital–Pavilion for Women.

Article PDF
Article PDF

Increasingly, women with psychiatric illness are undergoing pharmacologic treatment during pregnancy. In the United States, an estimated 8% of pregnant women are prescribed antidepressants, and the number of such cases has risen over the past 15 years.1 Women with a psychiatric diagnosis were once instructed either to discontinue all medication immediately on learning they were pregnant, or to forgo motherhood because their illness might have a negative effect on a child or because avoiding medication during pregnancy might lead to a relapse.

Fortunately, women with depression, anxiety, bipolar disorder, or schizophrenia no longer are being told that they cannot become mothers. For many women, however, stopping medication is not an option. Furthermore, psychiatric illness sometimes is diagnosed initially during pregnancy and requires treatment.

Illustration: Kimberly Martens for OBG Management
Many women with psychiatric illness no longer have to choose between mental health and starting a family.

Pregnant women and their physicians need accurate information about when to taper off medication, when to start or continue, and which medications are safest. Even for clinicians with a solid knowledge base, counseling a woman who needs or may need psychotropic medication during pregnancy and breastfeeding is a daunting task. Some clinicians still recommend no drug treatment as the safest and best option, given the potential risks to the fetus.

In this review we offer a methodologic approach for decision making about pharmacologic treatment during pregnancy. As the scientific literature is constantly being updated, it is imperative to have the most current information on psychotropics and to know how to individualize that information when counseling a pregnant woman and her family. Using this framework for analyzing the risks and benefits for both mother and fetus, clinicians can avoid the unanswerable question of which medication is the “safest.”

A patient’s mental health care provider is a useful resource for information about a woman’s mental health history and current stability, but he or she may not be expert or comfortable in recommending treatment for a pregnant patient. During pregnancy, a woman’s obstetrician often becomes the “expert” for all treatment decisions.

Psychotropic use during pregnancy: Certain risks in offspring lower than thought, recent data show

Antidepressants. Previous studies may have overestimated the association between prenatal use of antidepressants and attention deficit/hyperactivity disorder (ADHD) in children because they did not control for shared family factors, according to investigators who say that their recent study findings raise the possibility that "confounding by indication" might partially explain the observed association.1

In a population-based cohort study in Hong Kong, Man and colleagues analyzed the records of 190,618 maternal-child pairs.1 A total of 1,252 children were exposed to maternal antidepressant use during pregnancy. Medications included selective serotonin reuptake inhibitors (SSRIs), non-SSRIs, and antipsychotics as monotherapy or in various combination regimens. Overall, 5,659 of the cohort children (3%) were diagnosed with or received treatment for ADHD.

When gestational medication users were compared with nongestational users, the crude hazard ratio (HR) of antidepressant use during pregnancy and ADHD was 2.26 (P<.01). After adjusting for potential confounding factors (such as maternal psychiatric disorders and use of other psychotropic drugs), this reduced to 1.39 (95% confidence interval [CI], 1.07-1.82; P = .01). Children of mothers with psychiatric disorders had a higher risk of ADHD than did children of mothers without psychiatric disorders (HR, 1.84; 95% CI, 1.54-2.18; P<.01), even if the mothers had never used antidepressants.

While acknowledging the potential for type 2 error in the study analysis, the investigators proposed that the results "further strengthen our hypothesis that confounding by indication may play a major role in the observed positive association between gestational use of antidepressants and ADHD in offspring."

Lithium. Similarly, investigators of another recently published study found that the magnitude of the association between prenatal lithium use and increased risk of cardiac malformations in infants was smaller than previously shown.2 This finding may be important clinically because lithium is a first-line treatment for many US women of reproductive age with bipolar disorder.

Most earlier data were derived from a database registry, case reports, and small studies that often had conflicting results. However, Patorno and colleagues conducted a large retrospective cohort study that involved data on 1,325,563 pregnancies in women enrolled in Medicaid.2 Exposure to lithium was defined as at least 1 filled prescription during the first trimester, and the primary reference group included women with no lithium or lamotrigine (another mood stabilizer not associated with congenital malformations) dispensing during the 3 months before the start of pregnancy or during the first trimester.

A total of 663 pregnancies (0.05%) were exposed to lithium and 1,945 (0.15%) were exposed to lamotrigine during the first trimester. The adjusted risk ratios for cardiac malformations among infants exposed to lithium were 1.65 (95% CI, 1.02-2.68) as compared with nonexposed infants and 2.25 (95% CI, 1.17-4.34) as compared with lamotrigine-exposed infants. Notably, all right ventricular outflow tract obstruction defects identified in the infants exposed to lithium occurred with a daily dose of more than 600 mg.

Although the study results suggest an increased risk of cardiac malformations--of approximately 1 additional case per 100 live births--associated with lithium use in early pregnancy, the magnitude of risk is much lower than originally proposed based on early lithium registry data.

-- Kathy Christie, Senior Editor

References

  1. Man KC, Chan EW, Ip P, et al. Prenatal antidepressant use and risk of attention-deficit/hyperactivity disorder in offspring: population based cohort study. BMJ. 2017;357:j2350.
  2. Patorno E, Huybrechts KR, Bateman BT, et al. Lithium use in pregnancy and risk of cardiac malformations. N Engl J Med. 2017;376(23):2245-2254.

Analyze risks and benefits of medication versus no medication

The US Food and Drug Administration (FDA) has not approved any psychotropic medication for use during pregnancy. While a clinical study would provide more scientifically rigorous safety data, conducting a double-blinded, placebo-controlled trial in pregnant women with a psychiatric disorder is unethical. Thus, the literature consists mostly of reports on case series, retrospective chart reviews, prospective naturalistic studies, and analyses of large registry databases. Each has benefits and limitations. It is important to understand the limitations when making treatment decisions.

In 1979, the FDA developed a 5-lettersystem (A, B, C, D, X) for classifying the relative safety of medications used during pregnancy.2 Many clinicians and pregnant women relied on this system to decide which medications were safe. Unfortunately, the information in the system was inadequate for making informed decisions. For example, although a class B medication might have appeared safer than one in class C, the studies of risk in humans might not have been adequate to permit comparisons. Drug safety classifications were seldom changed, despite the availability of additional data.

In June 2015, the FDA changed the requirements for the Pregnancy and Lactation subsections of the labeling for human prescription drugs and biologic products. Drug manufacturers must now include in each subsection a risk summary, clinical considerations supporting patient care decisions and counseling, and detailed data. These subsections provide information on available human and animal studies, known or potential maternal or fetal adverse reactions, and dose adjustments needed during pregnancy and the postpartum period. In addition, the FDA added a subsection: Females and Males of Reproductive Potential.3

These changes acknowledge there is no list of “safe” medications. The safest medication generally is the one that works for a particular patient at the lowest effective dose. As each woman’s history of illness and effective treatment is different, the best medication may differ as well, even among women with the same illness. Therefore, medication should be individualized to the patient. A risk–benefit analysis comparing psychotropic medication treatment with no medication treatment must be performed for each patient according to her personal history and the best available data.

Read about the risks of untreated illness during pregnancy

 

 

What is the risk of untreated illness during pregnancy?

During pregnancy, women are treated for many medical disorders, including psychiatric illness. One general guideline is that, if a pregnant woman does not need a medication—whether it be for an allergy, hypertension, or another disorder—she should not take it. Conversely, if a medication is required for a patient’s well-being, her physician should continue it or switch to a safer one. This general guideline is the same for women with depression, anxiety, or a psychotic disorder.

Managing hypertension during pregnancy is an example of choosing treatment when the risk of the illness to the mother and the infant outweighs the likely small risk associated with taking a medication. Blood pressure is monitored, and, when it reaches a threshold, an antihypertensive is started promptly to avoid morbidity and mortality.

Psychiatric illness carries risks for both mother and fetus as well, but no data show a clear threshold for initiating pharmacologic treatment. Therefore, in prescribing medication the most important steps are to take a complete history and perform a thorough evaluation. Important information includes the number and severity of previous episodes, prior history of hospitalization or suicidal thoughts or attempts, and any history of psychotic or manic status.

Whether to continue or discontinue medication is often decided after inquiring about other times a medication was discontinued. A patient who in the past stayed well for several years after stopping a medication may be able to taper off a medication and conceive during a window of wellness. Some women who have experienced only one episode of illness and have been stable for at least a year may be able to taper off a medication before conceiving (TABLE 1).

In the risk–benefit analysis, assess the need for pharmacologic treatment by considering the risk that untreated illness poses for both mother and fetus, the benefits of treatment for both, and the risk of medication exposure for the fetus.4

Mother: Risk of untreated illness versus benefit of treatment

A complete history and a current symptom evaluation are needed to assess the risk that nonpharmacologic treatment poses for the mother. Women with functional impairment, including inability to work, to perform activities of daily living, or to take care of other children, likely require treatment. Studies have found that women who discontinue treatment for a psychiatric illness around the time of conception are likely to experience a recurrence of illness during pregnancy, often in the first trimester, and must restart medication.5,6 For some diagnoses, particularly bipolar disorder, symptoms during a relapse can be more severe and more difficult to treat, and they carry a risk for both mother and fetus.7 A longitudinal study of pregnant women who stopped medication for bipolar disorder found a 71% rate of relapse.7 In cases in which there is a history of hospitalization, suicide attempt, or psychosis, discontinuing treatment is not an option; instead, the physician must determine which medication is safest for the particular patient.


Related article:
Does PTSD during pregnancy increase the likelihood of preterm birth?

Fetus: Risk of untreated illness versus benefit of treatment

Mothers with untreated psychiatric illness are at higher risk for poor prenatal care, substance abuse, and inadequate nutrition, all of which increase the risk of negative obstetric and neonatal outcomes.8 Evidence indicates that untreated maternal depression increases the risk of preterm delivery and low birth weight.9 Children born to mothers with depression have more behavioral problems, more psychiatric illness, more visits to pediatricians, lower IQ scores, and attachment issues.10 Some of the long-term negative effects of intrauterine stress, which include hypertension, coronary heart disease, and autoimmune disorders, persist into adulthood.11

Fetus: Risk of medication exposure

With any pharmacologic treatment, the timing of fetal exposure affects resultant risks and therefore must be considered in the management plan.

Before conception. Is there any effect on ovulation or fertilization?

Implantation. Does the exposure impair the blastocyst’s ability to implant in the uterine lining?

First trimester. This is the period of organogenesis. Regardless of drug exposure, there is a 2% to 4% baseline risk of a major malformation during any pregnancy. The risk of a particular malformation must be weighed against this baseline risk.

According to limited data, selective serotonin reuptake inhibitors (SSRIs) may increase the risk of early miscarriage.12 SSRIs also have been implicated in increasing the risk of cardiovascular malformations, although the data are conflicting.13,14

Antiepileptics such as valproate and carbamazepine are used as mood stabilizers in the treatment of bipolar disorder.15 Extensive data have shown an association with teratogenicity. Pregnant women who require either of these medications also should be prescribed folic acid 4 or 5 mg/day. Given the high risk of birth defects and cognitive delay, valproate no longer is recommended for women of reproductive potential.16

Lithium, one of the safest medications used in the treatment of bipolar disorder, is associated with a very small risk of Ebstein anomaly.17

Lamotrigine is used to treat bipolar depression and appears to have a good safety profile, along with a possible small increased risk of oral clefts.18,19

Atypical antipsychotics (such as aripiprazole, olanzapine, quetiapine, and risperidone) are often used first-line in the treatment of psychotic disorders and bipolar disorder in women who are not pregnant. Although the safety data on use of these drugs during pregnancy are limited, a recent analysis of pregnant Medicaid enrollees found no increased risk of birth defects after controlling for potential confounding factors.20 Common practice is to avoid these newer agents, given their limited data and the time needed for rare malformations to emerge (adequate numbers require many exposures during pregnancy).

Read additional fetal risks of medication exposure

 

 

Second trimester. This is a period of growth and neural development. A 2006 study suggested that SSRI exposure after pregnancy week 20 increases the risk of persistent pulmonary hypertension of the newborn (PPHN).21 In 2011, however, the FDA removed the PPHN warning label for SSRIs, citing inconsistent data. Whether the PPHN risk is increased with SSRI use is unclear, but the risk is presumed to be smaller than previously suggested.22 Stopping SSRIs before week 20 puts the mother at risk for relapse during pregnancy and increases her risk of developing postpartum depression. If we follow the recommendation to prescribe medication only for women who need it most, then stopping the medication at any time during pregnancy is not an option.

Third trimester. This is a period of continued growth and lung maturation.

Delivery. Is there a potential for impairment in parturition?

Neonatal adaptation. Newborns are active mainly in adapting to extrauterine life: They regulate their temperature and muscle tone and learn to coordinate sucking, swallowing, and breathing. Does medication exposure impair adaptation, or are signs or symptoms of withdrawal or toxicity present? The evidence that in utero SSRI exposure increases the risk of neonatal adaptation syndrome is consistent, but symptoms are mild and self-limited.23 Tapering off SSRIs before delivery currently is not recommended, as doing so increases the mother’s risk for postpartum depression and, according to one study, does not prevent symptoms of neonatal adaptation syndrome from developing.24

Behavioral teratogenicity. What are the long-term developmental outcomes for the child? Are there any differences in IQ, speech and language, or psychiatric illness? One study found an increased risk of autism with in utero exposure to sertraline, but the study had many methodologic flaws and its findings have not been replicated.25 Most studies have not found consistent differences in speech, IQ, or behavior between infants exposed and infants not exposed to antidepressants.26,27 By contrast, in utero exposure to anticonvulsants, particularly valproate, has led to significant developmental problems in children.28 The data on atypical antipsychotics are limited.


Related article:
Do antidepressants really cause autism?
 

None of the medications used to treat depression, bipolar disorder, anxiety, or schizophrenia is considered first-line or safest therapy for the pregnant woman. For any woman who is doing well on a certain medication, but particularly for a pregnant woman, there is no compelling, data-supported reason to switch to another agent. For depression, options include all of the SSRIs, with the possible exception of paroxetine (TABLE 2). In conflicting studies, paroxetine was no different from any other SSRI in not being associated with cardiovascular defects.29

One goal in treatment is to use a medication that previously was effective in the remission of symptoms and to use it at the lowest dose possible. Treating simply to maintain a low dose of drug, however, and not to effect symptom remission, exposes the fetus to both the drug and the illness. Again, the lowest effective dose is the best choice.

Read about treatment during breastfeeding

 

 

Treatment during breastfeeding

Women are encouraged to breastfeed for physical and psychological health benefits, for both themselves and their babies. Many medications are compatible with breastfeeding.30 The amount of drug an infant receives through breast milk is considerably less than the amount received during the mother’s pregnancy. Breastfeeding generally is allowed if the calculated infant dose is less than 10% of the weight-adjusted maternal dose.31

The amount of drug transferred from maternal plasma into milk is highest for drugs with low protein binding and high lipid solubility.32 Drug clearance in infants must be considered as well. Renal clearance is decreased in newborns and does not reach adult levels until 5 or 6 months of age. In addition, liver metabolism is impaired in neonates and even more so in premature infants.33 Drugs that require extensive first-pass metabolism may have higher bioavailability, and this factor should be considered.

Some clinicians recommend pumping and discarding breast milk when the drug in it is at its peak level; although the drug is not eliminated, the infant ingests less of it.34 Most women who are anxious about breastfeeding while on medication “pump and dump” until they are more comfortable nursing and the infants are doing well. Except in cases of mother preference, most physicians with expertise in reproductive mental health generally recommend against pumping and discarding milk.

Through breast milk, infants ingest drugs in varying amounts. The amount depends on the qualities of the medication, the timing and duration of breastfeeding, and the characteristics of the infant. Few psychotropic drugs have significant effects on breastfed infants. Even lithium, previously contraindicated, is successfully used, with infant monitoring, during breastfeeding.35 Given breastfeeding’s benefits for both mother and child, many more women on psychotropic medications are choosing to breastfeed.


Related article:
USPSTF Recommendations to Support Breastfeeding

Balance the pros and cons

Deciding to use medication during pregnancy and breastfeeding involves considering the risk of untreated illness versus the benefit of treatment for both mother and fetus, and the risk of medication exposure for the fetus. Mother and fetus are inseparable, and neither can be isolated from the other in treatment decisions. Avoiding psychotropic medication during pregnancy is not always the safest option for mother or fetus. The patient and her clinician and support system must make an informed decision that is based on the best available data and that takes into account the mother’s history of illness and effective treatment. Many women with psychiatric illness no longer have to choose between mental health and starting a family, and their babies will be healthy.

 

Share your thoughts! Send your Letter to the Editor to [email protected]. Please include your name and the city and state in which you practice.

Increasingly, women with psychiatric illness are undergoing pharmacologic treatment during pregnancy. In the United States, an estimated 8% of pregnant women are prescribed antidepressants, and the number of such cases has risen over the past 15 years.1 Women with a psychiatric diagnosis were once instructed either to discontinue all medication immediately on learning they were pregnant, or to forgo motherhood because their illness might have a negative effect on a child or because avoiding medication during pregnancy might lead to a relapse.

Fortunately, women with depression, anxiety, bipolar disorder, or schizophrenia no longer are being told that they cannot become mothers. For many women, however, stopping medication is not an option. Furthermore, psychiatric illness sometimes is diagnosed initially during pregnancy and requires treatment.

Illustration: Kimberly Martens for OBG Management
Many women with psychiatric illness no longer have to choose between mental health and starting a family.

Pregnant women and their physicians need accurate information about when to taper off medication, when to start or continue, and which medications are safest. Even for clinicians with a solid knowledge base, counseling a woman who needs or may need psychotropic medication during pregnancy and breastfeeding is a daunting task. Some clinicians still recommend no drug treatment as the safest and best option, given the potential risks to the fetus.

In this review we offer a methodologic approach for decision making about pharmacologic treatment during pregnancy. As the scientific literature is constantly being updated, it is imperative to have the most current information on psychotropics and to know how to individualize that information when counseling a pregnant woman and her family. Using this framework for analyzing the risks and benefits for both mother and fetus, clinicians can avoid the unanswerable question of which medication is the “safest.”

A patient’s mental health care provider is a useful resource for information about a woman’s mental health history and current stability, but he or she may not be expert or comfortable in recommending treatment for a pregnant patient. During pregnancy, a woman’s obstetrician often becomes the “expert” for all treatment decisions.

Psychotropic use during pregnancy: Certain risks in offspring lower than thought, recent data show

Antidepressants. Previous studies may have overestimated the association between prenatal use of antidepressants and attention deficit/hyperactivity disorder (ADHD) in children because they did not control for shared family factors, according to investigators who say that their recent study findings raise the possibility that "confounding by indication" might partially explain the observed association.1

In a population-based cohort study in Hong Kong, Man and colleagues analyzed the records of 190,618 maternal-child pairs.1 A total of 1,252 children were exposed to maternal antidepressant use during pregnancy. Medications included selective serotonin reuptake inhibitors (SSRIs), non-SSRIs, and antipsychotics as monotherapy or in various combination regimens. Overall, 5,659 of the cohort children (3%) were diagnosed with or received treatment for ADHD.

When gestational medication users were compared with nongestational users, the crude hazard ratio (HR) of antidepressant use during pregnancy and ADHD was 2.26 (P<.01). After adjusting for potential confounding factors (such as maternal psychiatric disorders and use of other psychotropic drugs), this reduced to 1.39 (95% confidence interval [CI], 1.07-1.82; P = .01). Children of mothers with psychiatric disorders had a higher risk of ADHD than did children of mothers without psychiatric disorders (HR, 1.84; 95% CI, 1.54-2.18; P<.01), even if the mothers had never used antidepressants.

While acknowledging the potential for type 2 error in the study analysis, the investigators proposed that the results "further strengthen our hypothesis that confounding by indication may play a major role in the observed positive association between gestational use of antidepressants and ADHD in offspring."

Lithium. Similarly, investigators of another recently published study found that the magnitude of the association between prenatal lithium use and increased risk of cardiac malformations in infants was smaller than previously shown.2 This finding may be important clinically because lithium is a first-line treatment for many US women of reproductive age with bipolar disorder.

Most earlier data were derived from a database registry, case reports, and small studies that often had conflicting results. However, Patorno and colleagues conducted a large retrospective cohort study that involved data on 1,325,563 pregnancies in women enrolled in Medicaid.2 Exposure to lithium was defined as at least 1 filled prescription during the first trimester, and the primary reference group included women with no lithium or lamotrigine (another mood stabilizer not associated with congenital malformations) dispensing during the 3 months before the start of pregnancy or during the first trimester.

A total of 663 pregnancies (0.05%) were exposed to lithium and 1,945 (0.15%) were exposed to lamotrigine during the first trimester. The adjusted risk ratios for cardiac malformations among infants exposed to lithium were 1.65 (95% CI, 1.02-2.68) as compared with nonexposed infants and 2.25 (95% CI, 1.17-4.34) as compared with lamotrigine-exposed infants. Notably, all right ventricular outflow tract obstruction defects identified in the infants exposed to lithium occurred with a daily dose of more than 600 mg.

Although the study results suggest an increased risk of cardiac malformations associated with lithium use in early pregnancy (approximately 1 additional case per 100 live births), the magnitude of risk is much lower than originally proposed based on early lithium registry data.
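To see how a relative measure of this size translates into absolute terms, consider a rough, purely illustrative calculation; the baseline rate below is an assumed round number used for arithmetic only, not a figure taken from the study.

% Illustrative arithmetic only; the baseline rate is assumed, not study data.
\[
\text{risk in exposed} \approx 1.2\% \times 1.65 \approx 2.0\%
\qquad\Rightarrow\qquad
\text{excess risk} \approx 2.0\% - 1.2\% \approx 0.8\%,
\]

or roughly 1 additional case per 100 live births, consistent with the estimate above.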

-- Kathy Christie, Senior Editor

References

  1. Man KKC, Chan EW, Ip P, et al. Prenatal antidepressant use and risk of attention-deficit/hyperactivity disorder in offspring: population based cohort study. BMJ. 2017;357:j2350.
  2. Patorno E, Huybrechts KF, Bateman BT, et al. Lithium use in pregnancy and risk of cardiac malformations. N Engl J Med. 2017;376(23):2245-2254.

Analyze risks and benefits of medication versus no medication

The US Food and Drug Administration (FDA) has not approved any psychotropic medication for use during pregnancy. While a clinical study would provide more scientifically rigorous safety data, conducting a double-blinded, placebo-controlled trial in pregnant women with a psychiatric disorder is unethical. Thus, the literature consists mostly of reports on case series, retrospective chart reviews, prospective naturalistic studies, and analyses of large registry databases. Each has benefits and limitations. It is important to understand the limitations when making treatment decisions.

In 1979, the FDA developed a 5-letter system (A, B, C, D, X) for classifying the relative safety of medications used during pregnancy.2 Many clinicians and pregnant women relied on this system to decide which medications were safe. Unfortunately, the information in the system was inadequate for making informed decisions. For example, although a class B medication might have appeared safer than one in class C, the studies of risk in humans might not have been adequate to permit comparisons. Drug safety classifications were seldom changed, despite the availability of additional data.

In June 2015, the FDA changed the requirements for the Pregnancy and Lactation subsections of the labeling for human prescription drugs and biologic products. Drug manufacturers must now include in each subsection a risk summary, clinical considerations supporting patient care decisions and counseling, and detailed data. These subsections provide information on available human and animal studies, known or potential maternal or fetal adverse reactions, and dose adjustments needed during pregnancy and the postpartum period. In addition, the FDA added a subsection: Females and Males of Reproductive Potential.3

These changes acknowledge there is no list of “safe” medications. The safest medication generally is the one that works for a particular patient at the lowest effective dose. As each woman’s history of illness and effective treatment is different, the best medication may differ as well, even among women with the same illness. Therefore, medication should be individualized to the patient. A risk–benefit analysis comparing psychotropic medication treatment with no medication treatment must be performed for each patient according to her personal history and the best available data.

What is the risk of untreated illness during pregnancy?

During pregnancy, women are treated for many medical disorders, including psychiatric illness. One general guideline is that, if a pregnant woman does not need a medication—whether it be for an allergy, hypertension, or another disorder—she should not take it. Conversely, if a medication is required for a patient’s well-being, her physician should continue it or switch to a safer one. This general guideline is the same for women with depression, anxiety, or a psychotic disorder.

Managing hypertension during pregnancy is an example of choosing treatment when the risk of the illness to the mother and the infant outweighs the likely small risk associated with taking a medication. Blood pressure is monitored, and, when it reaches a threshold, an antihypertensive is started promptly to avoid morbidity and mortality.

Psychiatric illness carries risks for both mother and fetus as well, but no data show a clear threshold for initiating pharmacologic treatment. Therefore, in prescribing medication the most important steps are to take a complete history and perform a thorough evaluation. Important information includes the number and severity of previous episodes, prior history of hospitalization or suicidal thoughts or attempts, and any history of psychosis or mania.

Whether to continue or discontinue medication is often decided after inquiring about other times a medication was discontinued. A patient who in the past stayed well for several years after stopping a medication may be able to taper off a medication and conceive during a window of wellness. Some women who have experienced only one episode of illness and have been stable for at least a year may be able to taper off a medication before conceiving (TABLE 1).

In the risk–benefit analysis, assess the need for pharmacologic treatment by considering the risk that untreated illness poses for both mother and fetus, the benefits of treatment for both, and the risk of medication exposure for the fetus.4

Mother: Risk of untreated illness versus benefit of treatment

A complete history and a current symptom evaluation are needed to assess the risk that nonpharmacologic treatment poses for the mother. Women with functional impairment, including inability to work, to perform activities of daily living, or to take care of other children, likely require treatment. Studies have found that women who discontinue treatment for a psychiatric illness around the time of conception are likely to experience a recurrence of illness during pregnancy, often in the first trimester, and must restart medication.5,6 For some diagnoses, particularly bipolar disorder, symptoms during a relapse can be more severe and more difficult to treat, and they carry a risk for both mother and fetus.7 A longitudinal study of pregnant women who stopped medication for bipolar disorder found a 71% rate of relapse.7 In cases in which there is a history of hospitalization, suicide attempt, or psychosis, discontinuing treatment is not an option; instead, the physician must determine which medication is safest for the particular patient.



Fetus: Risk of untreated illness versus benefit of treatment

Mothers with untreated psychiatric illness are at higher risk for poor prenatal care, substance abuse, and inadequate nutrition, all of which increase the risk of negative obstetric and neonatal outcomes.8 Evidence indicates that untreated maternal depression increases the risk of preterm delivery and low birth weight.9 Children born to mothers with depression have more behavioral problems, more psychiatric illness, more visits to pediatricians, lower IQ scores, and attachment issues.10 Some of the long-term negative effects of intrauterine stress, which include hypertension, coronary heart disease, and autoimmune disorders, persist into adulthood.11

Fetus: Risk of medication exposure

With any pharmacologic treatment, the timing of fetal exposure affects resultant risks and therefore must be considered in the management plan.

Before conception. Is there any effect on ovulation or fertilization?

Implantation. Does the exposure impair the blastocyst’s ability to implant in the uterine lining?

First trimester. This is the period of organogenesis. Regardless of drug exposure, there is a 2% to 4% baseline risk of a major malformation during any pregnancy. The risk of a particular malformation must be weighed against this baseline risk.
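A brief, purely illustrative calculation (the figures are assumed round numbers, not data from any particular study) shows how a specific malformation risk is weighed against this baseline: even if a drug were to double the risk of a rare defect with a baseline prevalence of 0.1%, the absolute risk would remain far below the background risk of any major malformation.

% Illustrative only; assumed round numbers, not study data.
\[
\text{absolute risk} = \text{baseline risk} \times \text{relative risk}
= 0.1\% \times 2 = 0.2\% \;\ll\; 2\%\text{ to }4\%
\]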

According to limited data, selective serotonin reuptake inhibitors (SSRIs) may increase the risk of early miscarriage.12 SSRIs also have been implicated in increasing the risk of cardiovascular malformations, although the data are conflicting.13,14

Antiepileptics such as valproate and carbamazepine are used as mood stabilizers in the treatment of bipolar disorder.15 Extensive data have shown an association with teratogenicity. Pregnant women who require either of these medications also should be prescribed folic acid 4 or 5 mg/day. Given the high risk of birth defects and cognitive delay, valproate no longer is recommended for women of reproductive potential.16

Lithium, one of the safest medications used in the treatment of bipolar disorder, is associated with a very small risk of Ebstein anomaly.17

Lamotrigine is used to treat bipolar depression and appears to have a good safety profile, along with a possible small increased risk of oral clefts.18,19

Atypical antipsychotics (such as aripiprazole, olanzapine, quetiapine, and risperidone) are often used first-line in the treatment of psychotic disorders and bipolar disorder in women who are not pregnant. Although the safety data on use of these drugs during pregnancy are limited, a recent analysis of pregnant Medicaid enrollees found no increased risk of birth defects after controlling for potential confounding factors.20 Common practice is to avoid these newer agents, given their limited data and the time needed for rare malformations to emerge (adequate numbers require many exposures during pregnancy).

Second trimester. This is a period of growth and neural development. A 2006 study suggested that SSRI exposure after pregnancy week 20 increases the risk of persistent pulmonary hypertension of the newborn (PPHN).21 In 2011, however, the FDA removed the PPHN warning label for SSRIs, citing inconsistent data. Whether the PPHN risk is increased with SSRI use is unclear, but the risk is presumed to be smaller than previously suggested.22 Stopping SSRIs before week 20 puts the mother at risk for relapse during pregnancy and increases her risk of developing postpartum depression. If we follow the recommendation to prescribe medication only for women who need it most, then stopping the medication at any time during pregnancy is not an option.

Third trimester. This is a period of continued growth and lung maturation.

Delivery. Is there a potential for impairment in parturition?

Neonatal adaptation. The newborn’s main task is adapting to extrauterine life: regulating temperature and muscle tone and learning to coordinate sucking, swallowing, and breathing. Does medication exposure impair adaptation, or are signs or symptoms of withdrawal or toxicity present? The evidence that in utero SSRI exposure increases the risk of neonatal adaptation syndrome is consistent, but symptoms are mild and self-limited.23 Tapering off SSRIs before delivery currently is not recommended, as doing so increases the mother’s risk for postpartum depression and, according to one study, does not prevent symptoms of neonatal adaptation syndrome from developing.24

Behavioral teratogenicity. What are the long-term developmental outcomes for the child? Are there any differences in IQ, speech and language, or psychiatric illness? One study found an increased risk of autism with in utero exposure to sertraline, but the study had many methodologic flaws and its findings have not been replicated.25 Most studies have not found consistent differences in speech, IQ, or behavior between infants exposed and infants not exposed to antidepressants.26,27 By contrast, in utero exposure to anticonvulsants, particularly valproate, has led to significant developmental problems in children.28 The data on atypical antipsychotics are limited.



None of the medications used to treat depression, bipolar disorder, anxiety, or schizophrenia is considered first-line or safest therapy for the pregnant woman. For any woman who is doing well on a certain medication, but particularly for a pregnant woman, there is no compelling, data-supported reason to switch to another agent. For depression, options include all of the SSRIs, with the possible exception of paroxetine (TABLE 2). The data on paroxetine are conflicting, however; in some studies, paroxetine, like the other SSRIs, was not associated with cardiovascular defects.29

One goal of treatment is to use a medication that previously was effective in achieving remission of symptoms and to use it at the lowest effective dose. Maintaining a low dose for its own sake, however, without achieving symptom remission, exposes the fetus to both the drug and the illness. Again, the lowest effective dose is the best choice.

Treatment during breastfeeding

Women are encouraged to breastfeed for physical and psychological health benefits, for both themselves and their babies. Many medications are compatible with breastfeeding.30 The amount of drug an infant receives through breast milk is considerably less than the amount received during the mother’s pregnancy. Breastfeeding generally is allowed if the calculated infant dose is less than 10% of the weight-adjusted maternal dose.31
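This 10% criterion is often expressed as the relative infant dose (RID). A minimal worked example follows; the numbers are assumed round figures used purely for illustration.

% Relative infant dose (RID); all numbers below are assumed for illustration only.
\[
\mathrm{RID} = \frac{\text{infant dose via milk (mg/kg/day)}}{\text{maternal dose (mg/kg/day)}} \times 100\%
\]
% Example: a 70-kg mother takes 100 mg/day of a drug (about 1.4 mg/kg/day),
% and the infant is estimated to ingest 0.05 mg/kg/day through milk.
\[
\mathrm{RID} \approx \frac{0.05}{1.4} \times 100\% \approx 3.6\%,
\]

which falls below the commonly cited 10% threshold.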

The amount of drug transferred from maternal plasma into milk is highest for drugs with low protein binding and high lipid solubility.32 Drug clearance in infants must be considered as well. Renal clearance is decreased in newborns and does not reach adult levels until 5 or 6 months of age. In addition, liver metabolism is impaired in neonates and even more so in premature infants.33 Because of this immature hepatic metabolism, drugs that normally undergo extensive first-pass metabolism may have higher-than-expected bioavailability in the infant, and this factor should be considered.

Some clinicians recommend pumping and discarding breast milk when the drug it contains is at its peak level; although the drug is not eliminated, the infant ingests less of it.34 Most women who are anxious about breastfeeding while on medication “pump and dump” until they are more comfortable nursing and their infants are doing well. Unless the mother prefers this approach, however, most physicians with expertise in reproductive mental health recommend against pumping and discarding milk.

Through breast milk, infants ingest drugs in varying amounts. The amount depends on the properties of the medication, the timing and duration of breastfeeding, and the characteristics of the infant. Few psychotropic drugs have significant effects on breastfed infants. Even lithium, once contraindicated during breastfeeding, is used successfully with infant monitoring.35 Given breastfeeding’s benefits for both mother and child, many more women on psychotropic medications are choosing to breastfeed.



Balance the pros and cons

Deciding to use medication during pregnancy and breastfeeding involves considering the risk of untreated illness versus the benefit of treatment for both mother and fetus, and the risk of medication exposure for the fetus. Mother and fetus are inseparable, and neither can be isolated from the other in treatment decisions. Avoiding psychotropic medication during pregnancy is not always the safest option for mother or fetus. The patient, her clinician, and her support system must make an informed decision that is based on the best available data and that takes into account the mother’s history of illness and effective treatment. Many women with psychiatric illness no longer have to choose between mental health and starting a family, and with individualized, well-informed treatment most can expect healthy babies.

 


References
  1. Andrade SE, Raebel MA, Brown J, et al. Use of antidepressant medications during pregnancy: a multisite study. Am J Obstet Gynecol. 2008;198(2):194.e1–e5.
  2. Hecht A. Drug safety labeling for doctors. FDA Consum. 1979;13(8):12–13.
  3. Ramoz LL, Patel-Shori NM. Recent changes in pregnancy and lactation labeling: retirement of risk categories. Pharmacotherapy. 2014;34(4):389–395.
  4. Yonkers KA, Wisner KL, Stewart DE, et al. The management of depression during pregnancy: a report from the American Psychiatric Association and the American College of Obstetricians and Gynecologists. Gen Hosp Psychiatry. 2009;31(5):403–413.
  5. Cohen LS, Altshuler LL, Harlow BL, et al. Relapse of major depression during pregnancy in women who maintain or discontinue antidepressant treatment. JAMA. 2006;295(5):499–507.
  6. O’Brien L, Laporte A, Koren G. Estimating the economic costs of antidepressant discontinuation during pregnancy. Can J Psychiatry. 2009;54(6):399–408.
  7. Viguera AC, Whitfield T, Baldessarini RJ, et al. Risk of recurrence in women with bipolar disorder during pregnancy: prospective study of mood stabilizer discontinuation. Am J Psychiatry. 2007;164(12):1817–1824.
  8. Bonari L, Pinto N, Ahn E, Einarson A, Steiner M, Koren G. Perinatal risks of untreated depression during pregnancy. Can J Psychiatry. 2004;49(11):726–735.
  9. Straub H, Adams M, Kim JJ, Silver RK. Antenatal depressive symptoms increase the likelihood of preterm birth. Am J Obstet Gynecol. 2012;207(4):329.e1–e4.
  10. Hayes LJ, Goodman SH, Carlson E. Maternal antenatal depression and infant disorganized attachment at 12 months. Attach Hum Dev. 2013;15(2):133–153.
  11. Field T. Prenatal depression effects on early development: a review. Infant Behav Dev. 2011;34(1):1–14.
  12. Kjaersgaard MI, Parner ET, Vestergaard M, et al. Prenatal antidepressant exposure and risk of spontaneous abortion—a population-based study. PLoS One. 2013;8(8):e72095.
  13. Nordeng H, van Gelder MM, Spigset O, Koren G, Einarson A, Eberhard-Gran M. Pregnancy outcome after exposure to antidepressants and the role of maternal depression: results from the Norwegian Mother and Child Cohort Study. J Clin Psychopharmacol. 2012;32(2):186–194.
  14. Källén BA, Otterblad Olausson P. Maternal use of selective serotonin re-uptake inhibitors in early pregnancy and infant congenital malformations. Birth Defects Res A Clin Mol Teratol. 2007;79(4):301–308.
  15. Tomson T, Battino D. Teratogenic effects of antiepileptic drugs. Lancet Neurol. 2012;11(9):803–813.
  16. Balon R, Riba M. Should women of childbearing potential be prescribed valproate? A call to action. J Clin Psychiatry. 2016;77(4):525–526.
  17. Giles JJ, Bannigan JG. Teratogenic and developmental effects of lithium. Curr Pharm Design. 2006;12(12):1531–1541.
  18. Nguyen HT, Sharma V, McIntyre RS. Teratogenesis associated with antibipolar agents. Adv Ther. 2009;26(3):281–294.
  19. Campbell E, Kennedy F, Irwin B, et al. Malformation risks of antiepileptic drug monotherapies in pregnancy. J Neurol Neurosurg Psychiatry. 2013;84(11):e2.
  20. Huybrechts KF, Hernández-Díaz S, Patorno E, et al. Antipsychotic use in pregnancy and the risk for congenital malformations. JAMA Psychiatry. 2016;73(9):938–946.
  21. Chambers CD, Hernández-Díaz S, Van Marter LJ, et al. Selective serotonin-reuptake inhibitors and risk of persistent pulmonary hypertension of the newborn. N Engl J Med. 2006;354(6):579–587.
  22. ‘t Jong GW, Einarson T, Koren G, Einarson A. Antidepressant use in pregnancy and persistent pulmonary hypertension of the newborn (PPHN): a systematic review. Reprod Toxicol. 2012;34(3):293–297.
  23. Oberlander TF, Misri S, Fitzgerald CE, Kostaras X, Rurak D, Riggs W. Pharmacologic factors associated with transient neonatal symptoms following prenatal psychotropic medication exposure. J Clin Psychiatry. 2004;65(2):230–237.
  24. Warburton W, Hertzman C, Oberlander TF. A register study of the impact of stopping third trimester selective serotonin reuptake inhibitor exposure on neonatal health. Acta Psychiatr Scand. 2010;121(6):471–479.
  25. Croen LA, Grether JK, Yoshida CK, Odouli R, Hendrick V. Antidepressant use during pregnancy and childhood autism spectrum disorders. Arch Gen Psychiatry. 2011;68(11):1104–1112.
  26. Batton B, Batton E, Weigler K, Aylward G, Batton D. In utero antidepressant exposure and neurodevelopment in preterm infants. Am J Perinatol. 2013;30(4):297–301.
  27. Austin MP, Karatas JC, Mishra P, Christl B, Kennedy D, Oei J. Infant neurodevelopment following in utero exposure to antidepressant medication. Acta Paediatr. 2013;102(11):1054–1059.
  28. Bromley RL, Mawer GE, Briggs M, et al. The prevalence of neurodevelopmental disorders in children prenatally exposed to antiepileptic drugs. J Neurol Neurosurg Psychiatry. 2013;84(6):637–643.
  29. Einarson A, Pistelli A, DeSantis M, et al. Evaluation of the risk of congenital cardiovascular defects associated with use of paroxetine during pregnancy. Am J Psychiatry. 2008;165(6):749–752.
  30. Davanzo R, Copertino M, De Cunto A, Minen F, Amaddeo A. Antidepressant drugs and breastfeeding: a review of the literature. Breastfeed Med. 2011;6(2):89–98.
  31. Ito S. Drug therapy for breast-feeding women. N Engl J Med. 2000;343(2):118–126.
  32. Suri RA, Altshuler LL, Burt VK, Hendrick VC. Managing psychiatric medications in the breast-feeding woman. Medscape Womens Health. 1998;3(1):1.
  33. Milsap RL, Jusko WJ. Pharmacokinetics in the infant. Environ Health Perspect. 1994;102(suppl 11):107–110.
  34. Newport DJ, Hostetter A, Arnold A, Stowe ZN. The treatment of postpartum depression: minimizing infant exposures. J Clin Psychiatry. 2002;63(suppl 7):31–44.
  35. Viguera AC, Newport DJ, Ritchie J, et al. Lithium in breast milk and nursing infants: clinical implications. Am J Psychiatry. 2007;164(2):342–345.

The pelvic exam revisited

The USPSTF says there is not enough evidence to assess the benefits and harms of the routine screening pelvic exam. These experts say that ObGyns should renew their commitment to individualized well-woman care and shared decision making.

More than 44 million pelvic examinations are performed annually in the United States.1 In March 2017, the United States Preventive Services Task Force (USPSTF) published an updated recommendation statement regarding the need for routine screening pelvic examinations in asymptomatic adult women (18 years and older) receiving primary care: “The USPSTF concludes that the current evidence is insufficient to assess the balance of benefits and harms of performing screening pelvic examinations in asymptomatic, nonpregnant adult women.”2

That statement, however, was assigned a grade of I, which means that evidence is lacking, of poor quality, or conflicting, and that the balance of benefits and harms cannot be determined. This USPSTF recommendation statement thus will not change practice for ObGyn providers but likely will renew our commitment to provide individualized well-woman care. There was inadequate or poor quality evidence for benefits related to all-cause mortality, disease-specific morbidity, and quality of life, as well as inadequate evidence on harms related to false-positive findings and anxiety stemming from screening pelvic exams.

The pelvic examination and insurance coverage

Melanie Witt, RN, MA

Coding and billing for the care provided at a well-woman visit can be uncomplicated if you know the right codes for the right program. The information presented here concerns straightforward preventive care and assumes that the patient also has not presented with a significant problem at the same visit.

First, a patient who is not Medicare-eligible might have insurance coverage for an annual preventive care examination. Normally, this service would be billed using the Current Procedural Terminology (CPT) preventive medicine codes, but some insurers require the use of special codes for an annual gynecologic exam. These special codes are:

  • S0610, Annual gynecological examination, new patient
  • S0612, Annual gynecological examination, established patient
  • S0613, Annual gynecological examination; clinical breast examination without pelvic evaluation.

Notably, Aetna, Cigna, and UnitedHealthcare require these codes to signify that a pelvic examination has been performed (except for code S0613), but many Blue Cross Blue Shield programs, for which these codes were originally created, are now reverting to the CPT preventive medicine codes for all preventive care.

CPT outlines the requirements for use of the preventive medicine codes as an initial or periodic comprehensive preventive medicine evaluation or reevaluation and management (E/M) service, which includes an age- and gender-appropriate history, examination, counseling/anticipatory guidance/risk factor reduction interventions, and the ordering of laboratory/diagnostic procedures. The codes are divided into new and established patient categories by age range.

The Medicare E/M documentation guidelines do not apply to preventive services, and a head-to-toe examination also is not required. CPT recognizes the American College of Obstetricians and Gynecologists (ACOG) as an authoritative body to make recommendations for the expected preventive service for women, and if such a service is provided and documented, the preventive care codes are to be reported. The payers who use the S codes for a gynecologic exam will require that a pelvic examination has been performed, but such an examination would not be required when using the CPT codes or ACOG's guidelines if the physician and patient agreed that such an exam was not warranted every year. The other components of a preventive service applicable to the female patient's age, however, should be documented in order to report the CPT codes for preventive medicine services.

If a pelvic examination is not performed, say because the patient is young and not sexually active, but an examination of other areas is carried out, the diagnosis code would change from Z01.411, Encounter for gynecological examination (general) (routine) with abnormal findings, or Z01.419, Encounter for gynecological examination (general) (routine) without abnormal findings, to a general health exam: Z00.00, Encounter for general adult medical examination without abnormal findings, or Z00.01, Encounter for general adult medical examination with abnormal findings.  

What about Medicare?

Medicare requirements are somewhat different. First, Medicare covers only a small portion of the preventive care service; that is, it covers a physical examination of the genital organs and breasts and the collection and conveyance of a Pap specimen to the laboratory every 2 years for a low-risk patient. Second, the codes required to get reimbursed for the examination are:

  • G0101, Cervical or vaginal cancer screening; pelvic and clinical breast examination
  • Q0091, Screening Papanicolaou smear; obtaining, preparing, and conveyance of cervical or vaginal smear to laboratory.

It is not necessary to perform both of these services every 2 years (for instance, the patient may not need a Pap smear every 2 years based on her age and history), but the benefit is available if the service is performed. If the woman is at high risk for developing cervical or vaginal cancer, Medicare will cover this portion of the encounter every year so long as the Medicare-defined criteria for high risk have been documented at the time of the exam.



Ms. Witt is an independent coding and documentation consultant and former program manager, department of coding and nomenclature, American Congress of Obstetricians and Gynecologists.


The author reports no financial relationships relevant to this article.

Interpreting the new USPSTF statement

We understand the USPSTF statement to mean that pelvic exams should not be abandoned, but rather should be individualized to each patient for her specific visit. We agree that for visits focused on counseling and routine screening in asymptomatic, nonpregnant women, pelvic exams likely will not increase the early detection and treatment of disease; more benefit likely would be derived from performing and discussing evidence-based, age-appropriate health services. A classic example is initiation or maintenance of oral contraception in an 18-year-old patient, for whom an exam could cause unnecessary trauma, pain, or psychological distress, leading to future avoidance of or barriers to seeking health care. For placement of long-acting reversible contraception such as an intrauterine device, however, a pelvic exam clearly is necessary.



Indications for pelvic examination

Remember that the pelvic examination has 3 distinct parts (and that not all parts need to be routinely conducted)3:

  • general inspection of the external genitalia and vulva
  • speculum examination and evaluation of the vagina and cervix
  • bimanual examination with possible rectovaginal examination in age-appropriate or symptomatic women.

According to the Well-Woman Task Force of the American College of Obstetricians and Gynecologists (ACOG), “For women 21 years and older, external exam may be performed annually and that inclusion of speculum examination, bimanual examination, or both in otherwise healthy women should be a shared, informed decision between patient and provider.”4

Indications for performing certain parts of the pelvic exam include4:

  • routine screening for cervical cancer (Pap test)
  • routine screening for gonorrhea, chlamydia infection, and other sexually transmitted infections
  • evaluation of abnormal vaginal discharge
  • evaluation of abnormal bleeding, pelvic pain, and pelvic floor disorders, such as prolapse, urinary incontinence, and accidental bowel leakage
  • evaluation of menopausal symptoms, such as dryness, dyspareunia, and the genitourinary syndrome of menopause
  • evaluation of women at increased risk for gynecologic malignancy, such as women with known hereditary breast–ovarian cancer syndromes.

In 2016, ACOG launched the Women’s Preventive Services Initiative (WPSI) in conjunction with the Health Resources and Services Administration (HRSA) of the US Department of Health and Human Services. In this 5-year collaboration, the agencies are endeavoring to review and update the recommendations for women’s preventive health care services, including well-woman visits, human papillomavirus testing, and contraception, among many others.5 Once the HRSA adopts these recommendations, women will be able to access comprehensive preventive health services without incurring any out-of-pocket expenses.

The pediatric and adolescent gynecologist perspective

Roshanak Mansouri Zinn, MD, and Rebekah L. Williams, MD, MS

No literature addresses the utility of screening pelvic examination in the pediatric and adolescent population. According to the American College of Obstetricians and Gynecologists Committee on Adolescent Health Care opinion on the initial reproductive health visit for screening and preventive reproductive health care (reaffirmed in 2016), a screening internal exam is not necessary, but an external genital exam may be indicated and may vary depending on the patient's concerns and prior clinical encounters.1 The American Academy of Pediatrics promotes annual screening external genital examination for all female patients as part of routine primary care, with internal examinations only as indicated.2

Age-appropriate pelvic examination for girls and nonsexually active adolescents usually is limited to an external genital exam to evaluate the anatomy and note the sexual maturity rating (Tanner stage), an important indicator of normal pubertal development. As in adults, the potential benefits of screening examination in this population include detection of benign gynecologic conditions (including vulvar skin conditions and abnormalities of hymenal or vaginal development). Additionally, early reproductive health visits are an important time for clinicians to build rapport with younger patients and to provide anticipatory education on menstruation, hygiene, and anatomy. These visits can destigmatize and demystify the pelvic examination and help young women seek care more appropriately and more comfortably if problems do arise.

Even when a pelvic exam is indicated, a patient's young age can give providers pause as to what type of exam to perform. Patients with vulvovaginal symptoms, abnormal vaginal bleeding, vaginal discharge, or pelvic or abdominal pain should receive complete evaluation with external genital examination. If external vaginal examination does not allow for complete assessment of the problem, the patient and provider can assess the likelihood of her tolerating an internal exam in the clinic versus undergoing vaginoscopy under sedation. Limited laboratory evaluation and transabdominal pelvic ultrasonography may provide sufficient information for appropriate clinical decision making and management without internal examination. If symptoms persist or do not respond to first-line treatment, an internal exam should be performed.

Patients of any age may experience anxiety or physical discomfort or may even delay or avoid seeking care because of fear of a pelvic exam. However, providers of reproductive health care for children and adolescents can offer early education, reassurance, and a more comfortable experience when pelvic examination is necessary in this population.

References

  1. American College of Obstetricians and Gynecologists Committee on Adolescent Health Care. Committee Opinion No. 598: Committee on Adolescent Health Care: the initial reproductive health visit. Obstet Gynecol. 2014;123(5):1143-1147.
  2. Braverman PK, Breech L; Committee on Adolescence. American Academy of Pediatrics. Clinical report: gynecologic examination for adolescents in the pediatric office setting. Pediatrics. 2010;126(3):583-590.

 


Dr. Mansouri Zinn is Assistant Professor, Department of Women's Health, University of Texas at Austin.


Dr. Williams is Assistant Professor, Clinical Pediatrics, Section of Adolescent Medicine, Indiana University School of Medicine, Indianapolis.

Developed in collaboration with the North American Society for Pediatric and Adolescent Gynecology


The authors report no financial relationships relevant to this article.

How will the USPSTF statement affect practice?

In an editorial in the Journal of the American Medical Association commenting on the USPSTF statement, McNicholas and Peipert stated, “Based on the recommendation from the task force, clinicians may ask whether the pelvic examination should be abandoned. The answer is not found in this recommendation statement, but instead in a renewed commitment to shared decision making.”6 We wholeheartedly agree with this statement. The health care provider and the patient should make the decision, taking into consideration the patient’s risk factors for gynecologic cancers and other conditions, her personal preferences, and her overall values.

This new USPSTF recommendation statement will not change how we currently practice, and the statement’s grade I rating should not impact insurance coverage for pelvic exams. Additionally, further research is needed to better elucidate the role of the pelvic exam at well-woman visits, with hopes of obtaining more precise guidelines from the USPSTF and ACOG.

 


References
  1. Centers for Disease Control and Prevention. National Center for Health Statistics. National Ambulatory Medical Care Survey: 2012 state and national summary tables. https://www.cdc.gov/nchs/data/ahcd/namcs_summary/2012_namcs_web_tables.pdf. Accessed May 11, 2017.
  2. Bibbins-Domingo K, Grossman DC, Curry SJ, et al; US Preventive Services Task Force. Screening for gynecologic conditions with pelvic examination: US Preventive Services Task Force recommendation statement. JAMA. 2017;317(9):947–953.
  3. American College of Obstetricians and Gynecologists Committee on Gynecologic Practice. Committee Opinion No. 534: Well-woman visit. Obstet Gynecol. 2012;120(2 pt 1):421–424.
  4. Conry JA, Brown H. Well-Woman Task Force: components of the well-woman visit. Obstet Gynecol. 2015;126(4):697–701.
  5. American College of Obstetricians and Gynecologists. The Women’s Preventive Services Initiative (WPSI). https://www.womenspreventivehealth.org. Accessed May 11, 2017.
  6. McNicholas C, Peipert JF. Is it time to abandon the routine pelvic examination in asymptomatic nonpregnant women? JAMA. 2017;317(9):910–911.
Author and Disclosure Information

Dr. Higgins is a 2017 graduate of the ObGyn residency program at MedStar Washington Hospital Center/Georgetown University Hospital, Washington, DC. She is currently a Clinical Instructor and simulation Fellow at NYU Langone Medical Center, New York, New York.

Dr. Iglesia is Director, Section of Female Pelvic Medicine and Reconstructive Surgery, MedStar Washington Hospital Center, Washington, DC, and Professor, Departments of Obstetrics/Gynecology and Urology, Georgetown University School of Medicine, Washington, DC. Dr. Iglesia serves on the OBG Management Board of Editors.

The authors report no financial relationships relevant to this article.


The pelvic examination and insurance coverage

Melanie Witt, RN, MA

Coding and billing for the care provided at a well-woman visit can be uncomplicated if you know the right codes for the right program. The information presented here concerns straightforward preventive care and assumes that the patient also has not presented with a significant problem at the same visit.

First, a patient who is not Medicare-eligible might have insurance coverage for an annual preventive care examination. Normally, this service would be billed using the Current Procedural Terminology (CPT) preventive medicine codes, but some insurers require the use of special codes for an annual gynecologic exam. These special codes are:

  • S0610, Annual gynecological examination, new patient
  • S0612, Annual gynecological examination, established patient
  • S0613, Annual gynecological examination; clinical breast examination without pelvic evaluation.

Notably, Aetna, Cigna, and UnitedHealthcare require these codes to signify that a pelvic examination has been performed (except for code S0613), but many Blue Cross Blue Shield programs, for which these codes were originally created, are now reverting to the CPT preventive medicine codes for all preventive care.

CPT outlines the requirements for use of the preventive medicine codes as: an initial or periodic comprehensive preventive medicine evaluation or reevaluation and management (E/M) service, which includes an age- and gender-appropriate history, examination, counseling/anticipatory guidance/risk factor reduction interventions, and the ordering of laboratory/diagnostic procedures. The codes are divided into new and established patient categories by age range (CPT codes 99381–99387 for new patients and 99391–99397 for established patients).

The Medicare E/M documentation guidelines do not apply to preventive services, and a head-to-toe examination also is not required. CPT recognizes the American College of Obstetricians and Gynecologists (ACOG) as an authoritative body to make recommendations for the expected preventive service for women, and if such a service is provided and documented, the preventive care codes are to be reported. The payers who use the S codes for a gynecologic exam will require that a pelvic examination has been performed, but such an examination would not be required when using the CPT codes or ACOG's guidelines if the physician and patient agreed that such an exam was not warranted every year. The other components of a preventive service applicable to the female patient's age, however, should be documented in order to report the CPT codes for preventive medicine services.

If a pelvic examination is not performed, say because the patient is young and not sexually active, but an examination of other areas is carried out, the diagnosis code would change from Z01.411, Encounter for gynecological examination (general) (routine) with abnormal findings, or Z01.419, Encounter for gynecological examination (general) (routine) without abnormal findings, to a general health exam: Z00.00, Encounter for general adult medical examination without abnormal findings, or Z00.01, Encounter for general adult medical examination with abnormal findings.  
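To make the coding decision tree above easier to follow, here is a minimal sketch in Python that condenses the commercial-payer rules into one function. It is illustrative only: the function name, the simplified payer flag, and the collapsing of the age-banded CPT preventive medicine codes into a placeholder string are assumptions made for the example, not an actual billing system or any payer's policy engine.

def select_codes(payer_uses_s_codes, new_patient, pelvic_exam_performed, abnormal_findings):
    """Return a simplified (procedure, diagnosis) pair for a routine well-woman visit."""
    # Procedure code: some commercial payers require the S codes for the annual
    # gynecologic exam; others accept the CPT preventive medicine codes.
    if payer_uses_s_codes and pelvic_exam_performed:
        procedure = "S0610" if new_patient else "S0612"
    elif payer_uses_s_codes:
        procedure = "S0613"  # clinical breast exam without pelvic evaluation
    else:
        procedure = "CPT preventive medicine code (selected by age range and patient status)"
    # Diagnosis code: gynecologic exam codes when a pelvic exam is performed,
    # general adult exam codes when it is not.
    if pelvic_exam_performed:
        diagnosis = "Z01.411" if abnormal_findings else "Z01.419"
    else:
        diagnosis = "Z00.01" if abnormal_findings else "Z00.00"
    return procedure, diagnosis

# Example: established patient, payer accepts CPT codes, no pelvic exam, no abnormal findings
print(select_codes(payer_uses_s_codes=False, new_patient=False,
                   pelvic_exam_performed=False, abnormal_findings=False))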

What about Medicare?

Medicare requirements are somewhat different. First, Medicare covers only a small portion of the preventive care service; that is, it covers a physical examination of the genital organs and breasts and the collection and conveyance of a Pap specimen to the laboratory every 2 years for a low-risk patient. Second, the codes required to get reimbursed for the examination are:

  • G0101, Cervical or vaginal cancer screening; pelvic and clinical breast examination
  • Q0091, Screening Papanicolaou smear; obtaining, preparing, and conveyance of cervical or vaginal smear to laboratory.

It is not necessary to perform both of these services every 2 years (for instance, the patient may not need a Pap smear every 2 years based on her age and history), but the benefit is available if the service is performed. If the woman is at high risk for developing cervical or vaginal cancer, Medicare will cover this portion of the encounter every year so long as the Medicare-defined criteria for high risk have been documented at the time of the exam.
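The Medicare frequency rule can be sketched the same way. The example below is a simplified illustration of the 2-year (low-risk) versus annual (documented high-risk) benefit described above; the 365/730-day arithmetic and field names are assumptions, and actual frequency edits are applied by the Medicare contractor, not by office logic like this.

from datetime import date

def screening_benefit_available(last_covered_exam, today, high_risk_documented):
    """Rough check of whether G0101/Q0091 would fall within the covered interval."""
    # Annual benefit when high-risk criteria are documented, otherwise every 2 years
    interval_days = 365 if high_risk_documented else 730
    return (today - last_covered_exam).days >= interval_days

# Low-risk patient, last covered exam 18 months ago: benefit not yet available again
print(screening_benefit_available(date(2016, 1, 15), date(2017, 7, 15), high_risk_documented=False))  # False
# High-risk patient (criteria documented), last covered exam 13 months ago: benefit available
print(screening_benefit_available(date(2016, 6, 1), date(2017, 7, 1), high_risk_documented=True))     # True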

Related article:
GYN coding changes to note for your maximized reimbursement


Ms. Witt is an independent coding and documentation consultant and former program manager, department of coding and nomenclature, American Congress of Obstetricians and Gynecologists.


The author reports no financial relationships relevant to this article.


Interpreting the new USPSTF statement

We understand the USPSTF statement to mean that pelvic exams should not be abandoned, but rather should be individualized to each patient for her specific visit. We agree that for visits focused on counseling and routine screening in asymptomatic, nonpregnant women, pelvic exams likely will not increase the early detection and treatment of disease, and more benefit likely would be derived by performing and discussing evidence-based and age-appropriate health services. A classic example would be initiation or maintenance of oral contraception in an 18-year-old patient, for whom an exam could cause unnecessary trauma, pain, or psychological distress, leading to future avoidance of or barriers to seeking health care. For long-acting reversible contraception, however, a pelvic exam clearly is necessary when the chosen method is an intrauterine device.


Related article:
Women’s Preventive Services Initiative Guidelines provide consensus for practicing ObGyns

Indications for pelvic examination

Remember that the pelvic examination has 3 distinct parts (and that not all parts need to be routinely conducted)3:

  • general inspection of the external genitalia and vulva
  • speculum examination and evaluation of the vagina and cervix
  • bimanual examination with possible rectovaginal examination in age-appropriate or symptomatic women.

According to the Well-Woman Task Force of the American College of Obstetricians and Gynecologists (ACOG), “For women 21 years and older, external exam may be performed annually and that inclusion of speculum examination, bimanual examination, or both in otherwise healthy women should be a shared, informed decision between patient and provider.”4

Indications for performing certain parts of the pelvic exam include4:

  • routine screening for cervical cancer (Pap test)
  • routine screening for gonorrhea, chlamydia infection, and other sexually transmitted infections
  • evaluation of abnormal vaginal discharge
  • evaluation of abnormal bleeding, pelvic pain, and pelvic floor disorders, such as prolapse, urinary incontinence, and accidental bowel leakage
  • evaluation of menopausal symptoms, such as dryness, dyspareunia, and the genitourinary syndrome of menopause
  • evaluation of women at increased risk for gynecologic malignancy, such as women with known hereditary breast–ovarian cancer syndromes.

In 2016, ACOG launched the Women’s Preventive Services Initiative (WPSI) in conjunction with the Health Resources and Services Administration (HRSA) of the US Department of Health and Human Services. In this 5-year collaboration, the agencies are endeavoring to review and update the recommendations for women’s preventive health care services, including well-woman visits, human papillomavirus testing, and contraception, among many others.5 Once the HRSA adopts these recommendations, women will be able to access comprehensive preventive health services without incurring any out-of-pocket expenses.

The pediatric and adolescent gynecologist perspective

Roshanak Mansouri Zinn, MD, and Rebekah L. Williams, MD, MS

No literature addresses the utility of screening pelvic examination in the pediatric and adolescent population. According to the American College of Obstetricians and Gynecologists Committee on Adolescent Health Care opinion on the initial reproductive health visit for screening and preventive reproductive health care (reaffirmed in 2016), a screening internal exam is not necessary, but an external genital exam may be indicated and may vary depending on the patient's concerns and prior clinical encounters.1 The American Academy of Pediatrics promotes annual screening external genital examination for all female patients as part of routine primary care, with internal examinations only as indicated.2

Age-appropriate pelvic examination for girls and nonsexually active adolescents usually is limited to an external genital exam to evaluate the anatomy and note the sexual maturity rating (Tanner stage), an important indicator of normal pubertal development. As in adults, the potential benefits of screening examination in this population include detection of benign gynecologic conditions (including vulvar skin conditions and abnormalities of hymenal or vaginal development). Additionally, early reproductive health visits are an important time for clinicians to build rapport with younger patients and to provide anticipatory education on menstruation, hygiene, and anatomy. These visits can destigmatize and demystify the pelvic examination and help young women seek care more appropriately and more comfortably if problems do arise.

Even when a pelvic exam is indicated, a patient's young age can give providers pause as to what type of exam to perform. Patients with vulvovaginal symptoms, abnormal vaginal bleeding, vaginal discharge, or pelvic or abdominal pain should receive complete evaluation with external genital examination. If external vaginal examination does not allow for complete assessment of the problem, the patient and provider can assess the likelihood of her tolerating an internal exam in the clinic versus undergoing vaginoscopy under sedation. Limited laboratory evaluation and transabdominal pelvic ultrasonography may provide sufficient information for appropriate clinical decision making and management without internal examination. If symptoms persist or do not respond to first-line treatment, an internal exam should be performed.

Patients of any age may experience anxiety or physical discomfort or may even delay or avoid seeking care because of fear of a pelvic exam. However, providers of reproductive health care for children and adolescents can offer early education, reassurance, and a more comfortable experience when pelvic examination is necessary in this population.

References

  1. American College of Obstetricians and Gynecologists Committee on Adolescent Health Care. Committee Opinion No. 598: Committee on Adolescent Health Care: the initial reproductive health visit. Obstet Gynecol. 2014;123(5):1143-1147.
  2. Braverman PK, Breech L; Committee on Adolescence. American Academy of Pediatrics. Clinical report: gynecologic examination for adolescents in the pediatric office setting. Pediatrics. 2010;126(3):583-590.

 


Dr. Mansouri Zinn is Assistant Professor, Department of Women's Health, University of Texas at Austin.


Dr. Williams is Assistant Professor, Clinical Pediatrics, Section of Adolescent Medicine, Indiana University School of Medicine, Indianapolis.

Developed in collaboration with the North American Society for Pediatric and Adolescent Gynecology


The authors report no financial relationships relevant to this article.

How will the USPSTF statement affect practice?

In an editorial in the Journal of the American Medical Association commenting on the USPSTF statement, McNicholas and Peipert stated, “Based on the recommendation from the task force, clinicians may ask whether the pelvic examination should be abandoned. The answer is not found in this recommendation statement, but instead in a renewed commitment to shared decision making.”6 We wholeheartedly agree with this statement. The health care provider and the patient should make the decision, taking into consideration the patient’s risk factors for gynecologic cancers and other conditions, her personal preferences, and her overall values.

This new USPSTF recommendation statement will not change how we currently practice, and the statement’s grade I rating should not impact insurance coverage for pelvic exams. Additionally, further research is needed to better elucidate the role of the pelvic exam at well-woman visits, with hopes of obtaining more precise guidelines from the USPSTF and ACOG.

 

Share your thoughts! Send your Letter to the Editor to [email protected]. Please include your name and the city and state in which you practice.

References
  1. Centers for Disease Control and Prevention. National Center for Health Statistics. National Ambulatory Medical Care Survey: 2012 state and national summary tables. https://www.cdc.gov/nchs/data/ahcd/namcs_summary/2012_namcs_web_tables.pdf. Accessed May 11, 2017.
  2. Bibbins-Domingo K, Grossman DC, Curry SJ, et al; US Preventive Services Task Force. Screening for gynecologic conditions with pelvic examination: US Preventive Services Task Force recommendation statement. JAMA. 2017;317(9):947–953.
  3. American College of Obstetricians and Gynecologists Committee on Gynecologic Practice. Committee Opinion No. 534: Well-woman visit. Obstet Gynecol. 2012;120(2 pt 1):421–424.
  4. Conry JA, Brown H. Well-Woman Task Force: components of the well-woman visit. Obstet Gynecol. 2015;126(4):697–701.
  5. American College of Obstetricians and Gynecologists. The Women’s Preventive Services Initiative (WPSI). https://www.womenspreventivehealth.org. Accessed May 11, 2017.
  6. McNicholas C, Peipert JF. Is it time to abandon the routine pelvic examination in asymptomatic nonpregnant women? JAMA. 2017;317(9):910–911.

Antimicrobial Stewardship Programs: Effects on Clinical and Economic Outcomes and Future Directions


From the Ernest Mario School of Pharmacy, Rutgers, The State University of New Jersey, Piscataway, NJ.

 

Abstract

  • Objective: To review the evidence evaluating inpatient antimicrobial stewardship programs (ASPs) with a focus on clinical and economic outcomes.
  • Methods: PubMed/MEDLINE and the Cochrane Database of Systematic Reviews were used to identify systematic reviews, meta-analyses, randomized controlled trials, and other relevant literature evaluating the clinical and economic impact of ASP interventions.
  • Results: A total of 5 meta-analyses, 3 systematic reviews, and 10 clinical studies (2 randomized controlled, 5 observational, and 3 quasi-experimental studies) were identified for analysis. ASPs were associated with a reduction in antimicrobial consumption and use. However, due to the heterogeneity of outcomes measured among studies, the effectiveness of ASPs varied with the measures used. There are data supporting the cost savings associated with ASPs, but these studies are more sparse. Most of the available evidence supporting ASPs is of low quality, and intervention strategies vary widely among available studies.
  • Conclusion: Much of the evidence reviewed supports the assertion that ASPs result in a more judicious use of antimicrobials and lead to better patient care in the inpatient setting. While clinical outcomes vary between programs, there are ubiquitous positive benefits associated with ASPs in terms of antimicrobial consumption, C. difficile infection rates, and resistance, with few adverse effects. To date, economic outcomes have been difficult to uniformly quantify, but there are data supporting the economic benefits of ASPs. As the number of ASPs continues to grow, it is imperative that standardized metrics are considered in order to accurately measure the benefits of these essential programs.

Key words: Antimicrobial stewardship; antimicrobial consumption; resistance.

 

Antimicrobial resistance is a public health concern that has been escalating over the years and is now identified as a global crisis [1–3]. This is partly due to the widespread use of the same antibiotics that have existed for decades, combined with a lack of sufficient novel antibiotic discovery and development [4]. Bacteria that are resistant to our last-line-of-defense medications have recently emerged, and these resistant organisms may spread to treatment-naive patients [5]. Multidrug-resistant organisms are often found, treated, and likely originate within the hospital practice setting, where antimicrobials can be prescribed by any licensed provider [6]. Upwards of 50% of antibiotics administered are unnecessary and contribute to the problem of increasing resistance [7]. The seriousness of this situation is increasingly apparent; in 2014 the World Health Organization (WHO), President Obama, and Prime Minister Cameron issued statements urging solutions to the resistance crisis [8].

While the urgency of the situation is recognized today, efforts aimed at a more judicious use of antibiotics to curb resistance began as early as the 1960s and led to the first antimicrobial stewardship programs (ASPs) [9–11]. ASPs have since been defined as “coordinated interventions designed to improve and measure the appropriate use of antimicrobial agents by promoting the selection of the optimal antimicrobial drug regimen including dosing, duration of therapy, and route of administration” [1]. The primary objectives of these types of programs are to avoid or reduce adverse events (eg, Clostridium difficile infection) and resistance driven by a shift in minimum inhibitory concentrations (MICs) and to reverse the unnecessary economic burden caused by the inappropriate prescribing of these agents [1].

This article examines the evidence evaluating the reported effectiveness of inpatient ASPs, examining both clinical and economic outcomes. In addition, we touch on ASP history, current status, and future directions in light of current trends. While ASPs are expanding into the outpatient and nursing home settings, we will limit our review here to the inpatient setting.

 

 

Historical Background

Modern antibiotics date back to the late 1930s when penicillin and sulfonamides were introduced to the medical market, and resistance to these drug classes was reported just a few years after their introduction. The same bacterial resistance mechanisms that neutralized their efficacy then exist today, and these mechanisms continue to confer resistance among those classes [5].

While “stewardship” was not described as such until the late 1990s [12], institutions have historically been proactive in creating standards around antimicrobial utilization to encourage judicious use of these agents. The earliest tracking of antibiotic use took the form of paper charts kept as “antibiotic logs” [9] and “punch cards” [10] in the 1960s. The idea of a team approach to stewardship dates back to the 1970s, with the example of Hartford Hospital in Hartford, Connecticut, which employed an antimicrobial standards model run by an infectious disease (ID) physician and clinical pharmacists [11]. In 1977, the Infectious Diseases Society of America (IDSA) released a statement that clinical pharmacists may have a substantial impact on patient care, including in ID, contributing to the idea that a team of physicians collaborating with pharmacists presents the best way to combat inappropriate medication use. Pharmacist involvement has since been shown to curb the use of overutilized broad-spectrum antimicrobial agents and to significantly reduce the rate of C. difficile infection [13].

In 1997 the IDSA and the Society for Healthcare Epidemiology of America (SHEA) published guidelines to assist in the prevention of the growing issue of resistance, mentioning the importance of antimicrobial stewardship [14]. A decade later they released joint guidelines for ASP implementation [15], and the Pediatric Infectious Diseases Society (PIDS) joined them in 2012 to publish a joint statement acknowledging and endorsing stewardship [16]. In 2014, the Centers for Disease Control and Prevention (CDC) recommended that every hospital should have an ASP. As of 1 January 2017, the Joint Commission requires an ASP as a standard for accreditation at hospitals, critical access hospitals, and nursing care centers [17]. Guidelines for implementation of an ASP are currently available through the IDSA and SHEA [1,16].

ASP Interventions

There are 2 main strategies that ASPs have to combat inappropriate antimicrobial use, and each has its own set of systematic interventions. These strategies are referred to as “prospective audit with intervention and feedback” and “prior authorization” [6]. Although most ASPs will incorporate these main strategies, each institution typically creates its own strategies and regulations independently.

Prospective audit with intervention and feedback describes the process of providing recommendations after reviewing utilization and trends of antimicrobial use. This is sometimes referred to as the “back-end” intervention, in which decisions are made after antibiotics have been administered. Interventions that are commonly used under this strategy include discontinuation of antibiotics due to culture data, de-escalation to drugs with narrower spectra, IV to oral conversions, and cessation of surgical prophylaxis [6].

Prior authorization, also referred to as a “front-end” intervention, is the process of approving medications before they are used. Interventions include a restricted formulary for antimicrobials that can be managed through a paging system or a built-in computer restriction program, as well as other guidelines and protocols for dosing and duration of therapy. Restrictions typically focus on broad spectrum antibiotics as well as the more costly drugs on formularies. These solutions reduce the need for manual intervention as technology makes it possible to create automated restriction-based services that prevent inappropriate prescribing [6].
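To make the distinction between the two strategies concrete, the short sketch below pairs a front-end prior-authorization check with a few back-end audit-and-feedback flags. The restricted-agent list, thresholds, and record fields are illustrative assumptions for the example, not an actual ASP policy or any institution's rules.

RESTRICTED_AGENTS = {"meropenem", "linezolid", "daptomycin"}  # hypothetical restricted formulary

def requires_prior_authorization(drug):
    """Front-end ('prior authorization') check applied before dispensing."""
    return drug.lower() in RESTRICTED_AGENTS

def audit_flags(order):
    """Back-end ('prospective audit with intervention and feedback') review of an active order."""
    flags = []
    if order.get("cultures_negative") and order["days_of_therapy"] >= 3:
        flags.append("consider discontinuation: cultures negative at 72 hours")
    if order.get("susceptible_to_narrower_agent"):
        flags.append("consider de-escalation to a narrower-spectrum agent")
    if order.get("route") == "IV" and order.get("tolerating_oral"):
        flags.append("consider IV to oral conversion")
    if order.get("indication") == "surgical prophylaxis" and order["days_of_therapy"] >= 2:
        flags.append("consider stopping surgical prophylaxis")
    return flags

order = {"drug": "meropenem", "route": "IV", "days_of_therapy": 4, "cultures_negative": True,
         "tolerating_oral": True, "susceptible_to_narrower_agent": False, "indication": "empiric"}
print(requires_prior_authorization(order["drug"]))  # True
print(audit_flags(order))  # discontinuation and IV-to-oral suggestions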

Aside from these main techniques, other strategies are taken to achieve the goal of attaining optimal clinical outcomes while limiting further antimicrobial resistance and adverse effects. Different clinical settings have different needs, and ASPs are customized to each setting’s resources, prescribing habits, and other local specificities [1]. These differences present difficulty with interpreting diverse datasets, but certain themes arise in the literature: commonly assessed clinical outcomes of inpatient ASPs include hospital length of stay (LOS) and readmission, reinfection, mortality, and resistance rates. These outcomes are putatively driven by the more prudent use of antimicrobials, particularly by decreased rates of antimicrobial consumption.

ASP Team Members

While ASPs may differ between institutions, the staff members involved are typically the same, and leadership is always an important aspect of a program. The CDC recommends that ASP leadership consist of a program leader (an ID physician) and a pharmacy leader, who co-lead the team [18]. In addition, the Joint Commission recommends that the multidisciplinary team include an infection preventionist (ie, an infection control specialist or hospital epidemiologist) and a practitioner [17]; these specialists have a role in prevention, awareness, and policy [19]. The integration of infection control with stewardship yields the best results [15], as infection control aims to prevent antibiotic use altogether, while stewardship increases the quality of antibiotic regimens that are being prescribed [20].

It is also beneficial to incorporate a microbiologist as an integral part of the team, responsible for performing and interpreting laboratory data (ie, cultures). Nurses should be integrated into ASPs due to the overlap of their routine activities with ASP interventions [21]; other clinicians (regardless of their infectious disease clinical background), quality control, information technology, and environmental services should all collaborate in the hospital-wide systems related to the program where appropriate [18].

Evidence Review

To assess the effectiveness of inpatient ASPs, we performed a literature search using PubMed, the Cochrane Database of Systematic Reviews, and MEDLINE/OvidSP up to 1 September 2016. The search terms used are listed in the Table. Included in this review were studies evaluating clinical or economic outcomes related to inpatient ASPs; excluded were editorials, opinion pieces, articles not containing original clinical or economic ASP outcome data, ASPs not performed in the inpatient setting, and studies that were included in identified systematic reviews or meta-analyses. Also excluded from this review were studies that reviewed ASPs performed in niche settings or for applications in which ASPs were not yet prevalent, as assessed by the authors. The search initially yielded 182 articles. After removing duplicates and excluded articles, 18 articles were identified for review: 8 meta-analyses and systematic reviews and 10 additional clinical studies (2 randomized controlled, 5 observational, and 3 quasi-experimental studies) evaluating clinical and economic outcomes not contained in the identified aggregated studies. Systematic reviews, meta-analyses, and other studies were screened to identify any other relevant literature not captured in the original search. The articles included in this review are summarized in 2 Tables, which may be accessed at www.turner-white.com/pdf/jcom_jul17_antimicrobial_appendix.pdf.

 

 

Results

Antimicrobial Usage

The most widely studied aspect of ASPs in the current review was the effect of ASP interventions on antimicrobial consumption and use. Three systematic reviews [22–24] showed improved antibiotic prescribing practices and reduced consumption rates overall, as did several studies inside and outside the intensive care unit (ICU) [25–31]. One study found a nonsignificant declining usage trend [32]. An important underlying facet of this observation is that even as total antibiotic consumption decreases, certain antibiotic and antibiotic class consumption may increase. This is evident in several studies, which showed that as aminoglycoside, carbapenem, and β-lactam-β-lactamase inhibitor use increased, clindamycin (1 case), glycopeptide, fluoroquinolone, and macrolide use decreased [27,28,30]. A potential confounding factor relating to decreased glycopeptide use in Bevilacqua et al [30] was that there was an epidemic of glycopeptide-resistant enterococci during the study period, potentially causing prescribers to naturally avoid it. In any case, since the aim of ASPs is to encourage a more judicious usage of antimicrobials, the observed decreases in consumption of those restricted medications are intuitive. These observations about antimicrobial consumption related to ASPs are relevant because they putatively drive improvements in clinical outcomes, especially those related to reduced adverse events associated with these agents, such as the risk of C. difficile infection with certain drugs (eg, fluoroquinolones, clindamycin, and broad-spectrum antibiotics) and prolonged antibiotic usage [33–35]. There is evidence that these benefits are not limited to antibiotics but extend to antifungal agents and possibly antivirals [22,27,36].
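As background for the consumption figures cited throughout this section, inpatient antimicrobial use is commonly normalized to occupied bed-days; one widely used expression is the World Health Organization's defined daily dose (DDD) per 1000 patient-days. The individual studies above report a variety of metrics, so this formula is offered as orientation rather than as a description of any one study's methods:

\text{DDDs per 1000 patient-days} = \frac{\text{total grams of agent administered}}{\text{WHO-assigned DDD, in grams}} \times \frac{1000}{\text{total patient-days}}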

Utilization, Mortality, and Infection Rates

ASPs typically intend to improve patient-focused clinical parameters such as hospital LOS, hospital readmissions, mortality, and incidence of infections acquired secondary to antibiotic usage during a hospital stay, especially C. difficile infection. Most of the reviewed evidence indicates that there has been no significant LOS benefit due to stewardship interventions [24–26,32,37], and one meta-analysis noted that when overall hospital LOS was significantly reduced, ICU-specific LOS was not [22]. Generally, there was also not a significant change in hospital readmission rates [24,26,32]. However, 2 retrospective observational studies found mixed results for both LOS and readmission rates relative to ASP interventions; while both noted a significantly reduced LOS, one study [38] showed an all-cause readmission benefit in a fairly healthy patient population (but no benefit for readmissions due to the specific infections of interest), and the other [29] showed a benefit for readmissions due to infections but an increased rate of readmissions in the intervention group overall. In this latter study, hospitalizations within the previous 3 months were significantly higher at baseline for the intervention group (55% vs. 46%, P = 0.042), suggesting sicker patients and possibly providing an explanation for this unique observation. Even so, a meta-analysis of 5 studies found a significantly elevated risk of readmission associated with ASP interventions (RR 1.26, 95% CI 1.02–1.57; P = 0.03); the authors noted that non–infection-related readmissions accounted for 61% of readmissions, but this was not significantly different between intervention and non-intervention arms [37].

With regard to mortality, most studies found no significant reductions related to stewardship interventions [22,24,26,29,32]. In a prospective randomized controlled trial, all reported deaths (7/160, 4.4%) were in the ASP intervention arm, but these were attributed to the severity of infection or an underlying, chronic disease [25]. One meta-analysis, however, found that there were significant mortality reductions related to stewardship guidelines for empirical antibiotic treatment (OR 0.65, 95% CI 0.54–0.80, P < 0.001; I² = 65%) and to de-escalation of therapy based on culture results (RR 0.44, 95% CI 0.30–0.66, P < 0.001; I² = 59%), based on 40 and 25 studies, respectively [39]; but both results exhibited substantial heterogeneity (defined as I² = 50%–90% [40]) among the relevant studies. Another meta-analysis found that there was no significant change in mortality related to stewardship interventions intending to improve antibiotic appropriateness (RR 0.92, 95% CI 0.69–1.2, P = 0.56; I² = 72%) or intending to reduce excessive prescribing (RR 0.92, 95% CI 0.81–1.06, P = 0.25; I² = 0%), but that there was a significant mortality benefit associated with interventions aimed at increasing guideline compliance for pneumonia diagnoses (RR 0.89, 95% CI 0.82–0.97, P = 0.005; I² = 0%) [37]. In the case of Schuts et al [39], search criteria specifically sought studies that assessed clinical outcomes (eg, mortality), whereas the search of Davey et al [37] focused on studies whose aim was to improve antibiotic prescribing, with a main comparison being between restrictive and persuasive interventions; while the difference may seem subtle, the body of data compiled from these searches may characterize the ASP effect on mortality differently. No significant evidence was found to suggest that reduced antimicrobial consumption increases mortality.
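For readers less familiar with the heterogeneity statistic quoted above, I² is derived from Cochran's Q, the weighted sum of squared deviations of the k individual study estimates from the pooled estimate:

Q = \sum_{i=1}^{k} w_i\,(\hat{\theta}_i - \hat{\theta})^2, \qquad
I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%

where \hat{\theta}_i is each study's effect estimate (eg, a log relative risk), w_i its inverse-variance weight, and \hat{\theta} the pooled estimate; values of roughly 50% to 90% are conventionally read as substantial heterogeneity, which is why the pooled mortality benefits above are interpreted cautiously.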

Improving the use of antimicrobial agents should limit collateral damage associated with their use (eg, damage to normal flora and increased resistance), and ideally infections should be better managed. As previously mentioned, one of the concerns with antibiotic usage (particularly fluoroquinolones, macrolides, and broad-spectrum agents) is that collateral damage could lead to increased rates of C. difficile infection. One meta-analysis showed no significant reduction in the rate of C. difficile infection (as well as overall infection rate) relative to ASPs [22]; however, this finding was based on only 3 of the 26 studies analyzed, and only 1 of those 3 studies utilized restrictions for fluoroquinolones and cephalosporins. An interrupted time series (ITS) study similarly found no significant reduction in C. difficile infection rate [32]; however, this study was conducted in a hospital with low baseline antibiotic prescribing (it was ranked second-to-last in terms of antibiotic usage among its peer institutions), inherently limiting the risk of C. difficile infection among patients in the pre-ASP setting. In contrast to these findings, a meta-analysis specifically designed to assess the incidence of C. difficile infection relative to stewardship programs found a significantly reduced risk of infection based on 16 studies (RR 0.48, 95% CI 0.38–0.62, P < 0.001; I² = 76%) [41], and the systematic review conducted by Filice et al [24] found a significant benefit with regard to the C. difficile infection rate in 4 of 6 studies. These results are consistent with those presented as evidence for the impact of stewardship on C. difficile infection by the CDC [42]. Aside from C. difficile infection, one retrospective observational study found that the 14-day reinfection rate (ie, reinfection with the same infection at the same anatomical location) was significantly reduced following stewardship intervention (0% vs. 10%, P = 0.009) [29]. This finding, combined with the C. difficile infection examples, provides evidence for better infection management under ASPs.

While the general trend seems to suggest mixed or no significant benefit for several clinical outcomes, it is important to note that variation in outcomes could be due to differences in the types of ASP interventions and intervention study periods across differing programs. Davey et al [37] found variation in prescribing outcomes based on whether restrictive (ie, restrict prescriber freedom with antimicrobials) or persuasive (ie, suggest changes to prescriber) interventions were used, and on the timeframe in which they were used. At one month into an ASP, restrictive interventions resulted in better prescribing practices relative to persuasive interventions based on 27 studies (effect size 32.0%, 95% CI 2.5%–61.4%), but by 6 months the 2 were not statistically different (effect size 10.1%, 95% CI –47.5% to 66.0%). At 12 and 24 months, persuasive interventions demonstrated greater effects on prescribing outcomes, but these were not significant. These findings provide evidence that different study timeframes can impact ASP practices differently (and these already vary widely in the literature). Considering the variety of ASP interventions employed across the different studies, these factors almost certainly impact the reported antimicrobial consumption rates and outcomes to different degrees as a consequence. A high degree of heterogeneity among an analyzed dataset could itself be the reason for net non-significance within single systematic reviews and meta-analyses.

Resistance

Another goal of ASPs is the prevention of antimicrobial resistance, an area where the evidence generally suggests benefit associated with ASP interventions. Rates of resistance among common troublesome organisms, such as methicillin-resistant S. aureus (MRSA), imipenem-resistant P. aeruginosa, and extended-spectrum β-lactamase (ESBL)–producing Klebsiella spp, were significantly reduced in a meta-analysis; ESBL-producing E. coli infections were not, however [22]. An ITS study found significantly reduced MRSA resistance, as well as reduced pseudomonal resistance to imipenem-cilastatin and levofloxacin (all P < 0.001), but no significant changes with respect to piperacillin/tazobactam, cefepime, or amikacin resistance [32]. This study also noted increased E. coli resistance to levofloxacin and ceftriaxone (both P < 0.001). No significant changes in resistance were noted for vancomycin-resistant enterococci. It may be a reasonable expectation that decreasing inappropriate antimicrobial use may decrease long-term antimicrobial resistance; but as most studies only span a few years, only the minute changes in resistance are understood [23]. Longer-duration studies are needed to better understand resistance outcomes.

Of note is a phenomenon known as the “squeezing the balloon” effect. This can be associated with ASPs, potentially resulting in paradoxically increased resistance [43]. That is, when usage restrictions are placed on certain antibiotics, the use of other non-restricted antibiotics may increase, possibly leading to increased resistance of those non-restricted antibiotics [22] (“constraining one end [of a balloon] causes the other end to bulge … limiting the use of one class of compounds may be counteracted by corresponding changes in prescribing and drug resistance that are even more ominous” [43]). Karanika et al [22] took this phenomenon into consideration, and assessed restricted and non-restricted antimicrobial consumption separately. They found a reduction in consumption for both restricted and non-restricted antibiotics, which included “high potential resistance” antibiotics, specifically carbapenems and glycopeptides. In the study conducted by Cairns et al [28], a similar effect was observed; while the use of other classes of antibiotics decreased (eg, cephalosporins and aminoglycosides), the use of β–lactam–β–lactamase inhibitor combinations actually increased by 48% (change in use: +48.2% [95% CI 21.8%–47.9%]). Hohn et al [26] noted an increased usage rate of carbapenems, even though several other classes of antibiotics had reduced usage. Unfortunately, neither study reported resistance rates, so the impact of these findings is unknown. Finally, Jenkins et al [32] assessed trends in antimicrobial use as changes in rates of consumption. Among the various antibiotics assessed in this study, the rate of fluoroquinolone use decreased both before and after the intervention period, although the rate of decreased usage slowed post-ASP (the change in rate post-ASP was +2.2% [95% CI 1.4%–3.1%], P < 0.001). They observed a small (but significant) increase in resistance of E. coli to levofloxacin pre- vs. post-intervention (11.0% vs. 13.9%, P < 0.001); in contrast, a significant decrease in resistance of P. aeruginosa was observed (30.5% vs. 21.4%, P < 0.001). While these examples help illustrate the concept of changes in antibiotic usage patterns associated with an ASP, at best they approximate the “squeezing the balloon” effect since these studies present data for antibiotics that were either restricted or for which restriction was not clearly specified. The “squeezing the balloon” effect is most relevant for the unintended, potentially increased usage of non-restricted drugs secondary to ASP restrictions. Higher resistance rates among certain drug classes observed in the context of this effect would constitute a drawback to an ASP program.

Adverse Effects

Reduced toxicities and adverse effects are expected with reduced usage of antimicrobials. The systematic review conducted by Filice et al [24] examined the incidence of adverse effects related to antibiotic usage, and their findings suggest, at the least, that stewardship programs generally do not cause harm, as only 2 of the studies they examined reported adverse events. Following stewardship interventions, 5.5% of the patients deteriorated; and of those, the large majority (75%) deteriorated due to progression of oncological malignancies. To further illustrate the effect of stewardship interventions on toxicities and side effects of antimicrobials, Schuts et al demonstrated that the risk of nephrotoxicity while on antimicrobial therapy was reduced based on 14 studies of moderate heterogeneity as a result of an ASP (OR 0.46, 95% CI 0.28–0.77, P = 0.003; I² = 34%) [39,44]. It is intuitive that reduced drug exposure results in reduced adverse effects; as such, these results are expected.

Economic Outcomes

Although the primary focus of ASPs is to improve clinical outcomes, economic outcomes are an important component of these programs, which bring associated economic value that should be highlighted and further detailed [22,45,46]. Because clinical outcomes are usually the main objective, most available studies have been clinical effect studies (rather than economic analyses), in which economic assessments are a secondary consideration, if included at all.

As a result, cost evaluations tend to focus on direct cost reductions, whereas indirect cost reductions are often not critically evaluated. Where ASPs are effective at decreasing antimicrobial consumption, they reduce hospital expenditures by limiting hospital-acquired infections and the associated medical costs [22,45], and by reducing antibiotic misuse, iatrogenic infections, and the rates of antibiotic-resistant organisms [47]. In one retrospective observational study, annual costs of antibiotics dropped by 33% with re-implementation of an ASP, mirrored by an overall decrease in antibiotic consumption of about 10%, over the course of the intervention study period [30]. Of note is that at 1 year post-ASP re-implementation, antibiotic consumption actually increased (by 5.4%); however, because antibiotic usage had changed to more appropriate and cost-effective therapies, expenditures associated with antibiotics were still reduced by 13% for that year relative to pre-ASP re-implementation. Aside from economic evaluations centered on consumption rates, there is the potential to further evaluate economic benefits associated with stewardship when looking at other outcomes, including hospital LOS [22], as well as indirect costs such as morbidity and mortality, societal, and operational costs [46]. Currently, these detailed analyses are lacking. In conjunction with more standardized clinical metrics, these assessments are needed to better delineate the full cost effectiveness of ASPs.
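A small worked example may clarify why expenditures can keep falling even in a year when total consumption ticks upward, as in the re-implementation study above. The unit costs and volumes below are entirely hypothetical and chosen only to illustrate the mix effect; they are not taken from that study.

# Hypothetical figures only: a shift from a costly broad-spectrum IV agent toward a
# cheaper targeted oral agent lowers spending even though total doses increase.
pre_asp  = {"broad_spectrum_iv": (1000, 50.0), "narrow_oral": (400, 5.0)}   # (doses, cost per dose in dollars)
post_asp = {"broad_spectrum_iv": (600, 50.0),  "narrow_oral": (875, 5.0)}

def totals(usage):
    doses = sum(n for n, _ in usage.values())
    cost = sum(n * c for n, c in usage.values())
    return doses, cost

print(totals(pre_asp))   # (1400, 52000.0)
print(totals(post_asp))  # (1475, 34375.0) -> more doses overall, roughly one-third lower spend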

 

 

Evidence Summary

The evidence for inpatient ASP effectiveness is promising but mixed. Much of the evidence is low-level, based on observational studies that are retrospective in nature, and the systematic reviews and meta-analyses are built on these types of studies. Studies have been conducted over a range of years, and the durations of intervention periods vary widely between studies; it is difficult to capture and account for all of the infection, prescribing, and drug availability patterns (as well as the intervention differences or new drug approvals) throughout these time periods. To complicate the matter, both the quality of the data and the quality of the ASPs are highly variable.

As such, the findings across pooled studies for ASPs are hard to amalgamate and draw concrete conclusions from. This difficulty is due to the inherent heterogeneity when comparing smaller individual studies in systematic reviews and meta-analyses. Currently, there are numerous ways to implement an ASP, but there is not a standardized system of specific interventions or metrics. Until we can directly compare similar ASPs and interventions among various institutions, it will be challenging to generalize positive benefits from systematic reviews and meta-analyses. The CDC is now involved in a new initiative in which data from various hospitals are compiled to create a surveillance database [48]. Although this is a step in the right direction for standardized stewardship metrics, for the current review the lack of standard metrics leads to conflicting results across heterogeneous studies, making it difficult to show clear benefits in clinical outcomes.

Despite the vast array of ASPs, their differences, and a range of clinical measures—many with conflicting evidence—there is a noticeable trend toward a more prudent use of antimicrobials. Based on the review of available evidence, inpatient ASPs improve patient care and preserve an important health care resource—antibiotics. As has been presented, this is demonstrated by the alterations in consumption of these agents; it has ramifications for secondary outcomes such as reduced rates of C. difficile infection, resistance, and adverse effects; and it translates overall into better patient care and reduced costs. But while we can conclude that the direct interventions of stewardship in reducing and restricting antibiotic use have been effective, we cannot clearly state the overall magnitude of benefit, the effect of various ASP structures and components on clinical outcomes (such as LOS, mortality, etc.), or the cost savings, because of the heterogeneity of the available evidence.

Future Directions

Moving forward, the future of ASPs encompasses several potential developments. First and foremost, as technology continues to advance, there is a need to integrate and utilize developments in information technology (IT). Baysari et al conducted a review on the value of utilizing IT interventions, focusing mainly on decision support (stand-alone or as a component of other hospital procedures), approval, and surveillance systems [49]. There was benefit associated with these IT interventions in terms of the improvement in the appropriate use of antimicrobials (RR 1.49, 95% CI 1.07–2.08, P < 0.05; I² = 93%), but there was no demonstrated benefit in terms of patient mortality or hospital LOS. Aside from this study, broad evidence is still lacking to support the use of IT systems in ASPs because meaningful comparisons amongst the interventions have not been made due to widespread variability in study design and outcome measures. However, it is generally agreed that ASPs must integrate with IT systems as the widespread use of technology within the healthcare field continues to grow. Evidence needs to be provided in the form of higher-quality studies centered on similar outcomes to show appropriate approaches for ASPs to leverage IT systems. At a minimum, the integration of IT into ASPs should not hinder clinical outcomes. An important consideration is the variation in practice settings where antibiotic stewardship is to be implemented; eg, a small community hospital will be less equipped to incorporate and support technological tools compared to a large tertiary teaching hospital. Therefore, any antibiotic stewardship IT intervention must be customized to meet local needs, accommodate prescriber behaviors, minimize barriers to implementation, and utilize available resources.
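As one concrete (and purely hypothetical) illustration of the kind of rule a surveillance or decision-support system can automate, the sketch below builds a pharmacist worklist of broad-spectrum orders that have run well past culture results without a documented reassessment. The 72-hour threshold and record fields are assumptions for the example, not features of any particular product.

def needs_asp_review(order):
    """Flag broad-spectrum orders still running 72 hours after cultures resulted, without reassessment."""
    return (order["broad_spectrum"]
            and order["hours_since_culture_result"] >= 72
            and not order["regimen_reassessed"])

orders = [
    {"patient": "A", "broad_spectrum": True,  "hours_since_culture_result": 80, "regimen_reassessed": False},
    {"patient": "B", "broad_spectrum": True,  "hours_since_culture_result": 30, "regimen_reassessed": False},
    {"patient": "C", "broad_spectrum": False, "hours_since_culture_result": 96, "regimen_reassessed": False},
]
worklist = [o["patient"] for o in orders if needs_asp_review(o)]
print(worklist)  # ['A']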

Another area of focus for future ASPs is the use of rapid diagnostics. Currently, when patients present with signs and symptoms of an infection, an empiric antimicrobial regimen is started that is then de-escalated as necessary; rapid testing will help to initiate appropriate therapy more quickly and increase antimicrobial effectiveness. Rapid tests range from rapid polymerase chain reaction (PCR)-based screening [50], to Verigene gram-positive blood culture (BC-GP) tests [51], next-generation sequencing methods, and matrix assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) [52]. Rapid diagnostic tools should be viewed as aides to assist ASPs in decreasing antibiotic consumption and improving patient outcomes; these various tools have been shown to improve clinical outcomes when integrated into ASPs, but offer little value addressing the goals of ASPs when used outside of stewardship programs and their sensitive timeframes [53].

In terms of future ASP expansion, stewardship implementation can become more unified and broad in scope. ASPs should expand to include antifungal interventions, an area that is showing progress [36]. ASPs can also be implemented in new areas throughout the hospital (eg, pediatrics and the emergency room), as well as areas outside of the hospital setting, including long-term care facilities, dialysis centers, and other institutions [54–56]. A prospective randomized controlled study was conducted in 30 nursing homes to evaluate the use of a novel resident antimicrobial management plan (RAMP) for improved use of antimicrobials [57]. This study found that the RAMP had no associated adverse effects and suggests that ASPs are an important tool in nursing homes. In addition, the general outpatient and pediatric settings show promise for ASPs [56,58,59], but more research is needed to support expansion and to identify how ASP interventions should be applied in these various practice settings. The antimicrobial stewardship interventions that will be utilized will need to be carefully delineated to consider the scale, underlying need, and potential challenges in those settings.

While the future of antibiotic stewardship is unclear, there is certainty that it will continue to develop in both scope and depth to encompass new areas of focus, expand into new settings to improve outcomes, and employ new tools to refine approaches. An important first step for the continued development of ASPs is alignment and standardization, since without alignment it will continue to be difficult to compare outcomes. This issue is currently being addressed by a number of different organizations. With current support from the Joint Commission, the CDC, and the President’s Council of Advisors on Science and Technology (PCAST) [8], regulatory requirements for ASPs are well underway, and these drivers will appropriately position ASPs for further advancements. By reducing variability among ASPs and better delineating their implementation, the economic and clinical benefits associated with specific interventions can be clearly identified.

 

Corresponding author: Luigi Brunetti, PharmD, MPH, Rutgers, The State University of New Jersey, 160 Frelinghuysen Rd., Piscataway, NJ 08854, [email protected].

Financial disclosures: None.

References

1. Barlam TF, Cosgrove SE, Abbo AM, et al. Implementing an antimicrobial stewardship program: guidelines by the Infectious Diseases Society of America and the Society of Healthcare Epidemiology of America. Clin Infect Dis 2016;62:e51–77.

2. Hughes D. Selection and evolution of resistance to antimicrobial drugs. IUBMB Life 2014;66:521–9.

3. World Health Organization. The evolving threat of antimicrobial resistance – options for action. Geneva: WHO Press; 2012.

4. Gould IM, Bal AM. New antibiotic agents in the pipeline and how they can help overcome microbial resistance. Virulence 2013;4:185–91.

5. Davies J, Davies D. Origins and evolution of antibiotic resistance. Microbiol Mol Biol Rev 2010;74:417–33.

6. Owens RC Jr. Antimicrobial stewardship: concepts and strategies in the 21st century. Diagn Microbiol Infect Dis 2008;61:110–28.

7. Antibiotic resistance threats in the United States, 2013 [Internet]. Centers for Disease Control and Prevention. Available at www.cdc.gov/drugresistance/pdf/ar-threats-2013-508.pdf.

8. Nathan C, Cars O. Antibiotic resistance – problems, progress, prospects. N Engl J Med 2014;371:1761–3.

9. McGoldrick, M. Antimicrobial stewardship. Home Healthc Nurse 2014;32:559–60.

10. Ruedy J. A method of determining patterns of use of antibacterial drugs. Can Med Assoc J 1966;95:807–12.

11. Briceland LL, Nightingale CH, Quintiliani R, et al. Antibiotic streamlining from combination therapy to monotherapy utilizing an interdisciplinary approach. Arch Intern Med 1988;148:2019–22.

12. McGowan JE Jr, Gerding DN. Does antibiotic restriction prevent resistance? New Horiz 1996;4: 370–6.

13. Cappelletty D, Jacobs D. Evaluating the impact of a pharmacist’s absence from an antimicrobial stewardship team. Am J Health Syst Pharm 2013;70:1065–69.

14. Shlaes DM, Gerding DN, John JF Jr, et al. Society for Healthcare Epidemiology of America and Infectious Diseases Society of America Joint Committee on the prevention of antimicrobial resistance: guidelines for the prevention of antimicrobial resistance in hospitals. Infect Control Hosp Epidemiol 1997;18:275–91.

15. Dellit TH, Owens RC, McGowan JE, et al. Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America guidelines for developing an institutional program to enhance antimicrobial stewardship. Clin Infect Dis 2007;44:159–77.

16. Policy statement on antimicrobial stewardship by the Society for Healthcare Epidemiology of America (SHEA), the Infectious Diseases Society of America (IDSA), and the Pediatric Infectious Diseases Society (PIDS). Infect Control Hosp Epidemiol 2012;33:322–7.

17. The Joint Commission. Approved: New antimicrobial stewardship standard. Joint Commission Perspectives 2016;36:1–8.

18. Pollack LA, Srinivasan A. Core elements of hospital antibiotic stewardship programs from the Centers for Disease Control and Prevention. Clin Infect Dis 2014;59(Suppl 3):S97–100.

19. Moody J. Infection preventionists have a role in accelerating progress toward preventing the emergence and cross-transmission of MDROs. Prevention Strategist 2012 Summer:52–6.

20. Spellberg B, Bartlett JG, Gilbert DN. The future of antibiotics and resistance. N Engl J Med 2013;368:299–302.

21. Olans RN, Olans RD, Demaria A. The critical role of the staff nurse in antimicrobial stewardship – unrecognized, but already there. Clin Infect Dis 2016;62:84–9.

22. Karanika S, Paudel S, Grigoras C, et al. Systematic review and meta-analysis of clinical and economic outcomes from the implementation of hospital-based antimicrobial stewardship programs. Antimicrob Agents Chemother 2016;60:4840–52.

23. Wagner B, Filice GA, Drekonja D, et al. Antimicrobial stewardship programs in inpatient hospital settings: a systematic review. Infect Control Hosp Epidemiol 2014;35:1209–28.

24. Filice G, Drekonja D, Greer N, et al. Antimicrobial stewardship programs in inpatient settings: a systematic review. VA-ESP Project #09-009; 2013.

25. Cairns KA, Doyle JS, Trevillyan JM, et al. The impact of a multidisciplinary antimicrobial stewardship team on the timeliness of antimicrobial therapy in patients with positive blood cultures: a randomized controlled trial. J Antimicrob Chemother 2016;71:3276–83.

26. Hohn A, Heising B, Hertel S, et al. Antibiotic consumption after implementation of a procalcitonin-guided antimicrobial stewardship programme in surgical patients admitted to an intensive care unit: a retrospective before-and-after analysis. Infection 2015;43:405–12.

27. Singh S, Zhang YZ, Chalkley S, et al. A three-point time series study of antibiotic usage on an intensive care unit, following an antibiotic stewardship programme, after an outbreak of multi-resistant Acinetobacter baumannii. Eur J Clin Microbiol Infect Dis 2015;34:1893–900.

28. Cairns KA, Jenney AW, Abbott IJ, et al. Prescribing trends before and after implementation of an antimicrobial stewardship program. Med J Aust 2013;198:262–6.

29. Liew YX, Lee W, Loh JC, et al. Impact of an antimicrobial stewardship programme on patient safety in Singapore General Hospital. Int J Antimicrob Agents 2012;40:55–60.

30. Bevilacqua S, Demoré B, Boschetti E, et al. 15 years of antibiotic stewardship policy in the Nancy Teaching Hospital. Med Mal Infect 2011;41:532–9.

31. Danaher PJ, Milazzo NA, Kerr KJ, et al. The antibiotic support team – a successful educational approach to antibiotic stewardship. Mil Med 2009;174:201–5.

32. Jenkins TC, Knepper BC, Shihadeh K, et al. Long-term outcomes of an antimicrobial stewardship program implemented in a hospital with low baseline antibiotic use. Infect Control Hosp Epidemiol 2015;36:664–72.

33. Brown KA, Khanafer N, Daneman N, Fisman DN. Meta-analysis of antibiotics and the risk of community-associated Clostridium difficile infection. Antimicrob Agents Chemother 2013;57:2326–32.

34. Deshpande A, Pasupuleti V, Thota P, et al. Community-associated Clostridium difficile infection and antibiotics: a meta-analysis. J Antimicrob Chemother 2013;68:1951–61.

35. Slimings C, Riley TV. Antibiotics and hospital-acquired Clostridium difficile infection: update of systematic review and meta-analysis. J Antimicrob Chemother 2014;69:881–91.

36. Antworth A, Collins CD, Kunapuli A, et al. Impact of an antimicrobial stewardship program comprehensive care bundle on management of candidemia. Pharmacotherapy 2013;33:137–43.

37. Davey P, Brown E, Charani E, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev 2013;4:CD003543.

38. Pasquale TR, Trienski TL, Olexia DE, et al. Impact of an antimicrobial stewardship program on patients with acute bacterial skin and skin structure infections. Am J Health Syst Pharm 2014;71:1136–9.

39. Schuts EC, Hulscher ME, Mouton JW, et al. Current evidence on hospital antimicrobial stewardship objectives: a systematic review and meta-analysis. Lancet Infect Dis 2016;16:847–56.

40. Higgins JPT, Green S, editors. Identifying and measuring heterogeneity. Cochrane Handbook for Systematic Reviews of Interventions, version 5.1.0. [Internet]. The Cochrane Collaboration, March 2011. Available at http://handbook.cochrane.org/chapter_9/9_5_2_identifying_and_measuring_heterogeneity.htm.

41. Feazel LM, Malhotra A, Perencevich EN, et al. Effect of antibiotic stewardship programmes on Clostridium difficile incidence: a systematic review and meta-analysis. J Antimicrob Chemother 2014;69:1748–54.

42. Impact of antibiotic stewardship programs on Clostridium difficile (C. diff) infections [Internet]. Centers for Disease Control and Prevention. [Updated 2016 May 13; cited 2016 Oct 11]. Available at www.cdc.gov/getsmart/healthcare/evidence/asp-int-cdiff.html.

43. Burke JP. Antibiotic resistance – squeezing the balloon? JAMA 1998;280:1270–1.

44. This nephrotoxicity result is corrected from the originally published result, as communicated by Jan M. Prins on behalf of the authors of reference [39]. Prins JM (Department of Internal Medicine, Division of Infectious Diseases, Academic Medical Centre, Amsterdam, Netherlands). Email communication with Joseph Eckart (Pharmacy Practice & Administration, Ernest Mario School of Pharmacy, Rutgers University, Piscataway, NJ). 2016 Oct 9.

45. Coulter S, Merollini K, Roberts JA, et al. The need for cost-effectiveness analyses of antimicrobial stewardship programmes: a structured review. Int J Antimicrob Agents 2015;46:140–9.

46. Dik J, Vemer P, Friedrich A, et al. Financial evaluations of antibiotic stewardship programs—a systematic review. Frontiers Microbiol 2015;6:317.

47. Campbell KA, Stein S, Looze C, Bosco JA. Antibiotic stewardship in orthopaedic surgery: principles and practice. J Am Acad Orthop Surg 2014;22:772–81.

48. Surveillance for antimicrobial use and antimicrobial resistance options, 2015 [Internet]. Centers for Disease Control and Prevention. [Updated 2016 May 3; cited 2016 Nov 22]. Available at www.cdc.gov/nhsn/acute-care-hospital/aur/index.html.

49. Baysari MT, Lehnbom EC, Li L, Hargreaves A, et al. The effectiveness of information technology to improve antimicrobial prescribing in hospitals: a systematic review and meta-analysis. Int J Med Inform 2016;92:15–34.

50. Bauer KA, West JE, Balada-Llasat JM, et al. An antimicrobial stewardship program’s impact with rapid polymerase chain reaction methicillin-resistant Staphylococcus aureus/S. aureus blood culture test in patients with S. aureus bacteremia. Clin Infect Dis 2010;51:1074–80.

51. Sango A, McCarter YS, Johnson D, et al. Stewardship approach for optimizing antimicrobial therapy through use of a rapid microarray assay on blood cultures positive for Enterococcus species. J Clin Microbiol 2013;51:4008–11.

52. Perez KK, Olsen RJ, Musick WL, et al. Integrating rapid diagnostics and antimicrobial stewardship improves outcomes in patients with antibiotic-resistant Gram-negative bacteremia. J Infect 2014;69:216–25.

53. Bauer KA, Perez KK, Forrest GN, Goff DA. Review of rapid diagnostic tests used by antimicrobial stewardship programs. Clin Infect Dis 2014;59(Suppl 3):S134–45.

54. Dyar OJ, Pagani L, Pulcini C. Strategies and challenges of antimicrobial stewardship in long-term care facilities. Clin Microbiol Infect 2015;21:10–9.

55. D’Agata EM. Antimicrobial use and stewardship programs among dialysis centers. Semin Dial 2013;26:457–64.

56. Smith MJ, Gerber JS, Hersh AL. Inpatient antimicrobial stewardship in pediatrics: a systematic review. J Pediatric Infect Dis Soc 2015;4:e127–135.

57. Fleet E, Gopal Rao G, Patel B, et al. Impact of implementation of a novel antimicrobial stewardship tool on antibiotic use in nursing homes: a prospective cluster randomized control pilot study. J Antimicrob Chemother 2014;69:2265–73.

58. Drekonja DM, Filice GA, Greer N, et al. Antimicrobial stewardship in outpatient settings: a systematic review. Infect Control Hosp Epidemiol 2015;36:142–52.

59. Drekonja D, Filice G, Greer N, et al. Antimicrobial stewardship programs in outpatient settings: a systematic review. VA-ESP Project #09-009; 2014.

60. Zhang YZ, Singh S. Antibiotic stewardship programmes in intensive care units: why, how, and where are they leading us. World J Crit Care Med 2015;4:13–28. (referenced in online Table)



From the Ernest Mario School of Pharmacy, Rutgers, The State University of New Jersey, Piscataway, NJ.

 

Abstract

  • Objective: To review the evidence evaluating inpatient antimicrobial stewardship programs (ASPs) with a focus on clinical and economic outcomes.
  • Methods: Pubmed/MEDLINE and the Cochrane Database of Systematic Reviews were used to identify systematic reviews, meta-analyses, randomized controlled trials, and other relevant literature evaluating the clinical and economic impact of ASP interventions.
  • Results: A total of 5 meta-analyses, 3 systematic reviews, and 10 clinical studies (2 randomized controlled, 5 observational, and 3 quasi-experimental studies) were identified for analysis. ASPs were associated with a reduction in antimicrobial consumption and use. However, due to the heterogeneity of outcomes measured among studies, the effectiveness of ASPs varied with the measures used. There are data supporting the cost savings associated with ASPs, but these studies are more sparse. Most of the available evidence supporting ASPs is of low quality, and intervention strategies vary widely among available studies.
  • Conclusion: Much of the evidence reviewed supports the assertion that ASPs result in a more judicious use of antimicrobials and lead to better patient care in the inpatient setting. While clinical outcomes vary between programs, there are ubiquitous positive benefits associated with ASPs in terms of antimicrobial consumption, C. difficile infection rates, and resistance, with few adverse effects. To date, economic outcomes have been difficult to uniformly quantify, but there are data supporting the economic benefits of ASPs. As the number of ASPs continues to grow, it is imperative that standardized metrics are considered in order to accurately measure the benefits of these essential programs.

Key words: Antimicrobial stewardship; antimicrobial consumption; resistance.

 

Antimicrobial resistance is a public health concern that has been escalating over the years and is now identified as a global crisis [1–3]. This is partly due to the widespread use of the same antibiotics that have existed for decades, combined with a lack of sufficient novel antibiotic discovery and development [4]. Bacteria that are resistant to our last-line-of-defense medications have recently emerged, and these resistant organisms may spread to treatment-naive patients [5]. Multidrug-resistant organisms are often found, treated, and likely originate within the hospital practice setting, where antimicrobials can be prescribed by any licensed provider [6]. Upwards of 50% of antibiotics administered are unnecessary and contribute to the problem of increasing resistance [7]. The seriousness of this situation is increasingly apparent; in 2014 the World Health Organization (WHO), President Obama, and Prime Minister Cameron issued statements urging solutions to the resistance crisis [8].

While the urgency of the situation is recognized today, efforts aimed at a more judicious use of antibiotics to curb resistance began as early as the 1960s and led to the first antimicrobial stewardship programs (ASPs) [9–11]. ASPs have since been defined as “coordinated interventions designed to improve and measure the appropriate use of antimicrobial agents by promoting the selection of the optimal antimicrobial drug regimen including dosing, duration of therapy, and route of administration” [1]. The primary objectives of these types of programs are to avoid or reduce adverse events (eg, Clostridium difficile infection) and resistance driven by a shift in minimum inhibitory concentrations (MICs) and to reverse the unnecessary economic burden caused by the inappropriate prescribing of these agents [1].

This article examines the evidence evaluating the reported effectiveness of inpatient ASPs, examining both clinical and economic outcomes. In addition, we touch on ASP history, current status, and future directions in light of current trends. While ASPs are expanding into the outpatient and nursing home settings, we will limit our review here to the inpatient setting.

 

 

Historical Background

Modern antibiotics date back to the late 1930s when penicillin and sulfonamides were introduced to the medical market, and resistance to these drug classes was reported just a few years after their introduction. The same bacterial resistance mechanisms that neutralized their efficacy then exist today, and these mechanisms continue to confer resistance among those classes [5].

While “stewardship” was not described as such until the late 1990s [12], institutions have historically been proactive in creating standards around antimicrobial utilization to encourage judicious use of these agents. The earliest form of tracking antibiotic use was in the form of paper charts as “antibiotic logs” [9] and “punch cards” [10] in the 1960s. The idea of a team approach to stewardship dates back to the 1970s, with the example of Hartford Hospital in Hartford, Connecticut, which employed an antimicrobial standards model run by an infectious disease (ID) physician and clinical pharmacists [11]. In 1977, the Infectious Diseases Society of America (IDSA) released a statement that clinical pharmacists may have a substantial impact on patient care, including in ID, contributing to the idea that a team of physicians collaborating with pharmacists presents the best way to combat inappropriate medication use. Pharmacist involvement has since been shown to restrict broad overutilized antimicrobial agents and reduce the rate of C. difficile infection by a significant amount [13].

In 1997 the IDSA and the Society for Healthcare Epidemiology of America (SHEA) published guidelines to assist in the prevention of the growing issue of resistance, mentioning the importance of antimicrobial stewardship [14]. A decade later they released joint guidelines for ASP implementation [15], and the Pediatric Infectious Disease Society (PIDS) joined them in 2012 to publish a joint statement acknowledging and endorsing stewardship [16]. In 2014, the Centers of Disease Control and Prevention (CDC) recommended that every hospital should have an ASP. As of 1 January 2017, the Joint Commission requires an ASP as a standard for accreditation at hospitals, critical access hospitals, and nursing care [17]. Guidelines for implementation of an ASP are currently available through the IDSA and SHEA [1,16].

ASP Interventions

There are 2 main strategies that ASPs have to combat inappropriate antimicrobial use, and each has its own set of systematic interventions. These strategies are referred to as “prospective audit with intervention and feedback” and “prior authorization” [6]. Although most ASPs will incorporate these main strategies, each institution typically creates its own strategies and regulations independently.

Prospective audit with intervention and feedback describes the process of providing recommendations after reviewing utilization and trends of antimicrobial use. This is sometimes referred to as the “back-end” intervention, in which decisions are made after antibiotics have been administered. Interventions that are commonly used under this strategy include discontinuation of antibiotics due to culture data, de-escalation to drugs with narrower spectra, IV to oral conversions, and cessation of surgical prophylaxis [6].

Prior authorization, also referred to as a “front-end” intervention, is the process of approving medications before they are used. Interventions include a restricted formulary for antimicrobials that can be managed through a paging system or a built-in computer restriction program, as well as other guidelines and protocols for dosing and duration of therapy. Restrictions typically focus on broad spectrum antibiotics as well as the more costly drugs on formularies. These solutions reduce the need for manual intervention as technology makes it possible to create automated restriction-based services that prevent inappropriate prescribing [6].

Aside from these main techniques, other strategies are taken to achieve the goal of attaining optimal clinical outcomes while limiting further antimicrobial resistance and adverse effects. Different clinical settings have different needs, and ASPs are customized to each setting’s resources, prescribing habits, and other local specificities [1]. These differences present difficulty with interpreting diverse datasets, but certain themes arise in the literature: commonly assessed clinical outcomes of inpatient ASPs include hospital length of stay (LOS) and readmission, reinfection, mortality, and resistance rates. These outcomes are putatively driven by the more prudent use of antimicrobials, particularly by decreased rates of antimicrobial consumption.

ASP Team Members

While ASPs may differ between institutions, the staff members involved are typically the same, and leadership is always an important aspect of a program. The CDC recommends that ASP leadership consist of a program leader (an ID physician) and a pharmacy leader, who co-lead the team [18]. In addition, the Joint Commission recommends that the multidisciplinary team should include an infection preventionist (ie, infection control and hospital epidemiologist) and practitioner [17]; these specialists have a role in prevention, awareness, and policy [19]. The integration of infection control with stewardship yields the best results [15], as infection control aims to prevent antibiotic use altogether, while stewardship increases the quality of antibiotic regimens that are being prescribed [20].

It is also beneficial to incorporate a microbiologist as an integral part of the team, responsible for performing and interpreting laboratory data (ie, cultures). Nurses should be integrated into ASPs due to the overlap of their routine activities with ASP interventions [21]; other clinicians (regardless of their infectious disease clinical background), quality control, information technology, and environmental services should all collaborate in the hospital-wide systems related to the program where appropriate [18].

Evidence Review

To assess the effectiveness of inpatient ASPs, we performed a literature search using Pubmed, Cochrane Database of Systematic Reviews, and MEDLINE/OVIDSp up to 1 September 2016. The search terms used are listed in the Table. Included in this review were studies evaluating clinical or economic outcomes related to inpatient ASPs; excluded were editorials, opinion pieces, articles not containing original clinical or economic ASP outcome data, ASPs not performed in the inpatient setting, and studies that were included in identified systematic reviews or meta-analyses. Also excluded from this review were studies that reviewed ASPs performed in niche settings or for applications in which ASPs were not yet prevalent, as assessed by the authors. The search initially yielded 182 articles. After removing duplicates and excluded articles, 18 articles were identified for review: 8 meta-analyses and systematic reviews and 10 additional clinical studies (2 randomized controlled, 5 observational, and 3 quasi-experimental studies) evaluating clinical and economic outcomes not contained in the identified aggregated studies. Systematic reviews, meta-analyses, and other studies were screened to identify any other relevant literature not captured in the original search. The articles included in this review are summarized in 2 Tables, which may be accessed at www.turner-white.com/pdf/jcom_jul17_antimicrobial_appendix.pdf.

 

 

Results

Antimicrobial Usage

The most widely studied aspect of ASPs in the current review was the effect of ASP interventions on antimicrobial consumption and use. Three systematic reviews [22–24] showed improved antibiotic prescribing practices and reduced consumption rates overall, as did several studies inside and outside the intensive care unit (ICU) [25–31].One study found an insignificant declining usage trend [32]. An important underlying facet of this observation is that even as total antibiotic consumption decreases, certain antibiotic and antibiotic class consumption may increase. This is evident in several studies, which showed that as aminoglycoside, carbapenem, and β-lactam-β-lactamase inhibitor use increased, clindamycin (1 case), glycopeptide, fluoroquinolone, and macrolide use decreased [27,28,30]. A potential confounding factor relating to decreased glycopeptide use in Bevilacqua et al [30] was that there was an epidemic of glycopeptide-resistant enterococci during the study period, potentially causing prescribers to naturally avoid it. In any case, since the aim of ASPs is to encourage a more judicious usage of antimicrobials, the observed decreases in consumption of those restricted medications is intuitive. These observations about antimicrobial consumption related to ASPs are relevant because they putatively drive improvements in clinical outcomes, especially those related to reduced adverse events associated with these agents, such as the risk of C. difficile infection with certain drugs (eg, fluoroquinolones, clindamycin, and broad-spectrum antibiotics) and prolonged antibiotic usage [33–35]. There is evidence that these benefits are not limited to antibiotics but extend to antifungal agents and possibly antivirals [22,27,36].

Utilization, Mortality, and Infection Rates

ASPs typically intend to improve patient-focused clinical parameters such as hospital LOS, hospital readmissions, mortality, and the incidence of infections acquired secondary to antibiotic usage during a hospital stay, especially C. difficile infection. Most of the reviewed evidence indicates no significant LOS benefit from stewardship interventions [24–26,32,37], and one meta-analysis noted that when overall hospital LOS was significantly reduced, ICU-specific LOS was not [22]. Generally, there was also no significant change in hospital readmission rates [24,26,32]. However, 2 retrospective observational studies found mixed results for both LOS and readmission rates relative to ASP interventions; while both noted a significantly reduced LOS, one study [38] showed an all-cause readmission benefit in a fairly healthy patient population (but no benefit for readmissions due to the specific infections of interest), and the other [29] showed a benefit for readmissions due to infections but an increased rate of readmissions in the intervention group overall. In this latter study, hospitalizations within the previous 3 months were significantly higher at baseline for the intervention group (55% vs. 46%, P = 0.042), suggesting sicker patients and possibly providing an explanation for this unique observation. Even so, a meta-analysis of 5 studies found a significantly elevated risk of readmission associated with ASP interventions (RR 1.26, 95% CI 1.02–1.57; P = 0.03); the authors noted that non–infection-related readmissions accounted for 61% of readmissions, but this proportion was not significantly different between intervention and non-intervention arms [37].

With regard to mortality, most studies found no significant reductions related to stewardship interventions [22,24,26,29,32]. In a prospective randomized controlled trial, all reported deaths (7/160, 4.4%) were in the ASP intervention arm, but these were attributed to the severity of infection or an underlying chronic disease [25]. One meta-analysis, however, found significant mortality reductions related to stewardship guidelines for empirical antibiotic treatment (OR 0.65, 95% CI 0.54–0.80, P < 0.001; I² = 65%) and to de-escalation of therapy based on culture results (RR 0.44, 95% CI 0.30–0.66, P < 0.001; I² = 59%), based on 40 and 25 studies, respectively [39]; both results, however, exhibited substantial heterogeneity (defined as I² = 50%–90% [40]) among the relevant studies. Another meta-analysis found no significant change in mortality related to stewardship interventions intended to improve antibiotic appropriateness (RR 0.92, 95% CI 0.69–1.2, P = 0.56; I² = 72%) or to reduce excessive prescribing (RR 0.92, 95% CI 0.81–1.06, P = 0.25; I² = 0%), but a significant mortality benefit was associated with interventions aimed at increasing guideline compliance for pneumonia diagnoses (RR 0.89, 95% CI 0.82–0.97, P = 0.005; I² = 0%) [37]. In the case of Schuts et al [39], the search criteria specifically sought studies that assessed clinical outcomes (eg, mortality), whereas the search of Davey et al [37] focused on studies whose aim was to improve antibiotic prescribing, with a main comparison between restrictive and persuasive interventions; while the difference may seem subtle, the bodies of data compiled from these searches may characterize the effect of ASPs on mortality differently. No significant evidence was found to suggest that reduced antimicrobial consumption increases mortality.
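
For readers less familiar with the heterogeneity statistic reported throughout these meta-analyses, the minimal sketch below shows how I² is derived from Cochran's Q under an inverse-variance fixed-effect model; the study effect estimates and variances are hypothetical and purely illustrative.

```python
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of studies.

    effects:   per-study effect estimates on the analysis scale
               (eg, log odds ratios or log risk ratios)
    variances: corresponding per-study variances
    """
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    theta = np.asarray(effects, dtype=float)
    pooled = np.sum(w * theta) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (theta - pooled) ** 2)          # Cochran's Q
    df = len(theta) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log risk ratios and variances for three studies
q, i2 = i_squared([-0.35, -0.10, -0.60], [0.04, 0.09, 0.02])
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```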

Improving the use of antimicrobial agents should limit the collateral damage associated with their use (eg, damage to normal flora and increased resistance), and ideally infections should be better managed. As previously mentioned, one of the concerns with antibiotic usage (particularly fluoroquinolones, macrolides, and broad-spectrum agents) is that collateral damage could lead to increased rates of C. difficile infection. One meta-analysis showed no significant reduction in the rate of C. difficile infection (as well as overall infection rate) relative to ASPs [22]; however, this finding was based on only 3 of the 26 studies analyzed, and only 1 of those 3 studies restricted fluoroquinolones and cephalosporins. An interrupted time series (ITS) study similarly found no significant reduction in the C. difficile infection rate [32]; however, this study was conducted in a hospital with low baseline antibiotic prescribing (it was ranked second-to-last in antibiotic usage among its peer institutions), inherently limiting the risk of C. difficile infection among patients in the pre-ASP setting. In contrast to these findings, a meta-analysis specifically designed to assess the incidence of C. difficile infection relative to stewardship programs found a significantly reduced risk of infection based on 16 studies (RR 0.48, 95% CI 0.38–0.62, P < 0.001; I² = 76%) [41], and the systematic review conducted by Filice et al [24] found a significant benefit with regard to the C. difficile infection rate in 4 of 6 studies. These results are consistent with those presented by the CDC as evidence for the impact of stewardship on C. difficile infection [42]. Aside from C. difficile infection, one retrospective observational study found that the 14-day reinfection rate (ie, reinfection with the same infection at the same anatomical location) was significantly reduced following stewardship intervention (0% vs. 10%, P = 0.009) [29]. This finding, combined with the C. difficile infection examples, provides evidence that ASPs support better infection management.

While the general trend seems to suggest mixed or no significant benefit for several clinical outcomes, it is important to note that variation in outcomes could be due to differences in the types of ASP interventions and intervention study periods across programs. Davey et al [37] found variation in prescribing outcomes based on whether restrictive (ie, restricting prescriber freedom with antimicrobials) or persuasive (ie, suggesting changes to prescribers) interventions were used, and on the timeframe in which they were used. At one month into an ASP, restrictive interventions resulted in better prescribing practices relative to persuasive interventions based on 27 studies (effect size 32.0%, 95% CI 2.5%–61.4%), but by 6 months the 2 were not statistically different (effect size 10.1%, 95% CI –47.5% to 66.0%). At 12 and 24 months, persuasive interventions demonstrated greater effects on prescribing outcomes, but these were not significant. These findings provide evidence that different study timeframes can impact ASP practices differently (and timeframes already vary widely in the literature). Considering the variety of ASP interventions employed across the different studies, these factors almost certainly impact the reported antimicrobial consumption rates and outcomes to different degrees. A high degree of heterogeneity among an analyzed dataset could itself be the reason for net non-significance within single systematic reviews and meta-analyses.

Resistance

Another goal of ASPs is the prevention of antimicrobial resistance, an area where the evidence generally suggests benefit associated with ASP interventions. In one meta-analysis, resistance rates for common troublesome organisms, such as methicillin-resistant S. aureus (MRSA), imipenem-resistant P. aeruginosa, and extended-spectrum β-lactamase (ESBL)–producing Klebsiella spp, were significantly reduced; ESBL-producing E. coli infections were not, however [22]. An ITS study found significantly reduced MRSA resistance, as well as reduced Pseudomonas resistance to imipenem-cilastatin and levofloxacin (all P < 0.001), but no significant changes with respect to piperacillin/tazobactam, cefepime, or amikacin resistance [32]. This study also noted increased E. coli resistance to levofloxacin and ceftriaxone (both P < 0.001). No significant changes in resistance were noted for vancomycin-resistant enterococci. It may be reasonable to expect that decreasing inappropriate antimicrobial use will decrease long-term antimicrobial resistance; but because most studies span only a few years, only small, short-term changes in resistance have been characterized [23]. Longer duration studies are needed to better understand resistance outcomes.

Of note is a phenomenon known as the “squeezing the balloon” effect. This can be associated with ASPs, potentially resulting in paradoxically increased resistance [43]. That is, when usage restrictions are placed on certain antibiotics, the use of other non-restricted antibiotics may increase, possibly leading to increased resistance to those non-restricted antibiotics [22] (“constraining one end [of a balloon] causes the other end to bulge … limiting the use of one class of compounds may be counteracted by corresponding changes in prescribing and drug resistance that are even more ominous” [43]). Karanika et al [22] took this phenomenon into consideration and assessed restricted and non-restricted antimicrobial consumption separately. They found a reduction in consumption for both restricted and non-restricted antibiotics, which included “high potential resistance” antibiotics, specifically carbapenems and glycopeptides. In the study conducted by Cairns et al [28], a similar effect was observed; while the use of other classes of antibiotics decreased (eg, cephalosporins and aminoglycosides), the use of β-lactam–β-lactamase inhibitor combinations actually increased by 48% (change in use: +48.2% [95% CI 21.8%–47.9%]). Hohn et al [26] noted an increased usage rate of carbapenems, even though several other classes of antibiotics had reduced usage. Unfortunately, neither study reported resistance rates, so the impact of these findings is unknown. Finally, Jenkins et al [32] assessed trends in antimicrobial use as changes in rates of consumption. Among the various antibiotics assessed in this study, the rate of fluoroquinolone use decreased both before and after the intervention period, although the rate of decrease slowed post-ASP (the change in rate post-ASP was +2.2% [95% CI 1.4%–3.1%], P < 0.001). They observed a small (but significant) increase in resistance of E. coli to levofloxacin pre- vs. post-intervention (11.0% vs. 13.9%, P < 0.001); in contrast, a significant decrease in resistance of P. aeruginosa was observed (30.5% vs. 21.4%, P < 0.001). While these examples help illustrate the concept of changes in antibiotic usage patterns associated with an ASP, at best they approximate the “squeezing the balloon” effect, since these studies present data for antibiotics that were either restricted or for which restriction was not clearly specified. The “squeezing the balloon” effect is most relevant for the unintended, potentially increased usage of non-restricted drugs secondary to ASP restrictions. Higher resistance rates among certain drug classes observed in the context of this effect would constitute a drawback of an ASP.

Adverse Effects

Reduced toxicities and adverse effects are expected with reduced usage of antimicrobials. The systematic review conducted by Filice et al [24] examined the incidence of adverse effects related to antibiotic usage, and their findings suggest, at the least, that stewardship programs generally do not cause harm, as only 2 of the studies they examined reported adverse events. Following stewardship interventions, 5.5% of patients deteriorated, and of those, the large majority (75%) deteriorated due to progression of oncological malignancies. To further illustrate the effect of stewardship interventions on the toxicities and side effects of antimicrobials, Schuts et al demonstrated that the risk of nephrotoxicity while on antimicrobial therapy was reduced as a result of an ASP, based on 14 studies of moderate heterogeneity (OR 0.46, 95% CI 0.28–0.77, P = 0.003; I² = 34%) [39,44]. It is intuitive that reduced drug exposure results in fewer adverse effects; as such, these results are expected.

Economic Outcomes

Although the focus of ASPs is often to improve clinical outcomes, economic outcomes are an important component of ASPs; these programs bring associated economic value that should be highlighted and further detailed [22,45,46]. Since clinical outcomes are often the main objective of ASPs, most available studies have been clinical effect studies (rather than economic analyses), in which economic assessments are often a secondary consideration, if included.

As a result, cost evaluations focus on direct cost reductions, whereas indirect cost reductions are often not critically evaluated. Where ASPs are effective at decreasing antimicrobial consumption, they reduce hospital expenditures by limiting hospital-acquired infections and the associated medical costs [22,45], and by reducing antibiotic misuse, iatrogenic infections, and the rates of antibiotic-resistant organisms [47]. In one retrospective observational study, annual costs of antibiotics dropped by 33% with re-implementation of an ASP, mirrored by an overall decrease in antibiotic consumption of about 10% over the course of the intervention study period [30]. Of note, at 1 year post-ASP re-implementation, antibiotic consumption actually increased (by 5.4%); however, because antibiotic usage had shifted to more appropriate and cost-effective therapies, cost expenditures associated with antibiotics were still reduced by 13% for that year relative to pre-ASP re-implementation. Aside from economic evaluations centered on consumption rates, there is potential to further evaluate economic benefits associated with stewardship when looking at other outcomes, including hospital LOS [22], as well as indirect costs such as morbidity and mortality, societal, and operational costs [46]. Currently, these detailed analyses are lacking. In conjunction with more standardized clinical metrics, these assessments are needed to better delineate the full cost-effectiveness of ASPs.
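
The Bevilacqua et al observation, that spending can fall even in a year when consumption rises, follows from a shift in the prescribing mix. The arithmetic sketch below uses purely hypothetical consumption figures (in defined daily doses, DDD) and per-DDD costs to illustrate the point; none of these numbers are from the cited study.

```python
# Hypothetical figures only: total antibiotic spend can fall even when overall
# consumption in DDD rises, if prescribing shifts toward cheaper, more
# appropriate agents. Each entry is (DDD, cost per DDD).
baseline = {"broad_spectrum": (600, 40.0), "narrow_spectrum": (400, 8.0)}
post_asp = {"broad_spectrum": (350, 40.0), "narrow_spectrum": (700, 8.0)}

def totals(mix):
    """Return total DDD and total cost for a prescribing mix."""
    ddd = sum(d for d, _ in mix.values())
    cost = sum(d * c for d, c in mix.values())
    return ddd, cost

ddd0, cost0 = totals(baseline)
ddd1, cost1 = totals(post_asp)
print(f"consumption change: {100 * (ddd1 - ddd0) / ddd0:+.1f}%")   # +5.0%
print(f"cost change:        {100 * (cost1 - cost0) / cost0:+.1f}%")  # about -28%
```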

 

 

Evidence Summary

The evidence for inpatient ASP effectiveness is promising but mixed. Much of the evidence is low-level, based on retrospective observational studies, and the systematic reviews and meta-analyses are built on these types of studies. Studies have been conducted over a range of years, and the durations of intervention periods often vary widely between studies; it is difficult to capture and account for all of the infection, prescribing, and drug availability patterns (as well as the intervention differences or new drug approvals) throughout these time periods. To complicate the matter, both the quality of the data and the quality of the ASPs are highly variable.

As such, the findings across pooled studies of ASPs are hard to amalgamate and draw concrete conclusions from. This difficulty is due to the inherent heterogeneity encountered when comparing smaller individual studies in systematic reviews and meta-analyses. Currently, there are numerous ways to implement an ASP, but there is no standardized system of specific interventions or metrics. Until we can directly compare similar ASPs and interventions among various institutions, it will be challenging to generalize positive benefits from systematic reviews and meta-analyses. Currently, the CDC is involved in a new initiative in which data from various hospitals are compiled to create a surveillance database [48]. Although this is a step in the right direction toward standardized metrics for stewardship, for the current review the lack of standard metrics leads to conflicting results from heterogeneous studies, making it difficult to show clear benefits in clinical outcomes.

Despite the vast array of ASPs, their differences, and a range of clinical measures (many with conflicting evidence), there is a noticeable trend toward more prudent use of antimicrobials. Based on the review of available evidence, inpatient ASPs improve patient care and preserve an important health care resource: antibiotics. This is demonstrated by the alterations in consumption of these agents, which have ramifications for secondary outcomes such as reduced C. difficile infections, resistance, and adverse effects, and which overall translate into better patient care and reduced costs. But while we can conclude that the direct stewardship interventions of reducing and restricting antibiotic use have been effective, the heterogeneity of the available evidence prevents us from clearly stating the overall magnitude of benefit, the effectiveness of various ASP structures and components on clinical outcomes (such as LOS and mortality), and the cost savings.

Future Directions

Moving forward, the future of ASPs encompasses several potential developments. First and foremost, as technological advancements continue, there is a need to integrate and utilize developments in information technology (IT). Baysari et al conducted a review on the value of IT interventions, focusing mainly on decision support (stand-alone or as a component of other hospital procedures), approval, and surveillance systems [49]. There was benefit associated with these IT interventions in terms of improvement in the appropriate use of antimicrobials (RR 1.49, 95% CI 1.07–2.08, P < 0.05; I² = 93%), but there was no demonstrated benefit in terms of patient mortality or hospital LOS. Aside from this study, broad evidence is still lacking to support the use of IT systems in ASPs because meaningful comparisons among the interventions have not been made, owing to widespread variability in study design and outcome measures. However, it is generally agreed that ASPs must integrate with IT systems as the use of technology within the healthcare field continues to grow. Evidence from higher quality studies centered on similar outcomes is needed to show appropriate approaches for ASPs to leverage IT systems. At a minimum, the integration of IT into ASPs should not hinder clinical outcomes. An important consideration is the variation in practice settings where antibiotic stewardship is to be implemented; eg, a small community hospital will be less equipped to incorporate and support technological tools than a large tertiary teaching hospital. Therefore, any antibiotic stewardship IT intervention must be customized to local needs and prescriber behaviors, minimize barriers to implementation, and make use of available resources.
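
As a purely illustrative sketch of the kind of decision-support rule such systems can encode, the snippet below flags orders for stewardship review. The restricted-agent list, the 72-hour review window, and the order fields are hypothetical placeholders, not features of any specific product or of the programs reviewed here.

```python
# Hypothetical formulary policy: restricted agents and review window would
# come from the institution's own ASP in practice.
RESTRICTED_AGENTS = {"meropenem", "vancomycin", "daptomycin"}
REVIEW_AFTER_HOURS = 72  # "antibiotic time-out" style review point

def flag_orders_for_review(orders):
    """Return (order id, reason) pairs that should be routed to the
    stewardship team: restricted agents, or therapy that has run past the
    review window without reassessment."""
    flagged = []
    for order in orders:
        if order["drug"].lower() in RESTRICTED_AGENTS:
            flagged.append((order["id"], "restricted agent: prior approval / ID review"))
        elif order["hours_on_therapy"] >= REVIEW_AFTER_HOURS and not order["reassessed"]:
            flagged.append((order["id"], "72-hour review due: culture-guided de-escalation?"))
    return flagged

# Hypothetical orders
orders = [
    {"id": "A100", "drug": "Meropenem", "hours_on_therapy": 10, "reassessed": False},
    {"id": "A101", "drug": "Cefazolin", "hours_on_therapy": 80, "reassessed": False},
]
print(flag_orders_for_review(orders))
```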

Another area of focus for future ASPs is the use of rapid diagnostics. Currently, when patients present with signs and symptoms of an infection, an empiric antimicrobial regimen is started and then de-escalated as necessary; rapid testing will help to initiate appropriate therapy more quickly and increase antimicrobial effectiveness. Rapid tests range from rapid polymerase chain reaction (PCR)–based screening [50], to Verigene gram-positive blood culture (BC-GP) tests [51], next-generation sequencing methods, and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) [52]. Rapid diagnostic tools should be viewed as aids that assist ASPs in decreasing antibiotic consumption and improving patient outcomes; these tools have been shown to improve clinical outcomes when integrated into ASPs, but offer little value toward the goals of ASPs when used outside of stewardship programs and their time-sensitive workflows [53].

In terms of future expansion, stewardship implementation can become more unified and broader in scope. ASPs should expand to include antifungal interventions, an area that is showing progress [36]. ASPs can also be implemented in new areas throughout the hospital (eg, pediatrics and the emergency department), as well as outside the hospital setting, including long-term care facilities, dialysis centers, and other institutions [54–56]. A prospective cluster randomized controlled study conducted in 30 nursing homes evaluated the use of a novel resident antimicrobial management plan (RAMP) for improved use of antimicrobials [57]. This study found that the RAMP had no associated adverse effects and suggests that stewardship is an important tool in nursing homes. In addition, the general outpatient and pediatric settings show promise for ASPs [56,58,59], but more research is needed to support expansion and to identify how ASP interventions should be applied in these various practice settings. The stewardship interventions to be utilized will need to be carefully delineated to account for the scale, underlying need, and potential challenges of each setting.

While the future of antibiotic stewardship is unclear, it will certainly continue to develop in both scope and depth, encompassing new areas of focus, moving into new settings to improve outcomes, and employing new tools to refine approaches. An important first step in the continued development of ASPs is alignment and standardization, since without alignment it will remain difficult to compare outcomes. This issue is currently being addressed by a number of organizations. With support from the Joint Commission, the CDC, and the President’s Council of Advisors on Science and Technology (PCAST) [8], regulatory requirements for ASPs are well underway, and these drivers will position ASPs for further advancement. By reducing variability among ASPs and delineating their implementation, the economic and clinical benefits associated with specific interventions can be clearly identified.

 

Corresponding author: Luigi Brunetti, PharmD, MPH, Rutgers, The State University of New Jersey, 160 Frelinghuysen Rd., Piscataway, NJ 08854, [email protected].

Financial disclosures: None.

References

1. Barlam TF, Cosgrove SE, Abbo AM, et al. Implementing an antimicrobial stewardship program: guidelines by the Infectious Diseases Society of America and the Society of Healthcare Epidemiology of America. Clin Infect Dis 2016;62:e51–77.

2. Hughes D. Selection and evolution of resistance to antimicrobial drugs. IUBMB Life 2014;66:521–9.

3. World Health Organization. The evolving threat of antimicrobial resistance – options for action. Geneva: WHO Press; 2012.

4. Gould IM, Bal AM. New antibiotic agents in the pipeline and how they can help overcome microbial resistance. Virulence 2013;4:185–91.

5. Davies J, Davies D. Origins and evolution of antibiotic resistance. Microbiol Mol Biol Rev 2010;74:417–33.

6. Owens RC Jr. Antimicrobial stewardship: concepts and strategies in the 21st century. Diagn Microbiol Infect Dis 2008;61:110–28.

7. Antibiotic resistance threats in the United States, 2013 [Internet]. Centers for Disease Control and Prevention. Available at www.cdc.gov/drugresistance/pdf/ar-threats-2013-508.pdf.

8. Nathan C, Cars O. Antibiotic resistance – problems, progress, prospects. N Engl J Med 2014;371:1761–3.

9. McGoldrick, M. Antimicrobial stewardship. Home Healthc Nurse 2014;32:559–60.

10. Ruedy J. A method of determining patterns of use of antibacterial drugs. Can Med Assoc J 1966;95:807–12.

11. Briceland LL, Nightingale CH, Quintiliani R, et al. Antibiotic streamlining from combination therapy to monotherapy utilizing an interdisciplinary approach. Arch Intern Med 1988;148:2019–22.

12. McGowan JE Jr, Gerding DN. Does antibiotic restriction prevent resistance? New Horiz 1996;4: 370–6.

13. Cappelletty D, Jacobs D. Evaluating the impact of a pharmacist’s absence from an antimicrobial stewardship team. Am J Health Syst Pharm 2013;70:1065–69.

14. Shlaes DM, Gerding DN, John JF Jr, et al. Society for Healthcare Epidemiology of America and Infectious Diseases Society of America Joint Committee on the prevention of antimicrobial resistance: guidelines for the prevention of antimicrobial resistance in hospitals. Infect Control Hosp Epidemiol 1997;18:275–91.

15. Dellit TH, Owens RC, McGowan JE, et al. Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America guidelines for developing an institutional program to enhance antimicrobial stewardship. Clin Infect Dis 2007;44:159–77.

16. Policy statement on antimicrobial stewardship by the Society for Healthcare Epidemiology of America (SHEA), the Infectious Diseases Society of America (IDSA), and the Pediatric Infectious Diseases Society (PIDS). Infect Control Hosp Epidemiol 2012;33:322–7.

17. The Joint Commission. Approved: New antimicrobial stewardship standard. Joint Commission Perspectives 2016;36:1–8.

18. Pollack LA, Srinivasan A. Core elements of hospital antibiotic stewardship programs from the Centers for Disease Control and Prevention. Clin Infect Dis 2014;59(Suppl 3):S97–100.

19. Moody J. Infection preventionists have a role in accelerating progress toward preventing the emergence and cross-transmission of MDROs. Prevention Strategist 2012 Summer:52–6.

20. Spellberg B, Bartlett JG, Gilbert DN. The future of antibiotics and resistance. N Engl J Med 2013;368:299–302.

21. Olans RN, Olans RD, Demaria A. The critical role of the staff nurse in antimicrobial stewardship--unrecognized, but already there. Clin Infect Dis 2016;62:84–9.

22. Karanika S, Paudel S, Grigoras C, et al. Systematic review and meta-analysis of clinical and economic outcomes from the implementation of hospital-based antimicrobial stewardship programs. Antimicrob Agents Chemother 2016;60:4840–52.

23. Wagner B, Filice GA, Drekonja D, et al. Antimicrobial stewardship programs in inpatient hospital settings: a systematic review. Infect Control Hosp Epidemiol 2014;35:1209–28.

24. Filice G, Drekonja D, Greer N, et al. Antimicrobial stewardship programs in inpatient settings: a systematic review. VA-ESP Project #09-009; 2013.

25. Cairns KA, Doyle JS, Trevillyan JM, et al. The impact of a multidisciplinary antimicrobial stewardship team on the timeliness of antimicrobial therapy in patients with positive blood cultures: a randomized controlled trial. J Antimicrob Chemother 2016;71:3276–83.

26. Hohn A, Heising B, Hertel S, et al. Antibiotic consumption after implementation of a procalcitonin-guided antimicrobial stewardship programme in surgical patients admitted to an intensive care unit: a retrospective before-and-after analysis. Infection 2015;43:405–12.

27. Singh S, Zhang YZ, Chalkley S, et al. A three-point time series study of antibiotic usage on an intensive care unit, following an antibiotic stewardship programme, after an outbreak of multi-resistant Acinetobacter baumannii. Eur J Clin Microbiol Infect Dis 2015;34:1893–900.

28. Cairns KA, Jenney AW, Abbott IJ, et al. Prescribing trends before and after implementation of an antimicrobial stewardship program. Med J Aust 2013;198:262–6.

29. Liew YX, Lee W, Loh JC, et al. Impact of an antimicrobial stewardship programme on patient safety in Singapore General Hospital. Int J Antimicrob Agents 2012;40:55–60.

30. Bevilacqua S, Demoré B, Boschetti E, et al. 15 years of antibiotic stewardship policy in the Nancy Teaching Hospital. Med Mal Infect 2011;41:532–9.

31. Danaher PJ, Milazzo NA, Kerr KJ, et al. The antibiotic support team--a successful educational approach to antibiotic stewardship. Mil Med 2009;174:201–5.

32. Jenkins TC, Knepper BC, Shihadeh K, et al. Long-term outcomes of an antimicrobial stewardship program implemented in a hospital with low baseline antibiotic use. Infect Control Hosp Epidemiol 2015;36:664–72.

33. Brown KA, Khanafer N, Daneman N, Fisman DN. Meta-analysis of antibiotics and the risk of community-associated Clostridium difficile infection. Antimicrob Agents Chemother 2013;57:2326–32.

34. Deshpande A, Pasupuleti V, Thota P, et al. Community-associated Clostridium difficile infection and antibiotics: a meta-analysis. J Antimicrob Chemother 2013;68:1951–61.

35. Slimings C, Riley TV. Antibiotics and hospital-acquired Clostridium difficile infection: update of systematic review and meta-analysis. J Antimicrob Chemother 2014;69:881–91.

36. Antworth A, Collins CD, Kunapuli A, et al. Impact of an antimicrobial stewardship program comprehensive care bundle on management of candidemia. Pharmacotherapy 2013;33:137–43.

37. Davey P, Brown E, Charani E, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev 2013;4:CD003543.

38. Pasquale TR, Trienski TL, Olexia DE, et al. Impact of an antimicrobial stewardship program on patients with acute bacterial skin and skin structure infections. Am J Health Syst Pharm 2014;71:1136–9.

39. Schuts EC, Hulscher ME, Mouton JW, et al. Current evidence on hospital antimicrobial stewardship objectives: a systematic review and meta-analysis. Lancet Infect Dis 2016;16:847–56.

40. Higgins JPT, Green S, editors. Identifying and measuring heterogeneity. Cochrane Handbook for Systematic Reviews of Interventions, version 5.1.0. [Internet]. The Cochrane Collaboration, March 2011. Available at http://handbook.cochrane.org/chapter_9/9_5_2_identifying_and_measuring_heterogeneity.htm.

41. Feazel LM, Malhotra A, Perencevich EN, et al. Effect of antibiotic stewardship programmes on Clostridium difficile incidence: a systematic review and meta-analysis. J Antimicrob Chemother 2014;69:1748–54.

42. Impact of antibiotic stewardship programs on Clostridium difficile (C. diff) infections [Internet]. Centers for Disease Control and Prevention. [Updated 2016 May 13; cited 2016 Oct 11]. Available at www.cdc.gov/getsmart/healthcare/evidence/asp-int-cdiff.html.

43. Burke JP. Antibiotic resistance – squeezing the balloon? JAMA 1998;280:1270–1.

44. This nephrotoxicity result is corrected from the originally published result; communicated by Jan M Prins on behalf of the authors for reference [39]. Prins, JM (Department of Internal Medicine, Division of Infectious Diseases, Academic Medical Centre, Amsterdam, Netherlands). Email communication with Joseph Eckart (Pharmacy Practice & Administration, Ernest Mario School of Pharmacy, Rutgers University, Piscataway, NJ). 2016 Oct 9.

45. Coulter S, Merollini K, Roberts JA, et al. The need for cost-effectiveness analyses of antimicrobial stewardship programmes: a structured review. Int J Antimicrob Agents 2015;46:140–9.

46. Dik J, Vemer P, Friedrich A, et al. Financial evaluations of antibiotic stewardship programs—a systematic review. Frontiers Microbiol 2015;6:317.

47. Campbell KA, Stein S, Looze C, Bosco JA. Antibiotic stewardship in orthopaedic surgery: principles and practice. J Am Acad Orthop Surg 2014;22:772–81.

48. Surveillance for antimicrobial use and antimicrobial resistance options, 2015 [Internet]. Centers for Disease Control and Prevention. [Updated 2016 May 3; cited 2016 Nov 22]. Available at www.cdc.gov/nhsn/acute-care-hospital/aur/index.html.

49. Baysari MT, Lehnbom EC, Li L, Hargreaves A, et al. The effectiveness of information technology to improve antimicrobial prescribing in hospitals: a systematic review and meta-analysis. Int J Med Inform. 2016;92:15-34.

50. Bauer KA, West JE, Balada-llasat JM, et al. An antimicrobial stewardship program’s impact with rapid polymerase chain reaction methicillin-resistant Staphylococcus aureus/S. aureus blood culture test in patients with S. aureus bacteremia. Clin Infect Dis 2010;51:1074–80.

51. Sango A, Mccarter YS, Johnson D, et al. Stewardship approach for optimizing antimicrobial therapy through use of a rapid microarray assay on blood cultures positive for Enterococcus species. J Clin Microbiol 2013;51:4008–11.

52. Perez KK, Olsen RJ, Musick WL, et al. Integrating rapid diagnostics and antimicrobial stewardship improves outcomes in patients with antibiotic-resistant Gram-negative bacteremia. J Infect 2014;69:216–25.

53. Bauer KA, Perez KK, Forrest GN, Goff DA. Review of rapid diagnostic tests used by antimicrobial stewardship programs. Clin Infect Dis 2014;59 Suppl 3:S134–145.

54. Dyar OJ, Pagani L, Pulcini C. Strategies and challenges of antimicrobial stewardship in long-term care facilities. Clin Microbiol Infect 2015;21:10–9.

55. D’Agata EM. Antimicrobial use and stewardship programs among dialysis centers. Semin Dial 2013;26:457–64.

56. Smith MJ, Gerber JS, Hersh AL. Inpatient antimicrobial stewardship in pediatrics: a systematic review. J Pediatric Infect Dis Soc 2015;4:e127–135.

57. Fleet E, Gopal Rao G, Patel B, et al. Impact of implementation of a novel antimicrobial stewardship tool on antibiotic use in nursing homes: a prospective cluster randomized control pilot study. J Antimicrob Chemother 2014;69:2265–73.

58. Drekonja DM, Filice GA, Greer N, et al. Antimicrobial stewardship in outpatient settings: a systematic review. Infect Control Hosp Epidemiol 2015;36:142–52.

59. Drekonja D, Filice G, Greer N, et al. Antimicrobial stewardship programs in outpatient settings: a systematic review. VA-ESP Project #09-009; 2014.

60. Zhang YZ, Singh S. Antibiotic stewardship programmes in intensive care units: why, how, and where are they leading us. World J Crit Care Med 2015;4:13–28. (referenced in online Table)


Issue
Journal of Clinical Outcomes Management - July 2017, Vol. 24, No. 7
Display Headline
Antimicrobial Stewardship Programs: Effects on Clinical and Economic Outcomes and Future Directions

Determinants of Suboptimal Migraine Diagnosis and Treatment in the Primary Care Setting

Article Type
Changed
Wed, 02/28/2018 - 14:27
Display Headline
Determinants of Suboptimal Migraine Diagnosis and Treatment in the Primary Care Setting

From the Mayo Clinic, Scottsdale, AZ.

 

Abstract

  • Objective: To review the impact of migraine and explore the barriers to optimal migraine diagnosis and treatment.
  • Methods: Review of the literature.
  • Results: Several factors may play a role in the inadequate care of migraine patients, including issues related to poor access to care, diagnostic insight, misdiagnosis, adherence to treatment, and management of comorbidities. Both patient and physician factors play an important role and may be modifiable.
  • Conclusions: A focus on education of both patients and physicians is of paramount importance to improve the care provided to migraine patients. Patient evaluations should be multisystemic and include addressing comorbid conditions as well as a discussion about appropriate use of prevention and avoidance of medication overuse.

Key words: migraine; triptans; medication overuse headache; medication adherence; primary care.

 

Migraine is a common, debilitating condition that is a significant source of reduced productivity and increased disability [1]. According to the latest government statistics, 14.2% of US adults have reported having migraine or severe headaches in the previous 3 months, with an overall age-adjusted 3-month prevalence of 19.1% in females and 9.0% in males [2]. In a self-administered headache questionnaire mailed to 120,000 representative US households, the 1-year period prevalence for migraine was 11.7% (17.1% in women and 5.6% in men). Prevalence peaked in middle life and was lower in adolescents and those older than age 60 years [3]. Migraine is an important cause of reduced health-related quality of life and has a very high economic burden [4]. This effect is even more marked in those with chronic migraine, who are even more likely to have professional and social absenteeism and experience more severe disability [4].

Migraine and headache are a common reason for primary care physician (PCP) visits. Some estimates suggest that as many as 10% of primary care consultations are due to headache [5]. Approximately 75% of all patients complaining of headache in primary care will eventually be diagnosed with migraine [6]. Of these, as many as 1% to 5% will have chronic migraine [6].

Despite the high frequency and the social and economic impact of migraine, it remains underrecognized and undertreated. A survey of US households revealed that only 13% of migraineurs were currently using a preventive therapy while 43.3% had never used one [3]. This is despite the fact that 32.4% met expert criteria for consideration of a preventive medication [3]. The reasons for underrecognition and undertreatment are multifactorial and include both patient and physician factors.

 

 

 

Physician Factors

Although migraine and headache are a leading cause of physician visits, most physicians have had little formal training in headache. In the United States, medical students spend an average of 1 hour of preclinical and 2 hours of clinical education on headache [7]. Furthermore, primary care physicians receive little formal training in headache during residency [8]. In addition to the lack of formal training, there is also a lack of sufficient clinic time to fully evaluate and treat a new headache patient in the primary care setting [8]. Headache consultations are often time-consuming and detail-driven in order to determine the correct diagnosis and treatment [9].

Misdiagnosis

Evidence suggests that misdiagnosis plays a large role in the suboptimal management of migraineurs. Studies have shown that as many as 59.7% of migraineurs were not given a diagnosis of migraine by their primary care provider [10]. Common mistaken diagnoses include tension-type headache [11], “sinus headache” [12], cervical pain syndrome or cervicogenic headache [13], and “stress headache” [14].

The reasons for these misdiagnoses are not certain. It may be that the patient and practitioner assume that the location of the pain is suggestive of the cause [13], even though more than half of those with migraine have associated neck pain [15]. A recent study suggests that 60% of migraineurs who self-reported a diagnosis of cervical pain had subsequently been diagnosed with cervicalgia by a physician [13]. If patients endorse stress as a precipitant or the presence of cervical pain, they are more likely to receive a diagnosis other than migraine. The presence of aura in association with the headache appears to be protective against misdiagnosis [13].

Similarly, patients are often given a diagnosis of “sinus headache.” This diagnosis is often made without radiologic evidence of sinusitis, even in those with a more typical migraine headache [16]. In one survey, 40% of patients meeting criteria for migraine were given this diagnosis. Many of these patients did have nasal symptoms or facial pain without clear evidence of rhinosinusitis, and in some cases these symptoms responded to migraine treatments [16]. This is a particularly important misdiagnosis to highlight, as attributing symptoms to sinus disease may lead to unnecessary consultations and even sinus instrumentation.

In addition to common misdiagnoses, many PCPs are unfamiliar with the “red flags” that may indicate a secondary headache disorder and are also unfamiliar with appropriate use of neuroimaging in headache patients [17].

Misuse of As-Needed Medications

Studies have suggested that a large proportion of PCPs will prescribe nonspecific analgesics for migraine rather than migraine-specific medications [18]. These treatments may include NSAIDs, acetaminophen, barbiturates, and even opiates. This appears to be the pattern even for those with severe attacks [18], suggesting that migraine-specific medications such as triptans may be underused in the primary care setting. Postulated reasons for this pattern include lack of physician knowledge regarding the specific recommendations for managing migraine, the cost of medications, as well as lack of insurance coverage for these medications [19]. Misuse of as-needed medications can lead to medication overuse headache (MOH), which is an underrecognized problem in the primary care setting [20]. In a survey of PCPs in Boston, only 54% of PCPs were aware that barbiturates can cause MOH and only 34% were aware that opiates can cause MOH [17]. The same survey revealed that approximately 20% of PCPs had never made the diagnosis of MOH [17].

Underuse of Preventive Medications

As many as 40% of migraineurs need preventive therapy, but only approximately 13% are currently receiving it [3]. Additionally, the average time from diagnosis of migraine to instituting preventive treatment is 4.3 years, and often there is only a single preventive medication trial if one is instituted [21]. The reasons for this appear to be complex. The physician factors contributing to the underuse of preventive medications include inadequate education, discomfort, and inadequate time for assessments. Only 27.8% of surveyed PCPs were aware of the American Academy of Neurology guidelines for prescribing preventive medications [17].

There may be an underestimation of the disability experienced by migraineurs, which can explain some of the underuse of preventive medications. While many PCPs endorse inquiring about headache-related disability, many do not use validated scales such as the Migraine Disability Assessment Score (MIDAS) or the Headache Impact Test (HIT) [17]. In addition, patients often underreport their headache days and report only their severe exacerbations unless clearly asked about a daily headache [22]. This may be part of the reason why only 20% of migraineurs who meet criteria for chronic migraine are diagnosed as such and why preventatives may not be offered [23].
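
As a rough illustration of how a validated scale quantifies disability, the sketch below sums the five MIDAS item responses (days affected over the prior 3 months) and maps the total to the commonly cited disability grades. Treat the thresholds as an assumption to be checked against the published instrument rather than an authoritative implementation.

```python
def midas_grade(item_days):
    """Sum the five MIDAS item responses (days over the prior 3 months) and
    map the total to the commonly cited grading bands (assumed cutoffs)."""
    total = sum(item_days)
    if total <= 5:
        grade = "I (little or no disability)"
    elif total <= 10:
        grade = "II (mild disability)"
    elif total <= 20:
        grade = "III (moderate disability)"
    else:
        grade = "IV (severe disability)"
    return total, grade

# Hypothetical patient: responses to the five MIDAS questions, in days
print(midas_grade([3, 5, 2, 4, 3]))  # (17, 'III (moderate disability)')
```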

After preventatives are started, less than 25% of patients will be adherent to oral migraine preventive agents at 1 year [24]. Common reasons for discontinuing preventives include adverse effects and perceived inefficacy [22]. Preventive medications may need a 6- to 8-week trial before efficacy can be determined, but in practice medications may be stopped before this threshold is reached. Inadequate follow-up and a lack of detail with regard to medication trials may prematurely create the perception of an intractable patient. It has been suggested that a systematic approach to documenting and choosing preventive agents is helpful in the treatment of migraine [25], although this is not always practical in the primary care setting.

Another contributor to underuse of effective prophylaxis is related to access. Treatment with onabotulinumtoxin A, an efficacious prophylactic treatment approved for select chronic migraine patients [26], will usually require referral to a headache specialist, which is not always available to PCPs in a timely manner [7].

Nonpharmacologic Approaches

Effective nonpharmacologic treatment modalities for migraine, such as cognitive-behavioral therapy and biofeedback [27], are not commonly recommended by PCPs [17]. Instead, there appears to be more focus on avoidance of triggers and referral to non–evidence-based resources, such as special diets and massage therapy [17]. While these methods are not always inappropriate, it should be noted that they often have little or no evidence for efficacy.

Patients often wish for non-medication approaches to migraine management, but for those with significant and severe disability, these are probably insufficient. In these patients, non-medication approaches may best be used as a supplement to pharmacological treatment, with education on pharmacologic prevention given. Neuromodulation is a promising, novel approach that is emerging as a new treatment for migraine, but likely will require referral to a headache specialist.

 

 

Suboptimal Management of Migraine Comorbidities

There are several disorders that are commonly comorbid with migraine. Among the most common are anxiety, depression, medication (and caffeine) overuse, obesity, and sleep disorders [22]. A survey of PCPs reveals that only 50.6% of PCPs screen for anxiety, 60.2% for depression, and 73.5% for sleep disorders [17]. They are, for the most part, modifiable or treatable conditions and their proper management may help ease migraine disability.

In addition, the presence of these comorbidities may alter the choice of treatment, for example, favoring the use of a serotonin and norepinephrine reuptake inhibitor such as venlafaxine in those with comorbid anxiety and depression. It is also worthwhile to have a high index of suspicion for obstructive sleep apnea in patients with headache, particularly in the obese and in those who endorse nonrestorative sleep or excessive daytime somnolence. It appears that patients who are adherent to treatment of sleep apnea are more likely to report improvement in their headache [28].

Given the time constraints that often exist in the PCP office setting, addressing these comorbidities thoroughly is not always possible. It is reasonable, however, to have patients use screening tools while in the waiting room or prior to an appointment, to better identify those with modifiable comorbidities. Depression, anxiety, and excessive daytime sleepiness can all be screened for relatively easily with tools such as the PHQ-9 [29], GAD-7 [30] and Epworth Sleepiness Scale [31], respectively. A positive screen on any of these could lead the PCP to further investigate these entities as a possible contributor to migraine.
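
For illustration only, the snippet below flags positive screens using cutoffs commonly cited for these instruments (PHQ-9 ≥ 10, GAD-7 ≥ 10, Epworth > 10). These thresholds are stated as assumptions and should be confirmed against local screening protocols before use.

```python
def screen_flags(phq9, gad7, epworth):
    """Flag positive screens using commonly cited cutoffs (assumed values,
    illustrative only; confirm thresholds against local practice)."""
    return {
        "depression (PHQ-9 >= 10)": phq9 >= 10,
        "anxiety (GAD-7 >= 10)": gad7 >= 10,
        "excessive daytime sleepiness (Epworth > 10)": epworth > 10,
    }

# Hypothetical waiting-room scores
print(screen_flags(phq9=12, gad7=6, epworth=14))
```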

Patient Factors

In addition to the physician factors identified above, patient factors can also contribute to the suboptimal management of migraine. These factors include a lack of insight into the diagnosis, poor compliance with treatment of migraine or its comorbidities, and overuse of abortive medications. There are also less modifiable patient factors such as socioeconomic status and the stigma that may be associated with migraine.

Poor Insight Into Diagnosis

Despite the high prevalence and burden of migraine in the general population, there is a staggering lack of awareness among migraineurs. Some estimates suggest that as many as 54% of patients are unaware that their headaches represent migraine [32]. The most common self-reported diagnoses in migraineurs are sinus headache (39%), tension-type headache (31%), and stress headache (29%) [14]. In addition, many patients believe they are suffering from cervical spine–related pain [13], likely because of the common presence of posteriorly located pain, attacks triggered by poor sleep, or attacks associated with weather changes [13]. Patients presenting with aura are more likely to report and to receive a physician diagnosis of migraine [14], and women are more likely than men to receive and report a diagnosis of migraine [32].

Many factors play a role in this poor insight. Patients often assume that the location of the pain indicates its cause [13]. Many never seek consultation for their headaches and thus never receive a proper diagnosis [33], while others seek medical care but fail to remember their diagnosis or receive an incorrect one [34].

Poor Adherence

The body of literature examining adherence with headache treatment is growing but remains small [35]. In a recent systematic review of treatment adherence in pediatric and adult patients with headache, adherence rates in adults ranged from 25% to 94% [35]. In this review, analyses of prescription claims data found poor persistence among patients prescribed triptans for migraine. In one large claims-based study, 53.8% of patients receiving a new triptan prescription did not persistently refill their index triptan [36]. Although some of these patients switched to an alternative triptan, the majority switched to a non-triptan migraine medication, including opioids and nonsteroidal anti-inflammatory drugs [36].

Cady and colleagues’ study of lapsed and sustained triptan users found that sustained users were significantly more satisfied with their medication, more confident in its ability to control headache, and reported control of migraine with fewer doses [37]. The authors concluded that these findings suggest lapsed users may not be receiving optimal treatment. In a review by Rains et al [38], the authors found that headache treatment adherence declines “with more frequent and complex dosing regimens, side effects, and costs, and is subject to a wide range of psychosocial influences.”

Adherence issues also exist for migraine prevention. Fewer than 25% of chronic migraine patients continue to take oral preventive therapies at 1 year [24]. The reasons for this nonadherence are not completely clear but are likely multifactorial. Preventives may take several weeks to months to become effective, which may contribute to noncompliance. In addition, migraineurs appear to receive inadequate follow-up for migraine; studies from France suggest that only 18% of those aware of their migraine diagnosis received medical follow-up [39].

Medication Overuse

While the data are not entirely clear, overuse of as-needed medication likely plays a role in migraine chronification [40]. The reasons for medication overuse in the migraine population include several of the issues already highlighted above: inadequate patient education, poor insight into diagnosis, not seeking care, misdiagnosis, and treatment nonadherence. Patients should be educated on the proper use of as-needed medication, and limits to medication use should be set during the physician-patient encounter. Patients should be counselled to limit their as-needed medication to no more than 10 days per month to reduce the risk of medication overuse headache. Ideally, opiates and barbiturates should be avoided, and they should never be used as first-line therapy in patients who lack contraindications to NSAIDs and triptans. If their use is unavoidable, they should be used sparingly, as use on as few as 5 to 8 days per month can be problematic [41]. Furthermore, if patients are using several different acute analgesics, the combined use of all as-needed pain medications should total no more than 10 days per month to reduce the potential for medication overuse headache.
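
The arithmetic behind this counseling is simple enough to sketch. The hypothetical example below tallies the distinct calendar days per month on which any acute medication was used and applies the limits described above (no more than 10 combined days per month, with a stricter threshold of about 5 days for opiates and barbiturates). The diary structure and function are illustrative assumptions, not a validated instrument.

```python
from typing import Dict, List, Set

# Thresholds discussed in the text (days of use per month).
COMBINED_LIMIT = 10           # all as-needed analgesics combined
OPIATE_BARBITURATE_LIMIT = 5  # problematic in as few as 5-8 days per month

def medication_overuse_flags(use_days: Dict[str, Set[int]]) -> List[str]:
    """Return warnings given the calendar days each acute drug class was used this month.

    `use_days` maps a drug class (e.g., "triptan", "NSAID", "opiate") to the set of
    days of the month on which it was taken.
    """
    warnings = []
    all_days = set().union(*use_days.values()) if use_days else set()
    if len(all_days) > COMBINED_LIMIT:
        warnings.append("Combined as-needed use exceeds 10 days/month: risk of medication overuse headache.")
    risky_days = use_days.get("opiate", set()) | use_days.get("barbiturate", set())
    if len(risky_days) >= OPIATE_BARBITURATE_LIMIT:
        warnings.append("Opiate/barbiturate use on 5 or more days/month can be problematic.")
    return warnings

# Example: a headache diary covering one month of triptan, NSAID, and opiate use.
diary = {
    "triptan": {1, 4, 8, 12, 15, 19},
    "NSAID": {2, 5, 9, 13, 21, 27},
    "opiate": {3, 10, 17, 24, 30},
}
for warning in medication_overuse_flags(diary):
    print(warning)
```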

 

 

Socioeconomic Factors

Low socioeconomic status has been associated with an increased prevalence of all headache forms and an increased migraine attack frequency [42], but there appear to be few studies examining the impact of low socioeconomic status on treatment. Lipton et al found that health insurance status was an important predictor of whether persons with migraine consulted a health care professional [43]. Among consulters, women were far more likely to be diagnosed than men, suggesting that gender bias in diagnosis may be an important barrier for men. Higher household income appeared to be a predictor of receiving a correct diagnosis of migraine. These researchers also found economic barriers to the use of appropriate prescription medications [43]. Differences in diagnosis and treatment may also indicate racial and ethnic disparities in access and quality of care for minority patients [44].

Stigma

At least 1 study has reported that migraine patients experience stigma. In Young et al’s study of 123 episodic migraine patients, 123 chronic migraine patients, and 62 epilepsy patients, adjusted stigma was similar for chronic migraine and epilepsy, and both were greater than for episodic migraine [45]. Stigma correlated most strongly with inability to work, and migraine patients reported equally high stigma scores across age, income, and education. The stigma of migraine may pose a barrier to seeking consultation and treatment. Further, the perception that migraine is “just a headache” may lead to stigmatizing attitudes on the part of friends, family, and coworkers of patients with migraine.

Conclusions and Recommendations

Migraine is a prevalent and frequently disabling condition that is underrecognized and undertreated in the primary care setting. Both physician and patient factors pose barriers to optimal diagnosis and treatment, and education of both patients and physicians is the foremost remedy. Physician education in medical school and during residency training, including in primary care specialties, could include additional didactic teaching as well as clinical encounters in headache subspecialty clinics to increase exposure. Patient advocacy groups and public campaigns to improve understanding of migraine in the community may help improve patient education and reduce stigma. Patients should be encouraged to seek consultation for headache to reduce long-term headache disability. Management of comorbidities is paramount, and screening tools for migraine-associated disability, anxiety, depression, and medication use may be worth implementing in the primary care setting, as they are easy to use and time saving.

Recent surveys of PCPs suggest that the resource most desired is ready access to subspecialists for advice and “curb-side” consultation [17]. While this is not always practical, it may be worthwhile to explore closer relationships between primary care and subspecialty headache clinics, or greater access to e-consultation or telephone consultation for more rural areas. Recently, Minen et al examined education strategies for PCPs. While in-person education sessions with PCPs were poorly attended, multiple possibilities for further education were identified. It was suggested that giving PCPs real-time access to resources during the patient encounter, including online databases, simple treatment algorithms, and guidance on when to refer to a neurologist, would improve their comfort in managing patients [46]. In addition, it may be worthwhile to train not only PCPs but also nursing and allied health staff so that they can provide headache education to patients. This may ease some of the time burden on PCPs and foster a collaborative environment in which headache can be managed [46].

 

Corresponding author: William S. Kingston, MD, Mayo Clinic, 13400 E. Shea Blvd., Scottsdale, AZ 85259.

Financial disclosures: None.

References

1. Stewart WF, Schechter A, Lipton RB. Migraine heterogeneity. Disability, pain intensity and attack frequency and duration. Neurology 1994; 44(suppl 4):S24–S39

2. Burch RC, Loder S, Loder E, Smitherman TA. The prevalence of migraine and severe headache in the United States: updated statistics from government health surveillance studies. Headache 2015;55:21–34.

3. Lipton RB, Bigal ME, Diamond M, et al. Migraine prevalence, disease burden, and the need for preventive therapy. Neurology 2007;68:343–9.

4. Blumenfeld AM, Varon SF, Wilcox TK, et al. Disability, HRQoL and resource use among chronic and episodic migraineurs: results from the International Burden of Migraine Study (IBMS). Cephalalgia 2011;31:301–15.

5. Ahmed F. Headache disorders: differentiating and managing the common subtypes. Br J Pain 2012;6:124–32.

6. Natoli JL, Manack A, Dean B, et al. Global prevalence of chronic migraine: a systematic review. Cephalalgia 2010;30:599–609.

7. Finkel AG. American academic headache specialists in neurology: Practice characteristics and culture. Cephalalgia 2004; 24:522–7.

8. Sheftell FD, Cady RK, Borchert LD, et al. Optimizing the diagnosis and treatment of migraine. J Am Acad Nurse Pract 2005;17:309–17.

9. Lipton RB, Scher AI, Steiner TJ, et al. Patterns of health care utilization for migraine in England and in the United States. Neurology 2003;60:441–8.

10. De Diego EV, Lanteri-Minet M. Recognition and management of migraine in primary care: Influence of functional impact measures by the Headache Impact Test (HIT). Cephalalgia 2005;25:184–90.

11. Miller S, Matharu MS. Migraine is underdiagnosed and undertreated. Practitioner 2014;258:19–24.

12. Al-Hashel JY, Ahmed SF, Alroughani R, et al. Migraine misdiagnosis as sinusitis, a delay that can last for many years. J Headache Pain 2013;14:97.

13. Viana M, Sances G, Terrazzino S, et al. When cervical pain is actually migraine: an observational study in 207 patients. Cephalalgia 2016. Epub ahead of print.

14. Diamond MD, Bigal ME, Silberstein S, et al. Patterns of diagnosis and acute and preventive treatment for migraine in the United States: Results from the American Migraine Prevalence and Prevention Study. Headache 2007;47:355–63.

15. Aguila MR, Rebbeck T, Mendoza KG, et al. Definitions and participant characteristics of frequent recurrent headache types in clinical trials: A systematic review. Cephalalgia 2017. Epub ahead of print.

16. Senbil N, Yavus Gurer YK, Uner C, Barut Y. Sinusitis in children and adolescents with chronic or recurrent headache: A case-control study. J Headache Pain 2008;9:33–6.

17. Minen MT, Loder E, Tishler L, Silbersweig D. Migraine diagnosis and treatment: A knowledge and needs assessment among primary care providers. Cephalalgia 2016;36:358–70.

18. MacGregor EA, Brandes J, Eikerman A. Migraine prevalence and treatment patterns: The global migraine and zolmitriptan evaluation survey. Headache 2003;33:19–26.

19. Khan S, Mascarenhas A, Moore JE, et al. Access to triptans for acute episodic migraine: a qualitative study. Headache 2015; 44(suppl 4):199–211.

20. Tepper SJ. Medication-overuse headache. Continuum 2012;18:807–22.

21. Dekker F, Dielemann J, Neven AK, et al. Preventive treatment for migraine in primary care, a population based study in the Netherlands. Cephalalgia 2013;33:1170–8.

22. Starling AJ, Dodick DW. Best practices for patients with chronic migraine: burden, diagnosis and management in primary care. Mayo Clin Proc 2015;90:408–14.

23. Bigal ME, Serrano D, Reed M, Lipton RB. Chronic migraine in the population: burden, diagnosis, and satisfaction with treatment. Neurology 2008;71:559–66.

24. Hepp Z, Dodick D, Varon S, et al. Adherence to oral migraine preventive-medications among patients with chronic migraine. Cephalalgia 2015;35:478–88.

25. Smith JH, Schwedt TJ. What constitutes an “adequate” trial in migraine prevention? Curr Pain Headache Rep 2015;19:52.

26. Dodick DW, Turkel CC, DeGryse RE, et al. OnabotulinumtoxinA for treatment of chronic migraine: pooled results from the double blind, randomized, placebo-controlled phases of the PREEMPT clinical program. Headache 2010;50:921–36.

27. Silberstein SD. Practice parameter: evidence based guidelines for migraine headache (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 2000;55: 754–62.

28. Johnson KG, Ziemba AM, Garb JL. Improvement in headaches with continuous positive airway pressure for obstructive sleep apnea: a retrospective analysis. Headache 2013;53:333–43.

29. Altura KC, Patten SB, Fiest KM, et al. Suicidal ideation in persons with neurological conditions: prevalence, associations and validation of the PHQ-9 for suicidal ideation. Gen Hosp Psychiatry 2016;42:22–6.

30. Seo JG, Park SP. Validation of the Generalized Anxiety Disorder-7 (GAD-7) and GAD-2 in patients with migraine. J Headache Pain 2015;16:97.

31. Corlateanu A, Pylchenko S, DIrcu V, Botnaru V. Predictors of daytime sleepiness in patients with obstructive sleep apnea. Pneumologia 2015;64:21–5.

32. Linde M, Dahlof C. Attitudes and burden of disease among self-considered migraineurs – a nation-wide population-based survey in Sweden. Cephalalgia 2004;24:455–65.

33. Osterhaus JT, Gutterman DL, Plachetka JR. Health care resources and lost labor costs of migraine headaches in the United States. Pharmacoeconomics 1992;36:69–76.

34. Tepper SJ, Dahlof CG, Dowson A et al. Prevalence and diagnosis of migraine in patients consulting their physician with a complaint of headache: Data from the Landmark Study. Headache 2004;44:856–64.

35.  Ramsey RR, Ryan JL, Hershey AD, et al. Treatment adherence in patients with headache: a systematic review. Headache 2014;54:795–816.

36. Katic BJ, Rajagopalan S, Ho TW, et al. Triptan persistency among newly initiated users in a pharmacy claims database. Cephalalgia 2011;31:488–500.

37.  Cady RK, Maizels M, Reeves DL, Levinson DM, Evans JK. Predictors of adherence to triptans: factors of sustained vs lapsed users. Headache 2009;49:386–94.

38.  Rains JC, Lipchik GL, Penzien DB. Behavioral facilitation of medical treatment for headache--part I: Review of headache treatment compliance. Headache 2006;46:1387–94.

39. Lucas C, Chaffaut C, Artaz MA, Lanteri-Minet M. FRAMIG 2000: Medical and therapeutic management of migraine in France. Cephalalgia 2005;25:267–79.

40. Bigal ME, Serrano D, Buse D et al. Acute migraine medications and evolution from episodic to chronic migraine: a longitudinal population-based study. Headache 2008;48:1157–68.

41. Diener HC, Limmroth V. Medication-overuse headache: a worldwide problem. Lancet Neurol 2004;3:475–83.

42.  Winter AC, Berger K, Buring JE, Kurth T. Associations of socioeconomic status with migraine and non-migraine headache. Cephalalgia 2012;32:159–70.

43. Lipton RB, Serrano D, Holland S, et al. Barriers to the diagnosis and treatment of migraine: effects of sex, income, and headache features. Headache 2013;53:81–92.

44.  Loder S, Sheikh HU, Loder E. The prevalence, burden, and treatment of severe, frequent, and migraine headaches in US minority populations: statistics from National Survey studies. Headache 2015;55:214–28.

45. Young WB, Park JE, Tian IX, Kempner J. The stigma of migraine. PLoS One 2013;8:e54074.

46. Minen MT, Shome A, Halpern A, et al. A migraine training program for primary care providers: an overview of a survey and pilot study findings, lessons learned, and considerations for further research. Headache 2016;56:725–40.

Journal of Clinical Outcomes Management - July 2017, Vol. 24, No. 7

After preventatives are started, less than 25% of patients will be adherent to oral migraine preventive agents at 1 year [24]. Common reasons for discontinuing preventives include adverse effects and perceived inefficacy [22]. Preventive medications may need a 6- to 8-week trial before efficacy is determined, but in practice medications may be stopped before this threshold is reached. Inadequate follow-up and lack of detail with regard to medication trials may result in the perception of an intractable patient prematurely. It has been suggested that a systematic approach to documenting and choosing preventive agents is helpful in the treatment of migraine [25], although this is not always practical in the primary care setting.

Another contributor to underuse of effective prophylaxis is related to access. Treatment with onabotulinumtoxin A, an efficacious prophylactic treatment approved for select chronic migraine patients [26], will usually require referral to a headache specialist, which is not always available to PCPs in a timely manner [7].

Nonpharmacologic Approaches

Effective nonpharmacologic treatment modalities for migraine, such as cognitive-behavioral therapy and biofeedback [27], are not commonly recommended by PCPs [17]. Instead, there appears to be more focus on avoidance of triggers and referral to non–evidence-based resources, such as special diets and massage therapy [17]. While these methods are not always inappropriate, it should be noted that they often have little or no evidence for efficacy.

Patients often wish for non-medication approaches to migraine management, but for those with significant and severe disability, these are probably insufficient. In these patients, non-medication approaches may best be used as a supplement to pharmacological treatment, with education on pharmacologic prevention given. Neuromodulation is a promising, novel approach that is emerging as a new treatment for migraine, but likely will require referral to a headache specialist.

 

 

Suboptimal Management of Migraine Comorbidities

There are several disorders that are commonly comorbid with migraine. Among the most common are anxiety, depression, medication (and caffeine) overuse, obesity, and sleep disorders [22]. A survey of PCPs reveals that only 50.6% of PCPs screen for anxiety, 60.2% for depression, and 73.5% for sleep disorders [17]. They are, for the most part, modifiable or treatable conditions and their proper management may help ease migraine disability.

In addition, the presence of these comorbidities may alter choice of treatment, for example, favoring the use of an serotonin and norepinephrine reuptake inhibitor such as venlafaxine for treatment  in those with comorbid anxiety and depression. It is also worthwhile to have a high index of suspicion for obstructive sleep apnea in patients with headache, particularly in the obese and in those who endorse nonrestorative sleep or excessive daytime somnolence. It appears that patients who are adherent to the treatment of sleep apnea are more likely to report improvement in their headache [28].

Given the time constraints that often exist in the PCP office setting, addressing these comorbidities thoroughly is not always possible. It is reasonable, however, to have patients use screening tools while in the waiting room or prior to an appointment, to better identify those with modifiable comorbidities. Depression, anxiety, and excessive daytime sleepiness can all be screened for relatively easily with tools such as the PHQ-9 [29], GAD-7 [30] and Epworth Sleepiness Scale [31], respectively. A positive screen on any of these could lead the PCP to further investigate these entities as a possible contributor to migraine.

Patient Factors

In addition to the physician factors identified above, patient factors can contribute to the suboptimal management of migraine as well. These factors include a lack insight into diagnosis, poor compliance with treatment of migraine or its comorbidities, and overuse of abortive medications. There are also less modifiable patient factors such as socioeconomic status and the stigma that may be associated with migraine.

Poor Insight Into Diagnosis

Despite the high prevalence and burden of migraine in the general population, there is a staggering lack of awareness among migraineurs. Some estimates state that as many as 54% of patients were unaware that their headaches represented migraine [32]. The most common self-reported diagnoses in migraineurs are sinus headache (39%), tension-type headache (31%) and stress headache (29%) [14]. In addition, many patients believe they are suffering from cervical spine–related pain [13]. This is likely due to the common presence of posteriorly located pain, attacks triggered by poor sleep, or attacks associated with weather changes [13]. Patients presenting with aura are more likely to report and to receive a physician diagnosis of migraine [14]. Women are more likely to receive and report a diagnosis of migraine compared with men [32].

There are many factors that play a role in poor insight. Many patients appear to believe that the location of the pain is suggestive of the cause [13]. Many patients never seek out consultation for their headaches, and thus never receive a proper diagnosis [33]. Some patients may seek out medical care for their headaches, but fail to remember their diagnosis or receive an improper diagnosis [34].

Poor Adherence

The body of literature examining adherence with headache treatment is growing, but remains small [35]. In a recent systematic review of treatment adherence in pediatric and adult patients with headache, adherence rates in adults with headache ranged from 25% to 94% [35]. In this review, prescription claims data analyses found poor persistence in patients prescribed triptans for migraine treatment. In one large claims-based study, 53.8% of patients  receiving a new triptan prescription did not persistently refill their index triptan [36]. Although some of these patients switched to an alternative triptan, the majority switched to a non-triptan migraine medication, including opioids and nonsteroidal anti-inflammatory drugs [36].

Cady and colleagues’ study of lapsed and sustained triptan users found that sustained users were significantly more satisfied with their medication, confident in the medication’s ability to control headache, and reported control of migraine with fewer doses of medication [37]. The authors concluded that the findings suggest that lapsed users may not be receiving optimal treatment. In a review by Rains et al [38], the authors found that headache treatment adherence declines “with more frequent and complex dosing regimens, side effects, and costs, and is subject to a wide range of psychosocial influences.”

Adherence issues also exist for migraine prevention. Less than 25% of chronic migraine patients continue to take oral preventive therapies at 1 year [24]. The reasons for this nonadherence are not completely clear, but are likely multifactorial. Preventives may take several weeks to months to become effective, which may contribute to noncompliance. In addition, migraineurs appears to have inadequate follow-up for migraine. Studies from France suggest that only 18% of those aware of their migraine diagnosis received medical follow-up [39].

Medication Overuse

While the data is not entirely clear, it is likely that overuse of as-needed medication plays a role in migraine chronification [40]. The reasons for medication overuse in the migraine population include some of the issues already highlighted above, including inadequate patient education, poor insight into diagnosis, not seeking care, misdiagnosis, and treatment nonadherence. Patients should be educated on the proper use of as-needed medication. Limits to medication use should be set during the physician-patient encounter. Patients should be counselled to limit their as-needed medication to no more than 10 days per month to reduce the risk of medication overuse headache. Ideally, opiates and barbiturates should be avoided, and never used as first-line therapy in patients who lack contraindications to NSAIDs and triptans. If their use in unavoidable for other reasons, they should be used sparingly, as use on as few as 5 to 8 days per month can be problematic [41]. Furthermore it is important to note that if patients are using several different acute analgesics, the combined total use of all as-needed pain medications needs to be less than 10 days per month to reduce the potential for medication overuse headache.

 

 

Socioeconomic Factors

Low socioeconomic status has been associated with an increased prevalence for all headache forms and an increased migraine attack frequency [42], but there appear to be few studies looking at the impact of low socioeconomic status and treatment. Lipton et al found that health insurance status was an important predictor of persons with migraine consulting a health care professional [43]. Among consulters, women were far more likely to be diagnosed than men, suggesting that gender bias in diagnosis may be an important barrier for men. Higher household income appeared to be a predictor for receiving a correct diagnosis of migraine. These researchers also found economic barriers related to use of appropriate prescription medications [43]. Differences in diagnosis and treatment may indicate racial and ethnic disparities in access and quality of care for minority patients [44].

Stigma

At least 1 study has reported that migraine patients experience stigma. In Young et al’s study of  123 episodic migraine patients, 123 chronic migraine patients, and 62 epilepsy patients, adjusted stigma was similar for chronic migraine and epilepsy, which were greater than for episodic migraine [45]. Stigma correlated most strongly with inability to work. Migraine patients reported equally high stigma scores across age, income, and education. The stigma of migraine may pose a barrier to seeking consultation and treatment. Further, the perception that migraine is “just a headache” may lead to stigmatizing attitudes on the part of friends, family, and coworkers of patients with migraine.

Conclusions and Recommendations

Migraine is a prevalent and frequently disabling condition that is underrecognized and undertreated in the primary care setting. Both physician and patient factors pose barriers to the optimal diagnosis and treatment of migraine. Remedies to address these barriers include education of both patients and physicians first and foremost. Targeting physician education in medical school and during residency training, including in primary care subspecialties, could include additional didactic teaching, but also clinical encounters in headache subspecialty clinics to increase exposure. Patient advocacy groups and public campaigns to improve understanding of migraine in the community may be a means for improving patient education and reducing stigma. Patients should be encouraged to seek out consultations for headache to reduce long-term headache disability. Management of comorbidities is paramount, and screening tools for migraine-associated disability, anxiety, depression, and medication use may be helpful to implement in the primary care setting as they are easy to use and time saving.

Recent surveys of PCPs suggest that the resource that is most desired is ready access to subspecialists for advice and “curb-side” consultation [17]. While this solution is not always practical, it may be worthwhile exploring closer relationships between primary care and subspecialty headache clinics, or perhaps more access to e-consultation or telephone consultation for more rural areas. Recently, Minen et al examined education strategies for PCPs. While in-person education sessions with PCPs were poorly attended, multiple possibilities for further education were identified. It was suggested that PCPs having real-time access to resources during the patient encounter would improve their comfort in managing patients. This includes online databases, simple algorithms for treatment, and directions for when to refer to a neurologist [46]. In addition, it may be worthwhile to train not only PCPs but also nursing and allied health staff so that they can provide headache education to patients. This may help ease some of the time burden on PCPs as well as provide a collaborative environment in which headache can be managed [46].

 

Corresponding author: William S. Kingston, MD, Mayo Clinic, 13400 E. Shea Blvd., Scottsdale, AZ 85259.

Financial disclosures: None.

From the Mayo Clinic, Scottsdale, AZ.

 

Abstract

  • Objective: To review the impact of migraine and explore the barriers to optimal migraine diagnosis and treatment.
  • Methods: Review of the literature.
  • Results: Several factors may play a role in the inadequate care of migraine patients, including issues related to poor access to care, diagnostic insight, misdiagnosis, adherence to treatment, and management of comorbidities. Both patient and physician factors play an important role and many be modifiable.
  • Conclusions: A focus on education of both patients and physicians is of paramount importance to improve the care provided to migraine patients. Patient evaluations should be multisystemic and include addressing comorbid conditions as well as a discussion about appropriate use of prevention and avoidance of medication overuse.

Key words: migraine; triptans; medication overuse headache; medication adherence; primary care.

 

Migraine is a common, debilitating condition that is a significant source of reduced productivity and increased disability [1]. According to the latest government statistics, 14.2% of US adults have reported having migraine or severe headaches in the previous 3 months, with an overall age-adjusted 3-month prevalence of 19.1% in females and 9.0% in males [2]. In a self-administered headache questionnaire mailed to 120,000 representative US households, the 1-year period prevalence for migraine was 11.7% (17.1% in women and 5.6% in men). Prevalence peaked in middle life and was lower in adolescents and those older than age 60 years [3]. Migraine is an important cause of reduced health-related quality of life and has a very high economic burden [4]. This effect is even more marked in those with chronic migraine, who are even more likely to have professional and social absenteeism and experience more severe disability [4].

Migraine and headache are a common reason for primary care physician (PCP) visits. Some estimates suggest that as many as 10% of primary care consultations are due to headache [5]. Approximately 75% of all patients complaining of headache in primary care will eventually be diagnosed with migraine [6]. Of these, as many as 1% to 5% will have chronic migraine [6].

Despite the high frequency and social and economic impact of migraine, migraine is underrecognized and undertreated. A survey of US households revealed that only 13% of migraineurs were currently using a preventive thrapy while 43.3% had never used one [3]. This is despite the fact that 32.4% met expert criteria for consideration of a preventive medication [3]. The reasons for underrecognition and undertreatment are multifactorial and include both patient and physician factors.

 

 

 

Physician Factors

Although migraine and headache are a leading cause of physicians visits, most physicians have had little formal training in headache. In the United States, medical students spend an average of 1 hour of preclinical and 2 hours of clinical education in headache [7]. Furthermore, primary care physicians receive little formal training in headache during residency [8]. In addition to the lack of formal training, there is also a lack of substantial clinic time available to fully evaluate and treat a new headache patient in the primary care setting [8]. Headache consultations can often be timely and detail-driven in order to determine the correct diagnosis and treatment [9].

Misdiagnosis

Evidence suggests that misdiagnosis plays a large role in the suboptimal management of migraineurs. Studies have shown that as many as 59.7% of migraineurs were not given a diagnosis of migraine by their primary care provider [10]. Common mistaken diagnoses include tension-type headache [11], “sinus headache” [12], cervical pain syndrome or cervicogenic headache [13], and “stress headache” [14].

The reasons for these misdiagnoses is not certain. It may be that the patient and practitioner assume that location of the pain is suggestive of the cause [13]. This is even though more than half of those with migraine have associated neck pain [15]. A recent study suggests that 60% of migraineurs who self-reported a diagnosis of cervical pain have been subsequently diagnosed with cervicalgia by a physician [13]. If patients endorse stress as a precipitant or the presence of cervical pain, they are more likely to obtain a diagnosis other than migraine. The presence of aura in association with the headache appears to be protective against misdiagnosis [13].

Similarly, patients are often given a diagnosis of “sinus headache.” This diagnosis is often made without radiologic evidence of sinusitis and even in those with a more typical migraine headache [16]. In one survey, 40% of patients meeting criteria for migraine were given this diagnosis. Many of these patients did have nasal symptoms or facial pain without clear evidence or rhinosinusitis, and in some cases these symptoms would respond to migraine treatments [16]. This is a particularly important misdiagnosis to highlight, as attributing symptoms to sinus disease may lead to unnecessary consultations and even sinus instrumentation.

In addition to common misdiagnoses, many PCPs are unfamiliar with the “red flags” that may indicate a secondary headache disorder and are also unfamiliar with appropriate use of neuroimaging in headache patients [17].

Misuse of As-Needed Medications

Studies have suggested that a large proportion of PCPs will prescribe nonspecific analgesics for migraine rather than migraine-specific medications [18]. These treatments may include NSAIDs, acetaminophen, barbiturates, and even opiates. This appears to be the pattern even for those with severe attacks [18], suggesting that migraine-specific medications such as triptans may be underused in the primary care setting. Postulated reasons for this pattern include lack of physician knowledge regarding the specific recommendations for managing migraine, the cost of medications, as well as lack of insurance coverage for these medications [19]. Misuse of as-needed medications can lead to medication overuse headache (MOH), which is an underrecognized problem in the primary care setting [20]. In a survey of PCPs in Boston, only 54% of PCPs were aware that barbiturates can cause MOH and only 34% were aware that opiates can cause MOH [17]. The same survey revealed that approximately 20% of PCPs had never made the diagnosis of MOH [17].

Underuse of Preventive Medications

As many as 40% of migraineurs need preventive therapy, but only approximately 13% are currently receiving it [3]. Additionally, the average time from diagnosis of migraine to instituting preventive treatment is 4.3 years, and often there is only a single preventive medication trial if one is instituted [21]. The reasons for this appear to be complex. The physician factors contributing to the underuse of preventive medications include inadequate education, discomfort and inadequate time for assessments. Only 27.8% of surveyed PCPs were aware of the American Academy of Neurology guidelines for prescribing preventive medications [17].

There may be an underestimate of the disability experienced by migraineurs, which can explain some of the underuse of preventive medications. While many PCPs endorse inquiring about headache-related disability, many do not used validated scales such as the Migraine Disability Assessment Score (MIDAS) or the Headache Impact Test (HIT) [17]. In addition, patients often underreport their headache days and report only their severe exacerbations unless clearly asked about a daily headache [22]. This may be part of the reason why only 20% of migraineurs who meet criteria for chronic migraine are diagnosed as such and why preventatives may not be offered [23].

After preventatives are started, less than 25% of patients will be adherent to oral migraine preventive agents at 1 year [24]. Common reasons for discontinuing preventives include adverse effects and perceived inefficacy [22]. Preventive medications may need a 6- to 8-week trial before efficacy is determined, but in practice medications may be stopped before this threshold is reached. Inadequate follow-up and lack of detail with regard to medication trials may result in the perception of an intractable patient prematurely. It has been suggested that a systematic approach to documenting and choosing preventive agents is helpful in the treatment of migraine [25], although this is not always practical in the primary care setting.

Another contributor to underuse of effective prophylaxis is related to access. Treatment with onabotulinumtoxin A, an efficacious prophylactic treatment approved for select chronic migraine patients [26], will usually require referral to a headache specialist, which is not always available to PCPs in a timely manner [7].

Nonpharmacologic Approaches

Effective nonpharmacologic treatment modalities for migraine, such as cognitive-behavioral therapy and biofeedback [27], are not commonly recommended by PCPs [17]. Instead, there appears to be more focus on avoidance of triggers and referral to non–evidence-based resources, such as special diets and massage therapy [17]. While these methods are not always inappropriate, it should be noted that they often have little or no evidence for efficacy.

Patients often wish for non-medication approaches to migraine management, but for those with significant and severe disability, these are probably insufficient. In these patients, non-medication approaches may best be used as a supplement to pharmacological treatment, with education on pharmacologic prevention given. Neuromodulation is a promising, novel approach that is emerging as a new treatment for migraine, but likely will require referral to a headache specialist.

 

 

Suboptimal Management of Migraine Comorbidities

There are several disorders that are commonly comorbid with migraine. Among the most common are anxiety, depression, medication (and caffeine) overuse, obesity, and sleep disorders [22]. A survey of PCPs reveals that only 50.6% of PCPs screen for anxiety, 60.2% for depression, and 73.5% for sleep disorders [17]. They are, for the most part, modifiable or treatable conditions and their proper management may help ease migraine disability.

In addition, the presence of these comorbidities may alter choice of treatment, for example, favoring the use of an serotonin and norepinephrine reuptake inhibitor such as venlafaxine for treatment  in those with comorbid anxiety and depression. It is also worthwhile to have a high index of suspicion for obstructive sleep apnea in patients with headache, particularly in the obese and in those who endorse nonrestorative sleep or excessive daytime somnolence. It appears that patients who are adherent to the treatment of sleep apnea are more likely to report improvement in their headache [28].

Given the time constraints that often exist in the PCP office setting, addressing these comorbidities thoroughly is not always possible. It is reasonable, however, to have patients use screening tools while in the waiting room or prior to an appointment, to better identify those with modifiable comorbidities. Depression, anxiety, and excessive daytime sleepiness can all be screened for relatively easily with tools such as the PHQ-9 [29], GAD-7 [30] and Epworth Sleepiness Scale [31], respectively. A positive screen on any of these could lead the PCP to further investigate these entities as a possible contributor to migraine.

Patient Factors

In addition to the physician factors identified above, patient factors can contribute to the suboptimal management of migraine as well. These factors include a lack insight into diagnosis, poor compliance with treatment of migraine or its comorbidities, and overuse of abortive medications. There are also less modifiable patient factors such as socioeconomic status and the stigma that may be associated with migraine.

Poor Insight Into Diagnosis

Despite the high prevalence and burden of migraine in the general population, there is a staggering lack of awareness among migraineurs. Some estimates state that as many as 54% of patients were unaware that their headaches represented migraine [32]. The most common self-reported diagnoses in migraineurs are sinus headache (39%), tension-type headache (31%) and stress headache (29%) [14]. In addition, many patients believe they are suffering from cervical spine–related pain [13]. This is likely due to the common presence of posteriorly located pain, attacks triggered by poor sleep, or attacks associated with weather changes [13]. Patients presenting with aura are more likely to report and to receive a physician diagnosis of migraine [14]. Women are more likely to receive and report a diagnosis of migraine compared with men [32].

There are many factors that play a role in poor insight. Many patients appear to believe that the location of the pain is suggestive of the cause [13]. Many patients never seek out consultation for their headaches, and thus never receive a proper diagnosis [33]. Some patients may seek out medical care for their headaches, but fail to remember their diagnosis or receive an improper diagnosis [34].

Poor Adherence

The body of literature examining adherence with headache treatment is growing, but remains small [35]. In a recent systematic review of treatment adherence in pediatric and adult patients with headache, adherence rates in adults with headache ranged from 25% to 94% [35]. In this review, prescription claims data analyses found poor persistence in patients prescribed triptans for migraine treatment. In one large claims-based study, 53.8% of patients  receiving a new triptan prescription did not persistently refill their index triptan [36]. Although some of these patients switched to an alternative triptan, the majority switched to a non-triptan migraine medication, including opioids and nonsteroidal anti-inflammatory drugs [36].

Cady and colleagues’ study of lapsed and sustained triptan users found that sustained users were significantly more satisfied with their medication, confident in the medication’s ability to control headache, and reported control of migraine with fewer doses of medication [37]. The authors concluded that the findings suggest that lapsed users may not be receiving optimal treatment. In a review by Rains et al [38], the authors found that headache treatment adherence declines “with more frequent and complex dosing regimens, side effects, and costs, and is subject to a wide range of psychosocial influences.”

Adherence issues also exist for migraine prevention. Less than 25% of chronic migraine patients continue to take oral preventive therapies at 1 year [24]. The reasons for this nonadherence are not completely clear, but are likely multifactorial. Preventives may take several weeks to months to become effective, which may contribute to noncompliance. In addition, migraineurs appears to have inadequate follow-up for migraine. Studies from France suggest that only 18% of those aware of their migraine diagnosis received medical follow-up [39].

Medication Overuse

While the data is not entirely clear, it is likely that overuse of as-needed medication plays a role in migraine chronification [40]. The reasons for medication overuse in the migraine population include some of the issues already highlighted above, including inadequate patient education, poor insight into diagnosis, not seeking care, misdiagnosis, and treatment nonadherence. Patients should be educated on the proper use of as-needed medication. Limits to medication use should be set during the physician-patient encounter. Patients should be counselled to limit their as-needed medication to no more than 10 days per month to reduce the risk of medication overuse headache. Ideally, opiates and barbiturates should be avoided, and never used as first-line therapy in patients who lack contraindications to NSAIDs and triptans. If their use in unavoidable for other reasons, they should be used sparingly, as use on as few as 5 to 8 days per month can be problematic [41]. Furthermore it is important to note that if patients are using several different acute analgesics, the combined total use of all as-needed pain medications needs to be less than 10 days per month to reduce the potential for medication overuse headache.

 

 

Socioeconomic Factors

Low socioeconomic status has been associated with an increased prevalence for all headache forms and an increased migraine attack frequency [42], but there appear to be few studies looking at the impact of low socioeconomic status and treatment. Lipton et al found that health insurance status was an important predictor of persons with migraine consulting a health care professional [43]. Among consulters, women were far more likely to be diagnosed than men, suggesting that gender bias in diagnosis may be an important barrier for men. Higher household income appeared to be a predictor for receiving a correct diagnosis of migraine. These researchers also found economic barriers related to use of appropriate prescription medications [43]. Differences in diagnosis and treatment may indicate racial and ethnic disparities in access and quality of care for minority patients [44].

Stigma

At least 1 study has reported that migraine patients experience stigma. In Young et al’s study of  123 episodic migraine patients, 123 chronic migraine patients, and 62 epilepsy patients, adjusted stigma was similar for chronic migraine and epilepsy, which were greater than for episodic migraine [45]. Stigma correlated most strongly with inability to work. Migraine patients reported equally high stigma scores across age, income, and education. The stigma of migraine may pose a barrier to seeking consultation and treatment. Further, the perception that migraine is “just a headache” may lead to stigmatizing attitudes on the part of friends, family, and coworkers of patients with migraine.

Conclusions and Recommendations

Migraine is a prevalent and frequently disabling condition that is underrecognized and undertreated in the primary care setting. Both physician and patient factors pose barriers to the optimal diagnosis and treatment of migraine. Remedies to address these barriers include education of both patients and physicians first and foremost. Targeting physician education in medical school and during residency training, including in primary care subspecialties, could include additional didactic teaching, but also clinical encounters in headache subspecialty clinics to increase exposure. Patient advocacy groups and public campaigns to improve understanding of migraine in the community may be a means for improving patient education and reducing stigma. Patients should be encouraged to seek out consultations for headache to reduce long-term headache disability. Management of comorbidities is paramount, and screening tools for migraine-associated disability, anxiety, depression, and medication use may be helpful to implement in the primary care setting as they are easy to use and time saving.

Recent surveys of PCPs suggest that the resource that is most desired is ready access to subspecialists for advice and “curb-side” consultation [17]. While this solution is not always practical, it may be worthwhile exploring closer relationships between primary care and subspecialty headache clinics, or perhaps more access to e-consultation or telephone consultation for more rural areas. Recently, Minen et al examined education strategies for PCPs. While in-person education sessions with PCPs were poorly attended, multiple possibilities for further education were identified. It was suggested that PCPs having real-time access to resources during the patient encounter would improve their comfort in managing patients. This includes online databases, simple algorithms for treatment, and directions for when to refer to a neurologist [46]. In addition, it may be worthwhile to train not only PCPs but also nursing and allied health staff so that they can provide headache education to patients. This may help ease some of the time burden on PCPs as well as provide a collaborative environment in which headache can be managed [46].

 

Corresponding author: William S. Kingston, MD, Mayo Clinic, 13400 E. Shea Blvd., Scottsdale, AZ 85259.

Financial disclosures: None.

References

1. Stewart WF, Schechter A, Lipton RB. Migraine heterogeneity. Disability, pain intensity and attack frequency and duration. Neurology 1994;44(Suppl 4):S24–S39.

2. Burch RC, Loder S, Loder E, Smitherman TA. The prevalence of migraine and severe headache in the United States: updated statistics from government health surveillance studies. Headache 2015;55:21–34.

3. Lipton RB, Bigal ME, Diamond M, et al. Migraine prevalence, disease burden, and the need for preventive therapy. Neurology 2007;68:343–9.

4. Blumenfeld AM, Varon SF, Wilcox TK, et al. Disability, HRQoL and resource use among chronic and episodic migraineurs: results from the International Burden of Migraine Study (IBMS). Cephalalgia 2011;31:301–15.

5. Ahmed F. Headache disorders: differentiating and managing the common subtypes. Br J Pain 2012;6:124–32.

6. Natoli JL, Manack A, Dean B, et al. Global prevalence of chronic migraine: a systematic review. Cephalalgia 2010;30:599–609.

7. Finkel AG. American academic headache specialists in neurology: Practice characteristics and culture. Cephalalgia 2004;24:522–7.

8. Sheftell FD, Cady RK, Borchert LD, et al. Optimizing the diagnosis and treatment of migraine. J Am Acad Nurse Pract 2005;17:309–17.

9. Lipton RB, Scher AI, Steiner TJ, et al. Patterns of health care utilization for migraine in England and in the United States. Neurology 2003;60:441–8.

10. De Diego EV, Lanteri-Minet M. Recognition and management of migraine in primary care: Influence of functional impact measures by the Headache Impact Test (HIT). Cephalalgia 2005;25:184–90.

11. Miller S, Matharu MS. Migraine is underdiagnosed and undertreated. Practitioner 2014;258:19–24.

12. Al-Hashel JY, Ahmed SF, Alroughani R, et al. Migraine misdiagnosis as sinusitis, a delay that can last for many years. J Headache Pain 2013;14:97.

13. Viana M, Sances G, Terrazzino S, et al. When cervical pain is actually migraine: an observational study in 207 patients. Cephalalgia 2016. Epub ahead of print.

14. Diamond MD, Bigal ME, Silberstein S, et al. Patterns of diagnosis and acute and preventive treatment for migraine in the United States: Results from the American Migraine Prevalence and Prevention Study. Headache 2007;47:355–63.

15. Aguila MR, Rebbeck T, Mendoza KG, et al. Definitions and participant characteristics of frequent recurrent headache types in clinical trials: A systematic review. Cephalalgia 2017. Epub ahead of print.

16. Senbil N, Yavus Gurer YK, Uner C, Barut Y. Sinusitis in children and adolescents with chronic or recurrent headache: A case-control study. J Headache Pain 2008;9:33–6.

17. Minen MT, Loder E, Tishler L, Silbersweig D. Migraine diagnosis and treatment: A knowledge and needs assessment among primary care providers. Cephalalgia 2016;36:358–70.

18. MacGregor EA, Brandes J, Eikerman A. Migraine prevalence and treatment patterns: The global migraine and zolmitriptan evaluation survey. Headache 2003;43:19–26.

19. Khan S, Mascarenhas A, Moore JE, et al. Access to triptans for acute episodic migraine: a qualitative study. Headache 2015; 44(suppl 4):199–211.

20. Tepper SJ. Medication-overuse headache. Continuum 2012;18:807–22.

21. Dekker F, Dielemann J, Neven AK, et al. Preventive treatment for migraine in primary care, a population based study in the Netherlands. Cephalalgia 2013;33:1170–8.

22. Starling AJ, Dodick DW. Best practices for patients with chronic migraine: burden, diagnosis and management in primary care. Mayo Clin Proc 2015;90:408–14.

23. Bigal ME, Serrano D, Reed M, Lipton RB. Chronic migraine in the population: burden, diagnosis, and satisfaction with treatment. Neurology 2008;71:559–66.

24. Hepp Z, Dodick D, Varon S, et al. Adherence to oral migraine preventive-medications among patients with chronic migraine. Cephalalgia 2015;35:478–88.

25. Smith JH, Schwedt TJ. What constitutes an “adequate” trial in migraine prevention? Curr Pain Headache Rep 2015;19:52.

26. Dodick DW, Turkel CC, DeGryse RE, et al. OnabotulinumtoxinA for treatment of chronic migraine: pooled results from the double blind, randomized, placebo-controlled phases of the PREEMPT clinical program. Headache 2010;50:921–36.

27. Silberstein SD. Practice parameter: evidence-based guidelines for migraine headache (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology 2000;55:754–62.

28. Johnson KG, Ziemba AM, Garb JL. Improvement in headaches with continuous positive airway pressure for obstructive sleep apnea: a retrospective analysis. Headache 2013;53:333–43.

29. Altura KC, Patten SB, Fiest KM, et al. Suicidal ideation in persons with neurological conditions: prevalence, associations and validation of the PHQ-9 for suicidal ideation. Gen Hosp Psychiatry 2016;42:22–6.

30. Seo JG, Park SP. Validation of the Generalized Anxiety Disorder-7 (GAD-7) and GAD-2 in patients with migraine. J Headache Pain 2015;16:97.

31. Corlateanu A, Pylchenko S, Dircu V, Botnaru V. Predictors of daytime sleepiness in patients with obstructive sleep apnea. Pneumologia 2015;64:21–5.

32. Linde M, Dahlof C. Attitudes and burden of disease among self-considered migraineurs – a nation-wide population-based survey in Sweden. Cephalalgia 2004;24:455–65.

33. Osterhaus JT, Gutterman DL, Plachetka JR. Health care resources and lost labor costs of migraine headaches in the United States. Pharmacoeconomics 1992;36:69–76.

34. Tepper SJ, Dahlof CG, Dowson A, et al. Prevalence and diagnosis of migraine in patients consulting their physician with a complaint of headache: Data from the Landmark Study. Headache 2004;44:856–64.

35.  Ramsey RR, Ryan JL, Hershey AD, et al. Treatment adherence in patients with headache: a systematic review. Headache 2014;54:795–816.

36. Katic BJ, Rajagopalan S, Ho TW, et al. Triptan persistency among newly initiated users in a pharmacy claims database. Cephalalgia 2011;31:488–500.

37.  Cady RK, Maizels M, Reeves DL, Levinson DM, Evans JK. Predictors of adherence to triptans: factors of sustained vs lapsed users. Headache 2009;49:386–94.

38.  Rains JC, Lipchik GL, Penzien DB. Behavioral facilitation of medical treatment for headache--part I: Review of headache treatment compliance. Headache 2006;46:1387–94.

39. Lucas C, Chaffaut C, Artaz MA, Lanteri-Minet M. FRAMIG 2000: Medical and therapeutic management of migraine in France. Cephalalgia 2005;25:267–79.

40. Bigal ME, Serrano D, Buse D, et al. Acute migraine medications and evolution from episodic to chronic migraine: a longitudinal population-based study. Headache 2008;48:1157–68.

41. Diener HC, Limmroth V. Medication-overuse headache: a worldwide problem. Lancet Neurol 2004;3:475–83.

42.  Winter AC, Berger K, Buring JE, Kurth T. Associations of socioeconomic status with migraine and non-migraine headache. Cephalalgia 2012;32:159–70.

43. Lipton RB, Serrano D, Holland S, et al. Barriers to the diagnosis and treatment of migraine: effects of sex, income, and headache features. Headache 2013;53:81–92.

44.  Loder S, Sheikh HU, Loder E. The prevalence, burden, and treatment of severe, frequent, and migraine headaches in US minority populations: statistics from National Survey studies. Headache 2015;55:214–28.

45. Young WB, Park JE, Tian IX, Kempner J. The stigma of migraine. PLoS One 2013;8:e54074.

46. Minen MT, Shome A, Halpern A, et al. A migraine training program for primary care providers: an overview of a survey and pilot study findings, lessons learned, and considerations for further research. Headache 2016;56:725–40.


First EDition: ED Visits Increased in States That Expanded Medicaid, more

Article Type
Changed
Thu, 03/28/2019 - 14:50
Display Headline
First EDition: ED Visits Increased in States That Expanded Medicaid, more

BY JEFF BAUER

There was a substantial increase in the number of ED visits in states that expanded Medicaid coverage in 2014, after the Affordable Care Act was implemented, and a decrease in the number of ED visits by uninsured patients, according to a study published in Annals of Emergency Medicine.

Researchers analyzed quarterly data on ED visits from the Agency for Healthcare Research and Quality’s Fast Stats database, which is an early-release, aggregated version of the State Emergency Department Databases and State Inpatient Databases. They compared changes in ED visits per capita and changes in share of ED visits by payer (Medicaid, uninsured, and private insurance) in states that did and did not expand Medicaid coverage in 2014.

The analysis included 25 states: 14 Medicaid expansion states (Arizona, California, Hawaii, Iowa, Illinois, Kentucky, Maryland, Minnesota, North Dakota, New Jersey, Nevada, New York, Rhode Island, and Vermont) and 11 nonexpansion states (Florida, Georgia, Indiana, Kansas, Missouri, North Carolina, Nebraska, South Carolina, South Dakota, Tennessee, and Wisconsin). Researchers defined visits that occurred during all 4 quarters of 2012 and the first 3 quarters of 2013 as the pre-expansion period, and visits from the first through fourth quarters of 2014 as the postexpansion period. Visits that occurred during the fourth quarter of 2013 were not included in the analysis because Medicaid coverage began to increase in the final quarter of 2013 for most states.

Overall, researchers found that after 2014, ED use per 1,000 people per quarter increased by 2.5 visits more in expansion states than in nonexpansion states. Researchers estimated that 1.13 million ED visits in 2014 could be attributed to Medicaid expansion in these states. In expansion states, the share of ED visits by Medicaid patients increased by 8.8 percentage points and the share of visits by uninsured patients decreased by 5.3 percentage points, compared to nonexpansion states. The share of visits by privately insured patients did not change in expansion states but increased slightly in nonexpansion states.
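The comparison the researchers describe is, in effect, a difference-in-differences calculation: the pre-to-post change in per capita ED visits in expansion states minus the corresponding change in nonexpansion states. The sketch below shows that arithmetic with hypothetical rates chosen only so the output lands near the reported magnitude; the function name and all numbers are illustrative and are not taken from the study.

def did_estimate(expansion_pre: float, expansion_post: float,
                 nonexpansion_pre: float, nonexpansion_post: float) -> float:
    """Difference-in-differences: change in expansion states minus change in nonexpansion states."""
    return (expansion_post - expansion_pre) - (nonexpansion_post - nonexpansion_pre)

# Hypothetical quarterly ED visit rates per 1,000 residents (not study data).
estimate = did_estimate(expansion_pre=105.0, expansion_post=109.0,
                        nonexpansion_pre=102.0, nonexpansion_post=103.5)
print(f"Expansion-attributable change: {estimate:+.1f} visits per 1,000 per quarter")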

An American College of Emergency Physicians press release about this study included editorial comments by Ari Friedman, MD, of Beth Israel Deaconess Medical Center in Boston, who said, “More emergency department visits by Medicaid beneficiaries is neither clearly bad nor clearly good. Insurance increases access to care, including emergency department care. We need to move beyond the value judgments that have dominated so much study of emergency department utilization towards a more rational basis for how we structure unscheduled visits in the health system. If we want to meet patients’ care needs as patients themselves define them, the emergency department has a key role to play in a flexible system.”

Nikpay S, Freedman S, Levy H, Buchmueller T. Effect of the Affordable Care Act Medicaid expansion on emergency department visits: evidence from state-level emergency department databases. Ann Emerg Med. 2017 June 26. [Epub ahead of print]. doi:http://dx.doi.org/10.1016/j.annemergmed.2017.03.023.

Child Firearm Suicide at Highest Rate in More Than a Decade

MOLLIE KALAYCIO

FRONTLINE MEDICAL NEWS

Boys, older children, and minorities are disproportionately affected when it comes to firearm injuries and deaths in US children and adolescents, and child firearm suicide rates are at the highest they have been in more than a decade, new study results revealed.

Approximately 19 children are either medically treated for a gunshot wound or killed by one every day in the United States. “The majority of these children are boys 13-17 years old, African American in the case of firearm homicide, and white and American Indian in the case of firearm suicide. Pediatric firearm injuries and deaths are an important public health problem in the United States, contributing substantially each year to premature death, illness, and disability of children,” said Katherine A. Fowler, PhD, of the National Center for Injury Prevention and Control, Atlanta, and her associates. “Finding ways to prevent such injuries and ensure that all children have safe, stable, nurturing relationships and environments remains one of our most important priorities.”

National data on fatal firearm injuries in 2011-2014 for this study were derived from death certificate data from the Centers for Disease Control and Prevention’s (CDC’s) National Vital Statistics System, obtained via the CDC’s Web-based Injury Statistics Query and Reporting System. Data on nonfatal firearm injuries for 2011-2014 were obtained from the National Electronic Injury Surveillance System.

“From 2012 to 2014, the average annual case fatality rate was 74% for firearm-related self-harm, 14% for firearm-related assaults, and 6% for unintentional firearm injuries,” the investigators reported.

Boys accounted for 82% of all child firearm deaths from 2012 to 2014. In this time period, the annual rate of firearm death for boys was 4.5 times higher than the annual rate for girls (2.8 vs. 0.6 per 100,000). This difference was even more pronounced by age, with the rate for 13- to 17-year-old boys being six times higher than the rate for same-aged girls. Similarly, boys suffer the majority of nonfatal firearm injuries treated in US EDs, accounting for 84% of all nonfatal firearm injuries medically treated each year from 2012 to 2014. The average annual rate of nonfatal firearm injuries for boys was five times the rate for girls at 13 vs. 3 per 100,000.

The annual rate of firearm homicide was 10 times higher among 13- to 17-year-olds vs. 0- to 12-year-olds (3 vs. 0.3 per 100,000). Unintentional firearm death rates were approximately twice as high when comparing these two groups (0.2 vs. 0.1 per 100,000).
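For readers unfamiliar with how such figures are derived, the rates quoted throughout this report are crude rates per 100,000 population, and the “times higher” comparisons are ratios of those rates. The short sketch below shows the arithmetic with hypothetical counts and population denominators; it does not reproduce the study’s actual inputs.

def rate_per_100k(events: int, population: int) -> float:
    """Crude annual rate per 100,000 population."""
    return events / population * 100_000

# Hypothetical counts and denominators, for illustration only.
boys_rate = rate_per_100k(events=950, population=34_000_000)
girls_rate = rate_per_100k(events=195, population=32_500_000)
print(f"Boys: {boys_rate:.1f} per 100,000; girls: {girls_rate:.1f} per 100,000")
print(f"Rate ratio (boys vs girls): {boys_rate / girls_rate:.1f}")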

Dr Fowler and her associates wrote, “Our findings indicate that most children who died of unintentional firearm injuries were shot by another child in their own age range and most often in the context of playing with a gun or showing it to others. More than one-third of the deaths of older children occurred in incidents in which the shooter thought that the gun was unloaded or thought that the safety was engaged.”

“Child firearm suicide rates showed a significant upward trend between 2007 and 2014, increasing 60% from 1.0 to 1.6 (P < .05) to the highest rate seen over the period examined,” Dr Fowler and her associates said.

Firearm suicide rates were 11 times higher among 13- to 17-year-olds vs. 10- to 12-year-olds (2 vs. 0.2 per 100,000). Older children also accounted for 88% of all nonfatal firearm injuries treated in an ED. The overall average annual rate of nonfatal firearm injuries for older children was 19 times that of younger children (24 vs. 1 per 100,000).

The annual firearm homicide rate for African American children was nearly 10 times higher than the rate for white children (4 vs. 0.4 per 100,000). However, the annual rate of firearm suicide among white children was nearly four times higher than the rate for African American children (2. vs. 0.6 per 100,000).

Awareness of the availability of firearms during times of crisis is crucial because suicides are often impulsive in young people, Dr Fowler and her associates said, “with previous findings indicating that many who attempt suicide spend 10 minutes or less deliberating.‍ Safe storage practices (ie, unloading and locking all firearms and ammunition) can potentially be lifesaving in these instances,” as the results of previous studies in this age group attest.

Firearm injuries are the third leading cause of death overall among US children aged 1-17 years, exceeding deaths from pediatric congenital anomalies, heart disease, influenza/pneumonia, chronic lower respiratory disease, and cerebrovascular causes. Understanding the nature, scale, and impact of firearm violence against children is an important first step, Dr Fowler and her associates concluded.

 

 

Fowler KA, Dahlberg LL, Haileyesus T, Gutierrez C, Bacon S. Childhood firearm injuries in the United States. Pediatrics. 2017;140(1):e20163486.


Topical Cannabinoids in Dermatology

Article Type
Changed
Thu, 12/15/2022 - 14:53
Display Headline
Topical Cannabinoids in Dermatology

The use of topical cannabinoids has risen sharply in recent years. Commercial advertisers promote them as a safe means to treat a multitude of skin disorders, including atopic dermatitis (AD), psoriasis, and acne. Topical cannabinoid compounds have garnered interest in laboratory studies, but commercial formulations are available only as over-the-counter products from unregulated suppliers. In this article, we review the scientific evidence behind topical cannabinoids and evaluate their role in clinical dermatology.

Background

Cannabis is designated a Schedule I drug under the Controlled Substances Act of 1970, a listing reserved for substances deemed to have no currently accepted medical use and a high potential for abuse. However, as of 2017, 29 states and the District of Columbia have laws legalizing cannabis in some capacity. These regulations typically apply to medicinal use, though several states have also legalized recreational use.

Cannabinoids represent a broad class of chemical compounds derived from the cannabis plant. Originally, this class comprised only phytocannabinoids, the cannabinoids produced by the cannabis plant itself. Tetrahydrocannabinol (THC) is the most well-known phytocannabinoid and produces the psychoactive effects typically associated with cannabis use. Later investigation led to the discovery of endocannabinoids, which are produced naturally by human and animal bodies, and to the development of synthetic cannabinoids.1 Cannabidiol is a phytocannabinoid that has been investigated in neurologic and anti-inflammatory conditions.2-4

Cannabinoids act as agonists on 2 principal receptors— cannabinoid receptor type 1 (CB1) and cannabinoid receptor type 2 (CB2)—which are both G protein–coupled receptors (Figure).5 Both have distinct distributions throughout different organ systems, to which cannabinoids (eg, THC, cannabidiol, endocannabinoids) show differential binding.6,7 Importantly, the expression of CB1 and CB2 has been identified on sensory nerve fibers, inflammatory cells, and adnexal structures of human skin.8 Based on these associations, topical application of cannabinoids has become a modality of interest for dermatological disorders. These formulations aim to influence cutaneous morphology without producing psychoactive effects.

Figure. Signaling pathways associated with cannabinoid receptor activation. CB1 indicates cannabinoid receptor type 1; CB2, cannabinoid receptor type 2; AC, adenylyl cyclase; cAMP, cyclic adenosine monophosphate; PKA, protein kinase A; MAPK, mitogen-activated protein kinase.

Topical Cannabinoids in Inflammatory Disorders

Atopic dermatitis has emerged as an active area of investigation for cannabinoid receptors and topical agonists (Table 1). In an animal model, Kim et al9 examined the effects of CB1 agonism on skin inflammation. Mice treated with topical CB1 agonists showed greater recovery of epidermal barrier function in acutely abrogated skin relative to those treated with a vehicle preparation. In addition, agonism of CB1 led to significant (P<.001) decreases in skin fold thickness among models of acute and chronic skin inflammation.9

Nam et al10 also examined the role of topical CB1 agonists in mice with induced AD-like symptoms. Relative to treatment with vehicle, CB1 agonists significantly reduced the recruitment of mast cells (P<.01) and lowered the blood concentration of histamine (P<.05). Given the noted decrease in the release of inflammatory mediators, the authors speculated that topical agonism of CB1 may prove useful in several conditions related to mast cell activation, such as AD, contact dermatitis, and psoriasis.10

The anti-inflammatory properties of topical THC were evaluated by Gaffal et al.11 In a mouse model of allergic contact dermatitis, mice treated with topical THC showed decreases in myeloid immune cell infiltration, with these beneficial effects existing even in mice with deficient CB1 and CB2 receptors. These results support a potentially wide anti-inflammatory activity of topical THC.11

Topical Cannabinoids in Pain Management

The effects of smoked cannabis in treating pain have undergone thorough investigation over recent years. Benefits have been noted in treating neuropathic pain, particularly in human immunodeficiency virus–associated sensory neuropathy.12-15 Smoked cannabis also may provide value as a synergistic therapy with opioids, thereby allowing for lower opioid doses.16

In contrast, research into the relationship between topical application of cannabinoids and nociception remains in preliminary stages (Table 2). In a mouse model, Dogrul et al17 assessed the topical antinociceptive potential of a mixed CB1-CB2 agonist. Results showed significant (P<.01) and dose-dependent antinociceptive effects relative to treatment with a vehicle.17 In a related study, Yesilyurt et al18 evaluated whether a mixed CB1-CB2 agonist could enhance the antinociceptive effects of topical opioids. Among mice treated with the combination of a cannabinoid agonist and topical morphine, a significantly (P<.05) greater analgesic effect was demonstrated relative to topical morphine alone.18

Studies in humans have been far more limited. Phan et al19 conducted a small, nonrandomized, open-label trial of a topical cannabinoid cream in patients with facial postherpetic neuralgia. Of 8 patients treated, 5 noted a mean pain reduction of 87.8%. No comparison vehicle was used. Based on this narrow study design, it is difficult to extrapolate these positive results to a broader patient population.19

 

 

Commercial Products

Although preliminary models with topical cannabinoids have shown potential, large-scale clinical trials in humans have yet to be performed. Despite this lack of investigation, commercial formulations of topical cannabinoids are available to dermatology patients. These formulations are nonstandardized, and no safety data exist regarding their use. Topical cannabinoids on the market may contain varying amounts of active ingredient and may be combined with a range of other compounds.

In dermatology offices, it is not uncommon for patients to express an intention to use topical cannabinoid products following their planned treatment or procedure. Patients also have been known to use topical cannabinoid products prior to dermatologic procedures, sometimes in place of an approved topical anesthetic, without consulting the physician performing the procedure. With interventions that lead to active areas of wound healing, the application of such products may increase the risk for contamination and infection. Therefore, patients should be counseled that the use of commercial topical cannabinoids could jeopardize the success of their planned procedure, put them at risk for infection, and possibly lead to systemic absorption and/or changes in wound-healing capacities.

Conclusion

Based on the results from recent animal models, cannabinoids may have a role in future treatment algorithms for several inflammatory conditions. However, current efficacy and safety data are almost entirely limited to preliminary animal studies in rodents. In addition, the formulation of topical cannabinoid products is nonstandardized and poorly regulated. As such, the present evidence does not support the use of topical cannabinoids in dermatology practices. Dermatologists should ask patients about the use of any cannabinoid products as part of a treatment program, especially given the unsubstantiated claims often made by unscrupulous advertisers. This issue highlights the need for further research and regulation.

References
  1. Pacher P, Batkai S, Kunos G. The endocannabinoid system as an emerging target of pharmacotherapy. Pharmacol Rev. 2006;58:389-462.
  2. Giacoppo S, Galuppo M, Pollastro F, et al. A new formulation of cannabidiol in cream shows therapeutic effects in a mouse model of experimental autoimmune encephalomyelitis. Daru. 2015;23:48.
  3. Hammell DC, Zhang LP, Ma F, et al. Transdermal cannabidiol reduces inflammation and pain-related behaviours in a rat model of arthritis. Eur J Pain. 2016;20:936-948.
  4. Schicho R, Storr M. Topical and systemic cannabidiol improves trinitrobenzene sulfonic acid colitis in mice. Pharmacology. 2012;89:149-155.
  5. Howlett AC, Barth F, Bonner TI, et al. International Union of Pharmacology. XXVII. Classification of cannabinoid receptors. Pharmacol Rev. 2002;54:161-202.
  6. Pertwee RG. The diverse CB1 and CB2 receptor pharmacology of three plant cannabinoids: delta9-tetrahydrocannabinol, cannabidiol and delta9-tetrahydrocannabivarin. Br J Pharmacol. 2008;153:199-215.
  7. Svizenska I, Dubovy P, Sulcova A. Cannabinoid receptors 1 and 2 (CB1 and CB2), their distribution, ligands and functional involvement in nervous system structures—a short review. Pharmacol Biochem Behav. 2008;90:501-511.
  8. Stander S, Schmelz M, Metze D, et al. Distribution of cannabinoid receptor 1 (CB1) and 2 (CB2) on sensory nerve fibers and adnexal structures in human skin. J Dermatol Sci. 2005;38:177-188.
  9. Kim HJ, Kim B, Park BM, et al. Topical cannabinoid receptor 1 agonist attenuates the cutaneous inflammatory responses in oxazolone-induced atopic dermatitis model. Int J Dermatol. 2015;54:E401-E408.
  10. Nam G, Jeong SK, Park BM, et al. Selective cannabinoid receptor-1 agonists regulate mast cell activation in an oxazolone-induced atopic dermatitis model. Ann Dermatol. 2016;28:22-29.
  11. Gaffal E, Cron M, Glodde N, et al. Anti-inflammatory activity of topical THC in DNFB-mediated mouse allergic contact dermatitis independent of CB1 and CB2 receptors. Allergy. 2013;68:994-1000.
  12. Abrams DI, Jay CA, Shade SB, et al. Cannabis in painful HIV-associated sensory neuropathy: a randomized placebo-controlled trial. Neurology. 2007;68:515-521.
  13. Ellis RJ, Toperoff W, Vaida F, et al. Smoked medicinal cannabis for neuropathic pain in HIV: a randomized, crossover clinical trial. Neuropsychopharmacology. 2009;34:672-680.
  14. Wilsey B, Marcotte T, Deutsch R, et al. Low-dose vaporized cannabis significantly improves neuropathic pain. J Pain. 2013;14:136-148.
  15. Wilsey B, Marcotte T, Tsodikov A, et al. A randomized, placebo-controlled, crossover trial of cannabis cigarettes in neuropathic pain. J Pain. 2008;9:506-521.
  16. Abrams DI, Couey P, Shade SB, et al. Cannabinoid-opioid interaction in chronic pain. Clin Pharmacol Ther. 2011;90:844-851.
  17. Dogrul A, Gul H, Akar A, et al. Topical cannabinoid antinociception: synergy with spinal sites. Pain. 2003;105:11-16.
  18. Yesilyurt O, Dogrul A, Gul H, et al. Topical cannabinoid enhances topical morphine antinociception. Pain. 2003;105:303-308.
  19. Phan NQ, Siepmann D, Gralow I, et al. Adjuvant topical therapy with a cannabinoid receptor agonist in facial postherpetic neuralgia. J Dtsch Dermatol Ges. 2010;8:88-91.
Author and Disclosure Information

Drs. Hashim and Goldenberg are from the Department of Dermatology, Icahn School of Medicine at Mount Sinai, New York, New York. Dr. Cohen is from AboutSkin Dermatology and DermSurgery, both in Englewood, Colorado; the Department of Dermatology, University of Colorado Denver, Aurora; and the Department of Dermatology, University of California at Irvine. Dr. Pompei is from Baruch College, City University of New York, New York.

The authors report no conflict of interest.

Correspondence: Gary Goldenberg, MD, Department of Dermatology, Icahn School of Medicine at Mount Sinai Medical Center, 5 E 98th St, New York, NY 10029 ([email protected]).

Issue
Cutis - 100(1)
Publications
Topics
Page Number
50-52
Sections
Author and Disclosure Information

Drs. Hashim and Goldenberg are from the Department of Dermatology, Icahn School of Medicine at Mount Sinai, New York, New York. Dr. Cohen is from AboutSkin Dermatology and DermSurgery, both in Englewood, Colorado; the Department of Dermatology, University of Colorado Denver, Aurora; and the Department of Dermatology, University of California at Irvine. Dr. Pompei is from Baruch College, City University of New York, New York.

The authors report no conflict of interest.

Correspondence: Gary Goldenberg, MD, Department of Dermatology, Icahn School of Medicine at Mount Sinai Medical Center, 5 E 98th St, New York, NY 10029 ([email protected]).

Author and Disclosure Information

Drs. Hashim and Goldenberg are from the Department of Dermatology, Icahn School of Medicine at Mount Sinai, New York, New York. Dr. Cohen is from AboutSkin Dermatology and DermSurgery, both in Englewood, Colorado; the Department of Dermatology, University of Colorado Denver, Aurora; and the Department of Dermatology, University of California at Irvine. Dr. Pompei is from Baruch College, City University of New York, New York.

The authors report no conflict of interest.

Correspondence: Gary Goldenberg, MD, Department of Dermatology, Icahn School of Medicine at Mount Sinai Medical Center, 5 E 98th St, New York, NY 10029 ([email protected]).

Article PDF
Article PDF
Related Articles

The prevalence of topical cannabinoids has risen sharply in recent years. Commercial advertisers promote their usage as a safe means to treat a multitude of skin disorders, including atopic dermatitis (AD), psoriasis, and acne. Topical compounds have garnered interest in laboratory studies, but the purchase of commercial formulations is limited to over-the-counter products from unregulated suppliers. In this article, we review the scientific evidence behind topical cannabinoids and evaluate their role in clinical dermatology.

Background

Cannabis is designated as a Schedule I drug, according to the Controlled Substances Act of 1970. This listing is given to substances with no therapeutic value and a high potential for abuse. However, as of 2017, 29 states and the District of Columbia have laws legalizing cannabis in some capacity. These regulations typically apply to medicinal use, though several states have now legalized recreational use.

Cannabinoids represent a broad class of chemical compounds derived from the cannabis plant. Originally, this class only comprised phytocannabinoids, cannabinoids produced by the cannabis plant. Tetrahydrocannabinol (THC) is the most well-known phytocannabinoid and leads to the psychoactive effects typically associated with cannabis use. Later investigation led to the discovery of endocannabinoids, cannabinoids that are naturally produced by human and animal bodies, as well as synthetic cannabinoids.1 Cannabidiol is a phytocannabinoid that has been investigated in neurologic and anti-inflammatory conditions.2-4

Cannabinoids act as agonists on 2 principal receptors— cannabinoid receptor type 1 (CB1) and cannabinoid receptor type 2 (CB2)—which are both G protein–coupled receptors (Figure).5 Both have distinct distributions throughout different organ systems, to which cannabinoids (eg, THC, cannabidiol, endocannabinoids) show differential binding.6,7 Importantly, the expression of CB1 and CB2 has been identified on sensory nerve fibers, inflammatory cells, and adnexal structures of human skin.8 Based on these associations, topical application of cannabinoids has become a modality of interest for dermatological disorders. These formulations aim to influence cutaneous morphology without producing psychoactive effects.

Signaling pathways associated with cannabinoid receptor activation. CB1 indicates cannabinoid receptor type 1; CB2, cannabinoid receptor type 2; AC, adenylyl cyclase; cAMP, cyclic adenosine monophosphate; PKA, protein kinase A; MAPK, mitogen-activated protein kinase.

Topical Cannabinoids in Inflammatory Disorders

Atopic dermatitis has emerged as an active area of investigation for cannabinoid receptors and topical agonists (Table 1). In an animal model, Kim et al9 examined the effects of CB1 agonism on skin inflammation. Mice treated with topical CB1 agonists showed greater recovery of epidermal barrier function in acutely abrogated skin relative to those treated with a vehicle preparation. In addition, agonism of CB1 led to significant (P<.001) decreases in skin fold thickness among models of acute and chronic skin inflammation.9

Nam et al10 also examined the role of topical CB1 agonists in mice with induced AD-like symptoms. Relative to treatment with vehicle, CB1 agonists significantly reduced the recruitment of mast cells (P<.01) and lowered the blood concentration of histamine (P<.05). Given the noted decrease in the release of inflammatory mediators, the authors speculated that topical agonism of CB1 may prove useful in several conditions related to mast cell activation, such as AD, contact dermatitis, and psoriasis.10

The anti-inflammatory properties of topical THC were evaluated by Gaffal et al.11 In a mouse model of allergic contact dermatitis, mice treated with topical THC showed decreases in myeloid immune cell infiltration, with these beneficial effects existing even in mice with deficient CB1 and CB2 receptors. These results support a potentially wide anti-inflammatory activity of topical THC.11

Topical Cannabinoids in Pain Management

The effects of smoked cannabis in treating pain have undergone thorough investigation over recent years. Benefits have been noted in treating neuropathic pain, particularly in human immunodeficiency virus–associated sensory neuropathy.12-15 Smoked cannabis also may provide value as a synergistic therapy with opioids, thereby allowing for lower opioid doses.16

In contrast, research into the relationship between topical application of cannabinoids and nociception remains in preliminary stages (Table 2). In a mouse model, Dogrul et al17 assessed the topical antinociceptive potential of a mixed CB1-CB2 agonist. Results showed significant (P<.01) and dose-dependent antinociceptive effects relative to treatment with a vehicle.17 In a related study, Yesilyurt et al18 evaluated whether a mixed CB1-CB2 agonist could enhance the antinociceptive effects of topical opioids. Among mice treated with the combination of a cannabinoid agonist and topical morphine, a significantly (P<.05) greater analgesic effect was demonstrated relative to topical morphine alone.18

Studies in humans have been far more limited. Phan et al19 conducted a small, nonrandomized, open-label trial of a topical cannabinoid cream in patients with facial postherpetic neuralgia. Of 8 patients treated, 5 noted a mean pain reduction of 87.8%. No comparison vehicle was used. Based on this narrow study design, it is difficult to extrapolate these positive results to a broader patient population.19

 

 

Commercial Products

Although preliminary models with topical cannabinoids have shown potential, large-scale clinical trials in humans have yet to be performed. Despite this lack of investigation, commercial formulations of topical cannabinoids are available to dermatology patients. These formulations are nonstandardized, and no safety data exist regarding their use. Topical cannabinoids on the market may contain various amounts of active ingredient and may be combined with a range of other compounds.

In dermatology offices, it is not uncommon for patients to express an intention to use topical cannabinoid products following their planned treatment or procedure. Patients also have been known to use topical cannabinoid products prior to dermatologic procedures, sometimes in place of an approved topical anesthetic, without consulting the physician performing the procedure. With interventions that lead to active areas of wound healing, the application of such products may increase the risk for contamination and infection. Therefore, patients should be counseled that the use of commercial topical cannabinoids could jeopardize the success of their planned procedure, put them at risk for infection, and possibly lead to systemic absorption and/or changes in wound-healing capacities.

Conclusion

Based on the results from recent animal models, cannabinoids may have a role in future treatment algorithms for several inflammatory conditions. However, current efficacy and safety data are almost entirely limited to preliminary animal studies in rodents. In addition, the formulation of topical cannabinoid products is nonstandardized and poorly regulated. As such, the present evidence does not support the use of topical cannabinoids in dermatology practices. Dermatologists should ask patients about the use of any cannabinoid products as part of a treatment program, especially given the unsubstantiated claims often made by unscrupulous advertisers. This issue highlights the need for further research and regulation.

References
  1. Pacher P, Batkai S, Kunos G. The endocannabinoid system as an emerging target of pharmacotherapy. Pharmacol Rev. 2006;58:389-462.
  2. Giacoppo S, Galuppo M, Pollastro F, et al. A new formulation of cannabidiol in cream shows therapeutic effects in a mouse model of experimental autoimmune encephalomyelitis. Daru. 2015;23:48.
  3. Hammell DC, Zhang LP, Ma F, et al. Transdermal cannabidiol reduces inflammation and pain-related behaviours in a rat model of arthritis. Eur J Pain. 2016;20:936-948.
  4. Schicho R, Storr M. Topical and systemic cannabidiol improves trinitrobenzene sulfonic acid colitis in mice. Pharmacology. 2012;89:149-155.
  5. Howlett AC, Barth F, Bonner TI, et al. International Union of Pharmacology. XXVII. Classification of cannabinoid receptors. Pharmacol Rev. 2002;54:161-202.
  6. Pertwee RG. The diverse CB1 and CB2 receptor pharmacology of three plant cannabinoids: delta9-tetrahydrocannabinol, cannabidiol and delta9-tetrahydrocannabivarin. Br J Pharmacol. 2008;153:199-215.
  7. Svizenska I, Dubovy P, Sulcova A. Cannabinoid receptors 1 and 2 (CB1 and CB2), their distribution, ligands and functional involvement in nervous system structures—a short review. Pharmacol Biochem Behav. 2008;90:501-511.
  8. Stander S, Schmelz M, Metze D, et al. Distribution of cannabinoid receptor 1 (CB1) and 2 (CB2) on sensory nerve fibers and adnexal structures in human skin. J Dermatol Sci. 2005;38:177-188.
  9. Kim HJ, Kim B, Park BM, et al. Topical cannabinoid receptor 1 agonist attenuates the cutaneous inflammatory responses in oxazolone-induced atopic dermatitis model. Int J Dermatol. 2015;54:E401-E408.
  10. Nam G, Jeong SK, Park BM, et al. Selective cannabinoid receptor-1 agonists regulate mast cell activation in an oxazolone-induced atopic dermatitis model. Ann Dermatol. 2016;28:22-29.
  11. Gaffal E, Cron M, Glodde N, et al. Anti-inflammatory activity of topical THC in DNFB-mediated mouse allergic contact dermatitis independent of CB1 and CB2 receptors. Allergy. 2013;68:994-1000.
  12. Abrams DI, Jay CA, Shade SB, et al. Cannabis in painful HIV-associated sensory neuropathy: a randomized placebo-controlled trial. Neurology. 2007;68:515-521.
  13. Ellis RJ, Toperoff W, Vaida F, et al. Smoked medicinal cannabis for neuropathic pain in HIV: a randomized, crossover clinical trial. Neuropsychopharmacology. 2009;34:672-680.
  14. Wilsey B, Marcotte T, Deutsch R, et al. Low-dose vaporized cannabis significantly improves neuropathic pain. J Pain. 2013;14:136-148.
  15. Wilsey B, Marcotte T, Tsodikov A, et al. A randomized, placebo-controlled, crossover trial of cannabis cigarettes in neuropathic pain. J Pain. 2008;9:506-521.
  16. Abrams DI, Couey P, Shade SB, et al. Cannabinoid-opioid interaction in chronic pain. Clin Pharmacol Ther. 2011;90:844-851.
  17. Dogrul A, Gul H, Akar A, et al. Topical cannabinoid antinociception: synergy with spinal sites. Pain. 2003;105:11-16.
  18. Yesilyurt O, Dogrul A, Gul H, et al. Topical cannabinoid enhances topical morphine antinociception. Pain. 2003;105:303-308.
  19. Phan NQ, Siepmann D, Gralow I, et al. Adjuvant topical therapy with a cannabinoid receptor agonist in facial postherpetic neuralgia. J Dtsch Dermatol Ges. 2010;8:88-91.

Practice Points

  • Topical cannabinoids are advertised by companies as treatment options for numerous dermatologic conditions.
  • Despite promising data in rodent models, there have been no rigorous studies to date confirming efficacy or safety in humans.
  • Dermatologists should therefore ask patients about the use of any topical cannabinoid products, especially around the time of planned procedures, as they may affect treatment outcomes.

Systematic Review of Novel Synovial Fluid Markers and Polymerase Chain Reaction in the Diagnosis of Prosthetic Joint Infection

Article Type
Changed
Thu, 09/19/2019 - 13:21
Display Headline
Systematic Review of Novel Synovial Fluid Markers and Polymerase Chain Reaction in the Diagnosis of Prosthetic Joint Infection

Take-Home Points

  • Novel synovial markers and PCR have the potential to improve the detection of PJIs.
  • Difficult-to-detect infections of prosthetic joints pose a diagnostic problem to surgeons and can lead to suboptimal outcomes.
  • AD is a highly sensitive and specific synovial fluid marker for detecting PJIs.
  • AD has shown promising results in detecting low virulence organisms.
  • Studies are needed to determine how best to incorporate novel synovial markers and PCR into current diagnostic criteria to improve diagnostic accuracy.

Approximately 7 million Americans are living with a hip or knee replacement.1 According to projections, primary hip arthroplasties will increase by 174% and knee arthroplasties by 673% by 2030. Revision arthroplasties are projected to increase by 137% for hips and 601% for knees during the same time period.2 Infection and aseptic loosening are the most common causes of implant failure.3 The literature shows that infection is the most common cause of failure within 2 years after surgery and that aseptic loosening is the most common cause for late revision.3

Recent studies suggest that prosthetic joint infection (PJI) may be underreported because of difficulty making a diagnosis and that cases of aseptic loosening may in fact be attributable to infections with low-virulence organisms.2,3 These findings have led to new efforts to develop uniform criteria for diagnosing PJIs. In 2011, the Musculoskeletal Infection Society (MSIS) offered a new definition for PJI diagnosis, based on clinical and laboratory criteria, to increase the accuracy of PJI diagnosis.4 The MSIS committee acknowledged that PJI may be present even if these criteria are not met, particularly in the case of low-virulence organisms, as patients may not present with clinical signs of infection and may have normal inflammatory markers and joint aspirates. Reports of PJI cases misdiagnosed as aseptic loosening suggest that current screening and diagnostic tools are not sensitive enough to detect all infections and that PJI is likely underdiagnosed.

According to MSIS criteria, the diagnosis of PJI can be made when there is a sinus tract communicating with the prosthesis, when a pathogen is isolated by culture from 2 or more separate tissue or fluid samples obtained from the affected prosthetic joint, or when 4 of 6 criteria are met. The 6 criteria are (1) elevated serum erythrocyte sedimentation rate (ESR) (>30 mm/hour) and elevated C-reactive protein (CRP) level (>10 mg/L); (2) elevated synovial white blood cell (WBC) count (1100-4000 cells/μL); (3) elevated synovial polymorphonuclear leukocytes (>64%); (4) purulence in affected joint; (5) isolation of a microorganism in a culture of periprosthetic tissue or fluid; and (6) more than 5 neutrophils per high-power field in 5 high-power fields observed.
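To illustrate how these criteria combine, the following sketch encodes the decision rule just described in Python. It is offered only as an illustration: the function and parameter names are invented for this example, the synovial WBC threshold uses the lower bound of the range quoted above, and the sketch is not a substitute for the full MSIS guidance or for clinical judgment.

```python
def meets_msis_criteria(
    sinus_tract: bool,
    positive_cultures: int,         # separate tissue/fluid samples growing the same pathogen
    esr_mm_hr: float,
    crp_mg_l: float,
    synovial_wbc_per_ul: float,
    synovial_pmn_percent: float,
    purulence: bool,
    single_positive_culture: bool,
    neutrophils_per_hpf_gt5: bool,  # >5 neutrophils per high-power field in 5 fields
) -> bool:
    """Simplified encoding of the MSIS definition summarized in the text."""
    # Major criteria: either one alone establishes the diagnosis.
    if sinus_tract or positive_cultures >= 2:
        return True

    # Minor criteria: 4 of the following 6 items are required.
    minor = [
        esr_mm_hr > 30 and crp_mg_l > 10,
        synovial_wbc_per_ul > 1100,   # lower bound of the 1100-4000 cells/uL range quoted above
        synovial_pmn_percent > 64,
        purulence,
        single_positive_culture,
        neutrophils_per_hpf_gt5,
    ]
    return sum(minor) >= 4
```

For example, a patient with no sinus tract, a single positive culture, an ESR of 45 mm/hour, a CRP level of 15 mg/L, a synovial WBC count of 3500 cells/μL, and 80% polymorphonuclear cells would satisfy 4 minor criteria and be classified as infected under this simplified rule.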

In this review article, we discuss recently developed novel synovial biomarkers and polymerase chain reaction (PCR) technologies that may help increase the sensitivity and specificity of diagnostic guidelines for PJI.

Methods

Using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), we performed a systematic review of specific synovial fluid markers and PCR used in PJI diagnosis. In May 2016, we searched the PubMed database for these criteria: ((((((PCR[Text Word]) OR IL-6[Text Word]) OR leukocyte esterase[Text Word]) OR alpha defensin[Text Word]) AND ((“infection/diagnosis”[MeSH Terms] OR “infection/surgery”[MeSH Terms])))) AND (prosthetic joint infection[MeSH Terms] OR periprosthetic joint infection[MeSH Terms]).
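Readers who wish to rerun a comparable search programmatically can submit the same Boolean string to PubMed through NCBI's E-utilities; the sketch below uses Biopython's Entrez module as one possible route. The email address and result limit are placeholders rather than part of the original methods, the typographic quotation marks in the printed string are replaced with straight quotes, and the counts returned today will differ from those retrieved in May 2016.

```python
from Bio import Entrez  # Biopython

# NCBI asks that users identify themselves; this address is a placeholder.
Entrez.email = "your.name@example.org"

# The Boolean string quoted in the Methods, with straight quotation marks.
query = (
    '((((((PCR[Text Word]) OR IL-6[Text Word]) OR leukocyte esterase[Text Word]) '
    'OR alpha defensin[Text Word]) AND (("infection/diagnosis"[MeSH Terms] '
    'OR "infection/surgery"[MeSH Terms])))) AND (prosthetic joint infection[MeSH Terms] '
    'OR periprosthetic joint infection[MeSH Terms])'
)

# Submit the search and read back the matching PubMed IDs.
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records matched")
print(record["IdList"][:10])  # first 10 PubMed IDs
```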

We included patients who had undergone total hip, knee, or shoulder arthroplasty (THA, TKA, TSA). Index tests were PCR and the synovial fluid markers α-defensin (AD), interleukin 6 (IL-6), and leukocyte esterase (LE). Reference tests included joint fluid/serum analysis or tissue analysis (ESR/CRP level, cell count, culture, frozen section), which defined the MSIS criteria for PJI. Primary outcomes of interest were sensitivity and specificity, and secondary outcomes of interest included positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (+LR), and negative likelihood ratio (–LR). Randomized controlled trials and controlled cohort studies in humans published within the past 10 years were included.
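All of these accuracy measures are simple functions of a 2×2 contingency table. The following sketch computes them from true-positive, false-positive, false-negative, and true-negative counts; the counts in the usage line are hypothetical and are not drawn from any of the included studies.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy measures used in this review, computed from a 2x2 table.

    tp/fn: infected joints correctly/incorrectly classified by the index test;
    fp/tn: uninfected joints incorrectly/correctly classified.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "+LR": sensitivity / (1 - specificity),   # undefined if specificity equals 1
        "-LR": (1 - sensitivity) / specificity,
    }

# Hypothetical counts for illustration only (37 infected, 112 uninfected joints).
print(diagnostic_metrics(tp=36, fp=5, fn=1, tn=107))
```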

Results

Our full-text review yielded 15 papers that met our study inclusion criteria (Figure 1).

α-Defensin

One of the novel synovial biomarkers that has shown significant promise in diagnosing PJIs, even with difficult-to-detect organisms, is AD.

Frangiamore and colleagues5 conducted a prospective study of patients with painful TSAs that required revision (n = 33). Patients were grouped based on objective clinical, laboratory, and histologic criteria of infection, which included preoperative clinical signs (swelling, sinus tract, redness, drainage), elevated serum ESR or CRP, intraoperative gross findings (purulence, necrosis), and positive intraoperative frozen section. Synovial fluid aspiration was obtained preoperatively or intraoperatively. Of the 33 patients, 11 met the authors' criteria for suspected PJI prior to final intraoperative culture results; 22 did not. Of the samples taken intraoperatively, Propionibacterium acnes was the most commonly isolated organism (9 cases), followed by coagulase-negative Staphylococcus (4 cases). AD demonstrated a sensitivity of 63%, specificity of 95%, +LR of 12.1, and –LR of 0.38. AD showed a strong association with growth of P acnes in the infected group (median signal-to-cutoff ratio, 4.45) compared with the noninfected group (median signal-to-cutoff ratio, 1.33), as well as strong associations with frozen section histology. Frangiamore and colleagues5 concluded that the use of AD in diagnosing PJIs with difficult-to-detect organisms was promising.

AD has shown even more impressive results as a biomarker for PJI in the hip and knee, where infection with low-virulence organisms is less common. In 2014, Deirmengian and colleagues6 conducted a prospective clinical study of 149 patients who underwent revision THA or TKA for aseptic loosening (n = 112) or PJI (n = 37) as defined by MSIS criteria. Aseptic loosening was diagnosed when there was no identifiable reason for pain and MSIS criteria were not met. Synovial fluid aspirates were collected before or during surgery. AD correctly classified 143 of the 149 patients, with sensitivity of 97.3% (95% confidence interval [CI], 85.8%-99.6%) and specificity of 95.5% (95% CI, 89.9%-98.5%) for diagnosing PJI. Similarly, Bingham and colleagues7 conducted a retrospective clinical study of 61 assays done on 57 patients who underwent revision arthroplasty, with PJI defined by MSIS criteria. Synovial fluid aspirates were collected before or during surgery. AD correctly identified all 19 PJIs with sensitivity of 100% (95% CI, 79%-100%) and specificity of 95% (95% CI, 83%-99%). The AD assay predicted infection more accurately than synovial cell count or serum ESR/CRP level did.

These results are supported by another prospective study by Deirmengian and colleagues8 differentiating aseptic failures and PJIs in THA or TKA. The sensitivity and specificity of AD in diagnosing PJI were 100% (95% CI, 85.05%-100%).

Synovial fluid was collected from 46 patients before and during surgery: 23 with PJI and 23 with aseptic failure as defined by MSIS criteria. All patients were tested for both AD and LE. Of the 23 PJI cases, 18 were associated with a positive culture, with the most common organism being Staphylococcus epidermidis (n = 6). AD correctly diagnosed 100% of PJIs, whereas LE correctly diagnosed only 78%; the difference was statistically significant (P<.001).

In a prospective study of 102 patients who underwent revision THA or TKA secondary to aseptic loosening or PJI, Frangiamore and colleagues9 also demonstrated the value of AD as a diagnostic marker for PJI in primary and revision hip and knee arthroplasty.
Based on MSIS criteria, 54 cases were classified as non-infected first-stage revision, 24 as infected first-stage revision, 35 as non-infected second-stage revision, and 3 as infected second-stage revision. For patients with first-stage revision THA or TKA, AD had sensitivity of 100% (95% CI, 86%-100%), specificity of 98% (95% CI, 90%-100%), PPV of 96% (95% CI, 80%-99%), and NPV of 100% (95% CI, 93%-100%). +LR was 54 (95% CI, 8-376), and –LR was 0. When combining all patients, AD outperformed serum ESR and CRP and synovial cell count as a biomarker for predicting PJI.

Table 1 and Figure 2 provide a concise review of the findings of each study.

Interleukin 6

Another synovial fluid biomarker that has shown promise in PJI diagnosis is IL-6. In 2015, Frangiamore and colleagues10 conducted a prospective clinical study of 32 patients who underwent revision TSA. Synovial fluid aspiration was obtained before or during surgery. MSIS criteria were used to establish the diagnosis of PJI. IL-6 had sensitivity of 87% and specificity of 90%, with +LR of 8.45 and –LR of 0.15 in predicting PJI. Synovial fluid IL-6 had strong associations with frozen section histology and growth of P acnes. Frangiamore and colleagues10 recommended an ideal IL-6 cutoff of 359.1 pg/mL and reported that, though not as accurate as AD, synovial fluid IL-6 levels can help predict positive cultures in patients who undergo revision TSA.

Lenski and Scherer11 conducted another retrospective clinical study of the diagnostic value of IL-6 in PJI.

Revision total joint arthroplasty (TJA) was performed for aseptic loosening (38 patients) or PJI (31 patients) based on criteria modeled after MSIS criteria. All joints were aspirated for synovial fluid IL-6, synovial fluid lactate dehydrogenase, synovial fluid glucose, synovial fluid lactate, synovial fluid WBCs, and serum CRP. IL-6 had sensitivity of 90.9%, specificity of 94.7%, +LR of 17.27, and –LR of 0.10. An optimal IL-6 cutoff value of 30,750 pg/mL was determined.

Randau and colleagues12 conducted a prospective clinical study of 120 patients who presented with painful THA or TKA and underwent revision for PJI, aseptic failure, or aseptic revision without signs of infection or loosening. Synovial fluid aspirate was collected before or during surgery.
PJI was diagnosed with the modified MSIS criteria. IL-6 sensitivity and specificity depended on the cutoff value. A cutoff of >2100 pg/mL yielded sensitivity of 62.5% (95% CI, 43.69%-78.9%) and specificity of 85.71% (95% CI, 71.46%-94.57%), and a cutoff of >9000 pg/mL yielded sensitivity of 46.9% (95% CI, 29.09%-65.26%) and specificity of 97.62% (95% CI, 87.43%-99.94%). The authors concluded that synovial IL-6 is a more accurate marker than synovial WBC count.

Table 2 and Figure 3 provide a concise review of the findings of each study.

Leukocyte Esterase

LE strips are an inexpensive screening tool for PJI, according to some studies. In a prospective clinical study of 364 endoprosthetic joint (hip, knee, shoulder) interventions, Guenther and colleagues13 collected synovial fluid before surgery. Samples were tested with graded LE strips using PJI criteria set by the authors. Results were correlated with preoperative synovial fluid aspirations, serum CRP level, serum WBC count, and intraoperative histopathologic and microbiological findings. Whereas 293 (93.31%) of the 314 aseptic cases had negative test strip readings, 100% of the 50 infected cases were positive. LE had sensitivity of 100%, specificity of 96.5%, PPV of 82%, and NPV of 100%.

Wetters and colleagues14 performed a prospective clinical study of 223 patients who underwent TKA or THA for suspected PJI based on criteria defined by the study authors. Synovial fluid samples were collected either preoperatively or intraoperatively.

Using a synovial fluid WBC count of >3000 cells/μL as the reference standard for infection, the sensitivity, specificity, PPV, and NPV of the LE strip were 92.9%, 88.8%, 75%, and 97.2%, respectively. Using positive cultures or the presence of a draining sinus tract as the reference standard, the sensitivity, specificity, PPV, and NPV were 93.3%, 77%, 37.8%, and 98.7%, respectively. Of note, the most common organism found at the time of revision for infection was coagulase-negative Staphylococcus (6 of 39).

Other authors have reported that LE is an unreliable marker in PJI diagnosis. In one prospective clinical study of 85 patients who underwent primary or revision TSA, synovial fluid was collected during surgery.15 According to MSIS criteria, only 5 positive LE results predicted PJI among 21 primary and revision patients with positive cultures. Of the 7 revision patients who met the MSIS criteria for PJI, only 2 had a positive LE test. LE had sensitivity of 28.6%, specificity of 63.6%, PPV of 28.6%, and NPV of 87.5%. Six of the 7 revision patients grew P acnes. These results showed that LE was unreliable in detecting shoulder PJI.15

In another prospective clinical study, Tischler and colleagues16 enrolled 189 patients who underwent revision TKA or THA for aseptic failure or PJI as defined by the MSIS criteria. Synovial fluid was collected intraoperatively.
Fifteen of the 52 patients with an MSIS-defined PJI had positive cultures, with the most common organism being coagulase-negative Staphylococcus (7). Two thresholds were used to define a positive LE test. Using the first, lower threshold for positivity, the sensitivity, specificity, PPV, and NPV were 79.2% (95% CI, 65.9%-89.2%), 80.8% (95% CI, 73.3%-87.1%), 61.8% (95% CI, 49.2%-73.3%), and 90.1% (95% CI, 84.3%-95.4%), respectively. Using the higher threshold, the sensitivity, specificity, PPV, and NPV were 66% (95% CI, 51.7%-78.5%), 97.1% (95% CI, 92.6%-99.2%), 89.7% (95% CI, 75.8%-97.1%), and 88% (95% CI, 81.7%-92.7%), respectively. Once again, these results suggest that LE is not a reliable marker for diagnosing PJI.

Table 3 and Figure 4 provide a concise review of the findings of each study.

 

 

Polymerase Chain Reaction

Studies have found that PCR analysis of synovial fluid is effective in detecting bacteria on the surface of implants removed during revision arthroplasties. Comparison of the 16S rRNA gene sequences of bacterial genomes showed a diverse range of bacterial species within biofilms on the surfaces of implants from both clinical and subclinical infections.17 These findings, along with those of other studies, suggest that PCR analysis of synovial fluid is useful in diagnosing PJI and identifying organisms and their sensitivities to antibiotics.

Gallo and colleagues18 performed a prospective clinical study of 115 patients who underwent revision TKA or THA. Synovial fluid was collected intraoperatively. PCR assays targeting 16S rDNA were carried out in 101 patients. PJIs (n = 42) were classified based on the study authors' own criteria. The sensitivity, specificity, PPV, NPV, +LR, and –LR for PCR were 71.4% (95% CI, 61.5%-75.5%), 97% (95% CI, 91.7%-99.1%), 92.6% (95% CI, 79.8%-97.9%), 86.5% (95% CI, 81.8%-88.4%), 23.6 (95% CI, 5.9-93.8), and 0.29 (95% CI, 0.17-0.49), respectively. Of note, the most common organism detected in the 42 PJIs was coagulase-negative Staphylococcus.

Marin and colleagues19 conducted a prospective study of 122 patients who underwent arthroplasty for suspected infection or aseptic loosening as defined by the authors' clinicohistopathologic criteria. Synovial fluid and biopsy specimens were collected during surgery, and 40 patients met the infection criteria. The authors concluded that 16S PCR is more specific and has a better PPV than culture, as a single positive 16S PCR result yielded a specificity of 96.3% and a PPV of 91.7% for PJI. However, they noted that culture was more sensitive in diagnosing PJI.

Jacovides and colleagues20 conducted a prospective study on 82 patients undergoing primary TKA, revision TKA, and revision THA.

The synovial fluid aspirate was collected intraoperatively. PJI was diagnosed based on study-specific criteria, which combined clinical suspicion and standard laboratory tests (ESR, CRP level, cell count, and tissue culture). Using the study's criteria, PJI was diagnosed in 23 samples, and 57 samples were classified as uninfected. When 1 or more species were detected, PCR-electrospray ionization mass spectrometry (PCR-ESI/MS) yielded sensitivity, specificity, PPV, and NPV of 95.7%, 12.3%, 30.6%, and 87.5%, respectively.

The low PCR sensitivities reported in the literature were explained in a review by Hartley and Harris.21 They wrote that broad-range (BR) 16S rDNA PCR and sequencing of PJI samples inherently have low sensitivity because of contamination that can occur from the PCR reagents themselves or from sample mishandling. Techniques that address contaminant (extraneous DNA) removal, such as ultraviolet irradiation and DNase treatment, reduce Taq DNA polymerase activity, which reduces PCR sensitivity.
The simplest way to avoid the effects of “low-level contaminants” is to decrease the number of PCR cycles, which also reduces sensitivity. However, removal of contaminants has resulted in increased specificities in studies that have used BR 16S rDNA PCR. The authors also stated that, when PCR incorporates cloning and sequencing, mass spectrometric detection, or species-specific PCR, sensitivity is higher, albeit with increased contamination.

Table 4 and Figure 5 provide a concise review of the findings of each study.

Discussion

Although there is no gold standard for the diagnosis of PJIs, several clinical and laboratory criteria guidelines are currently used to help clinicians diagnose infections of prosthetic joints. However, despite standardization of diagnostic criteria, PJI continues to be a diagnostic challenge.

Diagnosing PJI has been difficult for several reasons, including the lack of highly sensitive and specific clinical findings and laboratory tests, as well as difficulty culturing organisms, particularly fastidious ones. More effective diagnostic tools are needed to avoid missed infections, which lead to poor outcomes in patients who undergo TJA. Moreover, PJIs with low-virulence organisms are especially troublesome, as they can present with normal serum inflammatory markers and negative synovial fluid analysis and cultures from joint aspiration.22

AD is a highly sensitive and specific synovial fluid biomarker in detecting common PJIs.

AD has higher sensitivity and specificity for detecting PJI compared with synovial fluid cell count, culture, ESR, and CRP.15,16,19 Moreover, it has been shown that as many as 38% to 88% of patients diagnosed with aseptic loosening have PJIs with low-virulence organisms,23,24 such as coagulase-negative Staphylococcus and P acnes. Several studies reviewed in this article have demonstrated that AD can detect infections with these low-virulence organisms. Our systematic review supports the claim that AD can potentially be used as a screening tool for PJI with common, as well as difficult-to-detect, organisms.
Our findings also support the claim that novel synovial fluid biomarkers have the potential to become of significant diagnostic use and help improve the ability to diagnose PJIs when combined with current laboratory and clinical diagnostic criteria.

In summary, 5 AD studies5-9 had sensitivity ranging from 63% to 100% and specificity ranging from 95% to 100%; 3 IL-6 studies10-12 had sensitivity ranging from 46.8% to 90.9% and specificity ranging from 85.7% to 97.6%; 4 LE studies13-16 had sensitivity ranging from 28.6% to 100% and specificity ranging from 63.6% to 96.5%; and 3 PCR studies18-20 had sensitivity ranging from 67.1% to 95.7% and specificity ranging from 12.3% to 97.8%. Sensitivity and specificity were consistently higher for AD than for IL-6, LE, and PCR, though there was significant overlap, heterogeneity, and variation across all the included studies.
Moreover, the outlier study with the lowest sensitivity for AD (63%) was in patients undergoing TSA, where P acnes infection is more common and has been reported to be more difficult to detect by standard diagnostic tools. Tables 5, 6 and Figures 6, 7 provide the data for each of these studies.

Although the overall incidence of PJI is low, infected revisions remain a substantial financial burden to hospitals, as the annual cost of infected revisions is estimated to exceed $1.62 billion by 2020.25 The value of novel biomarkers and PCR lies in their ability to detect infection early and facilitate appropriate treatment. Several of these tests are readily available commercially and have the potential to be cost-effective diagnostic tools. The price of an AD test from Synovasure (Zimmer Biomet) ranges from $93 to $143. LE also provides an economical option for diagnosing PJI, as LE strips are commercially available for about 25 cents each. PCR has also become an economical option, with costs averaging $15.50 per sample extraction or PCR assay and $42.50 per amplicon sequence, as reported by Vandercam and colleagues.26 Future studies are needed to determine a diagnostic algorithm that incorporates these novel synovial markers to improve the diagnostic accuracy of PJI in the most cost-effective manner.

The current literature supports that AD can potentially be used to screen for PJI. Our findings suggest novel synovial fluid biomarkers may become of significant diagnostic use when combined with current laboratory and clinical diagnostic criteria. We recommend use of AD in cases in which pain, stiffness, and poor TJA outcome cannot be explained by errors in surgical technique, and infection is suspected despite MSIS criteria not being met.

The studies reviewed in this manuscript were limited in that none presented level I evidence (12 had level II evidence, and 3 had level III evidence), and there was significant heterogeneity (some studies used their own diagnostic standard, and others used the MSIS criteria). Larger scale prospective studies comparing serum ESR/CRP level and synovial fluid analysis to novel synovial markers are needed.

Am J Orthop. 2017;46(4):190-198. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Maradit Kremers H, Larson DR, Crowson CS, et al. Prevalence of total hip and knee replacement in the United States. J Bone Joint Surg Am. 2015;97(17):1386-1397.

2. Kurtz S, Ong K, Lau E, Mowat F, Halpern M. Projections of primary and revision hip and knee arthroplasty in the United States from 2005 to 2030. J Bone Joint Surg Am. 2007;89(4):780-785.

3. Sharkey PF, Lichstein PM, Shen C, Tokarski AT, Parvizi J. Why are total knee arthroplasties failing today—has anything changed after 10 years? J Arthroplasty. 2014;29(9):1774-1778.

4. Butler-Wu SM, Burns EM, Pottinger PS, et al. Optimization of periprosthetic culture for diagnosis of Propionibacterium acnes prosthetic joint infection. J Clin Microbiol. 2011;49(7):2490-2495.

5. Frangiamore SJ, Saleh A, Grosso MJ, et al. α-Defensin as a predictor of periprosthetic shoulder infection. J Shoulder Elbow Surg. 2015;24(7):1021-1027.

6. Deirmengian C, Kardos K, Kilmartin P, Cameron A, Schiller K, Parvizi J. Combined measurement of synovial fluid α-defensin and C-reactive protein levels: highly accurate for diagnosing periprosthetic joint infection. J Bone Joint Surg Am. 2014;96(17):1439-1445.

7. Bingham J, Clarke H, Spangehl M, Schwartz A, Beauchamp C, Goldberg B. The alpha defensin-1 biomarker assay can be used to evaluate the potentially infected total joint arthroplasty. Clin Orthop Relat Res. 2014;472(12):4006-4009.

8. Deirmengian C, Kardos K, Kilmartin P, et al. The alpha-defensin test for periprosthetic joint infection outperforms the leukocyte esterase test strip. Clin Orthop Relat Res. 2015;473(1):198-203.

9. Frangiamore SJ, Gajewski ND, Saleh A, Farias-Kovac M, Barsoum WK, Higuera CA. α-Defensin accuracy to diagnose periprosthetic joint infection—best available test? J Arthroplasty. 2016;31(2):456-460.

10. Frangiamore SJ, Saleh A, Kovac MF, et al. Synovial fluid interleukin-6 as a predictor of periprosthetic shoulder infection. J Bone Joint Surg Am. 2015;97(1):63-70.

11. Lenski M, Scherer MA. Synovial IL-6 as inflammatory marker in periprosthetic joint infections. J Arthroplasty. 2014;29(6):1105-1109.

12. Randau TM, Friedrich MJ, Wimmer MD, et al. Interleukin-6 in serum and in synovial fluid enhances the differentiation between periprosthetic joint infection and aseptic loosening. PLoS One. 2014;9(2):e89045.

13. Guenther D, Kokenge T, Jacobs O, et al. Excluding infections in arthroplasty using leucocyte esterase test. Int Orthop. 2014;38(11):2385-2390.

14. Wetters NG, Berend KR, Lombardi AV, Morris MJ, Tucker TL, Della Valle CJ. Leukocyte esterase reagent strips for the rapid diagnosis of periprosthetic joint infection. J Arthroplasty. 2012;27(8 suppl):8-11.

15. Nelson GN, Paxton ES, Narzikul A, Williams G, Lazarus MD, Abboud JA. Leukocyte esterase in the diagnosis of shoulder periprosthetic joint infection. J Shoulder Elbow Surg. 2015;24(9):1421-1426.

16. Tischler EH, Cavanaugh PK, Parvizi J. Leukocyte esterase strip test: matched for Musculoskeletal Infection Society criteria. J Bone Joint Surg Am. 2014;96(22):1917-1920.

17. Dempsey KE, Riggio MP, Lennon A, et al. Identification of bacteria on the surface of clinically infected and non-infected prosthetic hip joints removed during revision arthroplasties by 16S rRNA gene sequencing and by microbiological culture. Arthritis Res Ther. 2007;9(3):R46.

18. Gallo J, Kolar M, Dendis M, et al. Culture and PCR analysis of joint fluid in the diagnosis of prosthetic joint infection. New Microbiol. 2008;31(1):97-104.

19. Marin M, Garcia-Lechuz JM, Alonso P, et al. Role of universal 16S rRNA gene PCR and sequencing in diagnosis of prosthetic joint infection. J Clin Microbiol. 2012;50(3):583-589.

20. Jacovides CL, Kreft R, Adeli B, Hozack B, Ehrlich GD, Parvizi J. Successful identification of pathogens by polymerase chain reaction (PCR)-based electron spray ionization time-of-flight mass spectrometry (ESI-TOF-MS) in culture-negative periprosthetic joint infection. J Bone Joint Surg Am. 2012;94(24):2247-2254.

21. Hartley JC, Harris KA. Molecular techniques for diagnosing prosthetic joint infections. J Antimicrob Chemother. 2014;69(suppl 1):i21-i24.

22. Zappe B, Graf S, Ochsner PE, Zimmerli W, Sendi P. Propionibacterium spp. in prosthetic joint infections: a diagnostic challenge. Arch Orthop Trauma Surg. 2008;128(10):1039-1046.

23. Rasouli MR, Harandi AA, Adeli B, Purtill JJ, Parvizi J. Revision total knee arthroplasty: infection should be ruled out in all cases. J Arthroplasty. 2012;27(6):1239-1243.e1-e2.

24. Hunt RW, Bond MJ, Pater GD. Psychological responses to cancer: a case for cancer support groups. Community Health Stud. 1990;14(1):35-38.

25. Kurtz SM, Lau E, Schmier J, Ong KL, Zhao K, Parvizi J. Infection burden for hip and knee arthroplasty in the United States. J Arthroplasty. 2008;23(7):984-991.

26. Vandercam B, Jeumont S, Cornu O, et al. Amplification-based DNA analysis in the diagnosis of prosthetic joint infection. J Mol Diagn. 2008;10(6):537-543.

Author and Disclosure Information

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.

Acknowledgments: This article was presented as a paper at the annual meeting of the Clinical Orthopedic Society, September 29-October 1, 2016, New Orleans, LA, and at the Annual Pan Pacific Orthopaedic Congress, August 10-13, 2016, Waikoloa, HI.



Other authors have reported different findings that LE is an unreliable marker in PJI diagnosis. In one prospective clinical study of 85 patients who underwent primary or revision TSA, synovial fluid was collected during surgery.15 According to MSIS criteria, only 5 positive LE results predicted PJI among 21 primary and revision patients with positive cultures. Of the 7 revision patients who met the MSIS criteria for PJI, only 2 had a positive LE test. LE had sensitivity of 28.6%, specificity of 63.6%, PPV of 28.6%, and NPV of 87.5%. Six of the 7 revision patients grew P acnes. These results showed that LE was unreliable in detecting shoulder PJI.15

In another prospective clinical study, Tischler and colleagues16 enrolled 189 patients who underwent revision TKA or THA for aseptic failure or PJI as defined by the MSIS criteria. Synovial fluid was collected intraoperatively.
Figure 4.
Fifteen of the 52 patients with a MSIS defined PJI had positive cultures with the most common organism being coagulase-negative Staphylococcus (7). Two thresholds were used to consider a positive LE test. When using the first threshold that had a lower acceptance level for positivity, the sensitivity, specificity, PPV, and NPV were 79.2% (95% CI, 65.9%-89.2%), 80.8 (95% CI, 73.3%-87.1%), 61.8% (95% CI, 49.2%-73.3%), and 90.1% (95% CI, 84.3%-95.4%), respectively. When using the higher threshold, the sensitivity, specificity, PPV, and NPV were 66% (95% CI, 51.7%-78.5%), 97.1% (95% CI, 92.6%-99.2%), 89.7% (95% CI, 75.8%-97.1%), and 88% (95% CI, 81.7%-92.7%), respectively. Once again, these results were in line with LE not being a reliable marker in diagnosing PJI.

Table 3 and Figure 4 provide a concise review of the findings of each study.

 

 

Polymerase Chain Reaction

Studies have found that PCR analysis of synovial fluid is effective in detecting bacteria on the surface of implants removed during revision arthroplasties. Comparison of the 16S rRNA gene sequences of bacterial genomes showed a diverse range of bacterial species within biofilms on the surface of clinical and subclinical infections.17 These findings, along with those of other studies, suggest that PCR analysis of synovial fluid is useful in diagnosing PJI and identifying organisms and their sensitivities to antibiotics.

Gallo and colleagues18 performed a prospective clinical study on 115 patients who underwent revision TKAs or THAs. Synovial fluid was collected intraoperatively. PCR assays targeting the 16S rDNA were carried out on 101 patients. PJIs were classified based on criteria of the authors of this study, of which there were 42. The sensitivity, specificity, PPV, NPV, +LR, and -LR for PCR were 71.4% (95% CI, 61.5%-75.5%), 97% (95% CI, 91.7%-99.1%), 92.6% (95% CI, 79.8%-97.9%), 86.5% (95% CI, 81.8%-88.4%), 23.6 (95% CI, 5.9%-93.8%), and 0.29 (95% CI, 0.17%-0.49%), respectively. Of note the most common organism detected in 42 PJIs was coagulase-negative Staphylococcus.

Marin and colleagues19 conducted a prospective study of 122 patients who underwent arthroplasty for suspected infection or aseptic loosening as defined by the authors’ clinicohistopathologic criteria. Synovial fluid and biopsy specimens were collected during surgery, and 40 patients met the infection criteria. The authors concluded that 16S PCR is more specific and has better PPV than culture does as one positive 16S PCR resulted in a specificity and PPV of PJI of 96.3% and 91.7%, respectively. However, they noted that culture was more sensitive in diagnosing PJI.

Jacovides and colleagues20 conducted a prospective study on 82 patients undergoing primary TKA, revision TKA, and revision THA.

Table 4.
The synovial fluid aspirate was collected intraoperatively. PJI was diagnosed based on study specific criteria, which was a combination of clinical suspicion and standard laboratory tests (ESR, CRP, cell count and tissue culture). Using the study’s criteria, PJI was diagnosed in 23 samples, and 57 samples were diagnosed as uninfected. When 1 or more species were present, the PCR-Electrospray Ionization Mass Spectrometry (PCR-ESI/MS) yielded a sensitivity, specificity, PPV, and NPV value of 95.7%, 12.3%, 30.6%, and 87.5%, respectively.

The low PCR sensitivities reported in the literature were explained in a review by Hartley and Harris.21 They wrote that BR 16S rDNA and sequencing of PJI samples inherently have low sensitivity because of the contamination that can occur from the PCR reagents themselves or from sample mishandling. Techniques that address contaminant (extraneous DNA) removal, such as ultraviolet irradiation and DNase treatment, reduce Taq DNA polymerase activity, which reduces PCR sensitivity.
Figure 5.
The simplest way to avoid the effects of “low-level contaminants” is to decrease the number of PCR cycles, which also reduces sensitivity. However, loss of contaminants has resulted in increased specificities in studies that have used BR 16S rDNA PCR. The authors also stated that, when PCR incorporates cloning and sequencing, mass spectroscopic detection, or species-specific PCR, sensitivity is higher with increased contamination.

Table 4 and Figure 5 provide a concise review of the findings of each study.

Discussion

Although there is no gold standard for the diagnosis of PJIs, several clinical and laboratory criteria guidelines are currently used to help clinicians diagnose infections of prosthetic joints. However, despite standardization of diagnostic criteria, PJI continue to be a diagnostic challenge.

Table 5.
Diagnosing PJI has been difficult for several reasons, including lack of highly sensitive and specific clinical findings and laboratory tests, as well as difficulty in culturing organisms, particularly fastidious organisms. More effective diagnostic tools are needed to avoid failing to accurately detect infections which lead to poor outcomes in patients who undergo TJA. Moreover, PJIs with low-virulence organisms are especially troublesome, as they can present with normal serum inflammatory markers and negative synovial fluid analysis and cultures from joint aspiration.22

AD is a highly sensitive and specific synovial fluid biomarker in detecting common PJIs.

Table 6.
AD has a higher sensitivity and specificity for detecting PJI, as compared to synovial fluid cell count, culture, ESR, and CRP.15,16,19 Moreover, it has been shown that as many as 38% to 88% of patients diagnosed with aseptic loosening have PJIs with low-grade organisms,23,24 such as Coagulase-negative S acnes and P acnes. Several studies reviewed in this article have demonstrated that AD can detect infections with these low virulence organisms. Our systematic review supports the claim that AD can potentially be used as a screening tool for PJI with common, as well as difficult-to-detect, organisms.
Figure 6.
Our findings also support the claim that novel synovial fluid biomarkers have the potential to become of significant diagnostic use and help improve the ability to diagnose PJIs when combined with current laboratory and clinical diagnostic criteria.

In summary, 5 AD studies5-9 had sensitivity ranging from 63% to 100% and specificity ranging from 95% to 100%; 3 IL-6 studies10-12 had sensitivity ranging from 46.8% to 90.9% and specificity ranging from 85.7% to 97.6%; 4 LE studies13-16 had sensitivity ranging from 28.6% to 100% and specificity ranging from 63.6% to 96.5%; and 3 PCR studies18-20 had sensitivity ranging from 67.1% to 95.7% and specificity ranging from 12.3% to 97.8%. Sensitivity and specificity were consistently higher for AD than for IL-6, LE, and PCR, though there was significant overlap, heterogeneity, and variation across all the included studies.
Figure 7.
Moreover, the outlier study with the lowest sensitivity for AD (63%) was in patients undergoing TSA, where P acnes infection is more common and has been reported to be more difficult to detect by standard diagnostic tools. Tables 5, 6 and Figures 6, 7 provide the data for each of these studies.

Although the overall incidence of PJI is low, infected revisions remain a substantial financial burden to hospitals, as annual costs of infected revisions is estimated to exceed $1.62 billion by 2020.25 The usefulness of novel biomarkers and PCR in diagnosing PJI can be found in their ability to diagnose infections and facilitate appropriate early treatment. Several of these tests are readily available commercially and have the potential to be cost-effective diagnostic tools. The price to perform an AD test from Synovasure TM (Zimmer Biomet) ranges from $93 to $143. LE also provides an economic option for diagnosing PJI, as LE strips are commercially available for the cost of about 25 cents. PCR has also become an economic option, as costs can average $15.50 per sample extraction or PCR assay and $42.50 per amplicon sequence as reported in a study by Vandercam and colleagues.26 Future studies are needed to determine a diagnostic algorithm which incorporates these novel synovial markers to improve diagnostic accuracy of PJI in the most cost effective manner.

The current literature supports that AD can potentially be used to screen for PJI. Our findings suggest novel synovial fluid biomarkers may become of significant diagnostic use when combined with current laboratory and clinical diagnostic criteria. We recommend use of AD in cases in which pain, stiffness, and poor TJA outcome cannot be explained by errors in surgical technique, and infection is suspected despite MSIS criteria not being met.

The studies reviewed in this manuscript were limited in that none presented level I evidence (12 had level II evidence, and 3 had level III evidence), and there was significant heterogeneity (some studies used their own diagnostic standard, and others used the MSIS criteria). Larger scale prospective studies comparing serum ESR/CRP level and synovial fluid analysis to novel synovial markers are needed.

Am J Orthop. 2017;46(4):190-198. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

Take-Home Points

  • Novel synovial markers and PCR have the potential to improve the detection of PJIs.
  • Difficult-to-detect infections of prosthetic joints pose a diagnostic problem to surgeons and can lead to suboptimal outcomes.
  • AD is a highly sensitive and specific synovial fluid marker for detecting PJIs.
  • AD has shown promising results in detecting low virulence organisms.
  • Studies are needed to determine how to best incorporate novel synovial markers and PCR to current diagnostic criteria in order to improve diagnostic accuracy.

Approximately 7 million Americans are living with a hip or knee replacement.1 According to projections, primary hip arthroplasties will increase by 174% and knee arthroplasties by 673% by 2030. Revision arthroplasties are projected to increase by 137% for hips and 601% for knees during the same time period.2 Infection and aseptic loosening are the most common causes of implant failure.3 The literature shows that infection is the most common cause of failure within 2 years after surgery and that aseptic loosening is the most common cause for late revision.3

Recent studies suggest that prosthetic joint infection (PJI) may be underreported because of difficulty making a diagnosis and that cases of aseptic loosening may in fact be attributable to infections with low-virulence organisms.2,3 These findings have led to new efforts to develop uniform criteria for diagnosing PJIs. In 2011, the Musculoskeletal Infection Society (MSIS) offered a new definition for PJI diagnosis, based on clinical and laboratory criteria, to increase the accuracy of PJI diagnosis.4 The MSIS committee acknowledged that PJI may be present even if these criteria are not met, particularly in the case of low-virulence organisms, as patients may not present with clinical signs of infection and may have normal inflammatory markers and joint aspirates. Reports of PJI cases misdiagnosed as aseptic loosening suggest that current screening and diagnostic tools are not sensitive enough to detect all infections and that PJI is likely underdiagnosed.

According to MSIS criteria, the diagnosis of PJI can be made when there is a sinus tract communicating with the prosthesis, when a pathogen is isolated by culture from 2 or more separate tissue or fluid samples obtained from the affected prosthetic joint, or when 4 of 6 minor criteria are met. The 6 minor criteria are (1) elevated serum erythrocyte sedimentation rate (ESR) (>30 mm/hour) and elevated C-reactive protein (CRP) level (>10 mg/L); (2) elevated synovial white blood cell (WBC) count (1100-4000 cells/μL); (3) elevated synovial polymorphonuclear leukocyte percentage (>64%); (4) purulence in the affected joint; (5) isolation of a microorganism from a single culture of periprosthetic tissue or fluid; and (6) more than 5 neutrophils per high-power field observed in 5 high-power fields.
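
Because the studies reviewed below are repeatedly judged against whether their cases "met MSIS criteria," it may help to see the decision rule spelled out. The sketch below (Python) encodes the criteria exactly as summarized in the preceding paragraph; the Workup class, its field names, and the single synovial WBC cutoff are illustrative assumptions, not part of the MSIS definition or of any published tool.

```python
from dataclasses import dataclass

@dataclass
class Workup:
    sinus_tract: bool              # sinus tract communicating with the prosthesis (major criterion)
    positive_cultures: int         # separate tissue/fluid samples growing the same pathogen
    esr_mm_hr: float               # serum erythrocyte sedimentation rate
    crp_mg_l: float                # serum C-reactive protein
    synovial_wbc_per_ul: float     # synovial fluid white blood cell count
    synovial_pmn_pct: float        # synovial fluid polymorphonuclear percentage
    purulence: bool                # purulence in the affected joint
    single_positive_culture: bool  # microorganism isolated from one periprosthetic culture
    histology_positive: bool       # >5 neutrophils per high-power field in 5 high-power fields

def meets_msis_criteria(w: Workup) -> bool:
    """Apply the 2011 MSIS definition of PJI as summarized in the text above."""
    # Either major criterion is diagnostic on its own.
    if w.sinus_tract or w.positive_cultures >= 2:
        return True
    # Otherwise, PJI is diagnosed when 4 of the 6 minor criteria are met.
    minor = [
        w.esr_mm_hr > 30 and w.crp_mg_l > 10,
        w.synovial_wbc_per_ul > 1100,   # text quotes 1100-4000 cells/uL; a single cutoff is assumed here
        w.synovial_pmn_pct > 64,
        w.purulence,
        w.single_positive_culture,
        w.histology_positive,
    ]
    return sum(minor) >= 4

# Example: 1 positive culture, elevated ESR/CRP, synovial WBC 3500/uL, 80% PMNs,
# positive histology, no purulence -> 5 of 6 minor criteria -> PJI.
print(meets_msis_criteria(Workup(False, 1, 45, 22, 3500, 80, False, True, True)))  # True
```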

In this review article, we discuss recently developed novel synovial biomarkers and polymerase chain reaction (PCR) technologies that may help increase the sensitivity and specificity of diagnostic guidelines for PJI.

Methods

Using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), we performed a systematic review of specific synovial fluid markers and PCR used in PJI diagnosis. In May 2016, we searched the PubMed database for these criteria: ((((((PCR[Text Word]) OR IL-6[Text Word]) OR leukocyte esterase[Text Word]) OR alpha defensin[Text Word]) AND ((“infection/diagnosis”[MeSH Terms] OR “infection/surgery”[MeSH Terms])))) AND (prosthetic joint infection[MeSH Terms] OR periprosthetic joint infection[MeSH Terms]).
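
For readers who want to re-run or update the search, a minimal sketch using Biopython's Entrez interface is shown below. The contact e-mail address and retmax value are placeholders, and the query string simply reproduces the one above with straight quotation marks; this is an illustration of the search, not code used in the original review.

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address; placeholder

query = (
    '((((((PCR[Text Word]) OR IL-6[Text Word]) OR leukocyte esterase[Text Word]) '
    'OR alpha defensin[Text Word]) AND (("infection/diagnosis"[MeSH Terms] '
    'OR "infection/surgery"[MeSH Terms])))) AND (prosthetic joint infection[MeSH Terms] '
    'OR periprosthetic joint infection[MeSH Terms])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
result = Entrez.read(handle)
handle.close()

print(result["Count"])   # total number of matching records
print(result["IdList"])  # PubMed IDs to screen against the inclusion criteria
```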

We included patients who had undergone total hip, knee, or shoulder arthroplasty (THA, TKA, TSA). Index tests were PCR and the synovial fluid markers α-defensin (AD), interleukin 6 (IL-6), and leukocyte esterase (LE). Reference tests included joint fluid/serum analysis or tissue analysis (ESR/CRP level, cell count, culture, frozen section), which defined the MSIS criteria for PJI. Primary outcomes of interest were sensitivity and specificity, and secondary outcomes of interest included positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (+LR), and negative likelihood ratio (–LR). Randomized controlled trials and controlled cohort studies in humans published within the past 10 years were included.
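
Every study summarized below is reported with the same handful of accuracy statistics. As a reminder of how they relate to a standard 2x2 table, here is a small, generic helper; the counts in the example are hypothetical values chosen only to be of the same order as one of the reviewed studies, not data from it.

```python
def diagnostic_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy measures used throughout this review, computed from a 2x2 table."""
    sens = tp / (tp + fn)   # sensitivity: proportion of true PJIs that test positive
    spec = tn / (tn + fp)   # specificity: proportion of aseptic cases that test negative
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),      # positive predictive value
        "NPV": tn / (tn + fn),      # negative predictive value
        "+LR": sens / (1 - spec),   # how much a positive result raises the odds of PJI (undefined at 100% specificity)
        "-LR": (1 - sens) / spec,   # how much a negative result lowers the odds of PJI
    }

# Hypothetical example: 19 infected joints all test positive, and 2 of 42 aseptic joints
# test falsely positive (figures of the same order as Bingham and colleagues7).
print(diagnostic_stats(tp=19, fp=2, fn=0, tn=40))
```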

Results

Our full-text review yielded 15 papers that met our study inclusion criteria (Figure 1).

α-Defensin

One of the novel synovial biomarkers that has shown significant promise in diagnosing PJIs, even with difficult-to-detect organisms, is AD.

Figure 1.
Frangiamore and colleagues5 conducted a prospective study of 33 patients with painful TSAs that required revision. Patients were grouped based on objective clinical, laboratory, and histologic criteria of infection, which included preoperative clinical signs (swelling, sinus tract, redness, drainage), elevated serum ESR or CRP, intraoperative gross findings (purulence, necrosis), and positive intraoperative frozen section. Synovial fluid aspiration was obtained preoperatively or intraoperatively. Of the 33 patients, 11 met the authors' criteria for suspected PJI prior to final intraoperative culture results; 22 did not. Of the samples taken intraoperatively, Propionibacterium acnes was the most commonly isolated organism (9 cases), followed by coagulase-negative Staphylococcus (4 cases). AD demonstrated sensitivity of 63%, specificity of 95%, +LR of 12.1, and –LR of 0.38. AD showed a strong association with growth of P acnes in the infected group (median signal-to-cutoff ratio, 4.45) compared with the noninfected group (median signal-to-cutoff ratio, 1.33), as well as strong associations with frozen section histology. Frangiamore and colleagues5 concluded that the use of AD in diagnosing PJIs with difficult-to-detect organisms was promising.

AD has shown even more impressive results as a biomarker for PJI in the hip and knee, where infection with low-virulence organisms is less common. In 2014, Deirmengian and colleagues6 conducted a prospective clinical study of 149 patients who underwent revision THA or TKA for aseptic loosening (n = 112) or PJI (n = 37) as defined by MSIS criteria. Aseptic loosening was diagnosed when there was no identifiable reason for pain and MSIS criteria were not met. Synovial fluid aspirates were collected before or during surgery. AD correctly classified 143 of the 149 patients, with sensitivity of 97.3% (95% confidence interval [CI], 85.8%-99.6%) and specificity of 95.5% (95% CI, 89.9%-98.5%). Similarly, Bingham and colleagues7 conducted a retrospective clinical study of 61 assays performed on 57 patients who underwent revision arthroplasty for PJI as defined by MSIS criteria. Synovial fluid aspirates were collected before or during surgery. AD correctly identified all 19 PJIs, with sensitivity of 100% (95% CI, 79%-100%) and specificity of 95% (95% CI, 83%-99%). The AD assay predicted infection more accurately than synovial fluid cell count or serum ESR/CRP level did.
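
The confidence intervals quoted alongside these sensitivities and specificities are binomial proportion intervals. As an illustration of the arithmetic, the sketch below computes a Wilson score interval; the reviewed studies do not state which interval method they used, so the exact bounds here will not match their published figures precisely.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion such as sensitivity or specificity."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

# Example: detecting all 19 of 19 PJIs (sensitivity 100%) gives roughly (0.83, 1.00),
# in the same neighborhood as the 79%-100% CI quoted for Bingham and colleagues7.
print(wilson_ci(19, 19))
```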

These results are supported by another prospective study by Deirmengian and colleagues8 differentiating aseptic failures from PJIs in THA or TKA. The sensitivity and specificity of AD in diagnosing PJI were both 100% (95% CI, 85.05%-100%).

Table 1.
Synovial fluid was collected from 46 patients before or during surgery: 23 with PJI and 23 with aseptic failure as defined by MSIS criteria. All patients were tested for both AD and LE. Of the 23 PJI cases, 18 were associated with a positive culture, the most common organism being Staphylococcus epidermidis (n = 6). AD correctly diagnosed 100% of PJIs, whereas LE correctly diagnosed only 78%; the difference was statistically significant (P < 0.001).

In a prospective study of 102 patients who underwent revision THA or TKA secondary to aseptic loosening or PJI, Frangiamore and colleagues9 also demonstrated the value of AD as a diagnostic test for PJI in primary and revision hip and knee arthroplasty.
Figure 2.
Based on MSIS criteria, 54 cases were classified as non-infected first-stage revision, 24 as infected first-stage revision, 35 as non-infected second-stage revision, and 3 as infected second-stage revision. For patients with first-stage revision THA or TKA, AD had sensitivity of 100% (95% CI, 86%-100%), specificity of 98% (95% CI, 90%-100%), PPV of 96% (95% CI, 80%-99%), and NPV of 100% (95% CI, 93%-100%). +LR was 54 (95% CI, 8-376), and –LR was 0. When combining all patients, AD outperformed serum ESR and CRP and synovial cell count as a biomarker for predicting PJI.

Table 1 and Figure 2 provide a concise review of the findings of each study.

Interleukin 6

Another synovial fluid biomarker that has shown promise in PJI diagnosis is IL-6. In 2015, Frangiamore and colleagues10 conducted a prospective clinical study of 32 patients who underwent revision TSA. Synovial fluid aspiration was obtained before or during surgery. MSIS criteria were used to establish the diagnosis of PJI. IL-6 had sensitivity of 87% and specificity of 90%, with +LR of 8.45 and –LR of 0.15 in predicting PJI. Synovial fluid IL-6 had strong associations with frozen section histology and growth of P acnes. Frangiamore and colleagues10 recommended an ideal IL-6 cutoff of 359.1 pg/mL and reported that, though not as accurate as AD, synovial fluid IL-6 levels can help predict positive cultures in patients who undergo revision TSA.

Lenski and Scherer11 conducted another retrospective clinical study of the diagnostic value of IL-6 in PJI.

Table 2.
Revision total joint arthroplasty (TJA) was performed for aseptic loosening (38 patients) or PJI (31 patients) based on criteria modeled after MSIS criteria. All joints were aspirated for synovial fluid IL-6, synovial fluid lactate dehydrogenase, synovial fluid glucose, synovial fluid lactate, synovial fluid WBCs, and serum CRP. IL-6 had sensitivity of 90.9%, specificity of 94.7%, +LR of 17.27, and –LR of 0.10. An optimal IL-6 cutoff value of 30,750 pg/mL was determined.

Randau and colleagues12 conducted a prospective clinical study of 120 patients who presented with painful THA or TKA and underwent revision for PJI, aseptic failure, or aseptic revision without signs of infection or loosening. Synovial fluid aspirate was collected before or during surgery.
Figure 3.
PJI was diagnosed with the modified MSIS criteria. IL-6 sensitivity and specificity depended on the cutoff value. A cutoff of >2100 pg/mL yielded sensitivity of 62.5% (95% CI, 43.69%-78.9%) and specificity of 85.71% (95% CI, 71.46%-94.57%), and a cutoff of >9000 pg/mL yielded sensitivity of 46.9% (95% CI, 29.09%-65.26%) and specificity of 97.62% (95% CI, 87.43%-99.94%). The authors concluded that synovial IL-6 is a more accurate marker than synovial WBC count.

Table 2 and Figure 3 provide a concise review of the findings of each study.

Leukocyte Esterase

LE strips are an inexpensive screening tool for PJI, according to some studies. In a prospective clinical study of 364 endoprosthetic joint (hip, knee, shoulder) interventions, Guenther and colleagues13 collected synovial fluid before surgery. Samples were tested with graded LE strips using PJI criteria set by the authors. Results were correlated with preoperative synovial fluid aspirations, serum CRP level, serum WBC count, and intraoperative histopathologic and microbiological findings. Whereas 293 (93.31%) of the 314 aseptic cases had negative test strip readings, 100% of the 50 infected cases were positive. LE had sensitivity of 100%, specificity of 96.5%, PPV of 82%, and NPV of 100%.

Wetters and colleagues14 performed a prospective clinical study of 223 patients who underwent TKA or THA for suspected PJI, based on criteria defined by the study authors. Synovial fluid samples were collected either preoperatively or intraoperatively.

Table 3.
Using a synovial fluid WBC count of >3000 cells/μL as the reference standard for infection, the LE strip had sensitivity, specificity, PPV, and NPV of 92.9%, 88.8%, 75%, and 97.2%, respectively. Using positive cultures or the presence of a draining sinus tract as the reference standard, the sensitivity, specificity, PPV, and NPV were 93.3%, 77%, 37.8%, and 98.7%, respectively. Of note, the most common organism found at the time of revision for infection was coagulase-negative Staphylococcus (6 of 39).

Other authors have reported that LE is an unreliable marker for PJI diagnosis. In one prospective clinical study of 85 patients who underwent primary or revision TSA, synovial fluid was collected during surgery.15 Only 5 of the 21 primary and revision patients with positive cultures had a positive LE result, and of the 7 revision patients who met the MSIS criteria for PJI, only 2 had a positive LE test. LE had sensitivity of 28.6%, specificity of 63.6%, PPV of 28.6%, and NPV of 87.5%. Six of the 7 revision patients grew P acnes. These results showed that LE was unreliable in detecting shoulder PJI.15

In another prospective clinical study, Tischler and colleagues16 enrolled 189 patients who underwent revision TKA or THA for aseptic failure or PJI as defined by the MSIS criteria. Synovial fluid was collected intraoperatively.
Figure 4.
Fifteen of the 52 patients with an MSIS-defined PJI had positive cultures, the most common organism being coagulase-negative Staphylococcus (n = 7). Two thresholds were used to define a positive LE test. Using the lower threshold for positivity, the sensitivity, specificity, PPV, and NPV were 79.2% (95% CI, 65.9%-89.2%), 80.8% (95% CI, 73.3%-87.1%), 61.8% (95% CI, 49.2%-73.3%), and 90.1% (95% CI, 84.3%-95.4%), respectively. Using the higher threshold, the sensitivity, specificity, PPV, and NPV were 66% (95% CI, 51.7%-78.5%), 97.1% (95% CI, 92.6%-99.2%), 89.7% (95% CI, 75.8%-97.1%), and 88% (95% CI, 81.7%-92.7%), respectively. Once again, these results suggest that LE is not a reliable marker for diagnosing PJI.

Table 3 and Figure 4 provide a concise review of the findings of each study.

 

 

Polymerase Chain Reaction

Studies have found that PCR analysis is effective in detecting bacteria on the surfaces of implants removed during revision arthroplasties. Comparison of 16S rRNA gene sequences showed a diverse range of bacterial species within biofilms on the surfaces of prostheses from both clinically and subclinically infected joints.17 These findings, along with those of other studies, suggest that PCR analysis of synovial fluid is useful in diagnosing PJI and in identifying organisms and their antibiotic sensitivities.

Gallo and colleagues18 performed a prospective clinical study of 115 patients who underwent revision TKA or THA. Synovial fluid was collected intraoperatively, and PCR assays targeting 16S rDNA were carried out in 101 patients. PJIs (n = 42) were classified based on criteria defined by the study authors. The sensitivity, specificity, PPV, NPV, +LR, and –LR for PCR were 71.4% (95% CI, 61.5%-75.5%), 97% (95% CI, 91.7%-99.1%), 92.6% (95% CI, 79.8%-97.9%), 86.5% (95% CI, 81.8%-88.4%), 23.6 (95% CI, 5.9-93.8), and 0.29 (95% CI, 0.17-0.49), respectively. Of note, the most common organism detected in the 42 PJIs was coagulase-negative Staphylococcus.

Marin and colleagues19 conducted a prospective study of 122 patients who underwent arthroplasty for suspected infection or aseptic loosening as defined by the authors’ clinicohistopathologic criteria. Synovial fluid and biopsy specimens were collected during surgery, and 40 patients met the infection criteria. The authors concluded that 16S PCR is more specific and has a higher PPV than culture: a single positive 16S PCR result yielded specificity of 96.3% and PPV of 91.7% for PJI. However, they noted that culture was more sensitive in diagnosing PJI.

Jacovides and colleagues20 conducted a prospective study on 82 patients undergoing primary TKA, revision TKA, and revision THA.

Table 4.
The synovial fluid aspirate was collected intraoperatively. PJI was diagnosed based on study-specific criteria, which combined clinical suspicion and standard laboratory tests (ESR, CRP level, cell count, and tissue culture). Using these criteria, PJI was diagnosed in 23 samples, and 57 samples were classified as uninfected. When detection of 1 or more species was considered a positive result, PCR-electrospray ionization mass spectrometry (PCR-ESI/MS) yielded sensitivity, specificity, PPV, and NPV of 95.7%, 12.3%, 30.6%, and 87.5%, respectively.

The low PCR sensitivities reported in the literature were explained in a review by Hartley and Harris.21 They wrote that broad-range (BR) 16S rDNA PCR and sequencing of PJI samples have inherently low sensitivity because of the contamination that can arise from the PCR reagents themselves or from sample mishandling, and because of the steps needed to control it. Techniques that remove contaminant (extraneous) DNA, such as ultraviolet irradiation and DNase treatment, reduce Taq DNA polymerase activity, which in turn reduces PCR sensitivity.
Figure 5.
The simplest way to avoid the effects of “low-level contaminants” is to decrease the number of PCR cycles, which also reduces sensitivity. However, removal of contaminants has resulted in increased specificity in studies that have used BR 16S rDNA PCR. The authors also stated that when PCR incorporates cloning and sequencing, mass spectrometric detection, or species-specific PCR, sensitivity is higher, but at the cost of greater susceptibility to contamination.

Table 4 and Figure 5 provide a concise review of the findings of each study.

Discussion

Although there is no gold standard for the diagnosis of PJI, several clinical and laboratory criteria guidelines are currently used to help clinicians diagnose infections of prosthetic joints. Despite this standardization of diagnostic criteria, however, PJIs continue to pose a diagnostic challenge.

Table 5.
Diagnosing PJI has been difficult for several reasons, including the lack of highly sensitive and specific clinical findings and laboratory tests, as well as the difficulty of culturing organisms, particularly fastidious ones. More effective diagnostic tools are needed, because missed infections lead to poor outcomes in patients who undergo TJA. Moreover, PJIs with low-virulence organisms are especially troublesome, as they can present with normal serum inflammatory markers, negative synovial fluid analysis, and negative cultures from joint aspiration.22

AD is a highly sensitive and specific synovial fluid biomarker in detecting common PJIs.

Table 6.
AD has higher sensitivity and specificity for detecting PJI than synovial fluid cell count, culture, ESR, and CRP.15,16,19 Moreover, it has been shown that 38% to 88% of patients diagnosed with aseptic loosening may in fact have PJIs with low-virulence organisms,23,24 such as coagulase-negative Staphylococcus and P acnes. Several studies reviewed in this article have demonstrated that AD can detect infections with these low-virulence organisms. Our systematic review supports the claim that AD can potentially be used as a screening tool for PJI with common, as well as difficult-to-detect, organisms.
Figure 6.
Our findings also support the claim that novel synovial fluid biomarkers, combined with current laboratory and clinical diagnostic criteria, can substantially improve the ability to diagnose PJI.

In summary, 5 AD studies5-9 had sensitivity ranging from 63% to 100% and specificity ranging from 95% to 100%; 3 IL-6 studies10-12 had sensitivity ranging from 46.8% to 90.9% and specificity ranging from 85.7% to 97.6%; 4 LE studies13-16 had sensitivity ranging from 28.6% to 100% and specificity ranging from 63.6% to 96.5%; and 3 PCR studies18-20 had sensitivity ranging from 67.1% to 95.7% and specificity ranging from 12.3% to 97.8%. Sensitivity and specificity were consistently higher for AD than for IL-6, LE, and PCR, though there was significant overlap, heterogeneity, and variation across all the included studies.
Figure 7.
Moreover, the outlier study with the lowest sensitivity for AD (63%) involved patients undergoing revision TSA, in whom P acnes infection is more common and has been reported to be more difficult to detect with standard diagnostic tools. Tables 5 and 6 and Figures 6 and 7 provide the data for each of these studies.

Although the overall incidence of PJI is low, infected revisions remain a substantial financial burden to hospitals, as the annual cost of infected revisions is estimated to exceed $1.62 billion by 2020.25 The value of novel biomarkers and PCR in diagnosing PJI lies in their ability to detect infections and facilitate appropriate early treatment. Several of these tests are readily available commercially and have the potential to be cost-effective diagnostic tools. The price of an AD test (Synovasure; Zimmer Biomet) ranges from $93 to $143. LE also provides an economical option for diagnosing PJI, as LE strips are commercially available for about 25 cents each. PCR has also become an economical option: Vandercam and colleagues26 reported average costs of $15.50 per sample extraction or PCR assay and $42.50 per amplicon sequenced. Future studies are needed to define a diagnostic algorithm that incorporates these novel synovial markers to improve the diagnostic accuracy of PJI in the most cost-effective manner.

The current literature supports the potential use of AD as a screening test for PJI. Our findings suggest that novel synovial fluid biomarkers may become of significant diagnostic use when combined with current laboratory and clinical diagnostic criteria. We recommend use of AD in cases in which pain, stiffness, and a poor TJA outcome cannot be explained by errors in surgical technique, and infection is suspected even though MSIS criteria are not met.

The studies reviewed in this manuscript were limited in that none presented level I evidence (12 had level II evidence, and 3 had level III evidence), and there was significant heterogeneity (some studies used their own diagnostic standard, and others used the MSIS criteria). Larger-scale prospective studies comparing serum ESR/CRP level and synovial fluid analysis with novel synovial markers are needed.

Am J Orthop. 2017;46(4):190-198. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Maradit Kremers H, Larson DR, Crowson CS, et al. Prevalence of total hip and knee replacement in the United States. J Bone Joint Surg Am. 2015;97(17):1386-1397.

2. Kurtz S, Ong K, Lau E, Mowat F, Halpern M. Projections of primary and revision hip and knee arthroplasty in the United States from 2005 to 2030. J Bone Joint Surg Am. 2007;89(4):780-785.

3. Sharkey PF, Lichstein PM, Shen C, Tokarski AT, Parvizi J. Why are total knee arthroplasties failing today—has anything changed after 10 years? J Arthroplasty. 2014;29(9):1774-1778.

4. Butler-Wu SM, Burns EM, Pottinger PS, et al. Optimization of periprosthetic culture for diagnosis of Propionibacterium acnes prosthetic joint infection. J Clin Microbiol. 2011;49(7):2490-2495.

5. Frangiamore SJ, Saleh A, Grosso MJ, et al. α-Defensin as a predictor of periprosthetic shoulder infection. J Shoulder Elbow Surg. 2015;24(7):1021-1027.

6. Deirmengian C, Kardos K, Kilmartin P, Cameron A, Schiller K, Parvizi J. Combined measurement of synovial fluid α-defensin and C-reactive protein levels: highly accurate for diagnosing periprosthetic joint infection. J Bone Joint Surg Am. 2014;96(17):1439-1445.

7. Bingham J, Clarke H, Spangehl M, Schwartz A, Beauchamp C, Goldberg B. The alpha defensin-1 biomarker assay can be used to evaluate the potentially infected total joint arthroplasty. Clin Orthop Relat Res. 2014;472(12):4006-4009.

8. Deirmengian C, Kardos K, Kilmartin P, et al. The alpha-defensin test for periprosthetic joint infection outperforms the leukocyte esterase test strip. Clin Orthop Relat Res. 2015;473(1):198-203.

9. Frangiamore SJ, Gajewski ND, Saleh A, Farias-Kovac M, Barsoum WK, Higuera CA. α-Defensin accuracy to diagnose periprosthetic joint infection—best available test? J Arthroplasty. 2016;31(2):456-460.

10. Frangiamore SJ, Saleh A, Kovac MF, et al. Synovial fluid interleukin-6 as a predictor of periprosthetic shoulder infection. J Bone Joint Surg Am. 2015;97(1):63-70.

11. Lenski M, Scherer MA. Synovial IL-6 as inflammatory marker in periprosthetic joint infections. J Arthroplasty. 2014;29(6):1105-1109.

12. Randau TM, Friedrich MJ, Wimmer MD, et al. Interleukin-6 in serum and in synovial fluid enhances the differentiation between periprosthetic joint infection and aseptic loosening. PLoS One. 2014;9(2):e89045.

13. Guenther D, Kokenge T, Jacobs O, et al. Excluding infections in arthroplasty using leucocyte esterase test. Int Orthop. 2014;38(11):2385-2390.

14. Wetters NG, Berend KR, Lombardi AV, Morris MJ, Tucker TL, Della Valle CJ. Leukocyte esterase reagent strips for the rapid diagnosis of periprosthetic joint infection. J Arthroplasty. 2012;27(8 suppl):8-11.

15. Nelson GN, Paxton ES, Narzikul A, Williams G, Lazarus MD, Abboud JA. Leukocyte esterase in the diagnosis of shoulder periprosthetic joint infection. J Shoulder Elbow Surg. 2015;24(9):1421-1426.

16. Tischler EH, Cavanaugh PK, Parvizi J. Leukocyte esterase strip test: matched for Musculoskeletal Infection Society criteria. J Bone Joint Surg Am. 2014;96(22):1917-1920.

17. Dempsey KE, Riggio MP, Lennon A, et al. Identification of bacteria on the surface of clinically infected and non-infected prosthetic hip joints removed during revision arthroplasties by 16S rRNA gene sequencing and by microbiological culture. Arthritis Res Ther. 2007;9(3):R46.

18. Gallo J, Kolar M, Dendis M, et al. Culture and PCR analysis of joint fluid in the diagnosis of prosthetic joint infection. New Microbiol. 2008;31(1):97-104.

19. Marin M, Garcia-Lechuz JM, Alonso P, et al. Role of universal 16S rRNA gene PCR and sequencing in diagnosis of prosthetic joint infection. J Clin Microbiol. 2012;50(3):583-589.

20. Jacovides CL, Kreft R, Adeli B, Hozack B, Ehrlich GD, Parvizi J. Successful identification of pathogens by polymerase chain reaction (PCR)-based electron spray ionization time-of-flight mass spectrometry (ESI-TOF-MS) in culture-negative periprosthetic joint infection. J Bone Joint Surg Am. 2012;94(24):2247-2254.

21. Hartley JC, Harris KA. Molecular techniques for diagnosing prosthetic joint infections. J Antimicrob Chemother. 2014;69(suppl 1):i21-i24.

22. Zappe B, Graf S, Ochsner PE, Zimmerli W, Sendi P. Propionibacterium spp. in prosthetic joint infections: a diagnostic challenge. Arch Orthop Trauma Surg. 2008;128(10):1039-1046.

23. Rasouli MR, Harandi AA, Adeli B, Purtill JJ, Parvizi J. Revision total knee arthroplasty: infection should be ruled out in all cases. J Arthroplasty. 2012;27(6):1239-1243.e1-e2.

24. Hunt RW, Bond MJ, Pater GD. Psychological responses to cancer: a case for cancer support groups. Community Health Stud. 1990;14(1):35-38.

25. Kurtz SM, Lau E, Schmier J, Ong KL, Zhao K, Parvizi J. Infection burden for hip and knee arthroplasty in the United States. J Arthroplasty. 2008;23(7):984-991.

26. Vandercam B, Jeumont S, Cornu O, et al. Amplification-based DNA analysis in the diagnosis of prosthetic joint infection. J Mol Diagn. 2008;10(6):537-543.


Traumatic Anterior Shoulder Instability: The US Military Experience

Article Type
Changed
Thu, 09/19/2019 - 13:21
Display Headline
Traumatic Anterior Shoulder Instability: The US Military Experience

Take-Home Points

  • Arthroscopic stabilization performed early results in better outcomes in patients with Bankart lesions.
  • In addition to the established “critical amount,” a subcritical level of bone loss (13.5%) has been shown to have a significant effect on outcomes.
  • Bone loss is a bipolar issue. Both sides must be considered in order to properly address shoulder instability.
  • Off-track measurement has been shown to be even more positively predictive of outcomes than glenoid bone loss assessment.
  • There are several options for managing bone loss, including coracoid transfer (the most common), distal tibial allograft, and distal clavicular autograft.

Given its relatively young age, high activity level, and centralized medical care system, the US military population is ideal for studying traumatic anterior shoulder instability. There is a long history of military surgeons who have made significant contributions that have advanced our understanding of this pathology and its treatment and results. In this article, we describe the scope, treatment, and results of this pathology in the US military population.

Incidence and Pathology

At the United States Military Academy (USMA), Owens and colleagues1 studied the incidence of shoulder instability, including dislocation and subluxation, and found anterior instability events were far more common than in civilian populations. The incidence of shoulder instability was 0.08 per 1000 person-years in the general US population vs 1.69 per 1000 person-years in US military personnel. The factors associated with increased risk of shoulder instability injury in the military population were male sex, white race, junior enlisted rank, and age under 30 years. Owens and colleagues2 noted that subluxation accounted for almost 85% of the total anterior instability events. Owens and colleagues3 found the pathology in subluxation events was similar to that in full dislocations, with a soft-tissue anterior Bankart lesion and a Hill-Sachs lesion detected on magnetic resonance imaging in more than 90% of patients. In another study at the USMA, DeBerardino and colleagues4 noted that 97% of arthroscopically assessed shoulders in first-time dislocators involved complete detachment of the capsuloligamentous complex from the anterior glenoid rim and neck—a so-called Bankart lesion. Thus, in a military population, anterior instability resulting from subluxation or dislocation is a common finding that is often represented by a soft-tissue Bankart lesion and a Hill-Sachs defect.

Natural History of Traumatic Anterior Shoulder Instability in the Military

Several studies have evaluated the outcomes of nonoperative and operative treatment of shoulder instability. Although most have found better outcomes with operative intervention, Aronen and Regan5 reported good results (25% recurrence at nearly 3-year follow-up) with nonoperative treatment and adherence to a strict rehabilitation program. Most other comparative studies in this population have published contrary results. Wheeler and colleagues6 studied the natural history of anterior shoulder dislocations in a USMA cadet cohort and found recurrent instability after shoulder dislocation in 92% of cadets who had nonoperative treatment. Similarly, DeBerardino and colleagues4 found that, in the USMA, 90% of first-time traumatic anterior shoulder dislocations managed nonoperatively experienced recurrent instability. In a series of Army soldiers with shoulder instability, Bottoni and colleagues7 reported that 75% of nonoperatively managed patients had recurrent instability, and, of these, 67% progressed to surgical intervention. Nonoperative treatment for a first-time dislocation is still reasonable if a cadet or soldier needs to quickly return to functional duties. Athletes who develop shoulder instability during their playing season have been studied in a military population as well. In a multicenter study of service academy athletes with anterior instability, Dickens and colleagues8 found that, with conservative management and accelerated rehabilitation of in-season shoulder instability, 73% of athletes returned to sport by a mean of 5 days. However, the durability of this treatment should be questioned, as 64% later experienced recurrence.

Arthroscopic Stabilization of Acute Anterior Shoulder Dislocations

In an early series of traumatic anterior shoulder instability cases in USMA cadets, Wheeler and colleagues6 found that, at 14 months, 78% of arthroscopically stabilized cases remained stable, compared with only 8% of nonoperatively treated cases. Then, in the 1990s, DeBerardino and colleagues4 studied a series of young, active patients in the USMA and noted significantly better results with arthroscopic treatment, vs nonoperative treatment, at 2- to 5-year follow-up. Of the arthroscopically treated shoulders, 88% remained stable during the study and returned to preinjury activity levels, and 12% experienced recurrent instability (risk factors included 2+ sulcus sign, poor capsular labral tissue, and history of bilateral shoulder instability). In a long-term follow-up (mean, 11.7 years; range, 9.1-13.9 years) of the same cohort, Owens and colleagues9 found that 14% of patients available for follow-up had undergone revision stabilization surgery, and, of these, 21% reported experiencing subluxation events. The authors concluded that, in first-time dislocators in this active military population, acute arthroscopic Bankart repair resulted in excellent return to athletics and subjective function, and had acceptable recurrence and reoperation rates. Bottoni and colleagues,7 in a prospective, randomized evaluation of arthroscopic stabilization of acute, traumatic, first-time shoulder dislocations in the Army, noted an 89% success rate (no recurrent instability) for arthroscopic treatment at an average follow-up of 36 months. DeBerardino and colleagues10 compared West Point patients treated nonoperatively with those arthroscopically treated with staples, transglenoid sutures, or bioabsorbable anchors. Recurrence rates were 85% for nonoperative treatment, 22% for staples, 14% for transglenoid sutures, and 10% for bioabsorbable anchors.

Arthroscopic Versus Open Stabilization of Anterior Shoulder Instability

In a prospective, randomized clinical trial comparing open and arthroscopic shoulder stabilization for recurrent anterior instability in active-duty Army personnel, Bottoni and colleagues11 found comparable clinical outcomes. Stabilization surgery failed clinically in only 3 cases, 2 open and 1 arthroscopic. The authors concluded that arthroscopic stabilization can be safely performed for recurrent shoulder instability and that arthroscopic outcomes are similar to open outcomes. In a series of anterior shoulder subluxations in young athletes with Bankart lesions, Owens and colleagues12 found that open and arthroscopic stabilization performed early resulted in better outcomes, regardless of technique used. Recurrent subluxation occurred at a mean of 17 months in 3 of the 10 patients in the open group and 3 of the 9 patients in the arthroscopic group, for an overall recurrence rate of 31%. The authors concluded that, in this patient population with Bankart lesions caused by anterior subluxation events, surgery should be performed early.

Bone Lesions

Burkhart and De Beer13 first noted that bone loss has emerged as one of the most important considerations in the setting of shoulder instability in active patients. Other authors have found this to be true in military populations.14,15

The diagnosis of bone loss may include historical findings, such as increased number and ease of dislocations, as well as dislocation in lower positions of abduction. Physical examination findings may include apprehension in the midrange of motion. Advanced imaging, such as magnetic resonance arthrography, has since been validated as equivalent to 3-dimensional computed tomography (3-D CT) in determining glenoid bone loss.16 In 2007, Mologne and colleagues15 studied the amount of glenoid bone loss and the presence of fragmented bone or attritional bone loss and its effect on outcomes. They evaluated 21 patients who had arthroscopic treatment for anterior instability with anteroinferior glenoid bone loss between 20% and 30%. Average follow-up was 34 months. All patients received 3 or 4 anterior anchors. No patient with a bone fragment incorporated into the repair experienced recurrence or subluxation, whereas 30% of patients with attritional bone loss had recurrent instability.15

 

 

Classifying Bone Loss and Recognizing Its Effects

Burkhart and De Beer13 helped define the role and significance of bone loss in the setting of shoulder instability. They defined significant bone loss as an engaging Hill-Sachs lesion of the humerus in an abducted and externally rotated position or an “inverted pear” lesion of the glenoid. Overall analysis revealed recurrence in 4% of cases without significant bone loss and 65% of cases with significant bone loss. In a subanalysis of contact-sport athletes, the failure rate increased from 6.5% without significant bone loss to 89% with it. Aiding in the quantitative assessment of glenoid bone loss, Itoi and colleagues17 showed that 21% glenoid bone loss resulted in instability that would not be corrected by a soft-tissue procedure alone. Bone loss of 20% to 25% has since been considered a “critical amount,” above which an arthroscopic Bankart repair has been questioned. More recently, several authors have shown that even less bone loss can have a significant effect on outcomes. Shaha and colleagues18 established that a subcritical level of bone loss (13.5%) on the anteroinferior glenoid resulted in clinical failure (as determined with the Western Ontario Shoulder Instability Index) even in cases in which frank recurrence or subluxation was avoided. It is thought that, in recurrent instability, the incidence of glenoid bone loss is as high as 90%, and the corresponding percentage of patients with Hill-Sachs lesions is almost 100%.19,20 Thus, it is increasingly understood that bone loss is a bipolar issue and that both sides must be considered in order to properly address shoulder instability in this setting. In 2007, Yamamoto and colleagues21 introduced the glenoid track, a method for predicting whether a Hill-Sachs lesion will engage. Di Giacomo and colleagues22 refined the track concept to quantitatively determine which lesions will engage in the setting of both glenoid and humeral bone loss. Metzger and colleagues,23 confirming the track concept arthroscopically, found that engagement observed during examination under anesthesia and arthroscopic visualization was well predicted by preoperative track measurements, and thus these measurements can be a good guide for surgical management (Figures 1A, 1B).
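
The on-track/off-track determination introduced by Yamamoto and colleagues21 and refined by Di Giacomo and colleagues22 reduces to a short calculation once the imaging measurements are in hand. The sketch below uses the commonly quoted coefficient of 0.83 and the usual measurement definitions; the function and the example values are illustrative only and are no substitute for the full imaging protocol.

```python
def is_off_track(glenoid_diameter_mm: float,
                 glenoid_defect_width_mm: float,
                 hill_sachs_width_mm: float,
                 bone_bridge_mm: float) -> bool:
    """Return True if the Hill-Sachs interval exceeds the glenoid track (off-track lesion).

    Glenoid track       = 0.83 x D - d, where D is the diameter of the inferior glenoid and
                          d is the width of the anterior glenoid bone loss.
    Hill-Sachs interval = Hill-Sachs lesion width + bone bridge to the rotator cuff insertion.
    """
    glenoid_track = 0.83 * glenoid_diameter_mm - glenoid_defect_width_mm
    hill_sachs_interval = hill_sachs_width_mm + bone_bridge_mm
    return hill_sachs_interval > glenoid_track

# Hypothetical example: a 24-mm inferior glenoid with a 4-mm anterior defect has a track of
# about 15.9 mm; a 14-mm Hill-Sachs lesion with a 3-mm bone bridge (17-mm interval) is off-track.
print(is_off_track(24, 4, 14, 3))  # True
```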

At Tripler Army Medical Center, Shaha and colleagues14 clinically validated the concept in a series of arthroscopic stabilization cases. They found a recurrence rate of 8% for “on-track” patients and 75% for “off-track” patients treated with the same intervention. In addition, the positive predictive value was 75% for the off-track measurement and 44% for the glenoid bone loss assessment alone. The authors recommended the preoperative off-track measurement over the glenoid bone loss assessment.
In a biomechanical analysis using computer modeling of 3-D CT scans from patients who underwent Bankart repair, Arciero and colleagues24 found that bipolar bone defects (glenoid bone loss combined with a humeral head Hill-Sachs lesion) had an additive negative effect on soft-tissue Bankart repair. In particular, a soft-tissue Bankart repair could be compromised by a 2-mm glenoid defect combined with a medium-size Hill-Sachs lesion or, conversely, by a 4-mm glenoid defect combined with a small Hill-Sachs lesion (Figures 2A, 2B).

Strategies for Addressing Bone Loss in Anterior Shoulder Instability

Several approaches for managing bone loss in shoulder instability have been described—the most common being coracoid transfer (Latarjet procedure). Waterman and colleagues25 recently studied the effects of coracoid transfer, distal tibial allograft, and iliac crest augmentation on anterior shoulder instability in US military patients treated between 2006 and 2012. Of 64 patients who underwent a bone block procedure, 16 (25%) had a complication during short-term follow-up. Complications included neurologic injury, pain, infection, hardware failure, and recurrent instability. After undergoing 1 of the 3 procedures, 33% of patients had persistent pain, and 23% had recurrent instability.

In an older, long-term study of Naval Academy midshipmen, patients who underwent a modified Bristow procedure between 1975 and 1979 demonstrated 70% good to excellent results at an average follow-up of 26.4 years.26 The recurrent instability rate was 15%, with 9% of the cohort dislocating again and 6% of the cohort experiencing recurrent subluxation. Direct bone grafting to the glenoid has also been described. Provencher and colleagues27 introduced use of distal tibial allograft in addressing bony deficiency, and clinical results were promising (Figures 3A-3C). Tokish and colleagues28 introduced use of distal clavicular autograft in addressing these deficiencies but did not report clinical outcomes (Figures 4A-4C).

Conclusion

Traumatic anterior shoulder instability is a common pathology that continues to significantly challenge the readiness of the US military. Military surgeon-researchers have a long history of investigating approaches to the treatment of this pathology—applying good science to a large controlled population, using a single medical record system, and demonstrating a commitment to return service members to the ready defense of the nation.

Am J Orthop. 2017;46(4):184-189. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Owens BD, Dawson L, Burks R, Cameron KL. Incidence of shoulder dislocation in the United States military: demographic considerations from a high-risk population. J Bone Joint Surg Am. 2009;91(4):791-796.

2. Owens BD, Duffey ML, Nelson BJ, DeBerardino TM, Taylor DC, Mountcastle SB. The incidence and characteristics of shoulder instability at the United States Military Academy. Am J Sports Med. 2007;35(7):1168-1173.

3. Owens BD, Nelson BJ, Duffey ML, et al. Pathoanatomy of first-time, traumatic, anterior glenohumeral subluxation events. J Bone Joint Surg Am. 2010;92(7):1605-1611.

4. DeBerardino TM, Arciero RA, Taylor DC, Uhorchak JM. Prospective evaluation of arthroscopic stabilization of acute, initial anterior shoulder dislocations in young athletes. Two- to five-year follow-up. Am J Sports Med. 2001;29(5):586-592.

5. Aronen JG, Regan K. Decreasing the incidence of recurrence of first time anterior shoulder dislocations with rehabilitation. Am J Sports Med. 1984;12(4):283-291.

6. Wheeler JH, Ryan JB, Arciero RA, Molinari RN. Arthroscopic versus nonoperative treatment of acute shoulder dislocations in young athletes. Arthroscopy. 1989;5(3):213-217.

7. Bottoni CR, Wilckens JH, DeBerardino TM, et al. A prospective, randomized evaluation of arthroscopic stabilization versus nonoperative treatment in patients with acute, traumatic, first-time shoulder dislocations. Am J Sports Med. 2002;30(4):576-580.

8. Dickens JF, Owens BD, Cameron KL, et al. Return to play and recurrent instability after in-season anterior shoulder instability: a prospective multicenter study. Am J Sports Med. 2014;42(12):2842-2850.

9. Owens BD, DeBerardino TM, Nelson BJ, et al. Long-term follow-up of acute arthroscopic Bankart repair for initial anterior shoulder dislocations in young athletes. Am J Sports Med. 2009;37(4):669-673.

10. DeBerardino TM, Arciero RA, Taylor DC. Arthroscopic stabilization of acute initial anterior shoulder dislocation: the West Point experience. J South Orthop Assoc. 1996;5(4):263-271.

11. Bottoni CR, Smith EL, Berkowitz MJ, Towle RB, Moore JH. Arthroscopic versus open shoulder stabilization for recurrent anterior instability: a prospective randomized clinical trial. Am J Sports Med. 2006;34(11):1730-1737.

12. Owens BD, Cameron KL, Peck KY, et al. Arthroscopic versus open stabilization for anterior shoulder subluxations. Orthop J Sports Med. 2015;3(1):2325967115571084.

13. Burkhart SS, De Beer JF. Traumatic glenohumeral bone defects and their relationship to failure of arthroscopic Bankart repairs: significance of the inverted-pear glenoid and the humeral engaging Hill-Sachs lesion. Arthroscopy. 2000;16(7):677-694.

14. Shaha JS, Cook JB, Rowles DJ, Bottoni CR, Shaha SH, Tokish JM. Clinical validation of the glenoid track concept in anterior glenohumeral instability. J Bone Joint Surg Am. 2016;98(22):1918-1923.

15. Mologne TS, Provencher MT, Menzel KA, Vachon TA, Dewing CB. Arthroscopic stabilization in patients with an inverted pear glenoid: results in patients with bone loss of the anterior glenoid. Am J Sports Med. 2007;35(8):1276-1283.

16. Markenstein JE, Jaspars KC, van der Hulst VP, Willems WJ. The quantification of glenoid bone loss in anterior shoulder instability; MR-arthro compared to 3D-CT. Skeletal Radiol. 2014;43(4):475-483.

17. Itoi E, Lee SB, Berglund LJ, Berge LL, An KN. The effect of a glenoid defect on anteroinferior stability of the shoulder after Bankart repair: a cadaveric study. J Bone Joint Surg Am. 2000;82(1):35-46.

18. Shaha JS, Cook JB, Song DJ, et al. Redefining “critical” bone loss in shoulder instability: functional outcomes worsen with “subcritical” bone loss. Am J Sports Med. 2015;43(7):1719-1725.

19. Piasecki DP, Verma NN, Romeo AA, Levine WN, Bach BR Jr, Provencher MT. Glenoid bone deficiency in recurrent anterior shoulder instability: diagnosis and management. J Am Acad Orthop Surg. 2009;17(8):482-493.

20. Provencher MT, Frank RM, Leclere LE, et al. The Hill-Sachs lesion: diagnosis, classification, and management. J Am Acad Orthop Surg. 2012;20(4):242-252.

21. Yamamoto N, Itoi E, Abe H, et al. Contact between the glenoid and the humeral head in abduction, external rotation, and horizontal extension: a new concept of glenoid track. J Shoulder Elbow Surg. 2007;16(5):649-656.

22. Di Giacomo G, Itoi E, Burkhart SS. Evolving concept of bipolar bone loss and the Hill-Sachs lesion: from “engaging/non-engaging” lesion to “on-track/off-track” lesion. Arthroscopy. 2014;30(1):90-98.

23. Metzger PD, Barlow B, Leonardelli D, Peace W, Solomon DJ, Provencher MT. Clinical application of the “glenoid track” concept for defining humeral head engagement in anterior shoulder instability: a preliminary report. Orthop J Sports Med. 2013;1(2):2325967113496213.

24. Arciero RA, Parrino A, Bernhardson AS, et al. The effect of a combined glenoid and Hill-Sachs defect on glenohumeral stability: a biomechanical cadaveric study using 3-dimensional modeling of 142 patients. Am J Sports Med. 2015;43(6):1422-1429.

25. Waterman BR, Chandler PJ, Teague E, Provencher MT, Tokish JM, Pallis MP. Short-term outcomes of glenoid bone block augmentation for complex anterior shoulder instability in a high-risk population. Arthroscopy. 2016;32(9):1784-1790.

26. Schroder DT, Provencher MT, Mologne TS, Muldoon MP, Cox JS. The modified Bristow procedure for anterior shoulder instability: 26-year outcomes in Naval Academy midshipmen. Am J Sports Med. 2006;34(5):778-786.

27. Provencher MT, Frank RM, Golijanin P, et al. Distal tibia allograft glenoid reconstruction in recurrent anterior shoulder instability: clinical and radiographic outcomes. Arthroscopy. 2017;33(5):891-897.

28. Tokish JM, Fitzpatrick K, Cook JB, Mallon WJ. Arthroscopic distal clavicular autograft for treating shoulder instability with glenoid bone loss. Arthrosc Tech. 2014;3(4):e475-e481.

Author and Disclosure Information

Authors’ Disclosure Statement: Dr. Provencher reports that he receives support from Arthrex and is a consultant to JRF Ortho, patent numbers (issued): 9226743, 20150164498, 20150150594, 20110040339, and receives publishing royalties from Arthrex and SLACK. Dr. Mannava reports that he receives support from the Arthroscopy Association of North America as a board member. Dr. Tokish reports that he receives support from the Arthroscopy Association of North America, the Journal of Shoulder and Elbow Surgery, Orthopedics Today, and the Hawkins Foundation as a board member; is a paid consultant to Arthrex, Mitek, and DePuy Synthes; and is a paid presenter for Arthrex. Dr. Rogers reports no actual or potential conflict of interest in relation to this article.


Take-Home Points

  • Arthroscopic stabilization performed early results in better outcomes in patients with Bankart lesions.
  • A subcritical level of bone loss of 13.5% has been shown to have a significant effect on outcomes, in addition to the established “critical amount”.
  • Bone loss is a bipolar issue. Both sides must be considered in order to properly address shoulder instability.
  • Off-track measurement has been shown to be even more positively predictive of outcomes than glenoid bone loss assessment.
  • There are several bone loss management options including, the most common coracoid transfer, as well as distal tibial allograft and distal clavicular autograft.

Given its relatively young age, high activity level, and centralized medical care system, the US military population is ideal for studying traumatic anterior shoulder instability. There is a long history of military surgeons who have made significant contributions that have advanced our understanding of this pathology and its treatment and results. In this article, we describe the scope, treatment, and results of this pathology in the US military population.

Incidence and Pathology

At the United States Military Academy (USMA), Owens and colleagues1 studied the incidence of shoulder instability, including dislocation and subluxation, and found anterior instability events were far more common than in civilian populations. The incidence of shoulder instability was 0.08 per 1000 person-years in the general US population vs 1.69 per 1000 person-years in US military personnel. The factors associated with increased risk of shoulder instability injury in the military population were male sex, white race, junior enlisted rank, and age under 30 years. Owens and colleagues2 noted that subluxation accounted for almost 85% of the total anterior instability events. Owens and colleagues3 found the pathology in subluxation events was similar to that in full dislocations, with a soft-tissue anterior Bankart lesion and a Hill-Sachs lesion detected on magnetic resonance imaging in more than 90% of patients. In another study at the USMA, DeBerardino and colleagues4 noted that 97% of arthroscopically assessed shoulders in first-time dislocators involved complete detachment of the capsuloligamentous complex from the anterior glenoid rim and neck—a so-called Bankart lesion. Thus, in a military population, anterior instability resulting from subluxation or dislocation is a common finding that is often represented by a soft-tissue Bankart lesion and a Hill-Sachs defect.

Natural History of Traumatic Anterior Shoulder Instability in the Military

Several studies have evaluated the outcomes of nonoperative and operative treatment of shoulder instability. Although most have found better outcomes with operative intervention, Aronen and Regan5 reported good results (25% recurrence at nearly 3-year follow-up) with nonoperative treatment and adherence to a strict rehabilitation program. Most other comparative studies in this population have published contrary results. Wheeler and colleagues6 studied the natural history of anterior shoulder dislocations in a USMA cadet cohort and found recurrent instability after shoulder dislocation in 92% of cadets who had nonoperative treatment. Similarly, DeBerardino and colleagues4 found that, in the USMA, 90% of first-time traumatic anterior shoulder dislocations managed nonoperatively experienced recurrent instability. In a series of Army soldiers with shoulder instability, Bottoni and colleagues7 reported that 75% of nonoperatively managed patients had recurrent instability, and, of these, 67% progressed to surgical intervention. Nonoperative treatment for a first-time dislocation is still reasonable if a cadet or soldier needs to quickly return to functional duties. Athletes who develop shoulder instability during their playing season have been studied in a military population as well. In a multicenter study of service academy athletes with anterior instability, Dickens and colleagues8 found that, with conservative management and accelerated rehabilitation of in-season shoulder instability, 73% of athletes returned to sport by a mean of 5 days. However, the durability of this treatment should be questioned, as 64% later experienced recurrence.

Arthroscopic Stabilization of Acute Anterior Shoulder Dislocations

In an early series of cases of traumatic anterior shoulder instability in USMA cadets, Wheeler and colleagues6 found that, at 14 months, 78% of arthroscopically stabilized cases and 92% of nonoperatively treated cases were successful. Then, in the 1990s, DeBerardino and colleagues4 studied a series of young, active patients in the USMA and noted significantly better results with arthroscopic treatment, vs nonoperative treatment, at 2- to 5-year follow-up. Of the arthroscopically treated shoulders, 88% remained stable during the study and returned to preinjury activity levels, and 12% experienced recurrent instability (risk factors included 2+ sulcus sign, poor capsular labral tissue, and history of bilateral shoulder instability). In a long-term follow-up (mean, 11.7 years; range, 9.1-13.9 years) of the same cohort, Owens and colleagues9 found that 14% of patients available for follow-up had undergone revision stabilization surgery, and, of these, 21% reported experiencing subluxation events. The authors concluded that, in first-time dislocators in this active military population, acute arthroscopic Bankart repair resulted in excellent return to athletics and subjective function, and had acceptable recurrence and reoperation rates. Bottoni and colleagues,7 in a prospective, randomized evaluation of arthroscopic stabilization of acute, traumatic, first-time shoulder dislocations in the Army, noted an 89% success rate for arthroscopic treatment at an average follow-up of 36 months, with no recurrent instability. DeBerardino and colleagues10 compared West Point patients treated nonoperatively with those arthroscopically treated with staples, transglenoid sutures, or bioabsorbable anchors. Recurrence rates were 85% for nonoperative treatment, 22% for staples, 14% for transglenoid sutures, and 10% for bioabsorbable anchors.

Arthroscopic Versus Open Stabilization of Anterior Shoulder Instability

In a prospective, randomized clinical trial comparing open and arthroscopic shoulder stabilization for recurrent anterior instability in active-duty Army personnel, Bottoni and colleagues11 found comparable clinical outcomes. Stabilization surgery failed clinically in only 3 cases, 2 open and 1 arthroscopic. The authors concluded that arthroscopic stabilization can be safely performed for recurrent shoulder instability and that arthroscopic outcomes are similar to open outcomes. In a series of anterior shoulder subluxations in young athletes with Bankart lesions, Owens and colleagues12 found that open and arthroscopic stabilization performed early resulted in better outcomes, regardless of technique used. Recurrent subluxation occurred at a mean of 17 months in 3 of the 10 patients in the open group and 3 of the 9 patients in the arthroscopic group, for an overall recurrence rate of 31%. The authors concluded that, in this patient population with Bankart lesions caused by anterior subluxation events, surgery should be performed early.

Bone Lesions

Burkhart and De Beer13 first noted that bone loss has emerged as one of the most important considerations in the setting of shoulder instability in active patients. Other authors have found this to be true in military populations.14,15

The diagnosis of bone loss may include historical findings, such as increased number and ease of dislocations, as well as dislocation in lower positions of abduction. Physical examination findings may include apprehension in the midrange of motion. Advanced imaging, such as magnetic resonance arthrography, has since been validated as equivalent to 3-dimensional computed tomography (3-D CT) in determining glenoid bone loss.16 In 2007, Mologne and colleagues15 studied the amount of glenoid bone loss and the presence of fragmented bone or attritional bone loss and its effect on outcomes. They evaluated 21 patients who had arthroscopic treatment for anterior instability with anteroinferior glenoid bone loss between 20% and 30%. Average follow-up was 34 months. All patients received 3 or 4 anterior anchors. No patient with a bone fragment incorporated into the repair experienced recurrence or subluxation, whereas 30% of patients with attritional bone loss had recurrent instability.15

 

 

Classifying Bone Loss and Recognizing Its Effects

Burkhart and De Beer13 helped define the role and significance of bone loss in the setting of shoulder instability. They defined significant bone loss as an engaging Hill-Sachs lesion of the humerus in an abducted and externally rotated position or an “inverted pear” lesion of the glenoid. Overall analysis revealed recurrence in 4% of cases without significant bone loss and 65% of cases with significant bone loss. In a subanalysis of contact-sport athletes in the setting of bone loss, the failure rate increased to 89%, from 6.5%. Aiding in the quantitative assessment of glenoid bone loss, Itoi and colleagues17 showed that 21% glenoid bone loss resulted in instability that would not be corrected by a soft-tissue procedure alone. Bone loss of 20% to 25% has since been considered a “critical amount,” above which an arthroscopic Bankart has been questioned. More recently, several authors have shown that even less bone loss can have a significant effect on outcomes. Shaha and colleagues18 established that a subcritical level of bone loss (13.5%) on the anteroinferior glenoid resulted in clinical failure (as determined with the Western Ontario Shoulder Instability Index) even in cases in which frank recurrence or subluxation was avoided. It is thought that, in recurrent instability, glenoid bone loss incident rate is as high as 90%, and the corresponding percentage of patients with Hill-Sachs lesions is almost 100%.19,20 Thus, it is increasingly understood that bone loss is a bipolar issue and that both sides must be considered in order to properly address shoulder instability in this setting. In 2007, Yamamoto and colleagues21 introduced the glenoid track, a method for predicting whether a Hill-Sachs lesion will engage. Di Giacomo and colleagues22 refined the track concept to quantitatively determine which lesions will engage in the setting of both glenoid and humeral bone loss. Metzger and colleagues,23 confirming the track concept arthroscopically, found that manipulation with anesthesia and arthroscopic visualization was well predicted by preoperative track measurements, and thus these measurements can be a good guide for surgical management (Figures 1A, 1B).

Figure 1.
At Tripler Army Medical Center, Shaha and colleagues14 clinically validated the concept in a series of arthroscopic stabilization cases. They found that the recurrence rate was 8% for “on-track” patients’ and 75% for “off-track” patients treated with the same intervention. In addition, positive predictive value was 75% for the off-track measurement and 44% for the glenoid bone loss assessment alone. The authors recommended the preoperative off-track measurement over the glenoid bone loss assessment.
Figure 2.
In an analysis of computer modeling of 3-D CT of patients who underwent Bankart repair, Arciero and colleagues24 found that bipolar bone defects (glenoid bone loss combined with humeral head Hill-Sachs lesion) had an additive and combined negative effect on soft-tissue Bankart repair. In particular, soft-tissue Bankart repair could be compromised by a 2-mm glenoid defect combined with a medium-size Hill-Sachs lesion or, conversely, by a 4-mm glenoid defect combined with a small Hill-Sachs lesion (Figures 2A, 2B).

Strategies for Addressing Bone Loss in Anterior Shoulder Instability

Several approaches for managing bone loss in shoulder instability have been described—the most common being coracoid transfer (Latarjet procedure). Waterman and colleagues25 recently studied the effects of coracoid transfer, distal tibial allograft, and iliac crest augmentation on anterior shoulder instability in US military patients treated between 2006 and 2012. Of 64 patients who underwent a bone block procedure, 16 (25%) had a complication during short-term follow-up. Complications included neurologic injury, pain, infection, hardware failure, and recurrent instability.

Figure 3.
After undergoing 1 of the 3 procedures, 33% of patients had persistent pain, and 23% had recurrent instability. In an older, long-term study of Naval Academy midshipmen, patients who underwent a modified Bristow procedure between 1975 and 1979 demonstrated 70% good to excellent results at an average follow-up of 26.4 years.26
Figure 4.
The recurrent instability rate was 15%, with 9% of the cohort dislocating again and 6% of the cohort experiencing recurrent subluxation. Direct bone grafting to the glenoid has also been described. Provencher and colleagues27 introduced use of distal tibial allograft in addressing bony deficiency, and clinical results were promising (Figures 3A-3C). Tokish and colleagues28 introduced use of distal clavicular autograft in addressing these deficiencies but did not report clinical outcomes (Figures 4A-4C).

Conclusion

Traumatic anterior shoulder instability is a common pathology that continues to significantly challenge the readiness of the US military. Military surgeon-researchers have a long history of investigating approaches to the treatment of this pathology—applying good science to a large controlled population, using a single medical record, and demonstrating a commitment to return service members to the ready defense of the nation.

Am J Orthop. 2017;46(4):184-189. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

Take-Home Points

  • Arthroscopic stabilization performed early results in better outcomes in patients with Bankart lesions.
  • A subcritical level of bone loss of 13.5% has been shown to have a significant effect on outcomes, in addition to the established “critical amount”.
  • Bone loss is a bipolar issue. Both sides must be considered in order to properly address shoulder instability.
  • Off-track measurement has been shown to be even more positively predictive of outcomes than glenoid bone loss assessment.
  • There are several bone loss management options including, the most common coracoid transfer, as well as distal tibial allograft and distal clavicular autograft.

Given its relatively young age, high activity level, and centralized medical care system, the US military population is ideal for studying traumatic anterior shoulder instability. There is a long history of military surgeons who have made significant contributions that have advanced our understanding of this pathology and its treatment and results. In this article, we describe the scope, treatment, and results of this pathology in the US military population.

Incidence and Pathology

At the United States Military Academy (USMA), Owens and colleagues1 studied the incidence of shoulder instability, including dislocation and subluxation, and found anterior instability events were far more common than in civilian populations. The incidence of shoulder instability was 0.08 per 1000 person-years in the general US population vs 1.69 per 1000 person-years in US military personnel. The factors associated with increased risk of shoulder instability injury in the military population were male sex, white race, junior enlisted rank, and age under 30 years. Owens and colleagues2 noted that subluxation accounted for almost 85% of the total anterior instability events. Owens and colleagues3 found the pathology in subluxation events was similar to that in full dislocations, with a soft-tissue anterior Bankart lesion and a Hill-Sachs lesion detected on magnetic resonance imaging in more than 90% of patients. In another study at the USMA, DeBerardino and colleagues4 noted that 97% of arthroscopically assessed shoulders in first-time dislocators involved complete detachment of the capsuloligamentous complex from the anterior glenoid rim and neck—a so-called Bankart lesion. Thus, in a military population, anterior instability resulting from subluxation or dislocation is a common finding that is often represented by a soft-tissue Bankart lesion and a Hill-Sachs defect.

Natural History of Traumatic Anterior Shoulder Instability in the Military

Several studies have evaluated the outcomes of nonoperative and operative treatment of shoulder instability. Although most have found better outcomes with operative intervention, Aronen and Regan5 reported good results (25% recurrence at nearly 3-year follow-up) with nonoperative treatment and adherence to a strict rehabilitation program. Most other comparative studies in this population have published contrary results. Wheeler and colleagues6 studied the natural history of anterior shoulder dislocations in a USMA cadet cohort and found recurrent instability after shoulder dislocation in 92% of cadets who had nonoperative treatment. Similarly, DeBerardino and colleagues4 found that, in the USMA, 90% of first-time traumatic anterior shoulder dislocations managed nonoperatively experienced recurrent instability. In a series of Army soldiers with shoulder instability, Bottoni and colleagues7 reported that 75% of nonoperatively managed patients had recurrent instability, and, of these, 67% progressed to surgical intervention. Nonoperative treatment for a first-time dislocation is still reasonable if a cadet or soldier needs to quickly return to functional duties. Athletes who develop shoulder instability during their playing season have been studied in a military population as well. In a multicenter study of service academy athletes with anterior instability, Dickens and colleagues8 found that, with conservative management and accelerated rehabilitation of in-season shoulder instability, 73% of athletes returned to sport by a mean of 5 days. However, the durability of this treatment should be questioned, as 64% later experienced recurrence.

Arthroscopic Stabilization of Acute Anterior Shoulder Dislocations

In an early series of cases of traumatic anterior shoulder instability in USMA cadets, Wheeler and colleagues6 found that, at 14 months, 78% of arthroscopically stabilized cases and 92% of nonoperatively treated cases were successful. Then, in the 1990s, DeBerardino and colleagues4 studied a series of young, active patients in the USMA and noted significantly better results with arthroscopic treatment, vs nonoperative treatment, at 2- to 5-year follow-up. Of the arthroscopically treated shoulders, 88% remained stable during the study and returned to preinjury activity levels, and 12% experienced recurrent instability (risk factors included 2+ sulcus sign, poor capsular labral tissue, and history of bilateral shoulder instability). In a long-term follow-up (mean, 11.7 years; range, 9.1-13.9 years) of the same cohort, Owens and colleagues9 found that 14% of patients available for follow-up had undergone revision stabilization surgery, and, of these, 21% reported experiencing subluxation events. The authors concluded that, in first-time dislocators in this active military population, acute arthroscopic Bankart repair resulted in excellent return to athletics and subjective function, and had acceptable recurrence and reoperation rates. Bottoni and colleagues,7 in a prospective, randomized evaluation of arthroscopic stabilization of acute, traumatic, first-time shoulder dislocations in the Army, noted an 89% success rate for arthroscopic treatment at an average follow-up of 36 months, with no recurrent instability. DeBerardino and colleagues10 compared West Point patients treated nonoperatively with those arthroscopically treated with staples, transglenoid sutures, or bioabsorbable anchors. Recurrence rates were 85% for nonoperative treatment, 22% for staples, 14% for transglenoid sutures, and 10% for bioabsorbable anchors.

Arthroscopic Versus Open Stabilization of Anterior Shoulder Instability

In a prospective, randomized clinical trial comparing open and arthroscopic shoulder stabilization for recurrent anterior instability in active-duty Army personnel, Bottoni and colleagues11 found comparable clinical outcomes. Stabilization surgery failed clinically in only 3 cases, 2 open and 1 arthroscopic. The authors concluded that arthroscopic stabilization can be safely performed for recurrent shoulder instability and that arthroscopic outcomes are similar to open outcomes. In a series of anterior shoulder subluxations in young athletes with Bankart lesions, Owens and colleagues12 found that open and arthroscopic stabilization performed early resulted in better outcomes, regardless of technique used. Recurrent subluxation occurred at a mean of 17 months in 3 of the 10 patients in the open group and 3 of the 9 patients in the arthroscopic group, for an overall recurrence rate of 31%. The authors concluded that, in this patient population with Bankart lesions caused by anterior subluxation events, surgery should be performed early.

Bone Lesions

Burkhart and De Beer13 first noted that bone loss has emerged as one of the most important considerations in the setting of shoulder instability in active patients. Other authors have found this to be true in military populations.14,15

The diagnosis of bone loss may include historical findings, such as increased number and ease of dislocations, as well as dislocation in lower positions of abduction. Physical examination findings may include apprehension in the midrange of motion. Advanced imaging, such as magnetic resonance arthrography, has since been validated as equivalent to 3-dimensional computed tomography (3-D CT) in determining glenoid bone loss.16 In 2007, Mologne and colleagues15 studied the amount of glenoid bone loss and the presence of fragmented bone or attritional bone loss and its effect on outcomes. They evaluated 21 patients who had arthroscopic treatment for anterior instability with anteroinferior glenoid bone loss between 20% and 30%. Average follow-up was 34 months. All patients received 3 or 4 anterior anchors. No patient with a bone fragment incorporated into the repair experienced recurrence or subluxation, whereas 30% of patients with attritional bone loss had recurrent instability.15

 

 

Classifying Bone Loss and Recognizing Its Effects

Burkhart and De Beer13 helped define the role and significance of bone loss in the setting of shoulder instability. They defined significant bone loss as an engaging Hill-Sachs lesion of the humerus in an abducted and externally rotated position or an “inverted pear” lesion of the glenoid. Overall analysis revealed recurrence in 4% of cases without significant bone loss and 65% of cases with significant bone loss. In a subanalysis of contact-sport athletes in the setting of bone loss, the failure rate increased to 89%, from 6.5%. Aiding in the quantitative assessment of glenoid bone loss, Itoi and colleagues17 showed that 21% glenoid bone loss resulted in instability that would not be corrected by a soft-tissue procedure alone. Bone loss of 20% to 25% has since been considered a “critical amount,” above which an arthroscopic Bankart has been questioned. More recently, several authors have shown that even less bone loss can have a significant effect on outcomes. Shaha and colleagues18 established that a subcritical level of bone loss (13.5%) on the anteroinferior glenoid resulted in clinical failure (as determined with the Western Ontario Shoulder Instability Index) even in cases in which frank recurrence or subluxation was avoided. It is thought that, in recurrent instability, glenoid bone loss incident rate is as high as 90%, and the corresponding percentage of patients with Hill-Sachs lesions is almost 100%.19,20 Thus, it is increasingly understood that bone loss is a bipolar issue and that both sides must be considered in order to properly address shoulder instability in this setting. In 2007, Yamamoto and colleagues21 introduced the glenoid track, a method for predicting whether a Hill-Sachs lesion will engage. Di Giacomo and colleagues22 refined the track concept to quantitatively determine which lesions will engage in the setting of both glenoid and humeral bone loss. Metzger and colleagues,23 confirming the track concept arthroscopically, found that manipulation with anesthesia and arthroscopic visualization was well predicted by preoperative track measurements, and thus these measurements can be a good guide for surgical management (Figures 1A, 1B).

Figure 1.
At Tripler Army Medical Center, Shaha and colleagues14 clinically validated the concept in a series of arthroscopic stabilization cases. They found that the recurrence rate was 8% for “on-track” patients’ and 75% for “off-track” patients treated with the same intervention. In addition, positive predictive value was 75% for the off-track measurement and 44% for the glenoid bone loss assessment alone. The authors recommended the preoperative off-track measurement over the glenoid bone loss assessment.
Figure 2.
In an analysis of computer modeling of 3-D CT of patients who underwent Bankart repair, Arciero and colleagues24 found that bipolar bone defects (glenoid bone loss combined with humeral head Hill-Sachs lesion) had an additive and combined negative effect on soft-tissue Bankart repair. In particular, soft-tissue Bankart repair could be compromised by a 2-mm glenoid defect combined with a medium-size Hill-Sachs lesion or, conversely, by a 4-mm glenoid defect combined with a small Hill-Sachs lesion (Figures 2A, 2B).

Strategies for Addressing Bone Loss in Anterior Shoulder Instability

Several approaches for managing bone loss in shoulder instability have been described—the most common being coracoid transfer (Latarjet procedure). Waterman and colleagues25 recently studied the effects of coracoid transfer, distal tibial allograft, and iliac crest augmentation on anterior shoulder instability in US military patients treated between 2006 and 2012. Of 64 patients who underwent a bone block procedure, 16 (25%) had a complication during short-term follow-up. Complications included neurologic injury, pain, infection, hardware failure, and recurrent instability.

Figure 3.
After undergoing 1 of the 3 procedures, 33% of patients had persistent pain, and 23% had recurrent instability. In an older, long-term study of Naval Academy midshipmen, patients who underwent a modified Bristow procedure between 1975 and 1979 demonstrated 70% good to excellent results at an average follow-up of 26.4 years.26
Figure 4.
The recurrent instability rate was 15%, with 9% of the cohort dislocating again and 6% of the cohort experiencing recurrent subluxation. Direct bone grafting to the glenoid has also been described. Provencher and colleagues27 introduced use of distal tibial allograft in addressing bony deficiency, and clinical results were promising (Figures 3A-3C). Tokish and colleagues28 introduced use of distal clavicular autograft in addressing these deficiencies but did not report clinical outcomes (Figures 4A-4C).

Conclusion

Traumatic anterior shoulder instability is a common pathology that continues to significantly challenge the readiness of the US military. Military surgeon-researchers have a long history of investigating approaches to the treatment of this pathology—applying good science to a large controlled population, using a single medical record, and demonstrating a commitment to return service members to the ready defense of the nation.

Am J Orthop. 2017;46(4):184-189. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Owens BD, Dawson L, Burks R, Cameron KL. Incidence of shoulder dislocation in the United States military: demographic considerations from a high-risk population. J Bone Joint Surg Am. 2009;91(4):791-796.

2. Owens BD, Duffey ML, Nelson BJ, DeBerardino TM, Taylor DC, Mountcastle SB. The incidence and characteristics of shoulder instability at the United States Military Academy. Am J Sports Med. 2007;35(7):1168-1173.

3. Owens BD, Nelson BJ, Duffey ML, et al. Pathoanatomy of first-time, traumatic, anterior glenohumeral subluxation events. J Bone Joint Surg Am. 2010;92(7):1605-1611.

4. DeBerardino TM, Arciero RA, Taylor DC, Uhorchak JM. Prospective evaluation of arthroscopic stabilization of acute, initial anterior shoulder dislocations in young athletes. Two- to five-year follow-up. Am J Sports Med. 2001;29(5):586-592.

5. Aronen JG, Regan K. Decreasing the incidence of recurrence of first time anterior shoulder dislocations with rehabilitation. Am J Sports Med. 1984;12(4):283-291.

6. Wheeler JH, Ryan JB, Arciero RA, Molinari RN. Arthroscopic versus nonoperative treatment of acute shoulder dislocations in young athletes. Arthroscopy. 1989;5(3):213-217.

7. Bottoni CR, Wilckens JH, DeBerardino TM, et al. A prospective, randomized evaluation of arthroscopic stabilization versus nonoperative treatment in patients with acute, traumatic, first-time shoulder dislocations. Am J Sports Med. 2002;30(4):576-580.

8. Dickens JF, Owens BD, Cameron KL, et al. Return to play and recurrent instability after in-season anterior shoulder instability: a prospective multicenter study. Am J Sports Med. 2014;42(12):2842-2850.

9. Owens BD, DeBerardino TM, Nelson BJ, et al. Long-term follow-up of acute arthroscopic Bankart repair for initial anterior shoulder dislocations in young athletes. Am J Sports Med. 2009;37(4):669-673.

10. DeBerardino TM, Arciero RA, Taylor DC. Arthroscopic stabilization of acute initial anterior shoulder dislocation: the West Point experience. J South Orthop Assoc. 1996;5(4):263-271.

11. Bottoni CR, Smith EL, Berkowitz MJ, Towle RB, Moore JH. Arthroscopic versus open shoulder stabilization for recurrent anterior instability: a prospective randomized clinical trial. Am J Sports Med. 2006;34(11):1730-1737.

12. Owens BD, Cameron KL, Peck KY, et al. Arthroscopic versus open stabilization for anterior shoulder subluxations. Orthop J Sports Med. 2015;3(1):2325967115571084.

13. Burkhart SS, De Beer JF. Traumatic glenohumeral bone defects and their relationship to failure of arthroscopic Bankart repairs: significance of the inverted-pear glenoid and the humeral engaging Hill-Sachs lesion. Arthroscopy. 2000;16(7):677-694.14. Shaha JS, Cook JB, Rowles DJ, Bottoni CR, Shaha SH, Tokish JM. Clinical validation of the glenoid track concept in anterior glenohumeral instability. J Bone Joint Surg Am. 2016;98(22):1918-1923.

15. Mologne TS, Provencher MT, Menzel KA, Vachon TA, Dewing CB. Arthroscopic stabilization in patients with an inverted pear glenoid: results in patients with bone loss of the anterior glenoid. Am J Sports Med. 2007;35(8):1276-1283.

16. Markenstein JE, Jaspars KC, van der Hulst VP, Willems WJ. The quantification of glenoid bone loss in anterior shoulder instability; MR-arthro compared to 3D-CT. Skeletal Radiol. 2014;43(4):475-483.

17. Itoi E, Lee SB, Berglund LJ, Berge LL, An KN. The effect of a glenoid defect on anteroinferior stability of the shoulder after Bankart repair: a cadaveric study. J Bone Joint Surg Am. 2000;82(1):35-46.

18. Shaha JS, Cook JB, Song DJ, et al. Redefining “critical” bone loss in shoulder instability: functional outcomes worsen with “subcritical” bone loss. Am J Sports Med. 2015;43(7):1719-1725.

19. Piasecki DP, Verma NN, Romeo AA, Levine WN, Bach BR Jr, Provencher MT. Glenoid bone deficiency in recurrent anterior shoulder instability: diagnosis and management. J Am Acad Orthop Surg. 2009;17(8):482-493.

20. Provencher MT, Frank RM, Leclere LE, et al. The Hill-Sachs lesion: diagnosis, classification, and management. J Am Acad Orthop Surg. 2012;20(4):242-252.

21. Yamamoto N, Itoi E, Abe H, et al. Contact between the glenoid and the humeral head in abduction, external rotation, and horizontal extension: a new concept of glenoid track. J Shoulder Elbow Surg. 2007;16(5):649-656.

22. Di Giacomo G, Itoi E, Burkhart SS. Evolving concept of bipolar bone loss and the Hill-Sachs lesion: from “engaging/non-engaging” lesion to “on-track/off-track” lesion. Arthroscopy. 2014;30(1):90-98.

23. Metzger PD, Barlow B, Leonardelli D, Peace W, Solomon DJ, Provencher MT. Clinical application of the “glenoid track” concept for defining humeral head engagement in anterior shoulder instability: a preliminary report. Orthop J Sports Med. 2013;1(2):2325967113496213.

24. Arciero RA, Parrino A, Bernhardson AS, et al. The effect of a combined glenoid and Hill-Sachs defect on glenohumeral stability: a biomechanical cadaveric study using 3-dimensional modeling of 142 patients. Am J Sports Med. 2015;43(6):1422-1429.

25. Waterman BR, Chandler PJ, Teague E, Provencher MT, Tokish JM, Pallis MP. Short-term outcomes of glenoid bone block augmentation for complex anterior shoulder instability in a high-risk population. Arthroscopy. 2016;32(9):1784-1790.

26. Schroder DT, Provencher MT, Mologne TS, Muldoon MP, Cox JS. The modified Bristow procedure for anterior shoulder instability: 26-year outcomes in Naval Academy midshipmen. Am J Sports Med. 2006;34(5):778-786.

27. Provencher MT, Frank RM, Golijanin P, et al. Distal tibia allograft glenoid reconstruction in recurrent anterior shoulder instability: clinical and radiographic outcomes. Arthroscopy. 2017;33(5):891-897.

28. Tokish JM, Fitzpatrick K, Cook JB, Mallon WJ. Arthroscopic distal clavicular autograft for treating shoulder instability with glenoid bone loss. Arthrosc Tech. 2014;3(4):e475-e481.

References

1. Owens BD, Dawson L, Burks R, Cameron KL. Incidence of shoulder dislocation in the United States military: demographic considerations from a high-risk population. J Bone Joint Surg Am. 2009;91(4):791-796.

2. Owens BD, Duffey ML, Nelson BJ, DeBerardino TM, Taylor DC, Mountcastle SB. The incidence and characteristics of shoulder instability at the United States Military Academy. Am J Sports Med. 2007;35(7):1168-1173.

3. Owens BD, Nelson BJ, Duffey ML, et al. Pathoanatomy of first-time, traumatic, anterior glenohumeral subluxation events. J Bone Joint Surg Am. 2010;92(7):1605-1611.

4. DeBerardino TM, Arciero RA, Taylor DC, Uhorchak JM. Prospective evaluation of arthroscopic stabilization of acute, initial anterior shoulder dislocations in young athletes. Two- to five-year follow-up. Am J Sports Med. 2001;29(5):586-592.

5. Aronen JG, Regan K. Decreasing the incidence of recurrence of first time anterior shoulder dislocations with rehabilitation. Am J Sports Med. 1984;12(4):283-291.

6. Wheeler JH, Ryan JB, Arciero RA, Molinari RN. Arthroscopic versus nonoperative treatment of acute shoulder dislocations in young athletes. Arthroscopy. 1989;5(3):213-217.

7. Bottoni CR, Wilckens JH, DeBerardino TM, et al. A prospective, randomized evaluation of arthroscopic stabilization versus nonoperative treatment in patients with acute, traumatic, first-time shoulder dislocations. Am J Sports Med. 2002;30(4):576-580.

8. Dickens JF, Owens BD, Cameron KL, et al. Return to play and recurrent instability after in-season anterior shoulder instability: a prospective multicenter study. Am J Sports Med. 2014;42(12):2842-2850.

9. Owens BD, DeBerardino TM, Nelson BJ, et al. Long-term follow-up of acute arthroscopic Bankart repair for initial anterior shoulder dislocations in young athletes. Am J Sports Med. 2009;37(4):669-673.

10. DeBerardino TM, Arciero RA, Taylor DC. Arthroscopic stabilization of acute initial anterior shoulder dislocation: the West Point experience. J South Orthop Assoc. 1996;5(4):263-271.

11. Bottoni CR, Smith EL, Berkowitz MJ, Towle RB, Moore JH. Arthroscopic versus open shoulder stabilization for recurrent anterior instability: a prospective randomized clinical trial. Am J Sports Med. 2006;34(11):1730-1737.

12. Owens BD, Cameron KL, Peck KY, et al. Arthroscopic versus open stabilization for anterior shoulder subluxations. Orthop J Sports Med. 2015;3(1):2325967115571084.

13. Burkhart SS, De Beer JF. Traumatic glenohumeral bone defects and their relationship to failure of arthroscopic Bankart repairs: significance of the inverted-pear glenoid and the humeral engaging Hill-Sachs lesion. Arthroscopy. 2000;16(7):677-694.14. Shaha JS, Cook JB, Rowles DJ, Bottoni CR, Shaha SH, Tokish JM. Clinical validation of the glenoid track concept in anterior glenohumeral instability. J Bone Joint Surg Am. 2016;98(22):1918-1923.

15. Mologne TS, Provencher MT, Menzel KA, Vachon TA, Dewing CB. Arthroscopic stabilization in patients with an inverted pear glenoid: results in patients with bone loss of the anterior glenoid. Am J Sports Med. 2007;35(8):1276-1283.

16. Markenstein JE, Jaspars KC, van der Hulst VP, Willems WJ. The quantification of glenoid bone loss in anterior shoulder instability; MR-arthro compared to 3D-CT. Skeletal Radiol. 2014;43(4):475-483.

17. Itoi E, Lee SB, Berglund LJ, Berge LL, An KN. The effect of a glenoid defect on anteroinferior stability of the shoulder after Bankart repair: a cadaveric study. J Bone Joint Surg Am. 2000;82(1):35-46.

18. Shaha JS, Cook JB, Song DJ, et al. Redefining “critical” bone loss in shoulder instability: functional outcomes worsen with “subcritical” bone loss. Am J Sports Med. 2015;43(7):1719-1725.

19. Piasecki DP, Verma NN, Romeo AA, Levine WN, Bach BR Jr, Provencher MT. Glenoid bone deficiency in recurrent anterior shoulder instability: diagnosis and management. J Am Acad Orthop Surg. 2009;17(8):482-493.

20. Provencher MT, Frank RM, Leclere LE, et al. The Hill-Sachs lesion: diagnosis, classification, and management. J Am Acad Orthop Surg. 2012;20(4):242-252.

21. Yamamoto N, Itoi E, Abe H, et al. Contact between the glenoid and the humeral head in abduction, external rotation, and horizontal extension: a new concept of glenoid track. J Shoulder Elbow Surg. 2007;16(5):649-656.

22. Di Giacomo G, Itoi E, Burkhart SS. Evolving concept of bipolar bone loss and the Hill-Sachs lesion: from “engaging/non-engaging” lesion to “on-track/off-track” lesion. Arthroscopy. 2014;30(1):90-98.

23. Metzger PD, Barlow B, Leonardelli D, Peace W, Solomon DJ, Provencher MT. Clinical application of the “glenoid track” concept for defining humeral head engagement in anterior shoulder instability: a preliminary report. Orthop J Sports Med. 2013;1(2):2325967113496213.

24. Arciero RA, Parrino A, Bernhardson AS, et al. The effect of a combined glenoid and Hill-Sachs defect on glenohumeral stability: a biomechanical cadaveric study using 3-dimensional modeling of 142 patients. Am J Sports Med. 2015;43(6):1422-1429.

25. Waterman BR, Chandler PJ, Teague E, Provencher MT, Tokish JM, Pallis MP. Short-term outcomes of glenoid bone block augmentation for complex anterior shoulder instability in a high-risk population. Arthroscopy. 2016;32(9):1784-1790.

26. Schroder DT, Provencher MT, Mologne TS, Muldoon MP, Cox JS. The modified Bristow procedure for anterior shoulder instability: 26-year outcomes in Naval Academy midshipmen. Am J Sports Med. 2006;34(5):778-786.

27. Provencher MT, Frank RM, Golijanin P, et al. Distal tibia allograft glenoid reconstruction in recurrent anterior shoulder instability: clinical and radiographic outcomes. Arthroscopy. 2017;33(5):891-897.

28. Tokish JM, Fitzpatrick K, Cook JB, Mallon WJ. Arthroscopic distal clavicular autograft for treating shoulder instability with glenoid bone loss. Arthrosc Tech. 2014;3(4):e475-e481.


Bone Stress Injuries in the Military: Diagnosis, Management, and Prevention


Take-Home Points

  • Stress injuries, specifically of the lower extremity, are very common in new military trainees.
  • Stress injury can range from benign periosteal reaction to displaced fracture.
  • Stress injury should be treated on a case-by-case basis, depending on the severity of injury, the location of the injury, and the likelihood of healing with nonoperative management.
  • Modifiable risk factors such as nutritional status, training regimen, and even footwear should be investigated to determine potential causes of injury.
  • Prevention is a crucial part of the treatment of these injuries, and early intervention such as careful pre-enrollment physicals and vitamin supplementation can be essential in lowering injury rates.

Bone stress injuries, which are common in military recruits, present in weight-bearing (WB) areas as indolent pain caused by repetitive stress and microtrauma. They were first reported in the metatarsals of Prussian soldiers in 1855.1 Today, stress injuries are increasingly common. One study estimated they account for 10% of patients seen by sports medicine practitioners.2 This injury most commonly affects military members, endurance athletes, and dancers.3-5 Specifically, the incidence of stress fractures in military members has been reported to range from 0.8% to 6.9% for men and from 3.4% to 21.0% for women.4 Because of repetitive vigorous lower extremity loading, stress fractures typically occur in the pelvis, femoral neck, tibial shaft, and metatarsals. Delayed diagnosis and the subsequent duration of treatment required for adequate healing can result in significant morbidity. In a 2009 to 2012 study of US military members, Waterman and colleagues6 found an incidence rate of 5.69 stress fractures per 1000 person-years. Fractures most frequently involved the tibia/fibula (2.26/1000), followed by the metatarsals (0.92/1000) and the femoral neck (0.49/1000).6 In addition, these injuries were most commonly encountered in new recruits, who were less accustomed to the high-volume, high-intensity training required during basic training.4,7 Enlisted junior service members have been reported to account for 77.5% of all stress fractures.6 Age under 20 years or over 40 years and white race have also been found to be risk factors for stress injury.6

The pathogenesis of stress injury is controversial. Stanitski and colleagues8 theorized that multiple submaximal mechanical insults create cumulative stress greater than bone capacity, eventually leading to fracture. Johnson9 conducted a biopsy study and postulated that an accelerated remodeling phase was responsible, whereas Friedenberg10 argued that stress injuries are a form of reduced healing, not an attempt to increase healing, caused by the absence of callus formation in the disease process.

Various other nonmodifiable and modifiable risk factors predispose military service members to stress injury. Nonmodifiable risk factors include sex, bone geometry, limb alignment, race, age, and anatomy. Lower extremity movement biomechanics resulting from dynamic limb alignment during activity may be important. Cameron and colleagues11 examined 1843 patients and found that those with knees in >5° of valgus or >5° of external rotation had higher injury rates. Although variables such as sex and limb alignment cannot be changed, proper identification of modifiable risk factors can assist with injury prevention, and nonmodifiable risk factors can help clinicians and researchers target injury prevention interventions to patients at highest risk.

Metabolic, hormonal, and nutritional status is crucial to overall bone health. Multiple studies have found that low body mass index (BMI) is a significant risk factor for stress fracture.7,12,13 Although low BMI is a concern, patients with abnormally high BMI may also be at increased risk for bone stress injury. In a recently released consensus statement on relative energy deficiency in sport (RED-S), the International Olympic Committee addressed the complex interplay of impairments in physiologic function—including metabolic rate, menstrual function, bone health, immunity, protein synthesis, and cardiovascular health—caused by relative energy deficiency.14 The committee stated that the cause of this syndrome is energy deficiency relative to the balance between dietary energy intake and energy expenditure required for health and activities of daily living, growth, and sporting activities. This finding reveals that conditions such as stress injury often may represent a much broader systemic deficit that may be influenced by a patient’s overall physiologic imbalance.

Diagnosis

History and Physical Examination

The onset of stress reaction typically is insidious, with the classic presentation being a new military recruit who experiences a sudden increase in pain during physical activity.15 Pain typically is present only during activity at first and is relieved with rest, but with disease progression it evolves to pain at rest. It is crucial that the physician elicit the patient’s history of training and physical activity. Hsu and colleagues7 reported an increased prevalence of overweight civilian recruits, indicating that more new recruits have limited experience with the repetitive physical activity encountered in basic training. Stress injury should be suspected in the setting of worsening, indolent lower extremity pain that has been present for several days, especially in the higher-risk patient populations mentioned. Diet should be assessed, with specific attention given to the intake of fruits, vegetables, and foods high in vitamin D and calcium and, most important, to the energy balance between intake and output.16 Special attention should also be given to female patients, who may experience the female athlete triad, a spectrum of low energy availability, menstrual dysfunction, and impaired bone turnover (a high amount of resorption relative to formation); these patients sustain a major insult to the homeostatic balance of the hormones that maintain bone health. A key message of the RED-S consensus statement,14 however, is that such metabolic derangements do not affect only female patients. Beck and colleagues17 found that women with disrupted menstrual cycles are 2 to 4 times more likely to sustain a stress fracture than women without disrupted menstrual cycles, making this abnormality an important part of the history.

Examination should begin with careful evaluation of limb alignment and specific attention given to varus or valgus alignment of the knees.11 The feet should also be inspected, as pes planus or cavus foot may increase the risk of stress fracture.18 Identification of the area of maximal tenderness is important. The area in question may also be erythematous or warm secondary to the inflammatory response associated with attempted fracture healing. In chronic fractures in superficial areas such as the metatarsals, callus may be palpable. Although there are few specific tests for stress injury, pain may be reproducible with deep palpation and WB.

Figure 1.
If a femoral fracture is suspected, the fulcrum test can be performed by applying downward pressure on the patient’s knee while levering the thigh over the examiner’s opposite arm or thigh (Figure 1).19 Patients with sacral stress fractures may have pain when standing or hopping on the affected side (positive flamingo test).20

Laboratory Testing

When the injury is thought to have a nutritional or metabolic cause, particularly in a low-weight or underweight patient, a laboratory workup should be obtained. All patients should undergo measurement of 25-hydroxyvitamin D, a complete blood cell count, and a basic chemistry panel that includes calcium and thyroid-stimulating hormone levels. Although not necessary for diagnosis, phosphate, parathyroid hormone, albumin, and prealbumin levels should also be considered. Female patients should undergo testing of follicle-stimulating hormone, luteinizing hormone, estradiol, and testosterone and should have a urine pregnancy test. In patients with signs of cortisol excess, a dexamethasone suppression test can be administered.21 In males, low testosterone is a documented risk factor for stress injury.22
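
The workup described above can be restated as a simple checklist; this is an illustrative summary of the tests listed in this section, with the grouping and variable names chosen for readability rather than drawn from a formal protocol.

    # Baseline studies suggested for all patients with a suspected metabolic contribution.
    baseline_labs = [
        "25-hydroxyvitamin D",
        "complete blood cell count",
        "basic chemistry panel (including calcium)",
        "thyroid-stimulating hormone",
    ]

    # Additional tests to consider, keyed by the clinical situation.
    conditional_labs = {
        "broader metabolic assessment": ["phosphate", "parathyroid hormone", "albumin", "prealbumin"],
        "female patients": ["FSH", "LH", "estradiol", "testosterone", "urine pregnancy test"],
        "signs of cortisol excess": ["dexamethasone suppression test"],
    }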

Imaging

Given their low cost and availability, plain radiographs typically are used for initial examination of a suspected stress injury. However, they often lack sensitivity, particularly in the early stages of stress fracture development (Figure 2).

Figure 2.
Although a fracture line or callus formation is occasionally present, findings may be subtler. Images should be inspected for blunting of cortical bone and periosteal reaction, which should be correlated with the site of maximal tenderness.11 When clinical suspicion based on history and physical examination is high but radiographs are negative, magnetic resonance imaging (MRI) or bone scan can be useful.23 MRI is the most accurate imaging modality, with sensitivity ranging from 86% to 100% and specificity as high as 100%.2,24,25 On MRI, stress fractures typically appear as areas of increased signal consistent with marrow edema. Arendt and Griffiths24 proposed an MRI-based grading system for stress fractures, with grades 1 and 2 representing low-grade injuries and grades 3 and 4 representing high-grade injuries. Computed tomography (CT) also has a role in diagnosis and may be better than MRI for imaging stress fractures of the pelvis and sacrum.2 In a study of tibial stress fractures, Gaeta and colleagues26 found that MRI was 88% sensitive and 100% specific with a positive predictive value of 100%, whereas CT was 42% sensitive and 100% specific with a positive predictive value of 100%. They concluded that MRI was superior to CT in the diagnosis of tibial stress fractures.
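
To clarify how sensitivity, specificity, and positive predictive value relate in such comparisons, the following is a small Python sketch computed from a 2x2 table; the counts are invented for illustration and are not the data reported by Gaeta and colleagues.

    def test_metrics(tp, fp, tn, fn):
        """Sensitivity, specificity, and positive predictive value from a 2x2 table."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        return sensitivity, specificity, ppv

    # Hypothetical sample: 50 true stress fractures and 50 unaffected limbs.
    # A modality that misses 6 fractures but produces no false positives yields
    # 88% sensitivity, 100% specificity, and 100% PPV, which is why a test can
    # report a perfect predictive value despite limited sensitivity.
    sens, spec, ppv = test_metrics(tp=44, fp=0, tn=50, fn=6)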

Management

Management of bone stress injury depends on many factors, including symptom duration, fracture location and severity, and risk of progression or nonunion (Table).13

Table.
Patients thought to have an underlying metabolic or nutritional derangement should be treated accordingly. Injuries with a low risk of nonunion or further displacement typically can be managed with a period of modified physical activity or reduced or non-WB (NWB) ambulation; higher risk injuries may require operative intervention.5

Pelvis

Pelvic stress fractures are rare, representing only 1.6% to 7.1% of all stress fractures.13,27,28 Given this low frequency, physicians must maintain a high index of suspicion to make the correct diagnosis. These fractures typically occur in marathon runners and other patients with persistent pain and a history of high activity levels. Because pelvic stress fractures typically involve the superior or inferior pubic ramus or the sacrum and are at low risk for nonunion,13 most are managed nonoperatively with activity modification for 8 to 12 weeks.27

Femur

Femoral stress fractures are also relatively uncommon, accounting for about 10% of all stress fractures. Depending on their location, these fractures can be at high risk for progression, nonunion, and significant morbidity.29 Especially concerning are femoral neck stress fractures, which can involve either the tension side (lateral cortex) or the compression side (medial cortex) of the neck. Suspicion of a femoral neck stress fracture should prompt immediate NWB.5 Early recognition of these injuries is crucial because once displacement occurs, complication and morbidity rates become high.13 Patients with compression-side fractures should be kept NWB for 4 to 6 weeks and then progressed slowly to WB activity. Most return to light-impact activity by 3 to 4 months. By contrast, tension-side fractures are less likely to heal without operative intervention.11 All tension-side fractures (and any compression-side fracture involving >50% of the femoral neck width) should be treated with percutaneous placement of cannulated screws (Figure 3).

Figure 3.
Displaced fractures should be addressed urgently with open reduction and internal fixation to avoid avascular necrosis and other long-term sequelae.5 Results of operative treatment of femoral neck stress fractures in active individuals have been mixed. Neubauer and colleagues30 examined 48 runners who underwent surgical fixation for these injuries. Preinjury activity levels were resumed by a higher percentage of low-performance runners (72%, 23/32) than high-performance runners (31%, 5/16). Reporting on femoral neck stress fracture outcomes in Royal Marine recruits, Evans and colleagues31 found that, after operative intervention, all fractures united at an average of 11 months. However, union took more than 1 year in 50% of fractures, underscoring the difficulty of managing these injuries and the length of the resulting disability.
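
The location-based rules above can be summarized as a short triage sketch. This is an illustrative restatement of the decision points described in this section, not a validated clinical algorithm; the function and parameter names are assumptions.

    def femoral_neck_plan(side, displaced, width_involved_pct):
        """Illustrative triage of a femoral neck stress fracture per the rules above."""
        if displaced:
            return "urgent open reduction and internal fixation"
        if side == "tension" or width_involved_pct > 50:
            return "percutaneous cannulated screw fixation"
        # Nondisplaced compression-side injury involving <=50% of the neck width.
        return "non-weight-bearing for 4-6 weeks, then graded return to weight bearing"

    # Example: a nondisplaced compression-side fracture involving 30% of the neck.
    print(femoral_neck_plan("compression", displaced=False, width_involved_pct=30))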

Stress fractures of the femoral shaft are less common than those of the femoral neck, representing as little as 3% of all stress fractures.32 However, femoral shaft stress fractures are more common in military populations; in Finnish military recruits, Niva and colleagues33 found an 18% incidence. Similar to femoral neck fractures, femoral shaft fractures typically are diagnosed with advanced imaging, though the fulcrum test and pain on WB can aid in the diagnosis.19 These injuries are often managed nonoperatively with a period of NWB. Weishaar and colleagues34 described US military cadets treated with progressive rehabilitation who returned to full activity within 12 weeks. Displaced femoral shaft fractures associated with bone stress injury are even less common and should be managed operatively. Salminen and colleagues35 found an incidence of 1.5 displaced fractures per 100,000 years of military service. Over a 20-year period, they surgically treated 10 of these fractures; average time from intramedullary nailing to union was 3.5 months.

Tibia

The tibia is one of the most common locations for stress injury and fracture. In a prospective study of military recruits, Giladi and colleagues36 found that 71% of stress fractures involved the tibia. In addition, a large study of 320 athletes with stress fractures found 49.1% in the tibia.37 Fractures typically are diaphyseal and transverse, usually occurring along the posteromedial cortex, where the bone experiences maximal compressive forces (Figure 4).5,13

Figure 4.
Fractures on the anterior cortex—thought to result from tensile forces applied by the large posterior musculature of the gastrocnemius during repetitive activity38—are more concerning.
Figure 5.
Compared with fractures on the compression side, fractures of the anterior tibial cortex are at higher risk for nonunion (reported nonunion rate, 4.6%).39 Radiographs of anterior tibial cortex fractures may show the “dreaded black line” (Figure 5).

Compression-side fractures often heal with nonoperative management, though healing may take several months. Swenson and colleagues40 studied the effects of pneumatic bracing on conservative management and return to play in athletes with tibial stress fractures. Patients with bracing returned to light activity within 7 days and full activity within 21 days, whereas those without bracing returned to light activity within 21 days and full activity within 77 days. Pulsed electromagnetic therapy is of controversial benefit in the management of these injuries. Rettig and colleagues41 conducted a prospective randomized trial in the treatment of US Navy midshipmen and found no reduction in healing time in those who underwent electromagnetic therapy. Stress fractures with displacement and fractures that have failed nonoperative treatment should undergo surgery. Reamed intramedullary nailing is the gold standard of operative management of these injuries.5 Varner and colleagues42 reported the outcomes of treating 11 tibial stress fractures with intramedullary nailing after nonoperative management (4 months minimum) had failed. With surgery, the union rate was 100%, and patients returned to full activity by a mean of 4 months.

Metatarsals

Stress fractures were first described by Briethaupt1 in the painful, swollen feet of Prussian soldiers in 1855 and were initially named march fractures. Waterman and colleagues6 reported that metatarsal stress fractures accounted for 16% of all stress fractures in the US military between 2009 and 2012. The second metatarsal neck is the most common location, followed by the third and fourth metatarsals, with the fifth metatarsal being the least common.5 The second metatarsal is thought to sustain these injuries more often than the other metatarsals because of its relative immobility. Donahue and Sharkey43 found that the dorsal aspect of the second metatarsal experiences twice the strain experienced by the fifth metatarsal during gait, and that peak strain in the second metatarsal was further increased by simulated muscle fatigue. As Giuliani and colleagues44 showed, the risk of stress fracture is further increased by minimalist footwear, particularly when the transition to such footwear is not accompanied by a gradual progression in gait and training volume. In patients with a suspected or confirmed stress fracture of the second, third, or fourth metatarsal, treatment typically is NWB and immobilization for at least 4 weeks.5 Fifth metatarsal stress injuries (Figure 2) typically are treated differently because of their higher risk of nonunion. Patients with a fifth metatarsal stress fracture report lateral midfoot pain with running and jumping. For those who present early, acceptable treatment consists of 6 weeks of casting and NWB.5 When nonoperative therapy fails, or when there is radiographic evidence of nonunion at presentation, treatment should be intramedullary screw fixation, with bone graft supplementation based on surgeon preference. DeLee and colleagues45 reported on 10 athletes with fifth metatarsal stress fractures treated with intramedullary screw fixation without bone grafting. All 10 achieved fracture union, at a mean of 7.5 weeks, and returned to sport within 8.5 weeks. One complication of this procedure is pain at the screw insertion site, which can be successfully managed with footwear modification.45

Prevention

Proper identification of patients at high risk for stress injury has the potential to reduce the incidence of these injuries. Lappe and colleagues46 prospectively examined female army recruits before and after 8 weeks of basic training and found that those who developed a stress fracture were more likely to have a smoking history, to drink more than 10 alcoholic beverages a week, to have a history of corticosteroid or depot medroxyprogesterone use, and to have lower body weight. In addition, the authors found that a history of prolonged exercise before enrollment was protective against fracture. These findings underscore the importance of screening new recruits for risk factors, which could allow training regimens to be adjusted to reduce injury. The RED-S consensus statement14 offers a comprehensive description of the physiologic factors that can contribute to such injury. Like risk factor identification, implementation of proper exercise progression programs is a simple, modifiable means of limiting stress injuries. For new recruits or athletes who are resuming activity, injury can be effectively prevented by adjusting the frequency, duration, and intensity of training and the training loads used.47
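
A sketch of how such screening criteria might be encoded is shown below; the factors mirror those reported by Lappe and colleagues,46 but the field names and the simple flagging logic are illustrative assumptions rather than a validated screening instrument.

    def stress_fracture_risk_flags(recruit):
        """Return the risk factors, described above, present for a given recruit."""
        reasons = []
        if recruit.get("smoking_history"):
            reasons.append("smoking history")
        if recruit.get("alcoholic_drinks_per_week", 0) > 10:
            reasons.append("more than 10 alcoholic drinks per week")
        if recruit.get("corticosteroid_or_dmpa_use"):
            reasons.append("corticosteroid or depot medroxyprogesterone use")
        if recruit.get("low_body_weight"):
            reasons.append("low body weight")
        if not recruit.get("prolonged_exercise_before_enrollment"):
            reasons.append("no history of prolonged exercise before enrollment")
        return reasons

    # Example: a recruit with a smoking history and no prior training base.
    print(stress_fracture_risk_flags({"smoking_history": True,
                                      "alcoholic_drinks_per_week": 4}))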

Vitamin D and calcium supplementation is a simple intervention that can help prevent injury, and its use has very little downside. A double-blind study found a 20% lower incidence of stress fracture in female navy recruits who took 2000 mg of calcium and 800 IU of vitamin D as daily supplementation.48 Of importance, a meta-analysis of more than 68,000 patients found that vitamin D supplementation was effective in reducing fracture risk only when combined with calcium, irrespective of age, sex, or prior fracture.49 In female patients with the female athlete triad, psychological counseling and nutritional consultation are essential to bone health maintenance and long-term prevention.50 Other therapies have been evaluated as well. The use of bisphosphonates is controversial for both treatment and prevention of stress fractures. In a randomized, double-blind study of the potential prophylactic effect of risedronate in 324 new infantry recruits, Milgrom and colleagues51 found no statistically significant differences in tibial, femoral, metatarsal, or total stress fracture incidence between the treatment and placebo groups. Bisphosphonates are therefore seldom recommended for prevention or primary management of stress fracture.

In addition to nutritional and pharmacologic therapy, activity modification may have a role in injury prevention. Gait retraining has been identified as a potential intervention for reducing stress fractures in patients with poor biomechanics.47 Crowell and Davis52 investigated the effect of gait retraining on the forces acting on the tibia in runners. After 1 month of gait retraining, tibial acceleration while running decreased by 50%, vertical force loading rate by 30%, and peak vertical impact force by 20%. Such studies indicate the importance of proper mechanics during repetitive activity, especially in patients unaccustomed to the rigorous training methods used with new military recruits. However, whether these reduced loads translate into a reduced risk of stress fracture remains unclear. In addition, biomechanical shoe orthoses may lower stress fracture risk in military recruits by reducing peak tibial strain.53 Warden and colleagues54 found that a mechanical loading program was effective in enhancing the structural properties of bone in rats, leading the authors to hypothesize that a similar program aimed at modifying bone structure in humans could help prevent stress fracture. Although there have been no studies of such a strategy in humans, pretraining may be an area for future research, especially for military recruits.

Conclusion

Compared with the general population, members of the military (new recruits in particular) are at increased risk for bone stress injuries. Most of these injuries occur during basic training, when recruits sharply increase their repetitive physical activity. Although the exact pathophysiology of stress injury is debated, nutritional and metabolic abnormalities are contributors. The indolent nature of these injuries, and the high rate of false-negative plain radiographs, can significantly delay diagnosis in the absence of advanced imaging. Although the majority of injuries heal with nonoperative management and NWB, several patterns, especially those on the tension side of the bone, are at high risk for progression to fracture and nonunion; these include tension-side (lateral cortex) femoral neck stress injuries and anterior tibial cortex fractures. There should be a low threshold for operative management in the setting of delayed union or failed nonoperative therapy. Equally important to the orthopedic management of these injuries is the management of the underlying systemic deficits that may have predisposed the patient to injury in the first place. Supplementation with vitamin D and calcium can be an important prophylaxis against stress injury. In addition, military recruits and athletes with underlying metabolic or hormonal deficiencies should receive proper attention, with a focus on balancing energy intake and expenditure. Stress injury leading to fracture, increasingly common in military populations, often requires a multimodal approach to treatment and subsequent prevention.

Am J Orthop. 2017;46(4):176-183. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Briethaupt MD. Zur Pathologie des menschlichen Fusses [To the pathology of the human foot]. Med Zeitung. 1855;24:169-177.

2. Berger FH, de Jonge MC, Maas M. Stress fractures in the lower extremity. Eur J Radiol. 2007;62(1):16-26.

3. Almeida SA, Williams KM, Shaffer RA, Brodine SK. Epidemiological patterns of musculoskeletal injuries and physical training. Med Sci Sports Exerc. 1999;31(8):1176-1182.

4. Jones BH, Thacker SB, Gilchrist J, Kimsey CD, Sosin DM. Prevention of lower extremity stress fractures in athletes and soldiers: a systematic review. Epidemiol Rev. 2002;24(2):228-247.

5. Jacobs JM, Cameron KL, Bojescul JA. Lower extremity stress fractures in the military. Clin Sports Med. 2014;33(4):591-613.

6. Waterman BR, Gun B, Bader JO, Orr JD, Belmont PJ. Epidemiology of lower extremity stress fractures in the United States military. Mil Med. 2016;181(10):1308-1313.

7. Hsu LL, Nevin RL, Tobler SK, Rubertone MV. Trends in overweight and obesity among 18-year-old applicants to the United States military, 1993–2006. J Adolesc Health. 2007;41(6):610-612.

8. Stanitski CL, McMaster JH, Scranton PE. On the nature of stress fractures. Am J Sports Med. 1978;6(6):391-396.

9. Johnson LC. Histogenesis of stress fractures [annual lecture]. Washington, DC: Armed Forces Institute of Pathology; 1963.

10. Friedenberg ZB. Fatigue fractures of the tibia. Clin Orthop Relat Res. 1971;(76):111-115.

11. Cameron KL, Peck KY, Owens BD, et al. Biomechanical risk factors for lower extremity stress fracture. Orthop J Sports Med. 2013;1(4 suppl).

12. Knapik J, Montain S, McGraw S, Grier T, Ely M, Jones B. Stress fracture risk factors in basic combat training. Int J Sports Med. 2012;33(11):940-946.

13. Behrens SB, Deren ME, Matson A, Fadale PD, Monchik KO. Stress fractures of the pelvis and legs in athletes. Sports Health. 2013;5(2):165-174.

14. Mountjoy M, Sundgot-Borgen J, Burke L, et al. The IOC consensus statement: beyond the female athlete triad—relative energy deficiency in sport (RED-S). Br J Sports Med. 2014;48(7):491-497.

15. Maitra RS, Johnson DL. Stress fractures. Clinical history and physical examination. Clin Sports Med. 1997;16(2):259-274.

16. Nieves JW, Melsop K, Curtis M, et al. Nutritional factors that influence change in bone density and stress fracture risk among young female cross-country runners. PM R. 2010;2(8):740-750.

17. Beck BR, Matheson GO, Bergman G, et al. Do capacitively coupled electric fields accelerate tibial stress fracture healing? Am J Sports Med. 2008;36(3):545-553.

18. Simkin A, Leichter I, Giladi M, Stein M, Milgrom C. Combined effect of foot arch structure and an orthotic device on stress fractures. Foot Ankle. 1989;10(1):25-29.

19. Johnson AW, Weiss CB, Wheeler DL. Stress fractures of the femoral shaft in athletes—more common than expected: a new clinical test. Am J Sports Med. 1994;22(2):248-256.

20. Clement D, Ammann W, Taunton J, et al. Exercise-induced stress injuries to the femur. Int J Sports Med. 1993;14(6):347-352.

21. Wood PJ, Barth JH, Freedman DB, Perry L, Sheridan B. Evidence for the low dose dexamethasone suppression test to screen for Cushing’s syndrome—recommendations for a protocol for biochemistry laboratories. Ann Clin Biochem. 1997;34(pt 3):222-229.

22. Bennell K, Matheson G, Meeuwisse W, Brukner P. Risk factors for stress fractures. Sports Med. 1999;28(2):91-122.

23. Prather JL, Nusynowitz ML, Snowdy HA, Hughes AD, McCartney WH, Bagg RJ. Scintigraphic findings in stress fractures. J Bone Joint Surg Am. 1977;59(7):869-874.

24. Arendt EA, Griffiths HJ. The use of MR imaging in the assessment and clinical management of stress reactions of bone in high-performance athletes. Clin Sports Med. 1997;16(2):291-306.

25. Boden BP, Osbahr DC. High-risk stress fractures: evaluation and treatment. J Am Acad Orthop Surg. 2000;8(6):344-353.

26. Gaeta M, Minutoli F, Scribano E, et al. CT and MR imaging findings in athletes with early tibial stress injuries: comparison with bone scintigraphy findings and emphasis on cortical abnormalities. Radiology. 2005;235(2):553-561.

27. Matheson GO, Clement DB, Mckenzie DC, Taunton JE, Lloyd-Smith DR, Macintyre JG. Stress fractures in athletes. Am J Sports Med. 1987;15(1):46-58.

28. Iwamoto J, Takeda T. Stress fractures in athletes: review of 196 cases. J Orthop Sci. 2003;8(3):273-278.

29. Noakes TD, Smith JA, Lindenberg G, Wills CE. Pelvic stress fractures in long distance runners. Am J Sports Med. 1985;13(2):120-123.

30. Neubauer T, Brand J, Lidder S, Krawany M. Stress fractures of the femoral neck in runners: a review. Res Sports Med. 2016;24(3):283-297.

31. Evans JT, Guyver PM, Kassam AM, Hubble MJW. Displaced femoral neck stress fractures in Royal Marine recruits—management and results of operative treatment. J R Nav Med Serv. 2012;98(2):3-5.

32. Orava S. Stress fractures. Br J Sports Med. 1980;14(1):40-44.

33. Niva MH, Kiuru MJ, Haataja R, Pihlajamäki HK. Fatigue injuries of the femur. J Bone Joint Surg Br. 2005;87(10):1385-1390.

34. Weishaar MD, McMillian DJ, Moore JH. Identification and management of 2 femoral shaft stress injuries. J Orthop Sports Phys Ther. 2005;35(10):665-673.

35. Salminen ST, Pihlajamäki HK, Visuri TI, Böstman OM. Displaced fatigue fractures of the femoral shaft. Clin Orthop Relat Res. 2003;(409):250-259.

36. Giladi M, Ahronson Z, Stein M, Danon YL, Milgrom C. Unusual distribution and onset of stress fractures in soldiers. Clin Orthop Relat Res. 1985;(192):142-146.

37. Matheson GO, Clement DB, Mckenzie DC, Taunton JE, Lloyd-Smith DR, Macintyre JG. Stress fractures in athletes. Am J Sports Med. 1987;15(1):46-58.

38. Green NE, Rogers RA, Lipscomb AB. Nonunions of stress fractures of the tibia. Am J Sports Med. 1985;13(3):171-176.

39. Orava S, Hulkko A. Stress fracture of the mid-tibial shaft. Acta Orthop Scand. 1984;55(1):35-37.

40. Swenson EJ Jr, DeHaven KE, Sebastianelli WJ, Hanks G, Kalenak A, Lynch JM. The effect of a pneumatic leg brace on return to play in athletes with tibial stress fractures. Am J Sports Med. 1997;25(3):322-328.

41. Rettig AC, Shelbourne KD, McCarroll JR, Bisesi M, Watts J. The natural history and treatment of delayed union stress fractures of the anterior cortex of the tibia. Am J Sports Med. 1988;16(3):250-255.

42. Varner KE, Younas SA, Lintner DM, Marymont JV. Chronic anterior midtibial stress fractures in athletes treated with reamed intramedullary nailing. Am J Sports Med. 2005;33(7):1071-1076.

43. Donahue SW, Sharkey NA. Strains in the metatarsals during the stance phase of gait: implications for stress fractures. J Bone Joint Surg Am. 1999;81(9):1236-1244.

44. Giuliani J, Masini B, Alitz C, Owens BD. Barefoot-simulating footwear associated with metatarsal stress injury in 2 runners. Orthopedics. 2011;34(7):e320-e323.

45. DeLee JC, Evans JP, Julian J. Stress fracture of the fifth metatarsal. Am J Sports Med. 1983;11(5):349-353.

46. Lappe JM, Stegman MR, Recker RR. The impact of lifestyle factors on stress fractures in female army recruits. Osteoporos Int. 2001;12(1):35-42.

47. Friedl KE, Evans RK, Moran DS. Stress fracture and military medical readiness: bridging basic and applied research. Med Sci Sports Exerc. 2008;40(11 suppl):S609-S622.

48. Lappe J, Cullen D, Haynatzki G, Recker R, Ahlf R, Thompson K. Calcium and vitamin D supplementation decreases incidence of stress fractures in female navy recruits. J Bone Miner Res. 2008;23(5):741-749.

49. DIPART (Vitamin D Individual Patient Analysis of Randomized Trials) Group. Patient level pooled analysis of 68 500 patients from seven major vitamin D fracture trials in US and Europe. BMJ. 2010;340:b5463.

50. Duckham RL, Peirce N, Meyer C, Summers GD, Cameron N, Brooke-Wavell K. Risk factors for stress fracture in female endurance athletes: a cross-sectional study. BMJ Open. 2012;2(6).

51. Milgrom C, Finestone A, Novack V, et al. The effect of prophylactic treatment with risedronate on stress fracture incidence among infantry recruits. Bone. 2004;35(2):418-424.

52. Crowell HP, Davis IS. Gait retraining to reduce lower extremity loading in runners. Clin Biomech. 2011;26(1):78-83.

53. Ekenman I, Milgrom C, Finestone A, et al. The role of biomechanical shoe orthoses in tibial stress fracture prevention. Am J Sports Med. 2002;30(6):866-870.

54. Warden SJ, Hurst JA, Sanders MS, Turner CH, Burr DB, Li J. Bone adaptation to a mechanical loading program significantly increases skeletal fatigue resistance. J Bone Miner Res. 2005;20(5):809-816.

Author and Disclosure Information

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.
Various other nonmodifiable and modifiable risk factors predispose military service members to stress injury. Nonmodifiable risk factors include sex, bone geometry, limb alignment, race, age, and anatomy. Lower extremity movement biomechanics resulting from dynamic limb alignment during activity may be important. Cameron and colleagues11 examined 1843 patients and found that those with knees in >5° of valgus or >5° of external rotation had higher injury rates. Although variables such as sex and limb alignment cannot be changed, proper identification of modifiable risk factors can assist with injury prevention, and nonmodifiable risk factors can help clinicians and researchers target injury prevention interventions to patients at highest risk.

Metabolic, hormonal, and nutritional status is crucial to overall bone health. Multiple studies have found that low body mass index (BMI) is a significant risk factor for stress fracture.7,12,13 Although low BMI is a concern, patients with abnormally high BMI may also be at increased risk for bone stress injury. In a recently released consensus statement on relative energy deficiency in sport (RED-S), the International Olympic Committee addressed the complex interplay of impairments in physiologic function—including metabolic rate, menstrual function, bone health, immunity, protein synthesis, and cardiovascular health—caused by relative energy deficiency.14 The committee stated that the cause of this syndrome is energy deficiency relative to the balance between dietary energy intake and energy expenditure required for health and activities of daily living, growth, and sporting activities. This finding reveals that conditions such as stress injury often may represent a much broader systemic deficit that may be influenced by a patient’s overall physiologic imbalance.

Diagnosis

History and Physical Examination

The onset of stress reaction typically is insidious, with the classic presentation being a new military recruit who is experiencing a sudden increase in pain during physical activity.15 Pain typically is initially present only during activity, and is relieved with rest, but with disease progression this evolves to pain at rest. It is crucial that the physician elicit the patient’s history of training and physical activity. Hsu and colleagues7 reported increased prevalence of overweight civilian recruits, indicating an increase in the number of new recruits having limited experience with the repetitive physical activity encountered in basic training. Stress injury should be suspected in the setting of worsening, indolent lower extremity pain that has been present for several days, especially in the higher-risk patient populations mentioned. Diet should be assessed, with specific attention given to the intake of fruits, vegetables, and foods high in vitamin D and calcium and, most important, the energy balance between intake and output.16 Special attention should also be given to female patients, who may experience the female athlete triad, a spectrum of low energy availability, menstrual dysfunction, and impaired bone turnover (high amount of resorption relative to formation). A key part of the RED-S consensus statement14 alerted healthcare providers that metabolic derangements do not solely affect female patients. These types of patients sustain a major insult to the homeostatic balance of the hormones that sustain adequate bone health. Beck and colleagues17 found that women with disrupted menstrual cycles are 2 to 4 times more likely to sustain a stress fracture than women without disrupted menstrual cycles, making this abnormality an important part of the history.

Examination should begin with careful evaluation of limb alignment and specific attention given to varus or valgus alignment of the knees.11 The feet should also be inspected, as pes planus or cavus foot may increase the risk of stress fracture.18 Identification of the area of maximal tenderness is important. The area in question may also be erythematous or warm secondary to the inflammatory response associated with attempted fracture healing. In chronic fractures in superficial areas such as the metatarsals, callus may be palpable. Although there are few specific tests for stress injury, pain may be reproducible with deep palpation and WB.

Figure 1.
If a femoral fracture is suspected, the fulcrum test can be performed by applying downward pressure on the patient’s knee while levering the thigh over the examiner’s opposite arm or thigh (Figure 1).19 Patients with sacral stress fractures may have pain when standing or hopping on the affected side (positive flamingo test).20

Laboratory Testing

When a pathology is thought to have a nutritional or metabolic cause, particularly in a low-weight or underweight patient, a laboratory workup should be obtained. Specific laboratory tests that all patients should undergo are 25-hydroxyvitamin D3, complete blood cell count, and basic chemistry panel, including calcium and thyroid-stimulating hormone levels. Although not necessary for diagnosis, phosphate, parathyroid hormone, albumin, and prealbumin should also be considered. Females should undergo testing of follicle stimulating hormone, luteinizing hormone, estradiol, and testosterone and have a urine pregnancy test. In patients with signs of excessive cortisone, a dexamethasone suppression test can be administered.21 In males, low testosterone is a documented risk factor for stress injury.22

Imaging

Given their low cost and availability, plain radiographs typically are used for initial examination of a suspected stress injury. However, they often lack sensitivity, particularly in the early stages of stress fracture development (Figure 2).

Figure 2.
Although a fracture line or callus formation is present occasionally, findings may be subtler. Images should be inspected for blunting of cortical bone and periosteal reaction, which should be correlated with the site of maximal tenderness.11 When there is high clinical suspicion based on history and physical examination, but radiographs are negative, magnetic resonance imaging (MRI) or bone scan can be useful.23 MRI is the most accurate imaging modality, with sensitivity ranging from 86% to 100% and specificity as high as 100%.2,24,25 On MRI, stress fractures typically are seen as bright areas of increased edema. Arendt and Griffiths24 proposed an MRI-based grading system for stress fractures, with grades 1 and 2 representing low-grade injuries, and 3 and 4 representing high grade. Computed tomography (CT) also has a role in diagnosis and may be better than MRI in imaging stress fractures in the pelvis and sacrum.2 In a study involving tibial stress fractures, Gaeta and colleagues26 found MRI was 88% sensitive and 100% specific and had a positive predictive value of 100%, and CT was 42% sensitive and 100% specific and had a positive predictive value of 100%. They concluded MRI was superior to CT in the diagnosis of tibial stress fractures.

Management

Management of bone stress injury depends on many factors, including symptom duration, fracture location and severity, and risk of progression or nonunion (Table).13

Table.
Patients thought to have an underlying metabolic or nutritional derangement should be treated accordingly. Injuries with a low risk of nonunion or further displacement typically can be managed with a period of modified physical activity or reduced or non-WB (NWB) ambulation; higher risk injuries may require operative intervention.5

 

 

Pelvis

Pelvic stress fractures are rare and represent only 1.6% to 7.1% of all stress fractures.13,27,28 Given the low frequency, physicians must have a high index of suspicion to make the correct diagnosis. These fractures typically occur in marathon runners and other patients who present with persistent pain and a history of high levels of activity. As pelvic stress fractures typically involve the superior or inferior pubic rami, or sacrum, and are at low risk for nonunion,13 most are managed with nonoperative treatment and activity modification for 8 to 12 weeks.27

Femur

Femoral stress fractures are also relatively uncommon, accounting for about 10% of all stress fractures. Depending on their location, these fractures can be at high risk for progression, nonunion, and significant morbidity.29 Especially concerning are femoral neck stress fractures, which can involve either the tension side (lateral cortex) or the compression side (medial cortex) of the bone. Suspicion of a femoral neck stress fracture should prompt immediate NWB.5 Early recognition of these injuries is crucial because once displacement occurs, their complication and morbidity rates become high.13 Patients with compression-side fractures should undergo NWB treatment for 4 to 6 weeks and then slow progression to WB activity. Most return to light-impact activity by 3 to 4 months. By contrast, tension-side fractures are less likely to heal without operative intervention.11 All tension-side fractures (and any compression-side fractures >50% of the width of the femoral neck) should be treated with percutaneous placement of cannulated screws (Figure 3).

Figure 3.
Displaced fractures should be addressed urgently with open reduction and internal fixation to avoid avascular necrosis and other long-term sequelae.5 Results of operative treatment of femoral neck fractures in active individuals have been mixed. Neubauer and colleagues30 examined 48 runners who underwent surgical fixation for these injuries. Preinjury activity levels were resumed by a higher percentage of low-performance runners (72%, 23/32) than low-performance runners (31%, 5/16). Reporting on femoral neck stress fracture outcomes in Royal Marine recruits, Evans and colleagues31 found that, after operative intervention, all fractures united by 11 months on average. However, union in 50% of fractures took more than 1 year, revealing the difficulty in managing these injuries as well as the lengthy resulting disability.

Stress fractures of the femoral shaft are less common than those of the femoral neck and represent as little as 3% of all stress fractures.32 However, femoral shaft stress fractures are more common in military populations. In French military recruits, Niva and colleagues33 found an 18% incidence. Similar to femoral neck fractures, femoral shaft fractures typically are diagnosed with advanced imaging, though the fulcrum test and pain on WB can aid in the diagnosis.19 These injuries are often managed nonoperatively with NWB for a period. Weishaar and colleagues34 described US military cadets treated with progressive rehabilitation who returned to full activity within 12 weeks. Displaced femoral shaft fractures associated with bone stress injury are even less common, and should be managed operatively. Salminen and colleagues35 found an incidence of 1.5 fractures per 100,000 years of military service. Over a 20-year period, they surgically treated 10 of these fractures. Average time from intramedullary nailing to union was 3.5 months.

Tibia

The tibia is one of the more common locations for stress injury and fracture. In a prospective study with members of the military, Giladi and colleagues36 found that 71% of stress fractures were tibia fractures. In addition, a large study of 320 athletes with stress fractures found 49.1% in the tibia.37 Fractures typically are diaphyseal and transverse, usually occurring along the posteromedial cortex, where the bone experiences maximal compressive forces (Figure 4).5,13

Figure 4.
Fractures on the anterior cortex—thought to result from tensile forces applied by the large posterior musculature of the gastrocnemius during repetitive activity38—are more concerning.
Figure 5.
Compared with fractures on the compression side, fractures of the anterior tibial cortex are at higher risk for nonunion (reported nonunion rate, 4.6%).39 Radiographs of anterior tibial cortex fractures may show the “dreaded black line” (Figure 5).

Compression-side fractures often heal with nonoperative management, though healing may take several months. Swenson and colleagues40 studied the effects of pneumatic bracing on conservative management and return to play in athletes with tibial stress fractures. Patients with bracing returned to light activity within 7 days and full activity within 21 days, whereas those without bracing returned to light activity within 21 days and full activity within 77 days. Pulsed electromagnetic therapy is of controversial benefit in the management of these injuries. Rettig and colleagues41 conducted a prospective randomized trial in the treatment of US Navy midshipmen and found no reduction in healing time in those who underwent electromagnetic therapy. Stress fractures with displacement and fractures that have failed nonoperative treatment should undergo surgery. Reamed intramedullary nailing is the gold standard of operative management of these injuries.5 Varner and colleagues42 reported the outcomes of treating 11 tibial stress fractures with intramedullary nailing after nonoperative management (4 months minimum) had failed. With surgery, the union rate was 100%, and patients returned to full activity by a mean of 4 months.

Metatarsals

Stress fractures were first discovered by Briethaupt1 in the painful swollen feet of Prussian army members in 1855 and were initially named march fractures. Waterman and colleagues6 reported that metatarsal stress fractures accounted for 16% of all stress fractures in the US military between 2009 and 2012. The second metatarsal neck is the most common location for stress fractures, followed by the third and fourth metatarsals, with the fifth metatarsal being the least common.5 The second metatarsal is thought to sustain these injuries more often than the other metatarsals because of its relative lack of immobility. Donahue and Sharkey43 found that the dorsal aspect of the second metatarsal experiences twice the amount of strain experienced by the fifth metatarsal during gait, and that peak strain in the second metatarsal was further increased by simulated muscle fatigue. The risk of stress fracture can be additionally increased with use of minimalist footwear, as shown by Giuliani and colleagues,44 particularly in the absence of a progressive transition in gait and training volume with a change toward minimalist footwear. In patients with a suspected or confirmed fracture of the second, third, or fourth metatarsal, treatment typically is NWB and immobilization for at least 4 weeks.5 Fifth metatarsal stress injuries (Figure 2) typically are treated differently because of their higher risk of nonunion. Patients with a fifth metatarsal stress fracture complain of lateral midfoot pain with running and jumping. For those who present with this fracture early, acceptable treatment consists of 6 weeks of casting and NWB.5 In cases of failed nonoperative therapy, or presentation with radiographic evidence of nonunion, treatment should be intramedullary screw fixation, with bone graft supplementation based on surgeon preference. DeLee and colleagues45 reported on the results of 10 athletes with fifth metatarsal stress fractures treated with intramedullary screw fixation without bone grafting. All 10 experienced fracture union, at a mean of 7.5 weeks, and returned to sport within 8.5 weeks. One complication with this procedure is pain at the screw insertion site, but this can be successfully managed with footwear modification.45

Prevention

Proper identification of patients at high risk for stress injuries has the potential of reducing the incidence of these injuries. Lappe and colleagues46 prospectively examined female army recruits before and after 8 weeks of basic training and found that those who developed a stress fracture were more likely to have a smoking history, to drink more than 10 alcoholic beverages a week, to have a history of corticosteroid or depot medroxyprogesterone use, and to have lower body weight. In addition, the authors found that a history of prolonged exercise before enrollment was protective against fracture. This finding identifies the importance of having new recruits undergo risk factor screening, which could result in adjusting training regimens to try to reduce injury. The RED-S consensus statement14 offers a comprehensive description of the physiologic factors that can contribute to such injury. Similar to proper risk factor identification, implementation of proper exercise progression programs is a simple, modifiable method of limiting stress injuries. For new recruits or athletes who are resuming activity, injury can be effectively prevented by adjusting the frequency, duration, and intensity of training and the training loads used.47

Vitamin D and calcium supplementation is a simple intervention that can be helpful in injury prevention, and its use has very little downside. A double-blind study found a 20% lower incidence of stress fracture in female navy recruits who took 2000 mg of calcium and 800 IU of vitamin D as daily supplemention.48 Of importance, a meta-analysis of more than 65,000 patients found vitamin D supplementation was effective in reducing fracture risk only when combined with calcium, irrespective of age, sex, or prior fracture.49 In female patients with the female athlete triad, psychological counseling and nutritional consultation are essential in bone health maintenance and long-term prevention.50 Other therapies have been evaluated as well. Use of bisphosphonates is controversial for both treatment and prevention of stress fractures. In a randomized, double-blind study of the potential prophylactic effects of risedronate in 324 new infantry recruits, Milgrom and colleagues51 found no statistically significant differences in tibial, femoral, metatarsal, or total stress fracture incidence between the treatment and placebo groups. Therefore, bisphosphonates are seldom recommended as prevention or in primary management of stress fracture.

In addition to nutritional and pharmacologic therapy, activity modification may have a role in injury prevention. Gait retraining has been identified as a potential intervention for reducing stress fractures in patients with poor biomechanics.47 Crowell and Davis52 investigated the effect of gait retraining on the forces operating in the tibia in runners. After 1 month of gait retraining, tibial acceleration while running decreased by 50%, vertical force loading rate by 30%, and peak vertical force impact by 20%. Such studies indicate the importance of proper mechanics during repetitive activity, especially in patients not as accustomed to the rigorous training methods used with new military recruits. However, whether these reduced loads translate into reduced risk of stress fracture remains unclear. In addition, biomechanical shoe orthoses may lower the stress fracture risk in military recruits by reducing peak tibial strain.53 Warden and colleagues54 found a mechanical loading program was effective in enchaining the structural properties of bone in rats, leading the authors to hypothesize that a similar program aimed at modifying bone structure in humans could help prevent stress fracture. Although there have been no studies of such a strategy in humans, pretraining may be an area for future research, especially for military recruits.

Conclusion

Compared with the general population, members of the military (new recruits in particular) are at increased risk for bone stress injuries. Most of these injuries occur during basic training, when recruits significantly increase their repetitive physical activity. Although the exact pathophysiology of stress injury is debated, nutritional and metabolic abnormalities are contributors. The indolent nature of these injuries, and their high rate of false-negative plain radiographs, may result in a significant delay in diagnosis in the absence of advanced imaging studies. Although a majority of injuries heal with nonoperative management and NWB, several patterns, especially those on the tension side of the bone, are at high risk for progression to fracture and nonunion. These include lateral femoral cortex stress injuries and anterior tibial cortex fractures. There should be a low threshold for operative management in the setting of delayed union or failed nonoperative therapy. Of equal importance to orthopedic management of these injuries is the management of underlying systemic deficits, which may have subjected the patient to injury in the first place. Supplementation with vitamin D and calcium can be an important prophylaxis against stress injury. In addition, military recruits and athletes with underlying metabolic or hormonal deficiencies should receive proper attention with a focus on balancing energy intake and energy expenditure. Stress injury leading to fracture—increasingly common in military populations—often requires a multimodal approach for treatment and subsequent prevention.

Am J Orthop. 2017;46(4):176-183. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

Take-Home Points

  • Stress injuries, specifically of the lower extremity, are very common in new military trainees.
  • Stress injury can range from benign periosteal reaction to displaced fracture.
  • Stress injury should be treated on a case-by-case basis, depending on the severity of injury, the location of the injury, and the likelihood of healing with nonoperative management.
  • Modifiable risk factors such as nutritional status, training regiment, and even footwear should be investigated to determine potential causes of injury.
  • Prevention is a crucial part of the treatment of these injuries, and early intervention such as careful pre-enrollment physicals and vitamin supplementation can be essential in lowering injury rates.

Bone stress injuries, which are common in military recruits, present in weight-bearing (WB) areas as indolent pain caused by repetitive stress and microtrauma. They were first reported in the metatarsals of Prussian soldiers in 1855.1 Today, stress injuries are increasingly common. One study estimated they account for 10% of patients seen by sports medicine practitioners.2 This injury most commonly affects military members, endurance athletes, and dancers.3-5 Specifically, the incidence of stress fractures in military members has been reported to range from 0.8% to 6.9% for men and from 3.4% to 21.0% for women.4 Because of repetitive vigorous lower extremity loading, stress fractures typically occur in the pelvis, femoral neck, tibial shaft, and metatarsals. Delayed diagnosis and the subsequent duration of treatment required for adequate healing can result in significant morbidity. In a 2009 to 2012 study of US military members, Waterman and colleagues6 found an incidence rate of 5.69 stress fractures per 1000 person-years. Fractures most frequently involved the tibia/fibula (2.26/1000), followed by the metatarsals (0.92/1000) and the femoral neck (0.49/1000).6 In addition, these injuries were most commonly encountered in new recruits, who were less accustomed to the high-volume, high-intensity training required during basic training.4,7 Enlisted junior service members have been reported to account for 77.5% of all stress fractures.6 Age under 20 years or over 40 years and white race have also been found to be risk factors for stress injury.6

The pathogenesis of stress injury is controversial. Stanitski and colleagues8 theorized that multiple submaximal mechanical insults create cumulative stress greater than bone capacity, eventually leading to fracture. Johnson9 conducted a biopsy study and postulated that an accelerated remodeling phase was responsible, whereas Friedenberg10 argued that stress injuries are a form of reduced healing, not an attempt to increase healing, caused by the absence of callous formation in the disease process.

Various other nonmodifiable and modifiable risk factors predispose military service members to stress injury. Nonmodifiable risk factors include sex, bone geometry, limb alignment, race, age, and anatomy. Lower extremity movement biomechanics resulting from dynamic limb alignment during activity may be important. Cameron and colleagues11 examined 1843 patients and found that those with knees in >5° of valgus or >5° of external rotation had higher injury rates. Although variables such as sex and limb alignment cannot be changed, proper identification of modifiable risk factors can assist with injury prevention, and nonmodifiable risk factors can help clinicians and researchers target injury prevention interventions to patients at highest risk.

Metabolic, hormonal, and nutritional status is crucial to overall bone health. Multiple studies have found that low body mass index (BMI) is a significant risk factor for stress fracture.7,12,13 Although low BMI is a concern, patients with abnormally high BMI may also be at increased risk for bone stress injury. In a recently released consensus statement on relative energy deficiency in sport (RED-S), the International Olympic Committee addressed the complex interplay of impairments in physiologic function—including metabolic rate, menstrual function, bone health, immunity, protein synthesis, and cardiovascular health—caused by relative energy deficiency.14 The committee stated that the cause of this syndrome is energy deficiency relative to the balance between dietary energy intake and energy expenditure required for health and activities of daily living, growth, and sporting activities. This finding reveals that conditions such as stress injury often may represent a much broader systemic deficit that may be influenced by a patient’s overall physiologic imbalance.

Diagnosis

History and Physical Examination

The onset of stress reaction typically is insidious, with the classic presentation being a new military recruit who is experiencing a sudden increase in pain during physical activity.15 Pain typically is initially present only during activity, and is relieved with rest, but with disease progression this evolves to pain at rest. It is crucial that the physician elicit the patient’s history of training and physical activity. Hsu and colleagues7 reported increased prevalence of overweight civilian recruits, indicating an increase in the number of new recruits having limited experience with the repetitive physical activity encountered in basic training. Stress injury should be suspected in the setting of worsening, indolent lower extremity pain that has been present for several days, especially in the higher-risk patient populations mentioned. Diet should be assessed, with specific attention given to the intake of fruits, vegetables, and foods high in vitamin D and calcium and, most important, the energy balance between intake and output.16 Special attention should also be given to female patients, who may experience the female athlete triad, a spectrum of low energy availability, menstrual dysfunction, and impaired bone turnover (high amount of resorption relative to formation). A key part of the RED-S consensus statement14 alerted healthcare providers that metabolic derangements do not solely affect female patients. These types of patients sustain a major insult to the homeostatic balance of the hormones that sustain adequate bone health. Beck and colleagues17 found that women with disrupted menstrual cycles are 2 to 4 times more likely to sustain a stress fracture than women without disrupted menstrual cycles, making this abnormality an important part of the history.

Examination should begin with careful evaluation of limb alignment and specific attention given to varus or valgus alignment of the knees.11 The feet should also be inspected, as pes planus or cavus foot may increase the risk of stress fracture.18 Identification of the area of maximal tenderness is important. The area in question may also be erythematous or warm secondary to the inflammatory response associated with attempted fracture healing. In chronic fractures in superficial areas such as the metatarsals, callus may be palpable. Although there are few specific tests for stress injury, pain may be reproducible with deep palpation and WB.

Figure 1.
If a femoral shaft stress fracture is suspected, the fulcrum test can be performed by applying downward pressure on the patient’s knee while levering the thigh over the examiner’s opposite arm or thigh (Figure 1).19 Patients with sacral stress fractures may have pain when standing or hopping on the affected side (positive flamingo test).20

Laboratory Testing

When a pathology is thought to have a nutritional or metabolic cause, particularly in a low-weight or underweight patient, a laboratory workup should be obtained. Specific laboratory tests that all patients should undergo include 25-hydroxyvitamin D, a complete blood cell count, and a basic chemistry panel with calcium, as well as a thyroid-stimulating hormone level. Although not necessary for diagnosis, phosphate, parathyroid hormone, albumin, and prealbumin should also be considered. Females should undergo testing of follicle-stimulating hormone, luteinizing hormone, estradiol, and testosterone and have a urine pregnancy test. In patients with signs of cortisol excess, a dexamethasone suppression test can be administered.21 In males, low testosterone is a documented risk factor for stress injury.22

Imaging

Given their low cost and availability, plain radiographs typically are used for initial examination of a suspected stress injury. However, they often lack sensitivity, particularly in the early stages of stress fracture development (Figure 2).

Figure 2.
Although a fracture line or callus formation is occasionally present, findings may be subtler. Images should be inspected for blunting of cortical bone and periosteal reaction, which should be correlated with the site of maximal tenderness.11 When clinical suspicion is high based on history and physical examination but radiographs are negative, magnetic resonance imaging (MRI) or bone scan can be useful.23 MRI is the most accurate imaging modality, with sensitivity ranging from 86% to 100% and specificity as high as 100%.2,24,25 On MRI, stress fractures typically appear as areas of increased signal reflecting marrow edema. Arendt and Griffiths24 proposed an MRI-based grading system for stress fractures, with grades 1 and 2 representing low-grade injuries and grades 3 and 4 representing high-grade injuries. Computed tomography (CT) also has a role in diagnosis and may be better than MRI in imaging stress fractures of the pelvis and sacrum.2 In a study of tibial stress fractures, Gaeta and colleagues26 found that MRI was 88% sensitive and 100% specific with a positive predictive value of 100%, whereas CT was 42% sensitive and 100% specific with a positive predictive value of 100%. They concluded that MRI was superior to CT in the diagnosis of tibial stress fractures.
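Because predictive values depend on how likely a stress fracture is before imaging, the quoted 100% positive predictive values should be read in the context of the studied populations. A minimal sketch of the standard sensitivity/specificity/prevalence arithmetic is shown below; the 30% pretest probability is a hypothetical value chosen only for illustration.

```python
# Minimal sketch of standard diagnostic-test arithmetic (Bayes' rule). The MRI
# sensitivity/specificity are the figures quoted from Gaeta and colleagues; the
# 30% pretest probability is a hypothetical value for illustration only.

def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv


if __name__ == "__main__":
    ppv, npv = predictive_values(sensitivity=0.88, specificity=1.00, prevalence=0.30)
    print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV reaches 100% only because specificity is 100%
```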

Management

Management of bone stress injury depends on many factors, including symptom duration, fracture location and severity, and risk of progression or nonunion (Table).13

Table.
Patients thought to have an underlying metabolic or nutritional derangement should be treated accordingly. Injuries with a low risk of nonunion or further displacement typically can be managed with a period of modified physical activity or reduced or non-WB (NWB) ambulation; higher risk injuries may require operative intervention.5

Pelvis

Pelvic stress fractures are rare and represent only 1.6% to 7.1% of all stress fractures.13,27,28 Given the low frequency, physicians must have a high index of suspicion to make the correct diagnosis. These fractures typically occur in marathon runners and other patients who present with persistent pain and a history of high levels of activity. As pelvic stress fractures typically involve the superior or inferior pubic rami, or sacrum, and are at low risk for nonunion,13 most are managed with nonoperative treatment and activity modification for 8 to 12 weeks.27

Femur

Femoral stress fractures are also relatively uncommon, accounting for about 10% of all stress fractures. Depending on their location, these fractures can be at high risk for progression, nonunion, and significant morbidity.29 Especially concerning are femoral neck stress fractures, which can involve either the tension side (lateral cortex) or the compression side (medial cortex) of the bone. Suspicion of a femoral neck stress fracture should prompt immediate NWB.5 Early recognition of these injuries is crucial because once displacement occurs, their complication and morbidity rates become high.13 Patients with compression-side fractures should undergo NWB treatment for 4 to 6 weeks and then slow progression to WB activity. Most return to light-impact activity by 3 to 4 months. By contrast, tension-side fractures are less likely to heal without operative intervention.11 All tension-side fractures (and any compression-side fractures >50% of the width of the femoral neck) should be treated with percutaneous placement of cannulated screws (Figure 3).

Figure 3.
Displaced fractures should be addressed urgently with open reduction and internal fixation to avoid avascular necrosis and other long-term sequelae.5 Results of operative treatment of femoral neck stress fractures in active individuals have been mixed. Neubauer and colleagues30 examined 48 runners who underwent surgical fixation for these injuries. Preinjury activity levels were resumed by a higher percentage of low-performance runners (72%, 23/32) than high-performance runners (31%, 5/16). Reporting on femoral neck stress fracture outcomes in Royal Marine recruits, Evans and colleagues31 found that, after operative intervention, all fractures united, at an average of 11 months. However, union in 50% of fractures took more than 1 year, revealing both the difficulty in managing these injuries and the lengthy resulting disability.

Stress fractures of the femoral shaft are less common than those of the femoral neck and represent as little as 3% of all stress fractures.32 However, femoral shaft stress fractures are more common in military populations. In Finnish military recruits, Niva and colleagues33 found an 18% incidence. Similar to femoral neck fractures, femoral shaft fractures typically are diagnosed with advanced imaging, though the fulcrum test and pain on WB can aid in the diagnosis.19 These injuries are often managed nonoperatively with a period of NWB. Weishaar and colleagues34 described US military cadets treated with progressive rehabilitation who returned to full activity within 12 weeks. Displaced femoral shaft fractures associated with bone stress injury are even less common and should be managed operatively. Salminen and colleagues35 found an incidence of 1.5 such fractures per 100,000 years of military service. Over a 20-year period, they surgically treated 10 of these fractures. Average time from intramedullary nailing to union was 3.5 months.

Tibia

The tibia is one of the more common locations for stress injury and fracture. In a prospective study with members of the military, Giladi and colleagues36 found that 71% of stress fractures were tibia fractures. In addition, a large study of 320 athletes with stress fractures found 49.1% in the tibia.37 Fractures typically are diaphyseal and transverse, usually occurring along the posteromedial cortex, where the bone experiences maximal compressive forces (Figure 4).5,13

Figure 4.
Fractures on the anterior cortex—thought to result from tensile forces applied by the large posterior musculature of the gastrocnemius during repetitive activity38—are more concerning.
Figure 5.
Compared with fractures on the compression side, fractures of the anterior tibial cortex are at higher risk for nonunion (reported nonunion rate, 4.6%).39 Radiographs of anterior tibial cortex fractures may show the “dreaded black line” (Figure 5).

Compression-side fractures often heal with nonoperative management, though healing may take several months. Swenson and colleagues40 studied the effects of pneumatic bracing on conservative management and return to play in athletes with tibial stress fractures. Patients with bracing returned to light activity within 7 days and full activity within 21 days, whereas those without bracing returned to light activity within 21 days and full activity within 77 days. Pulsed electromagnetic therapy is of controversial benefit in the management of these injuries. Rettig and colleagues41 conducted a prospective randomized trial in the treatment of US Navy midshipmen and found no reduction in healing time in those who underwent electromagnetic therapy. Stress fractures with displacement and fractures that have failed nonoperative treatment should undergo surgery. Reamed intramedullary nailing is the gold standard of operative management of these injuries.5 Varner and colleagues42 reported the outcomes of treating 11 tibial stress fractures with intramedullary nailing after nonoperative management (4 months minimum) had failed. With surgery, the union rate was 100%, and patients returned to full activity by a mean of 4 months.

Metatarsals

Stress fractures were first described by Briethaupt1 in the painful, swollen feet of Prussian soldiers in 1855 and were initially named march fractures. Waterman and colleagues6 reported that metatarsal stress fractures accounted for 16% of all stress fractures in the US military between 2009 and 2012. The second metatarsal neck is the most common location for stress fractures, followed by the third and fourth metatarsals, with the fifth metatarsal being the least common.5 The second metatarsal is thought to sustain these injuries more often than the other metatarsals because of its relative immobility. Donahue and Sharkey43 found that the dorsal aspect of the second metatarsal experiences twice the strain experienced by the fifth metatarsal during gait, and that peak strain in the second metatarsal was further increased by simulated muscle fatigue. The risk of stress fracture can be further increased by use of minimalist footwear, as shown by Giuliani and colleagues,44 particularly when the change to such footwear is not accompanied by a gradual transition in gait and training volume. In patients with a suspected or confirmed fracture of the second, third, or fourth metatarsal, treatment typically is NWB and immobilization for at least 4 weeks.5 Fifth metatarsal stress injuries (Figure 2) typically are treated differently because of their higher risk of nonunion. Patients with a fifth metatarsal stress fracture complain of lateral midfoot pain with running and jumping. For those who present with this fracture early, acceptable treatment consists of 6 weeks of casting and NWB.5 In cases of failed nonoperative therapy, or presentation with radiographic evidence of nonunion, treatment should be intramedullary screw fixation, with bone graft supplementation based on surgeon preference. DeLee and colleagues45 reported on 10 athletes with fifth metatarsal stress fractures treated with intramedullary screw fixation without bone grafting. All 10 experienced fracture union, at a mean of 7.5 weeks, and returned to sport within 8.5 weeks. One complication of this procedure is pain at the screw insertion site, but this can be successfully managed with footwear modification.45

Prevention

Proper identification of patients at high risk for stress injuries has the potential to reduce the incidence of these injuries. Lappe and colleagues46 prospectively examined female army recruits before and after 8 weeks of basic training and found that those who developed a stress fracture were more likely to have a smoking history, to drink more than 10 alcoholic beverages a week, to have a history of corticosteroid or depot medroxyprogesterone use, and to have lower body weight. In addition, the authors found that a history of prolonged exercise before enrollment was protective against fracture. This finding underscores the importance of having new recruits undergo risk factor screening, which could prompt adjustments to training regimens aimed at reducing injury. The RED-S consensus statement14 offers a comprehensive description of the physiologic factors that can contribute to such injury. Along with proper risk factor identification, implementation of appropriate exercise progression programs is a simple, practical way to limit stress injuries. For new recruits or athletes who are resuming activity, injury can be effectively prevented by adjusting the frequency, duration, and intensity of training and the training loads used.47

Vitamin D and calcium supplementation is a simple intervention that can be helpful in injury prevention, and its use has very little downside. A double-blind study found a 20% lower incidence of stress fracture in female navy recruits who took 2,000 mg of calcium and 800 IU of vitamin D as daily supplementation.48 Of importance, a meta-analysis of more than 65,000 patients found that vitamin D supplementation was effective in reducing fracture risk only when combined with calcium, irrespective of age, sex, or prior fracture.49 In female patients with the female athlete triad, psychological counseling and nutritional consultation are essential for bone health maintenance and long-term prevention.50 Other therapies have been evaluated as well. Use of bisphosphonates is controversial for both treatment and prevention of stress fractures. In a randomized, double-blind study of the potential prophylactic effects of risedronate in 324 new infantry recruits, Milgrom and colleagues51 found no statistically significant differences in tibial, femoral, metatarsal, or total stress fracture incidence between the treatment and placebo groups. Therefore, bisphosphonates are seldom recommended for prevention or primary management of stress fracture.
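The 20% figure is a relative reduction; translating it into an absolute benefit requires a baseline incidence, which this review does not state. The sketch below shows the standard conversion to absolute risk reduction and number needed to treat, using a hypothetical 8% baseline incidence for illustration.

```python
# Illustrative sketch: converting a relative risk reduction into an absolute risk
# reduction (ARR) and number needed to treat (NNT). The 8% baseline stress fracture
# incidence is hypothetical; the review reports only the ~20% relative reduction.

def arr_and_nnt(baseline_risk: float, relative_reduction: float):
    """Return (absolute risk reduction, number needed to treat)."""
    treated_risk = baseline_risk * (1 - relative_reduction)
    arr = baseline_risk - treated_risk
    return arr, 1 / arr


if __name__ == "__main__":
    arr, nnt = arr_and_nnt(baseline_risk=0.08, relative_reduction=0.20)
    print(f"Absolute risk reduction: {arr:.1%}; NNT ~ {nnt:.0f} recruits supplemented")
```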

In addition to nutritional and pharmacologic therapy, activity modification may have a role in injury prevention. Gait retraining has been identified as a potential intervention for reducing stress fractures in patients with poor biomechanics.47 Crowell and Davis52 investigated the effect of gait retraining on tibial loading in runners. After 1 month of gait retraining, tibial acceleration while running decreased by 50%, vertical force loading rate by 30%, and peak vertical impact force by 20%. Such studies indicate the importance of proper mechanics during repetitive activity, especially in patients less accustomed to the rigorous training that new military recruits undergo. However, whether these reduced loads translate into a reduced risk of stress fracture remains unclear. In addition, biomechanical shoe orthoses may lower the stress fracture risk in military recruits by reducing peak tibial strain.53 Warden and colleagues54 found that a mechanical loading program was effective in enhancing the structural properties of bone in rats, leading the authors to hypothesize that a similar program aimed at modifying bone structure in humans could help prevent stress fracture. Although there have been no studies of such a strategy in humans, pretraining may be an area for future research, especially for military recruits.

Conclusion

Compared with the general population, members of the military (new recruits in particular) are at increased risk for bone stress injuries. Most of these injuries occur during basic training, when recruits significantly increase their repetitive physical activity. Although the exact pathophysiology of stress injury is debated, nutritional and metabolic abnormalities are contributors. The indolent nature of these injuries, and their high rate of false-negative plain radiographs, may result in a significant delay in diagnosis in the absence of advanced imaging studies. Although a majority of injuries heal with nonoperative management and NWB, several patterns, especially those on the tension side of the bone, are at high risk for progression to fracture and nonunion. These include tension-side femoral neck stress injuries and anterior tibial cortex fractures. There should be a low threshold for operative management in the setting of delayed union or failed nonoperative therapy. Of equal importance to the orthopedic management of these injuries is the management of underlying systemic deficits, which may have predisposed the patient to injury in the first place. Supplementation with vitamin D and calcium can be an important prophylaxis against stress injury. In addition, military recruits and athletes with underlying metabolic or hormonal deficiencies should receive proper attention, with a focus on balancing energy intake and energy expenditure. Stress injury leading to fracture, which is increasingly common in military populations, often requires a multimodal approach to treatment and subsequent prevention.

Am J Orthop. 2017;46(4):176-183. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Briethaupt MD. Zur Pathologie des menschlichen Fusses [To the pathology of the human foot]. Med Zeitung. 1855;24:169-177.

2. Berger FH, de Jonge MC, Maas M. Stress fractures in the lower extremity. Eur J Radiol. 2007;62(1):16-26.

3. Almeida SA, Williams KM, Shaffer RA, Brodine SK. Epidemiological patterns of musculoskeletal injuries and physical training. Med Sci Sports Exerc. 1999;31(8):1176-1182.

4. Jones BH, Thacker SB, Gilchrist J, Kimsey CD, Sosin DM. Prevention of lower extremity stress fractures in athletes and soldiers: a systematic review. Epidemiol Rev. 2002;24(2):228-247.

5. Jacobs JM, Cameron KL, Bojescul JA. Lower extremity stress fractures in the military. Clin Sports Med. 2014;33(4):591-613.

6. Waterman BR, Gun B, Bader JO, Orr JD, Belmont PJ. Epidemiology of lower extremity stress fractures in the United States military. Mil Med. 2016;181(10):1308-1313.

7. Hsu LL, Nevin RL, Tobler SK, Rubertone MV. Trends in overweight and obesity among 18-year-old applicants to the United States military, 1993–2006. J Adolesc Health. 2007;41(6):610-612.

8. Stanitski CL, McMaster JH, Scranton PE. On the nature of stress fractures. Am J Sports Med. 1978;6(6):391-396.

9. Johnson LC. Histogenesis of stress fractures [annual lecture]. Washington, DC: Armed Forces Institute of Pathology; 1963.

10. Friedenberg ZB. Fatigue fractures of the tibia. Clin Orthop Relat Res. 1971;(76):111-115.

11. Cameron KL, Peck KY, Owens BD, et al. Biomechanical risk factors for lower extremity stress fracture. Orthop J Sports Med. 2013;1(4 suppl).

12. Knapik J, Montain S, McGraw S, Grier T, Ely M, Jones B. Stress fracture risk factors in basic combat training. Int J Sports Med. 2012;33(11):940-946.

13. Behrens SB, Deren ME, Matson A, Fadale PD, Monchik KO. Stress fractures of the pelvis and legs in athletes. Sports Health. 2013;5(2):165-174.

14. Mountjoy M, Sundgot-Borgen J, Burke L, et al. The IOC consensus statement: beyond the female athlete triad—relative energy deficiency in sport (RED-S). Br J Sports Med. 2014;48(7):491-497.

15. Maitra RS, Johnson DL. Stress fractures. Clinical history and physical examination. Clin Sports Med. 1997;16(2):259-274.

16. Nieves JW, Melsop K, Curtis M, et al. Nutritional factors that influence change in bone density and stress fracture risk among young female cross-country runners. PM R. 2010;2(8):740-750.

17. Beck BR, Matheson GO, Bergman G, et al. Do capacitively coupled electric fields accelerate tibial stress fracture healing? Am J Sports Med. 2008;36(3):545-553.

18. Simkin A, Leichter I, Giladi M, Stein M, Milgrom C. Combined effect of foot arch structure and an orthotic device on stress fractures. Foot Ankle. 1989;10(1):25-29.

19. Johnson AW, Weiss CB, Wheeler DL. Stress fractures of the femoral shaft in athletes—more common than expected: a new clinical test. Am J Sports Med. 1994;22(2):248-256.

20. Clement D, Ammann W, Taunton J, et al. Exercise-induced stress injuries to the femur. Int J Sports Med. 1993;14(6):347-352.

21. Wood PJ, Barth JH, Freedman DB, Perry L, Sheridan B. Evidence for the low dose dexamethasone suppression test to screen for Cushing’s syndrome—recommendations for a protocol for biochemistry laboratories. Ann Clin Biochem. 1997;34(pt 3):222-229.

22. Bennell K, Matheson G, Meeuwisse W, Brukner P. Risk factors for stress fractures. Sports Med. 1999;28(2):91-122.

23. Prather JL, Nusynowitz ML, Snowdy HA, Hughes AD, McCartney WH, Bagg RJ. Scintigraphic findings in stress fractures. J Bone Joint Surg Am. 1977;59(7):869-874.

24. Arendt EA, Griffiths HJ. The use of MR imaging in the assessment and clinical management of stress reactions of bone in high-performance athletes. Clin Sports Med. 1997;16(2):291-306.

25. Boden BP, Osbahr DC. High-risk stress fractures: evaluation and treatment. J Am Acad Orthop Surg. 2000;8(6):344-353.

26. Gaeta M, Minutoli F, Scribano E, et al. CT and MR imaging findings in athletes with early tibial stress injuries: comparison with bone scintigraphy findings and emphasis on cortical abnormalities. Radiology. 2005;235(2):553-561.

27. Matheson GO, Clement DB, Mckenzie DC, Taunton JE, Lloyd-Smith DR, Macintyre JG. Stress fractures in athletes. Am J Sports Med. 1987;15(1):46-58.

28. Iwamoto J, Takeda T. Stress fractures in athletes: review of 196 cases. J Orthop Sci. 2003;8(3):273-278.

29. Noakes TD, Smith JA, Lindenberg G, Wills CE. Pelvic stress fractures in long distance runners. Am J Sports Med. 1985;13(2):120-123.

30. Neubauer T, Brand J, Lidder S, Krawany M. Stress fractures of the femoral neck in runners: a review. Res Sports Med. 2016;24(3):283-297.

31. Evans JT, Guyver PM, Kassam AM, Hubble MJW. Displaced femoral neck stress fractures in Royal Marine recruits—management and results of operative treatment. J R Nav Med Serv. 2012;98(2):3-5.

32. Orava S. Stress fractures. Br J Sports Med. 1980;14(1):40-44.

33. Niva MH, Kiuru MJ, Haataja R, Pihlajamäki HK. Fatigue injuries of the femur. J Bone Joint Surg Br. 2005;87(10):1385-1390.

34. Weishaar MD, McMillian DJ, Moore JH. Identification and management of 2 femoral shaft stress injuries. J Orthop Sports Phys Ther. 2005;35(10):665-673.

35. Salminen ST, Pihlajamäki HK, Visuri TI, Böstman OM. Displaced fatigue fractures of the femoral shaft. Clin Orthop Relat Res. 2003;(409):250-259.

36. Giladi M, Ahronson Z, Stein M, Danon YL, Milgrom C. Unusual distribution and onset of stress fractures in soldiers. Clin Orthop Relat Res. 1985;(192):142-146.

37. Matheson GO, Clement DB, Mckenzie DC, Taunton JE, Lloyd-Smith DR, Macintyre JG. Stress fractures in athletes. Am J Sports Med. 1987;15(1):46-58.

38. Green NE, Rogers RA, Lipscomb AB. Nonunions of stress fractures of the tibia. Am J Sports Med. 1985;13(3):171-176.

39. Orava S, Hulkko A. Stress fracture of the mid-tibial shaft. Acta Orthop Scand. 1984;55(1):35-37.

40. Swenson EJ Jr, DeHaven KE, Sebastianelli WJ, Hanks G, Kalenak A, Lynch JM. The effect of a pneumatic leg brace on return to play in athletes with tibial stress fractures. Am J Sports Med. 1997;25(3):322-328.

41. Rettig AC, Shelbourne KD, McCarroll JR, Bisesi M, Watts J. The natural history and treatment of delayed union stress fractures of the anterior cortex of the tibia. Am J Sports Med. 1988;16(3):250-255.

42. Varner KE, Younas SA, Lintner DM, Marymont JV. Chronic anterior midtibial stress fractures in athletes treated with reamed intramedullary nailing. Am J Sports Med. 2005;33(7):1071-1076.

43. Donahue SW, Sharkey NA. Strains in the metatarsals during the stance phase of gait: implications for stress fractures. J Bone Joint Surg Am. 1999;81(9):1236-1244.

44. Giuliani J, Masini B, Alitz C, Owens BD. Barefoot-simulating footwear associated with metatarsal stress injury in 2 runners. Orthopedics. 2011;34(7):e320-e323.

45. DeLee JC, Evans JP, Julian J. Stress fracture of the fifth metatarsal. Am J Sports Med. 1983;11(5):349-353.

46. Lappe JM, Stegman MR, Recker RR. The impact of lifestyle factors on stress fractures in female army recruits. Osteoporos Int. 2001;12(1):35-42.

47. Friedl KE, Evans RK, Moran DS. Stress fracture and military medical readiness: bridging basic and applied research. Med Sci Sports Exerc. 2008;40(11 suppl):S609-S622.

48. Lappe J, Cullen D, Haynatzki G, Recker R, Ahlf R, Thompson K. Calcium and vitamin D supplementation decreases incidence of stress fractures in female navy recruits. J Bone Miner Res. 2008;23(5):741-749.

49. DIPART (Vitamin D Individual Patient Analysis of Randomized Trials) Group. Patient level pooled analysis of 68 500 patients from seven major vitamin D fracture trials in US and Europe. BMJ. 2010;340:b5463.

50. Duckham RL, Peirce N, Meyer C, Summers GD, Cameron N, Brooke-Wavell K. Risk factors for stress fracture in female endurance athletes: a cross-sectional study. BMJ Open. 2012;2(6).

51. Milgrom C, Finestone A, Novack V, et al. The effect of prophylactic treatment with risedronate on stress fracture incidence among infantry recruits. Bone. 2004;35(2):418-424.

52. Crowell HP, Davis IS. Gait retraining to reduce lower extremity loading in runners. Clin Biomech. 2011;26(1):78-83.

53. Ekenman I, Milgrom C, Finestone A, et al. The role of biomechanical shoe orthoses in tibial stress fracture prevention. Am J Sports Med. 2002;30(6):866-870.

54. Warden SJ, Hurst JA, Sanders MS, Turner CH, Burr DB, Li J. Bone adaptation to a mechanical loading program significantly increases skeletal fatigue resistance. J Bone Miner Res. 2005;20(5):809-816.


Applying Military Strategy to Complex Knee Reconstruction: Tips for Planning and Executing Advanced Surgery

Article Type
Changed
Thu, 09/19/2019 - 13:21
Display Headline
Applying Military Strategy to Complex Knee Reconstruction: Tips for Planning and Executing Advanced Surgery

Take-Home Points

  • Thorough preoperative planning is imperative and inclusive of history, physical examination, radiographs, and MRI and potentially CT scan.
  • Plan carefully for needed graft sources (autografts and allografts).
  • Rehabilitation starts preoperatively and a detailed individualized plan is often warranted.
  • Indicated ligamentous repair or augmented repair with reconstruction is more likely to succeed when performed within 2 weeks of injury.
  • Complex combined knee restoration surgery can be safely performed in an outpatient setting.

Complex combined knee restoration surgery can be safely performed in an outpatient setting. The term complex knee restoration is used to describe management of knee injuries that are more involved—that is, there is damage to the menisci, cartilage, ligaments, and bones. Management entails not only determining the best treatment options but navigating the more complex logistics of making sure all necessary grafts (fresh and frozen allografts and autografts), implants, and instrumentation are readily available as these cases come to fruition.

The military healthcare paradigm often involves the added logistics of transporting the service member to the correct military treatment facility at the correct time and ensuring the patient’s work-up is complete before he or she arrives for the complex knee restoration. Such cases require significant rehabilitation and time away from family and work, so anything that reduces the morbidity of the surgical undertaking and the overall “morbidity footprint” of time away, and that helps the patient return to normal function, is value added and worthy of our attention and diligence in developing an efficient system for managing complex cases.

The globally integrated military healthcare system that is in place has matured over the past decades to allow for the significant majority of the necessary preoperative work-up to be performed at a soldier’s current duty station, wherever in the world that may be, under the guidance of local healthcare providers with specific inputs from the knee restoration surgeon who eventually receives the patient for the planned surgical intervention.

Algorithm for Knee Restoration Planning

Alignment Issues

The first task is to confirm the realignment indication. Realignment may be performed with a proximal opening-wedge medial tibial osteotomy (OWMTO), a distal opening-wedge lateral femoral osteotomy (OWLFO), or a tibial tubercle osteotomy (TTO).1 Given the reproducible clinical improvement achieved and the robust nature of the fixation, these osteotomies are often the first surgical step in complex knee restorations.2 The final determination, made by the surgeon in consultation with the patient, is whether to perform the indicated osteotomy alone or in combination with the rest of the planned restoration surgery. In the vast majority of cases I have managed over the past 2 decades, I have performed the entire knee restoration in a single operation.3 Within the past 5 years, combining the procedures has become even more feasible with the important progress made in multimodal pain management and with the close collaboration of anesthesiologists.4

Meniscus and Cartilage Status

The integration status of meniscus and cartilage within the medial and lateral tibiofemoral compartments is crucial to the comprehensive restoration plan. In fact, the success of the restoration can be said to be dependent on the functional status and health of meniscus and cartilage—which either succeed together or fail apart.

Important covariables are age, prior surgical interventions, activity level expected or allowed after surgery, and the size, location, and depth of the cartilage injury.5 Whether a cartilage injury is monopolar or bipolar is determined with advanced imaging (magnetic resonance imaging [MRI], computed tomography [CT], weight-bearing radiography) along with analysis of a thorough history (including a review of prior operative reports and arthroscopic images) and a knee examination. Bipolar injuries that involve the condyle and juxtaposed plateau often bode poorly for good clinical outcomes, compared with unipolar lesions, which usually involve the condylar surfaces in isolation. The same thinking applies to the patellofemoral compartment. Cartilage lesions that involve the juxtaposed surfaces of the patella and trochlear groove fare worse than isolated lesions, which are more amenable to cartilage restoration options. The literature on potential cartilage restoration options for the patella and trochlea is expanding. I use the 3-dimensional cartilage restoration option of a fresh patellar osteochondral allograft (OCA) for high-grade cartilage lesions thought to be clinically significant. Other options, such as microfracture, cell-based cartilage restoration, and Osteochondral Autograft Transfer System (Arthrex) procedures (from the thinner condylar cartilage), have had varied outcomes for patellar lesions. According to more recent literature and a review of my clinical results, fresh patellar OCAs are a good option for patellar lesions.6 Similarly, trochlear lesions can be managed with microfracture, cell-based therapies, or fresh OCAs, depending on surgeon preference.

Functional total or subtotal meniscectomies are often best managed with meniscal allograft transplantation (MAT). An intact or replaced medial or lateral meniscus works synergistically with any planned anterior cruciate ligament (ACL) reconstruction. Again, the adage that meniscus and cartilage succeed together or fail apart is appropriate when planning complex knee restoration. Signs of extrusion or joint-space narrowing and root avulsion or significant loss of meniscal tissue, visualized on MRI or on prior surgical images, often help substantiate a MAT plan. MAT has had the best long-term results when performed in compartments with cartilage damage limited to grade I and grade II changes, in stable knees, and in knees that can be concurrently stabilized.5 Technological advances have increased the value of MAT by limiting the morbidity of the operation and thus allowing for other surgery to be performed concomitantly and safely as part of comprehensive knee restoration. Over the past 20 years, I have arthroscopically performed MAT with bone plugs for medial and lateral procedures, and my results with active-duty soldiers have been promising, paralleling the clinic success reported in the literature.5 Alignment must be considered when performing MAT or cartilage restoration. If the addition of meniscal transplantation or cartilage restoration leaves the knee with residual malalignment of 6° or more, corrective osteotomy is performed.

My view and practice have been to plan for an unloading chondroprotective osteotomy. The goal is a balanced mechanical axis, whether achieved with mere joint-space restoration or with an osteotomy added.

Ligament Status

A comprehensive plan for establishing ligamentous stability is paramount to the overall clinical success of complex knee restorations. Meniscus and cartilage restoration efforts are wasted if clinically significant ligamentous laxity is not concomitantly treated with reconstruction surgery. Revision ACL surgery is by far the most commonly performed surgery in complex knee cases. Diligence in interpreting advanced MRI and physical examination findings is required to make sure there are no concomitant patholaxities in the medial, lateral, posterior, posteromedial, and posterolateral ligamentous complexes. Appropriate ligamentous reconstruction is warranted to maximize clinical results in complex knee restorations. Such cases more commonly require allograft tissue, as the availability of autograft tissue is the limiting issue when 2 or more ligaments are reconstructed. Military treatment facilities in which comprehensive knee restorations are performed have soft-tissue allografts on hand at all times. Having tissue readily available makes it less imperative to determine the most appropriate combined ligamentous reconstruction before the patient arrives—a process that is often difficult. This situation is in contradistinction to the need for size-matched frozen meniscal allograft and fresh cartilage tissues, both of which must be procured from a tissue bank in advance of the planned restoration surgery.

Rehabilitation Plan

The rehabilitation plan is driven by the part of the complex knee restoration that demands the most caution with respect to weight-bearing and range of motion (ROM) during the first 6 weeks after surgery. The most limiting restorative surgeries involve meniscus and cartilage. Recent clinical trial results support weight-bearing soon after tibial osteotomy performed in the absence of meniscus and cartilage restoration that would otherwise limit weight-bearing for 6 weeks.7 Therefore, most of these complex knee restorations are appropriately managed with a hinged brace locked in extension for toe-touch weight-bearing ambulation, with ROM usually limited to 0° to 90° during the first 6 weeks. Quadriceps rehabilitation with straight-leg raises and isometric contractions is prescribed with a focus on maintaining full extension as the default resting knee position until normalized resting quadriceps tone returns. Full weight-bearing and advancement to full flexion are routinely allowed by 6 weeks.
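As one way to keep these restrictions explicit for the rehabilitation team, the short sketch below encodes the phased plan described above as a simple data structure; the phase labels and the brace-weaning note are assumptions added for illustration, not a published protocol.

```python
# A minimal sketch (not a published protocol) encoding the phased restrictions
# described above, so deviations for an individual patient can be documented
# explicitly. Phase labels and the brace-weaning note are assumptions.

from dataclasses import dataclass

@dataclass
class RehabPhase:
    weeks: str           # postoperative window
    weight_bearing: str  # weight-bearing status
    rom: str             # allowed knee range of motion
    brace: str           # brace instructions

PROTOCOL = [
    RehabPhase("0-6", "toe-touch only", "0-90 degrees",
               "hinged brace locked in extension for ambulation"),
    RehabPhase("6+", "full weight-bearing", "advance to full flexion",
               "wean brace as quadriceps control returns"),
]

if __name__ == "__main__":
    for phase in PROTOCOL:
        print(f"Weeks {phase.weeks}: {phase.weight_bearing}, "
              f"ROM {phase.rom}, {phase.brace}")
```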

Case Report

A 41-year-old male service member who was overseas was referred to my clinic for high tibial osteotomy consideration and possible revision ACL reconstruction. His symptoms were medial pain, recurrent instability, and patellofemoral crepitance. Three years earlier, he underwent autograft transtibial ACL reconstruction with significant débridement of the medial meniscus. Before his trip to the United States, I asked that new MRI scans, full-length standing hip–knee–ankle bilateral alignment radiographs, and a 4-view weight-bearing knee series (including a posteroanterior Rosenberg view) be obtained and sent for my review (Figure 1).

Figure 1.
In retrospect, this request obviated the need for the patient to make a second overseas trip.

Review of the patient’s detailed preoperative imaging work-up and electronic medical record (available through the military’s healthcare system) made it clear that far more surgical intervention was needed than originally assumed. A significant full-thickness chondral lesion of the patella and a subtotal medial meniscectomy would necessitate patellar cartilage restoration and medial MAT in addition to the high tibial osteotomy and revision ACL reconstruction.

Had this patient been sent through the military medical evacuation system, he would have had to make 2 overseas trips—one trip for preoperative evaluation and advanced imaging, whereby he would have been placed on a match list and had to wait for a requested meniscal allograft and an appropriate graft for his patella, and the other trip for his complex surgery. Fortunately, the military’s integrated healthcare network with true 2-way communication and the collaborative use of integrated electronic medical records proved extremely valuable in making management of this complex knee restoration as efficient as possible. From the perspective of the soldier and his military unit, only 1 big overseas trip was needed; from the perspective of the military healthcare system, responsible use of healthcare personnel and monetary resources and well-planned complex knee restoration surgery saved a knee and allowed a soldier-athlete to rejoin the fields of friendly strife.


This patient had undergone a functionally complete medial meniscectomy and had significant medial compartment pain, varus alignment, and minimal medial joint-space narrowing (the cartilage of the plateau and condyle was assumed to be grossly intact), plus patellofemoral pain and crepitance with a large, high-grade posttraumatic patellar chondral lesion and normal patellofemoral alignment. He also had an isolated failed ACL graft from prior ACL reconstruction. The previous hardware placement was analyzed, and it was determined that the femoral interference screw could be left in place and that the tibial interference screw most likely would be removed. The mechanical axis determined from the bilateral long-leg standing images dictated the need for a proximal OWMTO with correction of up to 8° to allow the axis to cross the center of the knee; 8° is the measured correction needed to move the axis from its path through the medial compartment to a more balanced position across the middle of the knee.
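For readers unfamiliar with opening-wedge planning, the sketch below shows the generic trigonometric approximation that relates a planned angular correction to the gap opened at the osteotomy. This is a textbook geometric estimate, not the author's stated planning method, and the 55-mm osteotomy length is a hypothetical value used only for illustration.

```python
# Generic geometric approximation (not the author's stated planning method): the
# anteromedial gap needed at the osteotomy for a given angular correction, assuming
# a hinge at the far (lateral) cortex. The 55 mm osteotomy length is hypothetical.

import math

def opening_wedge_gap_mm(correction_deg: float, osteotomy_length_mm: float) -> float:
    """Approximate wedge opening (mm) for a planned angular correction."""
    return osteotomy_length_mm * math.tan(math.radians(correction_deg))


if __name__ == "__main__":
    gap = opening_wedge_gap_mm(correction_deg=8.0, osteotomy_length_mm=55.0)
    print(f"Approximate medial opening for an 8 degree correction: {gap:.1f} mm")
```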

The overall plan encompassed major concomitant corrective and restorative surgery: tibial osteotomy, medial MAT, revision ACL reconstruction, and fresh mega-patellar OCA. Once the frozen meniscus and eventually the fresh patella (both matched for size) were obtained, arrangements for the patient’s trip for the complex surgery were finalized.

Surgery was started with brief arthroscopic evaluation to confirm the overall appropriateness of the planned procedure and to determine if any other minor deficiencies would warrant operative intervention. Once confirmed, the restoration proceeded as planned. The OWMTO was performed with a PEEK (polyetheretherketone) wedge implant (iBalance; Arthrex) followed by arthroscopic preparation for medial MAT with removal of any meniscal remnants and placement of passing sutures (Figure 2A).
Figure 2.
The meniscus was delivered across the compartment through an enlarged medial portal. The posterior horn bone plug was secured in the retrosocket with sutures tied off to an anterior tibial cortical 2-hole button (Figure 2B). The body of the posterior third of the meniscus was secured to the posterior capsule by tying the 2 previously placed vertical sutures to each other over the intervening capsule. The anterior horn bone plug (10 mm in diameter × 7 mm thick) was then secured within a 10-mm socket drilled antegrade to a depth of 10 mm with a SwiveLock anchor (Arthrex) for interference bony fixation and recapitulation of the normal hoop stresses. Inside-out sutures were placed to secure the capsule to the meniscus and thereby prevent iatrogenic meniscal extrusion. A standard all-inside allograft revision ACL reconstruction was performed with an 11-mm FlipCutter and guide system (Arthrex) to make the femoral and tibial retrosockets.
Figure 3.
Passing sutures were used to deploy the ACL graft construct, which was fashioned into a quadruple-stranded GraftLink construct (Arthrex) from a 28-cm allograft peroneus longus tendon (Figure 3).

When the arthroscopic portion of the surgery was finished, a medial parapatellar arthrotomy was made to allow the patella to be inverted and complete fresh mega-patellar OCA placement (Figure 4).
Figure 4.
A drill guide system was used to prepare the host patella with the largest contained circular socket (35 mm) with a 1-mm to 2-mm cortical margin to a marginal bony depth and an 8- to 10-mm central bony depth.
Figure 5.
The donor patella was then prepared on the graft preparation guide to allow a mega-patellar osteochondral plug to be press-fit into the recipient socket after thorough pulse lavaging of the bony portion of the graft to remove as many of the marrow cellular elements as possible (Figure 5). After appropriate tracking was confirmed, the arthrotomy and skin incision were closed.

The knee was placed in a ROM brace locked in full extension. The patient was able to do straight-leg raises and calf pumps in the recovery room and was discharged home with a saphenous nerve block and an iPACK (Interspace between the Popliteal Artery and the Capsule of the posterior Knee) nerve block in place. Home-based therapy was started immediately. After the patient’s first postoperative visit, formal therapy (discussed earlier) was initiated (Figure 6).
Figure 6.
Toe-touch weight-bearing with the brace locked in extension and ROM limited to 0° to 90° were maintained until 6 weeks, when full weight-bearing and full ROM were allowed. The rehabilitation course was uneventful. The patient continued on active duty and completed his military service, retiring 3 years later with 20 years of service.

Discussion

All-inside GraftLink ACL reconstruction with cortical suspensory fixation appears well suited to combined medial and lateral MAT and/or cartilage restoration—whether it be large fresh OCA combined with medial MAT (as in this patient’s case) or another form of cartilage restoration. Arthroscopic MAT with anatomically fashioned and placed bone plugs minimizes the morbidity within the notch footprints and allows for discrete revision socket formation for both femoral and tibial ACL graft placement. In this case, preparation for the medial MAT and ACL sockets was followed by MAT/ACL construct implantation and secure fixation. The arthrotomy was thereby minimized and placed to allow for efficient mega-patellar OCA graft placement.

Over the past decade, I have performed similar concomitant procedures using the same surgical principles that allow for efficient and reproducible complex knee restoration (Figure 7).

Figure 7.
Common examples are multiligamentous reconstructions (ACL–posterior cruciate ligament–posterolateral corner, ACL–posterior cruciate ligament–medial collateral ligament, ACL–anterolateral ligament, and ACL–medial patellofemoral ligament) combined with concomitant meniscus and cartilage restoration and various osteotomies.

Although use of an algorithm for the management of complex knee restorations is not universally feasible, I offer guidelines for complex knee injuries:

  • At each decision point, determine whether the knee and the patient can withstand the planned surgical intervention.
  • After deciding to proceed with knee restoration, list the meniscus, cartilage, and ligament injuries that must be addressed.
  • Determine which repairs (meniscus, cartilage, ligament) are warranted. Repairs generally are best performed within a period of 7 to 14 days.
  • Determine which ligament injuries warrant reconstruction. Allograft tissue typically is used for multiligament reconstruction.
  • Rank-order the ligament reconstruction requirements. It is fine to proceed with all of the reconstructions if the case is moving smoothly, if there are no developing tourniquet-time issues, and if the soft-tissue envelope is responding as expected.
  • Consider autograft and/or allograft tissue needs for concomitant or staged meniscus and cartilage restoration options/requirements.


Am J Orthop. 2017;46(4):170-175, 202. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Uquillas C, Rossy W, Nathasingh CK, Strauss E, Jazrawi L, Gonzalez-Lomas G. Osteotomies about the knee: AAOS exhibit selection. J Bone Joint Surg Am. 2014;96(24):e199.

2. Mehl J, Paul J, Feucht MJ, et al. ACL deficiency and varus osteoarthritis: high tibial osteotomy alone or combined with ACL reconstruction? Arch Orthop Trauma Surg. 2017;137(2):233-240.

3. Scordino LE, DeBerardino TM. Surgical treatment of osteoarthritis in the middle-aged athlete: new horizons in high tibial osteotomies. Sports Med Arthrosc. 2013;21(1):47-51.

4. Ferrari D, Lopes TJ, França PF, Azevedo FM, Pappas E. Outpatient versus inpatient anterior cruciate ligament reconstruction: a systematic review with meta-analysis. Knee. 2017;24(2):197-206.

5. Weber AE, Gitelis ME, McCarthy MA, Yanke AB, Cole BJ. Malalignment: a requirement for cartilage and organ restoration. Sports Med Arthrosc. 2016;24(2):e14-e22.

6. Prince MR, King AH, Stuart MJ, Dahm DL, Krych AJ. Treatment of patellofemoral cartilage lesions in the young, active patient. J Knee Surg. 2015;28(4):285-295.

7. Scordino LE, DeBerardino TM. Surgical treatment of osteoarthritis in the middle-aged athlete: new horizons in high tibial osteotomies. Sports Med Arthrosc. 2013;21(1):47-51.

Author and Disclosure Information

Author’s Disclosure Statement: The author reports no actual or potential conflict of interest in relation to this article.


Functional total or subtotal meniscectomies are often best managed with meniscal allograft transplantation (MAT). An intact or replaced medial or lateral meniscus works synergistically with any planned anterior cruciate ligament (ACL) reconstruction. Again, the adage that meniscus and cartilage succeed together or fail apart is appropriate when planning complex knee restoration. Signs of extrusion or joint-space narrowing and root avulsion or significant loss of meniscal tissue, visualized on MRI or on prior surgical images, often help substantiate a MAT plan. MAT has had the best long-term results when performed in compartments with cartilage damage limited to grade I and grade II changes, in stable knees, and in knees that can be concurrently stabilized.5 Technological advances have increased the value of MAT by limiting the morbidity of the operation and thus allowing for other surgery to be performed concomitantly and safely as part of comprehensive knee restoration. Over the past 20 years, I have arthroscopically performed MAT with bone plugs for medial and lateral procedures, and my results with active-duty soldiers have been promising, paralleling the clinic success reported in the literature.5 Alignment must be considered when performing MAT or cartilage restoration. If the addition of meniscal transplantation or cartilage restoration leaves the knee with residual malalignment of 6° or more, corrective osteotomy is performed.

My view and practice have been to plan for an unloading chondroprotective osteotomy. The goal is a balanced mechanical axis, whether achieved with mere joint-space restoration or with an osteotomy added.

Ligament Status

A comprehensive plan for establishing ligamentous stability is paramount to the overall clinical success of complex knee restorations. Meniscus and cartilage restoration efforts are wasted if clinically significant ligamentous laxity is not concomitantly treated with reconstruction surgery. Revision ACL surgery is by far the most commonly performed surgery in complex knee cases. Diligence in interpreting advanced MRI and physical examination findings is required to make sure there are no concomitant patholaxities in the medial, lateral, posterior, posteromedial, and posterolateral ligamentous complexes. Appropriate ligamentous reconstruction is warranted to maximize clinical results in complex knee restorations. Such cases more commonly require allograft tissue, as the availability of autograft tissue is the limiting issue with 2 or more ligament reconstructions. Military treatment facilities, in which comprehensive knee restorations are performed, have soft-tissue allografts on hand at all times. Having tissue readily available makes it less imperative to determine the most appropriate combined ligamentous reconstruction surgery before the patient arrives—a process that is often difficult. This situation is in contradistinction to the need for specific matched-for-size allograft frozen meniscus and fresh cartilage tissues, both of which require tissue-form procurement in advance of planned restoration surgery.

Rehabilitation Plan

The rehabilitation plan is driven by the part of the complex knee restoration that demands the most caution with respect to weight-bearing and range of motion (ROM) during the first 6 weeks after surgery. The most limiting restorative surgeries involve meniscus and cartilage. Recent clinical trial results support weight-bearing soon after tibial osteotomy performed in the absence of meniscus and cartilage restoration that would otherwise limit weight-bearing for 6 weeks.7 Therefore, most of these complex knee restorations are appropriately managed with a hinged brace locked in extension for toe-touch weight-bearing ambulation, with ROM usually limited to 0° to 90° during the first 6 weeks. Quadriceps rehabilitation with straight-leg raises and isometric contractions is prescribed with a focus on maintaining full extension as the default resting knee position until normalized resting quadriceps tone returns. Full weight-bearing and advancement to full flexion are routinely allowed by 6 weeks.

Case Report

A 41-year-old male service member who was overseas was referred to my clinic for high tibial osteotomy consideration and possible revision ACL reconstruction. His symptoms were medial pain, recurrent instability, and patellofemoral crepitance. Three years earlier, he underwent autograft transtibial ACL reconstruction with significant débridement of the medial meniscus. Before his trip to the United States, I asked that new MRI scans, full-length standing hip–knee–ankle bilateral alignment radiographs, and a 4-view weight-bearing knee series (including a posteroanterior Rosenberg view) be obtained and sent for my review (Figure 1).

Figure 1.
In retrospect, this request obviated the need for the patient to make a second overseas trip.

Review of the patient’s detailed preoperative imaging work-up and electronic medical record (available through the military’s healthcare system) made it clear that far more surgical intervention was needed than originally assumed. A significant full-thickness chondral lesion of the patella and a subtotal medial meniscectomy would necessitate patellar cartilage restoration and medial MAT in addition to the high tibial osteotomy and revision ACL reconstruction.

Had this patient been sent through the military medical evacuation system, he would have had to make 2 overseas trips—one trip for preoperative evaluation and advanced imaging, whereby he would have been placed on a match list and had to wait for a requested meniscal allograft and an appropriate graft for his patella, and the other trip for his complex surgery. Fortunately, the military’s integrated healthcare network with true 2-way communication and the collaborative use of integrated electronic medical records proved extremely valuable in making management of this complex knee restoration as efficient as possible. From the perspective of the soldier and his military unit, only 1 big overseas trip was needed; from the perspective of the military healthcare system, responsible use of healthcare personnel and monetary resources and well-planned complex knee restoration surgery saved a knee and allowed a soldier-athlete to rejoin the fields of friendly strife.

 

 


This patient had undergone functional complete medial meniscectomy and had significant medial compartment pain, varus alignment, and minimal medial joint-space narrowing (assumed grossly intact cartilage about plateau and condyle), plus patellofemoral pain and crepitance with a large high-grade posttraumatic patellar chondral lesion with normal patellofemoral alignment. He also had an isolated failed ACL graft from prior ACL reconstruction. The previous hardware placement was analyzed, and it was determined that the femoral interference screw could be left in place and that the tibial interference screw most likely would be removed. The mechanical axis determined from the bilateral long-leg standing images dictated a need for proximal OWMTO for correction up to 8° to allow the axis to cross the center of the knee. The 8° correction is the measured correction needed to move the axis from its pass through the medial compartment to a more balanced position across the middle of the knee.

The overall plan encompassed major concomitant corrective and restorative surgery: tibial osteotomy, medial MAT, revision ACL reconstruction, and fresh mega-patellar OCA. Once the frozen meniscus and eventually the fresh patella (both matched for size) were obtained, arrangements for the patient’s trip for the complex surgery were finalized.

Surgery was started with brief arthroscopic evaluation to confirm the overall appropriateness of the planned procedure and to determine if any other minor deficiencies would warrant operative intervention. Once confirmed, the restoration proceeded as planned. The OWMTO was performed with a PEEK (polyetheretherketone) wedge implant (iBalance; Arthrex) followed by arthroscopic preparation for medial MAT with removal of any meniscal remnants and placement of passing sutures (Figure 2A).
Figure 2.
The meniscus was delivered across the compartment through an enlarged medial portal. The posterior horn bone plug was secured in the retrosocket with sutures tied off to an anterior tibial cortical 2-hole button (Figure 2B). The body of the posterior third of the meniscus was secured to the posterior capsule by tying the 2 previously placed vertical sutures to each other over the intervening capsule. The anterior horn bone plug (10 mm in diameter × 7 mm thick) was then secured within a 10-mm socket drilled antegrade to a depth of 10 mm with a SwiveLock anchor (Arthrex) for interference bony fixation and recapitulation of the normal hoop stresses. Inside-out sutures were placed to secure the capsule to the meniscus and thereby prevent iatrogenic meniscal extrusion. A standard all-inside allograft revision ACL reconstruction was performed with an 11-mm FlipCutter and guide system (Arthrex) to make the femoral and tibial retrosockets.
Figure 3.
Passing sutures were used to deploy the ACL graft construct, which was fashioned into a quadruple-stranded GraftLink construct (Arthrex) from a 28-mm allograft peroneus longus tendon (Figure 3).

When the arthroscopic portion of the surgery was finished, a medial parapatellar arthrotomy was made to allow the patella to be inverted and complete fresh mega-patellar OCA placement (Figure 4).
Figure 4.
A drill guide system was used to prepare the host patella with the largest contained circular socket (35 mm) with a 1-mm to 2-mm cortical margin to a marginal bony depth and an 8- to 10-mm central bony depth.
Figure 5.
The donor patella was then prepared on the graft preparation guide to allow a mega-patellar osteochondral plug to be press-fit into the recipient socket after thorough pulse lavaging of the bony portion of the graft to negate as much of the marrow cellular elements as possible (Figure 5). After appropriate tracking was confirmed, the arthrotomy and skin incision were closed.

The knee was placed in a ROM brace locked in full extension. The patient was able to do straight-leg raises and calf pumps in the recovery room and was discharged home with a saphenous nerve block and an iPACK (Interspace between the Popliteal Artery and the Capsule of the posterior Knee) nerve block in place. Home-based therapy was started immediately. After the patient’s first postoperative visit, formal therapy (discussed earlier) was initiated (Figure 6).
Figure 6.
Toe-touch weight-bearing with the brace locked in extension and ROM limited to 0° to 90° were maintained until 6 weeks, when full weight-bearing and full ROM were allowed. The rehabilitation course was uneventful. The patient continued on active duty and completed his military service, retiring 3 years later with 20 years of service.

Discussion

All-inside GraftLink ACL reconstruction with cortical suspensory fixation appears well suited to combined medial and lateral MAT and/or cartilage restoration—whether it be large fresh OCA combined with medial MAT (as in this patient’s case) or another form of cartilage restoration. Arthroscopic MAT with anatomically fashioned and placed bone plugs minimizes the morbidity within the notch footprints and allows for discrete revision socket formation for both femoral and tibial ACL graft placement. In this case, preparation for the medial MAT and ACL sockets was followed by MAT/ACL construct implantation and secure fixation. The arthrotomy was thereby minimized and placed to allow for efficient mega-patellar OCA graft placement.

Over the past decade, I have performed similar concomitant procedures using the same surgical principles that allow for efficient and reproducible complex knee restoration (Figure 7).

Figure 7.
Common examples are multiligamentous reconstructions (ACL–posterior cruciate ligament–posterolateral corner, ACL–posterior cruciate ligament–medial collateral ligament, ACL–anterolateral ligament, and ACL–medial patellofemoral ligament) combined with concomitant meniscus and cartilage restoration and various osteotomies.

Although use of an algorithm for the management of complex knee restorations is not universally feasible, I offer guidelines for complex knee injuries:

  • At each decision point, determine whether the knee and the patient can withstand the planned surgical intervention.
  • After deciding to proceed with knee restoration, list the meniscus, cartilage, and ligament injuries that must be addressed.
  • Determine which repairs (meniscus, cartilage, ligament) are warranted. Repairs generally are best performed within a period of 7 to 14 days.
  • Determine which ligament injuries warrant reconstruction. Allograft tissue typically is used for multiligament reconstruction.
  • Rank-order the ligament reconstruction requirements. It is fine to proceed with all of the reconstructions if the case is moving smoothly, if there are no developing tourniquet-time issues, and if the soft-tissue envelope is responding as expected.
  • Consider autograft and/or allograft tissue needs for concomitant or staged meniscus and cartilage restoration options/requirements.


Am J Orthop. 2017;46(4):170-175, 202. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

Take-Home Points

  • Thorough preoperative planning is imperative and inclusive of history, physical examination, radiographs, and MRI and potentially CT scan.
  • Plan carefully for needed graft sources (autografts and allografts).
  • Rehabilitation starts preoperatively and a detailed individualized plan is often warranted.
  • Indicated ligamentous repair or augmented repair with reconstruction is more likely to succeed when performed within 2 weeks of injury.
  • Complex combined knee restoration surgery can be safely performed in an outpatient setting.

Complex combined knee restoration surgery can be safely performed in an outpatient setting. The term complex knee restoration is used to describe management of knee injuries that are more involved—that is, there is damage to the menisci, cartilage, ligaments, and bones. Management entails not only determining the best treatment options but navigating the more complex logistics of making sure all necessary grafts (fresh and frozen allografts and autografts), implants, and instrumentation are readily available as these cases come to fruition.

The military healthcare paradigm often involves the added logistics of transporting the service member to the correct military treatment facility at the correct time and ensuring the patient’s work-up is complete before he or she arrives for the complex knee restoration. Such cases require significant rehabilitation and time away from family and work, so anything that reduces the morbidity of the surgical undertaking and the overall “morbidity footprint” of time away and that helps the patient return to normal function are value-added and worthy of our attention and diligence in developing an efficient system for managing complex cases.

The globally integrated military healthcare system that is in place has matured over the past decades to allow for the significant majority of the necessary preoperative work-up to be performed at a soldier’s current duty station, wherever in the world that may be, under the guidance of local healthcare providers with specific inputs from the knee restoration surgeon who eventually receives the patient for the planned surgical intervention.

Algorithm for Knee Restoration Planning

Alignment Issues

The first task is to confirm the realignment indication. Realignment may be performed with a proximal opening-wedge medial tibial osteotomy (OWMTO), a distal opening-wedge lateral femoral osteotomy (OWLFO), or a tibial tubercle osteotomy (TTO).1 Given the reproducible clinical improvement achieved and the robust nature of the fixation, these osteotomies are often the first surgical step in complex knee restorations.2 The final determination, made by the surgeon in consultation with the patient, is whether to perform the indicated osteotomy alone or in combination with the rest of the planned restoration surgery. In the vast majority of cases I have managed over the past 2 decades, I have performed the entire knee restoration in a single operation.3 Within the past 5 years, combining the procedures has become even more feasible with the important progress made in multimodal pain management and with the close collaboration of anesthesiologists.4

Meniscus and Cartilage Status

The integration status of meniscus and cartilage within the medial and lateral tibiofemoral compartments is crucial to the comprehensive restoration plan. In fact, the success of the restoration can be said to be dependent on the functional status and health of meniscus and cartilage—which either succeed together or fail apart.

Important covariables are age, prior surgical interventions, activity level expected or allowed after surgery, and size, location, and depth of cartilage injury.5 Whether a cartilage injury is monopolar or bipolar is determined with advanced imaging (magnetic resonance imaging [MRI], computed tomography [CT], weight-bearing radiography) along with analysis of a thorough history (including a review of prior operative reports and arthroscopic images) and a knee examination. Bipolar injuries that involve the condyle and juxtaposed plateau often bode poorly for good clinical outcomes—compared with unipolar lesions, which usually involve the condylar surfaces in isolation. The same thinking regarding the patellofemoral compartment is appropriate. Cartilage lesions that involve the juxtaposed surfaces of the patellar and trochlear groove do poorer than isolated lesions, which are more amenable to cartilage restoration options. The literature on potential cartilage restoration options for the patella and trochlea is expanding. I use the 3-dimensional cartilage restoration option of a fresh patellar osteochondral allograft (OCA) for high-grade cartilage lesions thought to be clinically significant. Other options, such as microfracture, cell-based cartilage restoration, and Osteochondral Autograft Transfer System (Arthrex) procedures (from the thinner condylar cartilage), have varied in their outcomes for patellar lesions. According to more recent literature and a review of my clinical results, fresh patellar OCAs are a good option for patellar lesions.6 Similarly, trochlear lesions can be managed with microfracture, cell-based therapies, or fresh OCAs, depending on surgeon preference.

Functional total or subtotal meniscectomies are often best managed with meniscal allograft transplantation (MAT). An intact or replaced medial or lateral meniscus works synergistically with any planned anterior cruciate ligament (ACL) reconstruction. Again, the adage that meniscus and cartilage succeed together or fail apart is appropriate when planning complex knee restoration. Signs of extrusion or joint-space narrowing and root avulsion or significant loss of meniscal tissue, visualized on MRI or on prior surgical images, often help substantiate a MAT plan. MAT has had the best long-term results when performed in compartments with cartilage damage limited to grade I and grade II changes, in stable knees, and in knees that can be concurrently stabilized.5 Technological advances have increased the value of MAT by limiting the morbidity of the operation and thus allowing for other surgery to be performed concomitantly and safely as part of comprehensive knee restoration. Over the past 20 years, I have arthroscopically performed MAT with bone plugs for medial and lateral procedures, and my results with active-duty soldiers have been promising, paralleling the clinic success reported in the literature.5 Alignment must be considered when performing MAT or cartilage restoration. If the addition of meniscal transplantation or cartilage restoration leaves the knee with residual malalignment of 6° or more, corrective osteotomy is performed.

My view and practice have been to plan for an unloading chondroprotective osteotomy. The goal is a balanced mechanical axis, whether achieved with mere joint-space restoration or with an osteotomy added.

Ligament Status

A comprehensive plan for establishing ligamentous stability is paramount to the overall clinical success of complex knee restorations. Meniscus and cartilage restoration efforts are wasted if clinically significant ligamentous laxity is not concomitantly treated with reconstruction surgery. Revision ACL surgery is by far the most commonly performed surgery in complex knee cases. Diligence in interpreting advanced MRI and physical examination findings is required to make sure there are no concomitant patholaxities in the medial, lateral, posterior, posteromedial, and posterolateral ligamentous complexes. Appropriate ligamentous reconstruction is warranted to maximize clinical results in complex knee restorations. Such cases more commonly require allograft tissue, as the availability of autograft tissue is the limiting issue with 2 or more ligament reconstructions. Military treatment facilities, in which comprehensive knee restorations are performed, have soft-tissue allografts on hand at all times. Having tissue readily available makes it less imperative to determine the most appropriate combined ligamentous reconstruction surgery before the patient arrives—a process that is often difficult. This situation is in contradistinction to the need for specific matched-for-size allograft frozen meniscus and fresh cartilage tissues, both of which require tissue-form procurement in advance of planned restoration surgery.

Rehabilitation Plan

The rehabilitation plan is driven by the part of the complex knee restoration that demands the most caution with respect to weight-bearing and range of motion (ROM) during the first 6 weeks after surgery. The most limiting restorative surgeries involve meniscus and cartilage. Recent clinical trial results support weight-bearing soon after tibial osteotomy performed in the absence of meniscus and cartilage restoration that would otherwise limit weight-bearing for 6 weeks.7 Therefore, most of these complex knee restorations are appropriately managed with a hinged brace locked in extension for toe-touch weight-bearing ambulation, with ROM usually limited to 0° to 90° during the first 6 weeks. Quadriceps rehabilitation with straight-leg raises and isometric contractions is prescribed with a focus on maintaining full extension as the default resting knee position until normalized resting quadriceps tone returns. Full weight-bearing and advancement to full flexion are routinely allowed by 6 weeks.

Case Report

A 41-year-old male service member who was overseas was referred to my clinic for high tibial osteotomy consideration and possible revision ACL reconstruction. His symptoms were medial pain, recurrent instability, and patellofemoral crepitance. Three years earlier, he underwent autograft transtibial ACL reconstruction with significant débridement of the medial meniscus. Before his trip to the United States, I asked that new MRI scans, full-length standing hip–knee–ankle bilateral alignment radiographs, and a 4-view weight-bearing knee series (including a posteroanterior Rosenberg view) be obtained and sent for my review (Figure 1).

Figure 1.
In retrospect, this request obviated the need for the patient to make a second overseas trip.

Review of the patient’s detailed preoperative imaging work-up and electronic medical record (available through the military’s healthcare system) made it clear that far more surgical intervention was needed than originally assumed. A significant full-thickness chondral lesion of the patella and a subtotal medial meniscectomy would necessitate patellar cartilage restoration and medial MAT in addition to the high tibial osteotomy and revision ACL reconstruction.

Had this patient been sent through the military medical evacuation system, he would have had to make 2 overseas trips—one trip for preoperative evaluation and advanced imaging, whereby he would have been placed on a match list and had to wait for a requested meniscal allograft and an appropriate graft for his patella, and the other trip for his complex surgery. Fortunately, the military’s integrated healthcare network with true 2-way communication and the collaborative use of integrated electronic medical records proved extremely valuable in making management of this complex knee restoration as efficient as possible. From the perspective of the soldier and his military unit, only 1 big overseas trip was needed; from the perspective of the military healthcare system, responsible use of healthcare personnel and monetary resources and well-planned complex knee restoration surgery saved a knee and allowed a soldier-athlete to rejoin the fields of friendly strife.

 

 


This patient had undergone functional complete medial meniscectomy and had significant medial compartment pain, varus alignment, and minimal medial joint-space narrowing (assumed grossly intact cartilage about plateau and condyle), plus patellofemoral pain and crepitance with a large high-grade posttraumatic patellar chondral lesion with normal patellofemoral alignment. He also had an isolated failed ACL graft from prior ACL reconstruction. The previous hardware placement was analyzed, and it was determined that the femoral interference screw could be left in place and that the tibial interference screw most likely would be removed. The mechanical axis determined from the bilateral long-leg standing images dictated a need for proximal OWMTO for correction up to 8° to allow the axis to cross the center of the knee. The 8° correction is the measured correction needed to move the axis from its pass through the medial compartment to a more balanced position across the middle of the knee.

The overall plan encompassed major concomitant corrective and restorative surgery: tibial osteotomy, medial MAT, revision ACL reconstruction, and fresh mega-patellar OCA. Once the frozen meniscus and eventually the fresh patella (both matched for size) were obtained, arrangements for the patient’s trip for the complex surgery were finalized.

Surgery was started with brief arthroscopic evaluation to confirm the overall appropriateness of the planned procedure and to determine if any other minor deficiencies would warrant operative intervention. Once confirmed, the restoration proceeded as planned. The OWMTO was performed with a PEEK (polyetheretherketone) wedge implant (iBalance; Arthrex) followed by arthroscopic preparation for medial MAT with removal of any meniscal remnants and placement of passing sutures (Figure 2A).
Figure 2.
The meniscus was delivered across the compartment through an enlarged medial portal. The posterior horn bone plug was secured in the retrosocket with sutures tied off to an anterior tibial cortical 2-hole button (Figure 2B). The body of the posterior third of the meniscus was secured to the posterior capsule by tying the 2 previously placed vertical sutures to each other over the intervening capsule. The anterior horn bone plug (10 mm in diameter × 7 mm thick) was then secured within a 10-mm socket drilled antegrade to a depth of 10 mm with a SwiveLock anchor (Arthrex) for interference bony fixation and recapitulation of the normal hoop stresses. Inside-out sutures were placed to secure the capsule to the meniscus and thereby prevent iatrogenic meniscal extrusion. A standard all-inside allograft revision ACL reconstruction was performed with an 11-mm FlipCutter and guide system (Arthrex) to make the femoral and tibial retrosockets.
Figure 3.
Passing sutures were used to deploy the ACL graft construct, which was fashioned into a quadruple-stranded GraftLink construct (Arthrex) from a 28-mm allograft peroneus longus tendon (Figure 3).

When the arthroscopic portion of the surgery was finished, a medial parapatellar arthrotomy was made to allow the patella to be inverted and complete fresh mega-patellar OCA placement (Figure 4).
Figure 4.
A drill guide system was used to prepare the host patella with the largest contained circular socket (35 mm) with a 1-mm to 2-mm cortical margin to a marginal bony depth and an 8- to 10-mm central bony depth.
Figure 5.
The donor patella was then prepared on the graft preparation guide to allow a mega-patellar osteochondral plug to be press-fit into the recipient socket after thorough pulse lavaging of the bony portion of the graft to negate as much of the marrow cellular elements as possible (Figure 5). After appropriate tracking was confirmed, the arthrotomy and skin incision were closed.

The knee was placed in a ROM brace locked in full extension. The patient was able to do straight-leg raises and calf pumps in the recovery room and was discharged home with a saphenous nerve block and an iPACK (Interspace between the Popliteal Artery and the Capsule of the posterior Knee) nerve block in place. Home-based therapy was started immediately. After the patient’s first postoperative visit, formal therapy (discussed earlier) was initiated (Figure 6).
Figure 6.
Toe-touch weight-bearing with the brace locked in extension and ROM limited to 0° to 90° were maintained until 6 weeks, when full weight-bearing and full ROM were allowed. The rehabilitation course was uneventful. The patient continued on active duty and completed his military service, retiring 3 years later with 20 years of service.

Discussion

All-inside GraftLink ACL reconstruction with cortical suspensory fixation appears well suited to combined medial and lateral MAT and/or cartilage restoration—whether it be large fresh OCA combined with medial MAT (as in this patient’s case) or another form of cartilage restoration. Arthroscopic MAT with anatomically fashioned and placed bone plugs minimizes the morbidity within the notch footprints and allows for discrete revision socket formation for both femoral and tibial ACL graft placement. In this case, preparation for the medial MAT and ACL sockets was followed by MAT/ACL construct implantation and secure fixation. The arthrotomy was thereby minimized and placed to allow for efficient mega-patellar OCA graft placement.

Over the past decade, I have performed similar concomitant procedures using the same surgical principles that allow for efficient and reproducible complex knee restoration (Figure 7).

Figure 7.
Common examples are multiligamentous reconstructions (ACL–posterior cruciate ligament–posterolateral corner, ACL–posterior cruciate ligament–medial collateral ligament, ACL–anterolateral ligament, and ACL–medial patellofemoral ligament) combined with concomitant meniscus and cartilage restoration and various osteotomies.

Although use of an algorithm for the management of complex knee restorations is not universally feasible, I offer guidelines for complex knee injuries:

  • At each decision point, determine whether the knee and the patient can withstand the planned surgical intervention.
  • After deciding to proceed with knee restoration, list the meniscus, cartilage, and ligament injuries that must be addressed.
  • Determine which repairs (meniscus, cartilage, ligament) are warranted. Repairs generally are best performed within a period of 7 to 14 days.
  • Determine which ligament injuries warrant reconstruction. Allograft tissue typically is used for multiligament reconstruction.
  • Rank-order the ligament reconstruction requirements. It is fine to proceed with all of the reconstructions if the case is moving smoothly, if there are no developing tourniquet-time issues, and if the soft-tissue envelope is responding as expected.
  • Consider autograft and/or allograft tissue needs for concomitant or staged meniscus and cartilage restoration options/requirements.


Am J Orthop. 2017;46(4):170-175, 202. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Uquillas C, Rossy W, Nathasingh CK, Strauss E, Jazrawi L, Gonzalez-Lomas G. Osteotomies about the knee: AAOS exhibit selection. J Bone Joint Surg Am. 2014;96(24):e199.

2. Mehl J, Paul J, Feucht MJ, et al. ACL deficiency and varus osteoarthritis: high tibial osteotomy alone or combined with ACL reconstruction? Arch Orthop Trauma Surg. 2017;137(2):233-240.

3. Scordino LE, DeBerardino TM. Surgical treatment of osteoarthritis in the middle-aged athlete: new horizons in high tibial osteotomies. Sports Med Arthrosc. 2013;21(1):47-51.

4. Ferrari D, Lopes TJ, França PF, Azevedo FM, Pappas E. Outpatient versus inpatient anterior cruciate ligament reconstruction: a systematic review with meta-analysis. Knee. 2017;24(2):197-206.

5. Weber AE, Gitelis ME, McCarthy MA, Yanke AB, Cole BJ. Malalignment: a requirement for cartilage and organ restoration. Sports Med Arthrosc. 2016;24(2):e14-e22.

6. Prince MR, King AH, Stuart MJ, Dahm DL, Krych AJ. Treatment of patellofemoral cartilage lesions in the young, active patient. J Knee Surg. 2015;28(4):285-295.

7. Scordino LE, DeBerardino TM. Surgical treatment of osteoarthritis in the middle-aged athlete: new horizons in high tibial osteotomies. Sports Med Arthrosc. 2013;21(1):47-51.

References

1. Uquillas C, Rossy W, Nathasingh CK, Strauss E, Jazrawi L, Gonzalez-Lomas G. Osteotomies about the knee: AAOS exhibit selection. J Bone Joint Surg Am. 2014;96(24):e199.

2. Mehl J, Paul J, Feucht MJ, et al. ACL deficiency and varus osteoarthritis: high tibial osteotomy alone or combined with ACL reconstruction? Arch Orthop Trauma Surg. 2017;137(2):233-240.

3. Scordino LE, DeBerardino TM. Surgical treatment of osteoarthritis in the middle-aged athlete: new horizons in high tibial osteotomies. Sports Med Arthrosc. 2013;21(1):47-51.

4. Ferrari D, Lopes TJ, França PF, Azevedo FM, Pappas E. Outpatient versus inpatient anterior cruciate ligament reconstruction: a systematic review with meta-analysis. Knee. 2017;24(2):197-206.

5. Weber AE, Gitelis ME, McCarthy MA, Yanke AB, Cole BJ. Malalignment: a requirement for cartilage and organ restoration. Sports Med Arthrosc. 2016;24(2):e14-e22.

6. Prince MR, King AH, Stuart MJ, Dahm DL, Krych AJ. Treatment of patellofemoral cartilage lesions in the young, active patient. J Knee Surg. 2015;28(4):285-295.

7. Scordino LE, DeBerardino TM. Surgical treatment of osteoarthritis in the middle-aged athlete: new horizons in high tibial osteotomies. Sports Med Arthrosc. 2013;21(1):47-51.

Issue
The American Journal of Orthopedics - 46(4)
Issue
The American Journal of Orthopedics - 46(4)
Page Number
170-175, 202
Page Number
170-175, 202
Publications
Publications
Topics
Article Type
Display Headline
Applying Military Strategy to Complex Knee Reconstruction: Tips for Planning and Executing Advanced Surgery
Display Headline
Applying Military Strategy to Complex Knee Reconstruction: Tips for Planning and Executing Advanced Surgery
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Article PDF Media