
Postoperative pain: Meeting new expectations


One of the most common questions patients ask when they hear that they need surgery is, “How much pain will I have, and how will you manage it?”

Pain is a common human experience that provokes both fear and anxiety, which in some cases can last a lifetime. The medical community has been slow to meet the challenge of managing it. The US National Institutes of Health states that more than 80% of patients suffer postoperative pain, with fewer than 50% receiving adequate relief.1 Patients have spoken out loudly through their Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores, demonstrating that the problem of inadequate postoperative pain management is real.

See related article

Clearly, as the push to tie reimbursement to patient satisfaction grows, clinicians have both a moral and a financial imperative to address postoperative pain.

The management of acute postoperative pain is evolving. Recognition of acute pain has progressed from dismissing it as an afterthought or nuisance to realizing that improperly or inadequately treated postoperative pain can have a number of adverse effects, including debilitating chronic pain syndromes.2 Inadequately treated pain also contributes to the calamitous rise in addiction to illegal substances and prescription medications.3 The time has come to take responsibility and meet our patients' expectations.

OPIOIDS HAVE MAJOR DRAWBACKS

Opioid derivatives are potent analgesics and have been the traditional first-line therapy for pain. “Judicious use of opium” for painful maladies has been a mainstay of Western medicine since the 16th century and was described in writings from Mesopotamia and China more than 2,000 years ago.

The ease of administration of these drugs coupled with their efficacy in managing a broad spectrum of pain syndromes has led to their frequent and widespread use, often, unfortunately, without consideration of the potential for negative short-term and long-term consequences. Headache, drowsiness, and pruritus are common adverse effects. Less common is a slowing of bowel motility, leading to constipation, bloating, or nausea. Additionally, in 5% to 10% of patients, narcotics may actually sensitize the nerves and make bowel-related pain worse. This narcotic bowel syndrome, as discussed by Agito and Rizk in this issue of the Journal, may make the patient uncomfortable and may lead to delays in recovery and hospital discharge.4

Opioid-related respiratory depression is especially devastating in the postoperative period, potentially causing respiratory arrest and death. The frequency of drug-induced respiratory depression and clinically significant adverse outcomes prompted the Anesthesia Patient Safety Foundation (APSF) to declare in 2011, “No patient shall be harmed by opioid-induced respiratory depression.”5 The APSF has recommended using new monitoring technology to enhance detection.

While many clinicians have been moving towards aggressive pain-management practice, hospital infrastructure has not kept pace. It is often ill-equipped to adequately monitor breathing patterns and to alert personnel to the need for rapid intervention. In the 21st century, we need to respond to this challenge with a combination of tools and technology, including improved clinical assessment and monitoring equipment that has proven to save lives in the perioperative setting.

A MULTIMODAL APPROACH IS BEST

Pain management professionals have also been moving from a predominantly opioid-based regimen to a more balanced, multimodal approach. The goal is to effectively treat acute postoperative pain while reducing the use of opioids and increasing the use of nonopioid drugs and alternative therapies for both pain management and convalescence.

Studies have shown the benefits of nonopioid drugs such as nonsteroidal anti-inflammatory drugs, paracetamol (intravenous acetaminophen), antidepressants, antiepileptics, and regional or local anesthetics combined with nontraditional treatments such as Reiki, massage therapy, and deep breathing.6

Each patient’s experience of pain is unique and responds to medications and alternative therapies in a distinctly different manner. We should not assume that one intervention is suitable for every patient. It is more beneficial to individualize treatment based on protocols that target different pain pathways. This may lead to better pain management and patient satisfaction while reducing the incidence of drug overdose and unwanted side effects.

WHAT WE NEED TO DO

Although many health care professionals have the authority to prescribe potent anesthetics and analgesics, we believe that many lack adequate education, supervision, and experience, and this exposes patients to the risk of prescription drug overdose.7,8 All medical professionals who provide postoperative care need specific education and training to offer the best care to this vulnerable patient population. This should include more extensive training in the appropriate use of controlled medications before clinicians receive their controlled substance registration from the Drug Enforcement Administration. We must also educate patients and family members about the dangers of drug abuse and the safe use of prescription drugs.8

Finally, we need to engage and communicate more effectively with our patients, especially when they are in acute pain. How long should a patient expect to remain in pain while waiting for an assessment and intervention? The medical community must commit to rapid and consistent coverage throughout the day for all patients experiencing a new or changing pattern of pain not responding to current therapy. Problems do not end at 5 pm or at a shift change. We need to build a process of timely intervention, perhaps by using a model similar to that of the rapid response and resuscitation team, which has been effective in many institutions. When a patient is in pain, minutes spent waiting for relief seem like an eternity. The empathy we show patients by validating, not minimizing, their pain and by following a defined yet tailored therapeutic intervention may not only relieve their physical discomfort but also improve their overall experience.

Margo McCaffery, RN, a pioneer in pain management nursing, defined pain as “whatever the experiencing person says it is, existing whenever the experiencing person says it does.”9 We have come a long way from the days when attending staff in the post-anesthesia care unit would routinely declare, “Pain never killed anyone.” As caregivers, we need to become engaged, empathetic, and effective as we meet the challenges of managing acute postoperative pain and improving our patients’ experience and outcomes.

References
  1. Institute of Medicine. Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research. Washington, DC: National Academies Press; 2011. ISBN-13: 978-0-309-21484-1.
  2. Lamacraft G. The link between acute postoperative pain and chronic pain syndromes. South Afr J Anaesth Analg 2012; 18:45–50.
  3. Benyamin R, Trescot AM, Datta S, et al. Opioid complications and side effects. Pain Physician 2008; 11:S105–S120.
  4. Grunkemeier DM, Cassara JE, Dalton CB, Drossman DA. The narcotic bowel syndrome: clinical features, pathophysiology, and management. Clin Gastroenterol Hepatol 2007; 5:1126–1139.
  5. Anesthesia Patient Safety Foundation. Proceedings of “Essential Monitoring Strategies to Detect Clinically Significant Drug-Induced Respiratory Depression in the Postoperative Period” Conference, 2011. http://www.apsf.org/newsletters/pdf/fall_2011.pdf. Accessed May 13, 2013.
  6. So PS, Jiang JY, Qin Y. Touch therapies for pain relief in adults. Cochrane Database Syst Rev 2008; (4):CD006535. DOI: 10.1002/14651858.CD006535.pub2.
  7. Polydorou S, Gunderson EW, Levin FR. Training physicians to treat substance use disorders. Curr Psychiatry Rep 2008; 10:399–404.
  8. Centers for Disease Control and Prevention. CDC grand rounds: prescription drug overdoses – a U.S. epidemic. MMWR Morb Mortal Wkly Rep 2012; 61(1):10–13.
Author and Disclosure Information

Steven R. Insler, DO
Department of Cardiothoracic Anesthesiology and Department of Outcomes Research, Anesthesiology Institute, and Department of Critical Care Medicine, Heart and Vascular Institute, Cleveland Clinic

Michael S. O'Connor, DO, MPH
Department of Cardiothoracic Anesthesiology, Anesthesiology Institute, and Department of Critical Care Medicine, Heart and Vascular Institute, Cleveland Clinic

Address: Steven R. Insler, DO, Cardiothoracic Anesthesiology, J4-331, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail: [email protected]

Issue
Cleveland Clinic Journal of Medicine - 80(7)
Page Number
441-442

Electronic health records: We need to find needles, not stack more hay


In this edition of the Cleveland Clinic Journal of Medicine, Dr. Jamie Stoller raises the issue of “electronic silos,” an unintended consequence of using an electronic health record (EHR) system. Dr. Stoller observes that ever since we began using EHRs, clinicians have been talking to each other less.

See related article

As a hospitalist, I would agree. I only need to go to the nursing station on any given morning to confirm this. Working in the hospital, a clinician has two hubs of activity, the patient and the chart. With the advent of the EHR, the chart is now virtual and I no longer need to be physically present in the nursing station.

Our environment has changed, and the EHR is the new world in which we must interact as providers. Understanding these challenges will begin to shift our approach to this new world. In addition to Dr. Stoller’s observations, I would add that we also need to expect more from our EHR. We need an EHR that works for us, one that extends our abilities and improves the care we give. I believe the best is yet to come.

WE GOT WHAT WE ASKED FOR

Clinical communication is the cornerstone of patient safety. In a seminal report, the Institute of Medicine estimated that up to 98,000 people die each year from medical errors, most of which stem from poor communication.1 Findings such as this gave momentum to the movement to convert from a paper-based health delivery system to an electronic one.2

However, a requirement in designing these systems was to mimic paper-based tasks. We asked for the EHR to look like paper, and we got it; this has profoundly affected the way we practice, interact, and use electronic health information. Although Dr. Stoller and others want to improve communication and workflow through the EHR, there has been little research into the cognitive requirements or workflow paths needed to make this a reality. A National Research Council report states that current EHRs are not designed on the basis of human-computer interaction, human factors, or ergonomic design principles, and that these design failures contribute to their inefficient use and to the potential propagation of error.3

‘HUMAN FACTORS ENGINEERING’ COULD IMPROVE EHR DESIGN

In industries other than health care, the effect of technology on the workplace has been studied in a discipline called human factors engineering. Studies show significant lags between the adoption of workplace automation and the redesign of the workplace to accommodate the new technology and workforce needs.4

In health care, even computerized physician order entry, one of the central drivers of EHR adoption to promote patient safety, is fallible as a result of poor human factors engineering. Poor design can introduce new errors into the care delivery system if the technology and the environment in which it is deployed are not well understood.5

We must mitigate this risk of poor design and error by applying the principles of human factors engineering to health care. Three areas need to be taken into account to prevent failure: the user, the device, and the environment in which the device is used. For example, a glucometer with a small display would be difficult to use for patients with impaired vision from diabetic retinopathy: the user must be taken into account. We have all had experience with devices that are too complicated to use, with an unfriendly user interface or too much irrelevant material in the display. And in the noisy environment of an operating room full of beeping machines, yet another beep may not be a good way to alert the user. Together, these three domains determine whether the result is a safe and effective experience or an ineffective one that promotes error and puts patient safety at risk.

We can start to achieve good design in health care by first applying the techniques of human factors engineering that have been well honed outside of medicine. Information about the patient should be displayed on a “dashboard” in a way that is intuitive and easy to understand, making for more efficient use of the clinician’s brain cells. Visionaries such as Edward Tufte are investigating how to compile discrete data into a cohesive visual experience.6 Application of analytics and predictive modeling can pull together information in a way that tells the provider not only what has happened, but also what might happen.

Second, the EHR should include tools for effectively sharing information. I agree with Dr. Stoller about the idea of embedding virtual care teams in the record. I can see when my friends are online with social networking tools—why not extend this feature to the record? Beyond enabling simple physician-to-physician exchanges, the EHR affords new powerful care opportunities that paper never could: the wisdom of the cohort. Virtual care of a population is a promising way to manage patients who share attributes. Beyond improved clinical outcomes, digital collaborative care has the additional benefit of allowing input from nonclinical teams. Combining clinical, operational, and financial data can help make sure we achieve the best quality of care, at the best cost, with the best outcome. That is the value proposition of health care reform.

 

 

FINDING THE NEEDLE, NOT STORING MORE HAY

Beyond poor design, another problem with current EHR systems is that they overload us with information, so that our time is spent sifting through data rather than synthesizing it. We are seeing an unprecedented proliferation of both clinical data in the EHR and supporting research data. This combination has not helped the physician find the “needle.” Rather, it has managed to just store more hay.

All health care providers need to know how to read a chart quickly and efficiently to ascertain the story. In medical school, we teach new doctors about what makes for a good consult: synthesize the data and ask for an opinion. While a first-year medical school student would say, “I need a GI consult: the hemoglobin is 6, platelets are low, and there is blood in the stool,” a resident would say, “I need a GI consult for upper endoscopy, as I suspect this patient has alcoholic cirrhosis and likely portal hypertension: I am worried about variceal bleeding.” We should expect the same from our EHR.

Our relationship with health technology needs to shift. We need not view the EHR merely as a record, as something to physically hold data, but rather as a system that digests data to produce knowledge. The EHR needs to be viewed as a mentor and a colleague, a place that not only records data, but that also ascertains data incongruities, displays information that is relevant, and gives providers rapid, at-a-glance knowledge of the patient’s condition. The silo Dr. Stoller describes is not just the physical separation of providers, it is also the separation of providers and knowledge. We are still hunters and gatherers of information. Let the EHR work for the clinician. Tell me that I have not addressed my patient’s hyperkalemia. Tell me that my gastroenterology consultant is online and has just completed a consult note. Tell me that my patient is having uncontrolled pain now, rather than my having to discover this 9 hours later. We should expect our EHR to deliver the right information to the right person at the right time in the right format. The electronic health colleague might be a more apt term.

MAKING THE EHR WORK FOR US

So, has the EHR destroyed clinician collaboration? Certainly not. It has just changed the environment and the way we interact with the medical system. In fact, I argue that it could actually make it better, if we shift our expectations of our EHR systems. The future state of collaboration may not be in the traditional form of speaking to a colleague next to you, but rather in having a system that supports real-time access and sharing of digested knowledge about the patient. This knowledge can then be shared with other providers, finance systems, national health exchanges, predictive models, and even the patient, breaking the silos.

Someday the EHR might give back time to the provider, and we might say, “I just finished my patient panel early—let’s go get a cup of coffee and catch up.”

Author and Disclosure Information

William H. Morris, MD
Associate Chief Medical Information Office, Medical Operations; Hospital Medicine, Medicine Institute, Cleveland Clinic; Clinical Assistant Professor, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH

Address: William Morris, MD, Medical Operations, Hospital Medicine, JJN6-432, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail: [email protected]

Issue
Cleveland Clinic Journal of Medicine - 80(7)
Page Number
410-411, 414

In this edition of the Cleveland Clinic Journal of Medicine, Dr. Jamie Stoller raises the issue of “electronic silos,” an unintended consequence of using an electronic health record (EHR) system. Dr. Stoller observes that ever since we began using EHRs, clinicians have been talking to each other less.

See related article

As a hospitalist, I would agree. I need only visit the nursing station on any given morning to confirm this. Working in the hospital, a clinician has two hubs of activity: the patient and the chart. With the advent of the EHR, the chart is now virtual, and I no longer need to be physically present at the nursing station.

Our environment has changed, and the EHR defines a new world in which we must interact as providers. Understanding these challenges is the first step in adapting our approach to it. To Dr. Stoller’s observations I would add that we also need to expect more from our EHR: one that works for us, extends our abilities, and improves the care we give. I believe the best is yet to come.

WE GOT WHAT WE ASKED FOR

Clinical communication is the cornerstone of patient safety. In a seminal report, the Institute of Medicine estimated that as many as 98,000 people die each year from medical errors, most of them rooted in poor communication.1 Findings such as this gave momentum to the movement to convert from a paper-based health delivery system to an electronic one.2

However, a requirement in designing these systems was that they mimic paper-based tasks. We asked for an EHR that looked like paper, and that is what we got, and it has profoundly affected the way we practice, interact, and use electronic health information. Although Dr. Stoller and others want to improve communication and workflow through the EHR, there has been little research into the cognitive requirements or workflow paths needed to make this a reality. A National Research Council report states that current EHRs are not designed on the basis of human-computer interaction, human factors, or ergonomic design principles, and that these design failures contribute to their inefficient use and to the potential propagation of error.3

‘HUMAN FACTORS ENGINEERING’ COULD IMPROVE EHR DESIGN

In industries other than health care, the effect of technology on the workplace has been studied in a discipline called human factors engineering. Studies show significant lags between the adoption of workplace automation and the redesign of the workplace to accommodate the new technology and workforce needs.4

In health care, even computerized physician order entry, one of the central drivers of EHR adoption to promote patient safety, is fallible as a result of poor human factors engineering. Poor design can introduce new errors into the care delivery system if the technology and the environment in which it is deployed are not well understood.5

We must mitigate this risk of poor design and error by applying the principles of human factors engineering to health care. Three areas need to be taken into account to prevent failure: the user, the device, and the environment in which the device is used. For example, a glucometer with a small display would be difficult to use for patients with impaired vision from diabetic retinopathy—the user needs to be taken into account. We have all had experience with devices that are too complicated to use, with an unfriendly user interface or too much irrelevant material in the display. And in the noisy environment of an operating room full of beeping machines, yet another beep may not be a good way to alert the user. Together, these three domains determine whether the result is a safe and effective experience or one that promotes error and puts patient safety at risk.

We can start to achieve good design in health care by first applying the techniques of human factors engineering that have been well honed outside of medicine. Information about the patient should be displayed on a “dashboard” in a way that is intuitive and easy to understand, making more efficient use of the clinician’s attention. Visionaries such as Edward Tufte are investigating how to compile discrete data into a cohesive visual experience.6 Analytics and predictive modeling can pull information together in a way that tells the provider not only what has happened, but also what might happen.

Second, the EHR should include tools for effectively sharing information. I agree with Dr. Stoller about the idea of embedding virtual care teams in the record. I can see when my friends are online with social networking tools—why not extend this feature to the record? Beyond enabling simple physician-to-physician exchanges, the EHR affords new powerful care opportunities that paper never could: the wisdom of the cohort. Virtual care of a population is a promising way to manage patients who share attributes. Beyond improved clinical outcomes, digital collaborative care has the additional benefit of allowing input from nonclinical teams. Combining clinical, operational, and financial data can help make sure we achieve the best quality of care, at the best cost, with the best outcome. That is the value proposition of health care reform.

FINDING THE NEEDLE, NOT STORING MORE HAY

Beyond poor design, another problem with current EHR systems is that they overload us with information, so that our time is spent sifting through data rather than synthesizing it. We are seeing an unprecedented proliferation of both clinical data in the EHR and supporting research data. This combination has not helped the physician find the “needle.” Rather, it has managed to just store more hay.

All health care providers need to know how to read a chart quickly and efficiently to ascertain the story. In medical school, we teach new doctors about what makes for a good consult: synthesize the data and ask for an opinion. While a first-year medical student would say, “I need a GI consult: the hemoglobin is 6, platelets are low, and there is blood in the stool,” a resident would say, “I need a GI consult for upper endoscopy, as I suspect this patient has alcoholic cirrhosis and likely portal hypertension: I am worried about variceal bleeding.” We should expect the same from our EHR.

Our relationship with health technology needs to shift. We need not view the EHR merely as a record, as something to physically hold data, but rather as a system that digests data to produce knowledge. The EHR needs to be viewed as a mentor and a colleague, a place that not only records data, but that also ascertains data incongruities, displays information that is relevant, and gives providers rapid, at-a-glance knowledge of the patient’s condition. The silo Dr. Stoller describes is not just the physical separation of providers; it is also the separation of providers from knowledge. We are still hunters and gatherers of information. Let the EHR work for the clinician. Tell me that I have not addressed my patient’s hyperkalemia. Tell me that my gastroenterology consultant is online and has just completed a consult note. Tell me that my patient is having uncontrolled pain now, rather than my having to discover this 9 hours later. We should expect our EHR to deliver the right information to the right person at the right time in the right format. “Electronic health colleague” might be a more apt term.
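The kind of proactive surfacing described here amounts to simple rules evaluated continuously over the chart. A hypothetical sketch; the data model, field names, and thresholds are illustrative only and are not drawn from any real EHR:

```python
# Hypothetical sketch of rule-based EHR alerting; field names and
# thresholds are illustrative only.

def pending_alerts(chart):
    """Return messages the EHR should push to the clinician."""
    alerts = []
    # Flag hyperkalemia that has no documented follow-up order.
    k = chart.get("latest_potassium")
    if k is not None and k > 5.5 and not chart.get("potassium_addressed"):
        alerts.append(f"Hyperkalemia not addressed: K+ = {k} mmol/L")
    # Surface a newly completed consult note immediately.
    for note in chart.get("new_consult_notes", []):
        alerts.append(f"New consult note from {note['service']}")
    # Escalate uncontrolled pain in real time, not hours later.
    pain = chart.get("latest_pain_score")
    if pain is not None and pain >= 7:
        alerts.append(f"Uncontrolled pain reported: score {pain}/10")
    return alerts

chart = {
    "latest_potassium": 6.1,
    "potassium_addressed": False,
    "new_consult_notes": [{"service": "Gastroenterology"}],
    "latest_pain_score": 8,
}
print(pending_alerts(chart))
```

The point is not the specific rules but the direction of flow: the record evaluates the data and comes to the clinician, rather than the clinician hunting through the record.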

MAKING THE EHR WORK FOR US

So, has the EHR destroyed clinician collaboration? Certainly not. It has just changed the environment and the way we interact with the medical system. In fact, I argue that it could actually make it better, if we shift our expectations of our EHR systems. The future state of collaboration may not be in the traditional form of speaking to a colleague next to you, but rather in having a system that supports real-time access and sharing of digested knowledge about the patient. This knowledge can then be shared with other providers, finance systems, national health exchanges, predictive models, and even the patient, breaking the silos.

Someday the EHR might give back time to the provider, and we might say, “I just finished my patient panel early—let’s go get a cup of coffee and catch up.”


References
  1. Kohn LT, Corrigan JM, Donaldson MS, editors; Committee on Quality of Health Care in America, Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
  2. Institute of Medicine (US). Health IT and Patient Safety: Building Safer Systems for Better Care. Committee on Patient Safety and Health Information Technology, Board on Health Care Services. Washington, DC: The National Academies Press; 2012.
  3. Stead W, Lin HS, editors; Committee on Engaging the Computer Science Research Community in Health Care Informatics, Computer Science and Telecommunications Board, Division on Engineering and Physical Sciences, National Research Council of the National Academies. Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions. Washington, DC: The National Academies Press; 2009.
  4. Smith MJ, Carayon P. New technology, automation, and work organization: stress problems and improved technology implementation strategies. Int J Hum Factors Manuf 1995; 5:99–116.
  5. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293:1197–1203.
  6. Powsner SM, Tufte ER. Graphical summary of patient status. Lancet 1994; 344:386–389.
Article Type
Display Headline
Electronic health records: We need to find needles, not stack more hay

Does coronary artery calcification scoring still have a role in practice?

Article Type
Changed
Mon, 09/25/2017 - 10:31
Display Headline
Does coronary artery calcification scoring still have a role in practice?

To try to identify and treat people who are at highest risk of cardiovascular events, including death, we use comprehensive risk-prediction models. Unfortunately, these models have limited accuracy and precision and do not predict very well.

See related article

Attractive, then, is the idea of using a noninvasive imaging test to measure coronary atherosclerosis before it causes trouble and thereby individualize the risk assessment. Noncontrast computed tomography (CT) can measure the amount of calcification in the coronary arteries, and therefore it can estimate the coronary atherosclerotic burden. It seems like an ideal test, and calcification as a marker of subclinical atherosclerosis has been extensively investigated.

However, despite more than 2 decades of use and data from hundreds of thousands of patients, the test remains poorly understood. Many physicians seem to use it solely as a means of placating “worried well” patients and do not truly appreciate its implications. Others proceed directly to CT angiography, a more expensive test that carries the added risks of higher x-ray doses and iodinated contrast, even when a correctly interpreted calcification score would provide ample information.

In this issue of the Cleveland Clinic Journal of Medicine, Chauffe and Winchester review the utility of coronary artery calcification scoring in current practice. We wish to supplement their review by suggesting some considerations to take into account before ordering this test:

  • Does the patient have symptoms of coronary artery disease, and what is his or her risk-factor profile? Baseline patient characteristics are important to consider if we are to use this test appropriately.
  • How should the result be interpreted, and does the ordering physician have the confidence to accept the result?

BEST USED IN ASYMPTOMATIC PATIENTS AT INTERMEDIATE RISK

Many large retrospective and prospective registries have demonstrated the predictive value of coronary artery calcification in diverse cohorts of patients without symptoms.

In three prospective registries—the Multi-Ethnic Study of Atherosclerosis1 (MESA) with 6,722 patients, the Coronary CT Angiography Evaluation for Clinical Outcomes2 (CONFIRM) registry with 7,590 patients, and the Heinz Nixdorf Recall (HNR) study3 with 4,129 patients—most of the patients who had heart attacks had a calcification score greater than 100. Conversely, data from more than 100,000 people show that the absence of calcification (ie, a score of 0) denotes a very low risk (< 1% over 5 years).1–6

The pretest probability of coronary artery disease needs to be considered. The data clearly indicate that a Bayesian approach is warranted and that coronary artery calcification scoring should mainly be done in patients at intermediate or low-intermediate risk. Trials have shown that calcification scoring will reclassify more than 50% of intermediate-risk patients into the high-risk or low-risk category.3
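This Bayesian reasoning can be made concrete in odds form (post-test odds = pretest odds × likelihood ratio). A minimal sketch; the likelihood ratios below are assumed for illustration and are not taken from the cited trials:

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Apply Bayes' theorem in odds form:
    post-test odds = pretest odds x likelihood ratio."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Illustrative only: an intermediate-risk patient (pretest probability
# 15%) with an assumed positive likelihood ratio of 4 for a high
# calcification score, vs an assumed negative likelihood ratio of 0.1
# for a score of 0.
print(round(post_test_probability(0.15, 4.0), 3))  # 0.414 (reclassified upward)
print(round(post_test_probability(0.15, 0.1), 3))  # 0.017 (reclassified downward)
```

The arithmetic shows why the test is most informative at intermediate pretest risk: at very low or very high pretest probability, even a strong likelihood ratio rarely moves the patient across a decision threshold.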

The implications of these findings were eloquently assessed in the Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin (JUPITER). In this trial, it was estimated that among patients who would otherwise fulfill the criteria for statin treatment, 549 patients with no calcification would need to be treated to prevent one coronary event, compared with 24 similar patients with a calcification score greater than 100.7
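The contrast in these numbers follows directly from number-needed-to-treat arithmetic (NNT = 1 / absolute risk reduction). A sketch with hypothetical event rates chosen only to approximate the figures above, not the published JUPITER data:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical event rates only: a tiny absolute benefit in
# no-calcification patients yields a very large NNT...
print(round(nnt(0.0050, 0.0032)))  # 556
# ...while a larger absolute benefit in high-score patients yields a
# small NNT.
print(round(nnt(0.080, 0.038)))    # 24
```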

Although such analyses have potential shortcomings, in this era of greater concern about how to allocate finite resources, using a simple, inexpensive test to individualize long-term treatment is an attractive idea. Further, measuring calcification does not appear to increase testing “downstream” and indeed reduces it as compared with no calcification scoring. It also results in better adherence to drug therapy and lifestyle changes.

Because calcification scoring provides additional prognostic data and accurately discriminates and reclassifies risk, the American College of Cardiology and the American Heart Association have awarded it a class IIa recommendation for asymptomatic patients at intermediate risk, meaning that there is conflicting evidence or a divergence of opinion about its usefulness, but the weight of evidence or opinion favors it.8

ITS ROLE IS MORE CONTROVERSIAL IN SYMPTOMATIC PATIENTS

Perhaps a less established and more controversial use of coronary artery calcification scoring is in patients who are having coronary symptoms. In patients at high cardiovascular risk, this test by itself may miss an unacceptable number of those who truly have significant stenoses.9 However, when the appropriate population is selected, there is substantial evidence that it can be an important means of risk stratification.

In patients at low to intermediate risk, the absence of calcification indicates a very low likelihood of significant coronary artery stenosis, as demonstrated in the CONFIRM registry.10 In the 10,037 symptomatic patients evaluated, a score of 0 had a 99% negative predictive value for excluding stenosis greater than 70% and was associated with a 2-year event rate of less than 1%. These data were supported by a meta-analysis of nearly 1,000 symptomatic patients with a score of 0, in whom the 2-year event rate was less than 2%.4
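Negative predictive value is simply the proportion of negative tests that are true negatives. A sketch with hypothetical counts consistent with the 99% figure (not the actual CONFIRM tabulation):

```python
def negative_predictive_value(true_negatives, false_negatives):
    """NPV = TN / (TN + FN): the probability that a negative test
    (here, a calcification score of 0) truly excludes disease."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: suppose 3,000 symptomatic patients had a score
# of 0 and 30 of them nonetheless had stenosis greater than 70% on
# angiography.
npv = negative_predictive_value(2970, 30)
print(f"{npv:.0%}")  # 99%
```

Note that NPV, unlike sensitivity, depends on disease prevalence, which is why the same score of 0 is far less reassuring in a high-risk population.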

Taken together, these data suggest that the absence of coronary calcification in people at low to intermediate risk indicates a very low likelihood of significant stenotic coronary artery disease and foretells an excellent prognosis.

These data have already been incorporated into the UK National Institute for Health and Clinical Excellence (NICE) guidelines, in which calcification scoring is an integral part of the management algorithm for patients with chest pain who are at low risk.

WHY NOT JUST DO CT ANGIOGRAPHY?

But why bother with coronary artery calcification scoring when we can do CT angiography instead? The angiography scanners we have today can cover the entire heart in a single gantry rotation. Dual-source scanners provide temporal resolution as low as 75 ms, and sequential, prospective electrocardiographic gating and iterative reconstruction can routinely achieve scans with doses of radiation as low as 3 mSv that provide coronary artery images of exquisite quality.

On the other hand, calcification scoring is fast and easy to perform and poses less potential harm to the patient, since it uses lower doses of radiation and no contrast agents. In addition, the quantification is semi-automated, so the results can be interpreted quickly and are reproducible.

In the CONFIRM trial, prediction by CT angiography was no better than calcification scoring in asymptomatic patients, so it is not recommended in this population.2 In symptomatic patients, the CONFIRM trial data suggest that almost 1,000 additional CT angiography procedures would need to be done to identify one myocardial infarction and more than 1,500 procedures to identify one patient at risk of death missed by calcification scoring of 0 in patients at low to intermediate risk.11

Chauffe and Winchester nicely summarize the limitations of calcification scoring. However, we would emphasize the potential implications of the above findings. Appropriately utilized, calcification scoring is safe, reproducible, and inexpensive and helps individualize treatment in asymptomatic patients at low to intermediate risk, thereby avoiding under- and overtreatment and potentially reducing downstream costs while improving compliance.

In patients at low to intermediate risk who present with chest pain, documenting the absence of calcification can rationalize downstream testing and reliably, quickly, and safely permit patient discharge from emergency departments. In a time of increasing costs and patient demands and finite resources, clinicians should remain cognizant of the usefulness of evaluating coronary artery calcification.

References
  1. Budoff MJ, McClelland RL, Nasir K, et al. Cardiovascular events with absent or minimal coronary calcification: the Multi-Ethnic Study of Atherosclerosis (MESA). Am Heart J 2009; 158:554561.
  2. Cho I, Chang HJ, Sung JM, et al; CONFIRM Investigators. Coronary computed tomographic angiography and risk of all-cause mortality and nonfatal myocardial infarction in subjects without chest pain syndrome from the CONFIRM Registry. Circulation 2012; 126:304313.
  3. Erbel R, Möhlenkamp S, Moebus S, et al; Heinz Nixdorf Recall Study Investigative Group. Coronary risk stratification, discrimination, and reclassification improvement based on quantification of subclinical coronary atherosclerosis: the Heinz Nixdorf Recall study. J Am Coll Cardiol 2010; 56:13971406.
  4. Sarwar A, Shaw LJ, Shapiro MD, et al. Diagnostic and prognostic value of absence of coronary artery calcification. JACC Cardiovasc Imaging 2009; 2:675688.
  5. Blaha M, Budoff MJ, Shaw LJ, et al. Absence of coronary artery calcification and all-cause mortality. JACC Cardiovasc Imaging 2009; 2:692700.
  6. Graham G, Blaha MJ, Budoff MJ, et al. Impact of coronary artery calcification on all-cause mortality in individuals with and without hypertension. Atherosclerosis 2012; 225:432437.
  7. Blaha MJ, Budoff MJ, DeFilippis AP, et al. Associations between C-reactive protein, coronary artery calcium, and cardiovascular events: implications for the JUPITER population from MESA, a population-based cohort study. Lancet 2011; 378:684692.
  8. Greenland P, Alpert JS, Beller GA, et al. 2010 ACCF/AHA guideline for assessment of cardiovascular risk in asymptomatic adults. J Am Coll Cardiol 2010; 56:e50e103.
  9. Gottlieb I, Miller JM, Arbab-Zadeh A, et al. The absence of coronary calcification does not exclude obstructive coronary artery disease or the need for revascularization in patients referred for conventional coronary angiography. J Am Coll Cardiol 2010; 55:627634.
  10. Villines TC, Hulten EA, Shaw LJ, et al; CONFIRM Registry Investigators. Prevalence and severity of coronary artery disease and adverse events among symptomatic patients with coronary artery calcification scores of zero undergoing coronary computed tomography angiography: results from the CONFIRM registry. J Am Coll Cardiol 2011; 58:25332540.
  11. Joshi PH, Blaha MJ, Blumenthal RS, Blankstein R, Nasir K. What is the role of calcium scoring in the age of coronary computed tomographic angiography? J Nucl Cardiol 2012; 19:12261235.
Author and Disclosure Information

Dermot Phelan, MB, BCh, BAO, PhD
Department of Cardiovascular Medicine, Cleveland Clinic

Milind Y. Desai, MD
Department of Cardiovascular Medicine and Department of Diagnostic Radiology, Cleveland Clinic

Address: Milind Y. Desai, MD, Department of Cardiovascular Medicine, J1-5, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail: [email protected]

Issue
Cleveland Clinic Journal of Medicine - 80(6)
Page Number
374-376

To try to identify and treat the people at highest risk of cardiovascular events, including death, we use comprehensive risk-prediction models. Unfortunately, these models have limited accuracy and precision, and their predictions are often unreliable for the individual patient.

See related article

Attractive, then, is the idea of using a noninvasive imaging test to measure coronary atherosclerosis before it causes trouble and thereby individualize the risk assessment. Noncontrast computed tomography (CT) can measure the amount of calcification in the coronary arteries, and therefore it can estimate the coronary atherosclerotic burden. It seems like an ideal test, and calcification as a marker of subclinical atherosclerosis has been extensively investigated.
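For readers unfamiliar with how the calcification burden is quantified, the widely used Agatston method assigns each calcified lesion (attenuation of at least 130 Hounsfield units) a density weight based on its peak attenuation and multiplies that weight by the lesion's area; the total score is the sum over all lesions. A minimal sketch of the arithmetic (the lesion data here are hypothetical, and practical implementations also apply a minimum-area threshold and slice-thickness adjustments):

```python
def density_weight(peak_hu):
    """Agatston density factor from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0  # below the conventional calcification threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Sum of (area in mm^2) x (density weight) over all lesions."""
    return sum(area * density_weight(peak_hu) for area, peak_hu in lesions)

# Hypothetical lesions as (area in mm^2, peak HU) pairs
lesions = [(12.0, 250), (8.5, 410), (3.0, 150)]
print(agatston_score(lesions))  # 12*2 + 8.5*4 + 3*1 = 61.0
```

The score categories discussed below (0, 1–100, > 100) refer to this total.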

However, despite more than 2 decades of use and data from hundreds of thousands of patients, the test remains poorly understood. Many physicians seem to use it solely as a means of placating “worried well” patients and do not truly appreciate its implications. Others proceed directly to CT angiography, a more expensive test that carries the added risks of higher x-ray doses and iodinated contrast, even when a correctly interpreted calcification score would provide ample information.

In this issue of the Cleveland Clinic Journal of Medicine, Chauffe and Winchester review the utility of coronary artery calcification scoring in current practice. We wish to supplement their review by suggesting some considerations to take into account before ordering this test:

  • Does the patient have symptoms of coronary artery disease, and what is his or her risk-factor profile? Baseline patient characteristics are important to consider if we are to use this test appropriately.
  • How should the result be interpreted, and does the ordering physician have the confidence to accept the result?

BEST USED IN ASYMPTOMATIC PATIENTS AT INTERMEDIATE RISK

Many large retrospective and prospective registries have demonstrated the predictive value of coronary artery calcification in diverse cohorts of patients without symptoms.

In three prospective registries—the Multi-Ethnic Study of Atherosclerosis (MESA),1 with 6,722 patients; the Coronary CT Angiography Evaluation for Clinical Outcomes (CONFIRM) registry,2 with 7,590 patients; and the Heinz Nixdorf Recall (HNR) study,3 with 4,129 patients—most of the patients who had heart attacks had a calcification score greater than 100. Conversely, data from more than 100,000 people show that the absence of calcification (ie, a score of 0) denotes a very low risk (< 1% over 5 years).1–6

The pretest probability of coronary artery disease needs to be considered. The data clearly indicate that a Bayesian approach is warranted and that coronary artery calcification scoring should mainly be done in patients at intermediate or low-intermediate risk. Trials have shown that calcification scoring will reclassify more than 50% of intermediate-risk patients into the high-risk or low-risk category.3

The implications of these findings were eloquently assessed in an analysis that applied the entry criteria of the Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin (JUPITER) to the MESA cohort. It was estimated that, among patients who would otherwise fulfill the criteria for statin treatment, 549 patients with no calcification would need to be treated to prevent one coronary event, compared with 24 similar patients with a calcification score greater than 100.7
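The arithmetic behind such figures is simple: the number needed to treat is the reciprocal of the absolute risk reduction, so the same relative benefit from a statin translates into a far larger number needed to treat when the baseline event rate is low. A toy calculation (the event rates below are illustrative, not the published analysis's actual inputs):

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    return 1 / arr

# Illustrative: a 50% relative risk reduction applied to a low-risk
# group (calcification score 0) versus a high-risk group (score > 100)
print(round(nnt(0.004, 0.002)))  # low baseline risk -> NNT of 500
print(round(nnt(0.090, 0.045)))  # high baseline risk -> NNT of 22
```

This is why a score of 0 argues against treating, and a high score argues for it, even when the relative efficacy of the drug is the same in both groups.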

Although such analyses have potential shortcomings, in this era of greater concern about how to allocate finite resources, using a simple, inexpensive test to individualize long-term treatment is an attractive idea. Further, measuring calcification does not appear to increase testing “downstream” and indeed reduces it as compared with no calcification scoring. It also results in better adherence to drug therapy and lifestyle changes.

Because calcification scoring provides additional prognostic data and accurately discriminates and reclassifies risk, the American College of Cardiology and the American Heart Association have awarded it a class IIa recommendation for asymptomatic patients at intermediate risk, meaning that there is conflicting evidence or a divergence of opinion about its usefulness, but the weight of evidence or opinion favors it.8

ITS ROLE IS MORE CONTROVERSIAL IN SYMPTOMATIC PATIENTS

Perhaps a less established and more controversial use of coronary artery calcification scoring is in patients who are having coronary symptoms. In patients at high cardiovascular risk, this test by itself may miss an unacceptable number of those who truly have significant stenoses.9 However, when the appropriate population is selected, there is substantial evidence that it can be an important means of risk stratification.

In patients at low to intermediate risk, the absence of calcification indicates a very low likelihood of significant coronary artery stenosis, as demonstrated in the Coronary CT Angiography Evaluation for Clinical Outcomes: An International Multicenter (CONFIRM) registry.10 In the 10,037 symptomatic patients evaluated, a score of 0 had a 99% negative predictive value for excluding stenosis greater than 70% and was associated with a 2-year event rate less than 1%. These data were supported by a meta-analysis of nearly 1,000 symptomatic patients with a score of 0, in whom the 2-year event rate was less than 2%.4
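These two observations—a high negative predictive value at low to intermediate pretest risk, but an unacceptable miss rate at high risk—are two faces of the same Bayesian fact: negative predictive value depends on pretest probability, not just on the test. A quick sketch of Bayes' rule makes the point (the sensitivity and specificity values below are assumed purely for illustration):

```python
def npv(sensitivity, specificity, pretest_prob):
    """Probability of no significant stenosis given a negative test
    (here, a calcification score of 0), by Bayes' rule."""
    true_neg = specificity * (1 - pretest_prob)
    false_neg = (1 - sensitivity) * pretest_prob
    return true_neg / (true_neg + false_neg)

# Assumed (not measured) test characteristics for a score of 0
sens, spec = 0.96, 0.40
print(round(npv(sens, spec, 0.20), 3))  # intermediate pretest risk: 0.976
print(round(npv(sens, spec, 0.80), 3))  # high pretest risk: falls to 0.714
```

With the same assumed test, a negative result is highly reassuring at 20% pretest probability but far less so at 80%, which is why the score should not be used alone in high-risk patients.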

Taken together, these data suggest that the absence of coronary calcification in people at low to intermediate risk indicates a very low likelihood of significant stenotic coronary artery disease and foretells an excellent prognosis.

These data have already been incorporated into the British National Institute for Health and Clinical Excellence (NICE) guidelines, in which calcification scoring is an integral part of the management algorithm in patients with chest pain who are at low risk.

WHY NOT JUST DO CT ANGIOGRAPHY?

But why bother with coronary artery calcification scoring when we can do CT angiography instead? The angiography scanners we have today can cover the entire heart in a single gantry rotation. Dual-source scanners provide temporal resolution as low as 75 ms, and sequential, prospective electrocardiographic gating and iterative reconstruction can routinely achieve scans with doses of radiation as low as 3 mSv that provide coronary artery images of exquisite quality.

On the other hand, calcification scoring is fast and easy to perform and poses less potential harm to the patient, since it uses lower doses of radiation and no contrast agents. In addition, the quantification is semi-automated, so the results can be interpreted quickly and are reproducible.

In the CONFIRM trial, prediction by CT angiography was no better than calcification scoring in asymptomatic patients, so it is not recommended in this population.2 In symptomatic patients, the CONFIRM trial data suggest that almost 1,000 additional CT angiography procedures would need to be done to identify one myocardial infarction and more than 1,500 procedures to identify one patient at risk of death missed by calcification scoring of 0 in patients at low to intermediate risk.11

Chauffe and Winchester nicely summarize the limitations of calcification scoring, but we would emphasize the potential implications of the above findings. Appropriately used, calcification scoring is safe, reproducible, and inexpensive, and it helps individualize treatment in asymptomatic patients at low to intermediate risk, thereby avoiding both under- and overtreatment and potentially reducing downstream costs while improving adherence.

In patients at low to intermediate risk who present with chest pain, documenting the absence of calcification can rationalize downstream testing and reliably, quickly, and safely permit discharge from the emergency department. In a time of rising costs, growing patient demands, and finite resources, clinicians should remain cognizant of the usefulness of evaluating coronary artery calcification.


References
  1. Budoff MJ, McClelland RL, Nasir K, et al. Cardiovascular events with absent or minimal coronary calcification: the Multi-Ethnic Study of Atherosclerosis (MESA). Am Heart J 2009; 158:554–561.
  2. Cho I, Chang HJ, Sung JM, et al; CONFIRM Investigators. Coronary computed tomographic angiography and risk of all-cause mortality and nonfatal myocardial infarction in subjects without chest pain syndrome from the CONFIRM Registry. Circulation 2012; 126:304–313.
  3. Erbel R, Möhlenkamp S, Moebus S, et al; Heinz Nixdorf Recall Study Investigative Group. Coronary risk stratification, discrimination, and reclassification improvement based on quantification of subclinical coronary atherosclerosis: the Heinz Nixdorf Recall study. J Am Coll Cardiol 2010; 56:1397–1406.
  4. Sarwar A, Shaw LJ, Shapiro MD, et al. Diagnostic and prognostic value of absence of coronary artery calcification. JACC Cardiovasc Imaging 2009; 2:675–688.
  5. Blaha M, Budoff MJ, Shaw LJ, et al. Absence of coronary artery calcification and all-cause mortality. JACC Cardiovasc Imaging 2009; 2:692–700.
  6. Graham G, Blaha MJ, Budoff MJ, et al. Impact of coronary artery calcification on all-cause mortality in individuals with and without hypertension. Atherosclerosis 2012; 225:432–437.
  7. Blaha MJ, Budoff MJ, DeFilippis AP, et al. Associations between C-reactive protein, coronary artery calcium, and cardiovascular events: implications for the JUPITER population from MESA, a population-based cohort study. Lancet 2011; 378:684–692.
  8. Greenland P, Alpert JS, Beller GA, et al. 2010 ACCF/AHA guideline for assessment of cardiovascular risk in asymptomatic adults. J Am Coll Cardiol 2010; 56:e50–e103.
  9. Gottlieb I, Miller JM, Arbab-Zadeh A, et al. The absence of coronary calcification does not exclude obstructive coronary artery disease or the need for revascularization in patients referred for conventional coronary angiography. J Am Coll Cardiol 2010; 55:627–634.
  10. Villines TC, Hulten EA, Shaw LJ, et al; CONFIRM Registry Investigators. Prevalence and severity of coronary artery disease and adverse events among symptomatic patients with coronary artery calcification scores of zero undergoing coronary computed tomography angiography: results from the CONFIRM registry. J Am Coll Cardiol 2011; 58:2533–2540.
  11. Joshi PH, Blaha MJ, Blumenthal RS, Blankstein R, Nasir K. What is the role of calcium scoring in the age of coronary computed tomographic angiography? J Nucl Cardiol 2012; 19:1226–1235.
Display Headline
Does coronary artery calcification scoring still have a role in practice?

Aortic valve replacement: Options, improvements, and costs

Article Type
Changed
Mon, 09/18/2017 - 15:52
Display Headline
Aortic valve replacement: Options, improvements, and costs

How aortic valve disease is managed continues to evolve, with novel approaches for both aortic valve stenosis and regurgitation.1–8 Indeed, because of the spectrum of procedures, a multispecialty committee was formed to provide a detailed guideline to help physicians work through the various options.4

See related article

The paper by Aksoy and colleagues in this issue of the Journal gives further insight into the complexities of decision-making.

As a rule, the indications for a procedure to treat aortic valvular disease continue to be based on whether the patient develops certain symptoms (fatigue, exertional dyspnea, shortness of breath, syncope, chest pain), myocardial deterioration, reduced ejection fraction, or ventricular dilatation.4 Furthermore, the options depend on whether the patient has comorbid disease and is a candidate for surgical aortic valve replacement.

OPEN SURGERY: THE MAINSTAY OF TREATMENT

Open surgery—including, in recent years, minimally invasive J-incision “keyhole” repair or replacement—has been the mainstay of treatment. The results of surgical aortic valve repair have been excellent: 10 years after surgery, 95% of patients who have undergone a modified David reimplantation operation have not needed a repeat operation.3 The results are comparable for repair of bicuspid aortic valves.2,3

Furthermore, surgical aortic valve replacement has become very safe. At Cleveland Clinic in 2011, only 3 (0.6%) of 479 patients died during isolated aortic valve replacement, and in 2012 the mortality rate was even better, with only 1 death (0.2%) among 495 patients as of November 2012.

GOOD RESULTS WITH TRANSCATHETER AORTIC VALVE REPLACEMENT

For a new valve procedure to be accepted into practice, it must be easy to do, safe, and consistently good in performance measures such as producing low gradients, eliminating aortic regurgitation, and leading to high rates of long-term freedom from reoperation and of survival. To see whether percutaneous aortic valve replacement meets these criteria, it was evaluated in the laboratory and in US feasibility trials, both by us at Cleveland Clinic and by our colleagues at other institutions.

The subsequent Placement of Aortic Transcatheter Valves (PARTNER) trial established the benefit of this procedure in terms of superior survival for patients who could not undergo surgery.8 Hence, the transcatheter device was approved for patients who cannot undergo surgery and who meet certain criteria (valve area < 0.8 cm2; mean gradient > 40 mm Hg or peak gradient > 64 mm Hg). Of note, the cost per procedure was $78,000, or approximately $50,000 per year of life saved.
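Figures such as "dollars per year of life saved" come from a simple ratio of cost to survival benefit; a crude sketch of that arithmetic (the life-years-gained input below is illustrative and not taken from the trial's actual cost-effectiveness model):

```python
def cost_per_life_year(procedure_cost, life_years_gained):
    """Crude cost-effectiveness ratio: dollars per year of life saved."""
    return procedure_cost / life_years_gained

# Illustrative only: a $78,000 procedure assumed to yield ~1.6 added
# life-years works out to roughly $50,000 per year of life saved.
print(round(cost_per_life_year(78_000, 1.6)))  # 48750
```

Formal analyses use discounted incremental costs and effects rather than a single procedure cost, but the basic ratio is the same.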

The PARTNER A trial showed that the risk of death after transcatheter aortic valve replacement was as low as after open surgery, although the risk of stroke or transient ischemic attack was higher—indeed, with the transfemoral approach it was 3 times higher (4.6% vs 1.4%, P < .05).9,10 Furthermore, half the patients had perivalvular leakage after the new procedure, and even mild leakage reduced the survival rate at 2 years.11

Nevertheless, we have now done nearly 400 transcatheter aortic valve replacement procedures in patients who could not undergo open surgery or who would have been at extreme risk during surgery. With the transfemoral approach, in 267 patients, 1 patient died (0.4%), and 2 had strokes (0.7%). (In the rest of the patients, we used alternatives to the transfemoral approach, such as the transaortic, transapical, and transaxillary approaches, also with good results.)

Thus, transcatheter aortic valve replacement in properly selected patients can meet the above criteria.

COSTS AND THE FUTURE

Based on the PARTNER trial results, the Centers for Medicare and Medicaid Services (CMS) agreed to pay for this procedure at the same rate as for surgical aortic valve replacement for patients who cannot or should not undergo surgery, with the approval of two surgeons and within the context of a national registry.10

The reimbursement is adjusted for geographic area. In the United States, for example, hospitals on the East Coast or West Coast receive $88,000 to $94,000 per case, while most other areas receive $32,000 to $62,000.

The surgeon and cardiologist share the professional fee of approximately $2,500, although typically we have a team of eight to 10 physicians (representing the fields of anesthesia, echocardiography, surgery, and cardiology) in the operating room for every procedure, in addition to nursing and technical staff. The challenge for institutions and providers, however, is that the device costs $32,500, and CMS reimbursement does not cover the cost of both the valve and the procedure in many localities. This may affect how widely the valve is eventually used.

While many more options are available now for management of aortic valve disease (minimally invasive repair or replacement, and newer devices), the future usage of transcatheter aortic valve replacement may become dependent on costs, newer devices, cheaper iterations, competition, and CMS reimbursement.

There are now two additional trials, SURTAVI and PARTNER A2, evaluating transcatheter vs open aortic valve replacement in lower-risk patients. The issues that will have to be addressed with new iterations are the risk of stroke and transient ischemic attack, perivalvular leakage, and the costs of the devices.

Newer reports would suggest that the results with transcatheter aortic valve replacement in inoperable and high-risk patients continue to improve as experience evolves.

References
  1. Svensson LG, Blackstone EH, Cosgrove DM. Surgical options in young adults with aortic valve disease. Curr Probl Cardiol 2003; 28:417480.
  2. Svensson LG, Kim KH, Blackstone EH, et al. Bicuspid aortic valve surgery with proactive ascending aorta repair. J Thorac Cardiovasc Surg 2011; 142:622629.e1–e3.
  3. Svensson LG, Batizy LH, Blackstone EH, et al. Results of matching valve and root repair to aortic valve and root pathology. J Thorac Cardiovasc Surg 2011; 142:14911498.e7.
  4. Svensson LG, Adams DH, Bonow RO, et al. Aortic valve and ascending aorta guidelines for management and quality measures: executive summary. Ann Thorac Surg 2013; 10.1016/j.athoracsur.2012.12.027, Epub ahead of print
  5. Svensson LG, D’Agostino RS. “J” incision minimal-access valve operations”. Ann Thorac Surg 1998; 66:11101112.
  6. Johnston DR, Atik FA, Rajeswaran J, et al. Outcomes of less invasive J-incision approach to aortic valve surgery. J Thorac Cardiovasc Surg 2012; 144:852858.e3.
  7. Albacker TB, Blackstone EH, Williams SJ, et al. Should less-invasive aortic valve replacement be avoided in patients with pulmonary dysfunction? J Thorac Cardiovasc Surg 2013; Epub ahead of print.
  8. Leon MB, Smith CR, Mack M, et al; PARTNER Trial Investigators. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med 2010; 363:15971607.
  9. Smith CR, Leon MB, Mack MJ, et al; PARTNER Trial Investigators. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med 2011; 364:21872198.
  10. Svensson LG, Tuzcu M, Kapadia S, et al. A comprehensive review of the PARTNER trial. J Thorac Cardiovasc Surg 2013; 145(suppl):S11S16.
  11. Kodali SK, Williams MR, Smith CR, et al; PARTNER Trial Investigators. Two-year outcomes after transcatheter or surgical aortic-valve replacement. N Engl J Med 2012; 366:16861695.
Article PDF
Author and Disclosure Information

Lars G. Svensson, MD, PhD
The Aortic Center, Heart and Vascular Institute, Cleveland Clinic; Professor of Surgery, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH

Address: Lars G. Svensson, MD, PhD, Heart and Vascular Institute, J4-1, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail: [email protected]

Issue: Cleveland Clinic Journal of Medicine - 80(4), pages 253-254

How aortic valve disease is managed continues to evolve, with novel approaches for both aortic valve stenosis and regurgitation.1–8 Indeed, because of the spectrum of procedures, a multispecialty committee was formed to provide a detailed guideline to help physicians work through the various options.4

See related article

The paper by Aksoy and colleagues in this issue of the Journal gives further insight into the complexities of decision-making.

As a rule, the indications for a procedure to treat aortic valvular disease continue to be based on whether the patient develops certain symptoms (fatigue, exertional dyspnea, shortness of breath, syncope, chest pain), myocardial deterioration, reduced ejection fraction, or ventricular dilatation.4 Furthermore, the options depend on whether the patient has comorbid disease and is a candidate for surgical aortic valve replacement.

OPEN SURGERY: THE MAINSTAY OF TREATMENT

Open surgery—including, in recent years, minimally invasive J-incision “keyhole” repair or replacement—has been the mainstay of treatment. The results of surgical aortic valve repair have been excellent: 10 years after a modified David reimplantation operation, 95% of patients have not needed a repeat operation.3 The results are comparable for repair of bicuspid aortic valves.2,3

Furthermore, surgical aortic valve replacement has become very safe. At Cleveland Clinic in 2011, only 3 (0.6%) of 479 patients died during isolated aortic valve replacement, and in 2012 the mortality rate was even better, with only 1 death (0.2%) among 495 patients as of November 2012.

GOOD RESULTS WITH TRANSCATHETER AORTIC VALVE REPLACEMENT

For a new valve procedure to be accepted into practice, it must be easy to perform, safe, and consistently good in performance measures such as producing low gradients, eliminating aortic regurgitation, and achieving high rates of survival and of long-term freedom from reoperation. To see whether percutaneous aortic valve replacement meets these criteria, we at Cleveland Clinic and our colleagues at other institutions evaluated it in the laboratory and in feasibility trials in the United States.

The subsequent Placement of Aortic Transcatheter Valves (PARTNER) trial established the benefit of this procedure in terms of superior survival for patients who could not undergo surgery.8 Hence, the transcatheter device was approved for patients who cannot undergo surgery and who meet certain criteria (valve area < 0.8 cm²; mean gradient > 40 mm Hg or peak gradient > 64 mm Hg). Of note, the cost per procedure was $78,000, or approximately $50,000 per year of life saved.

The PARTNER A trial showed that the risk of death after transcatheter aortic valve replacement was as low as after open surgery, although the risk of stroke or transient ischemic attack was higher—indeed, with the transfemoral approach it was 3 times higher (4.6% vs 1.4%, P < .05).9,10 Furthermore, half the patients had perivalvular leakage after the new procedure, and even mild leakage reduced the survival rate at 2 years.11

Nevertheless, we have now done nearly 400 transcatheter aortic valve replacement procedures in patients who could not undergo open surgery or who would have been at extreme risk during surgery. With the transfemoral approach, in 267 patients, 1 patient died (0.4%), and 2 had strokes (0.7%). (In the rest of the patients, we used alternatives to the transfemoral approach, such as the transaortic, transapical, and transaxillary approaches, also with good results.)

Thus, transcatheter aortic valve replacement in properly selected patients can meet the above criteria.

COSTS AND THE FUTURE

Based on the PARTNER trial results, the Centers for Medicare and Medicaid Services (CMS) agreed to pay for this procedure at the same rate as for surgical aortic valve replacement for patients who cannot or should not undergo surgery, with the approval of two surgeons and within the context of a national registry.10

The reimbursement is adjusted for geographic area. In the United States, for example, hospitals on the East Coast or West Coast receive $88,000 to $94,000 per case, while most other areas receive $32,000 to $62,000.

The surgeon and cardiologist share the professional fee of approximately $2,500, although typically we have a team of 8 to 10 physicians (representing anesthesia, echocardiography, surgery, and cardiology) in the operating room for every procedure, in addition to nursing and technical staff. The challenge for institutions and providers, however, is that the device costs $32,500, and in many localities CMS reimbursement does not cover the cost of both the valve and the procedure. This may affect how widely the valve is eventually used.

While many more options are available now for management of aortic valve disease (minimally invasive repair or replacement, and newer devices), the future usage of transcatheter aortic valve replacement may become dependent on costs, newer devices, cheaper iterations, competition, and CMS reimbursement.

There are now two additional trials, SURTAVI and PARTNER A2, evaluating transcatheter vs open aortic valve replacement in lower-risk patients. The issues that will have to be addressed with new iterations are the risk of stroke and transient ischemic attack, perivalvular leakage, and the costs of the devices.

Newer reports suggest that the results with transcatheter aortic valve replacement in inoperable and high-risk patients continue to improve as experience grows.


References
  1. Svensson LG, Blackstone EH, Cosgrove DM. Surgical options in young adults with aortic valve disease. Curr Probl Cardiol 2003; 28:417–480.
  2. Svensson LG, Kim KH, Blackstone EH, et al. Bicuspid aortic valve surgery with proactive ascending aorta repair. J Thorac Cardiovasc Surg 2011; 142:622–629.e1–e3.
  3. Svensson LG, Batizy LH, Blackstone EH, et al. Results of matching valve and root repair to aortic valve and root pathology. J Thorac Cardiovasc Surg 2011; 142:1491–1498.e7.
  4. Svensson LG, Adams DH, Bonow RO, et al. Aortic valve and ascending aorta guidelines for management and quality measures: executive summary. Ann Thorac Surg 2013; doi:10.1016/j.athoracsur.2012.12.027. Epub ahead of print.
  5. Svensson LG, D’Agostino RS. “J” incision minimal-access valve operations. Ann Thorac Surg 1998; 66:1110–1112.
  6. Johnston DR, Atik FA, Rajeswaran J, et al. Outcomes of less invasive J-incision approach to aortic valve surgery. J Thorac Cardiovasc Surg 2012; 144:852–858.e3.
  7. Albacker TB, Blackstone EH, Williams SJ, et al. Should less-invasive aortic valve replacement be avoided in patients with pulmonary dysfunction? J Thorac Cardiovasc Surg 2013; Epub ahead of print.
  8. Leon MB, Smith CR, Mack M, et al; PARTNER Trial Investigators. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med 2010; 363:1597–1607.
  9. Smith CR, Leon MB, Mack MJ, et al; PARTNER Trial Investigators. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med 2011; 364:2187–2198.
  10. Svensson LG, Tuzcu M, Kapadia S, et al. A comprehensive review of the PARTNER trial. J Thorac Cardiovasc Surg 2013; 145(suppl):S11–S16.
  11. Kodali SK, Williams MR, Smith CR, et al; PARTNER Trial Investigators. Two-year outcomes after transcatheter or surgical aortic-valve replacement. N Engl J Med 2012; 366:1686–1695.
Display Headline
Aortic valve replacement: Options, improvements, and costs

What should be the interval between bone density screenings?

Article Type
Changed
Mon, 09/18/2017 - 15:51
Display Headline
What should be the interval between bone density screenings?

In 2010, the United States Preventive Services Task Force recommended screening for osteoporosis by measuring bone mineral density in women age 65 and older and also in younger women if their fracture risk is equal to or greater than that of a 65-year-old white woman who has no additional risk factors.

See related article

But what should be the interval between screenings? The Task Force stated that evidence on the optimum screening interval is lacking, that 2 years may be the minimum interval due to precision error, but that longer intervals may be necessary to improve fracture risk prediction.1 They also cited a study showing that repeating the test up to 8 years after an initial test did not improve the ability of screening to predict fractures.2 This was recently confirmed in a study from Canada.3

GOURLAY ET AL: TEST AGAIN IN 1 TO 15 YEARS

In response to this information void, Gourlay and colleagues4 analyzed data from the Study of Osteoporotic Fractures. Because these investigators were interested in the interval between screening measurements of bone mineral density, they included only women who did not already have osteoporosis or take medication for osteoporosis. They wanted to know how long it took for 10% of women to develop osteoporosis, and found that this interval varied from 1 to 15 years depending on the initial bone density.

I did not think these results were surprising. The times it took for osteoporosis to develop were similar to what one would predict from cross-sectional reference ranges. The average woman loses a little less than 1% of bone density per year after age 65. The bone density at a T score of −1.0 is 22% higher than at a T score of −2.5, so on average it would take more than 20 years to go from early osteopenia to osteoporosis.
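The arithmetic behind this estimate can be written out. This is a minimal sketch that takes the 22% difference and the roughly 1%-per-year loss rate as given; the relation BMD(T) = μ + Tσ uses the young-adult reference mean μ and standard deviation σ, whose particular values are not specified here:

```latex
% A T score expresses bone density in standard deviations from the
% young-adult reference mean: $\mathrm{BMD}(T) = \mu + T\sigma$.
% Per the text, the density at T = -1.0 is about 22% above that at T = -2.5:
%   $\mathrm{BMD}(-1.0)/\mathrm{BMD}(-2.5) \approx 1.22$.
% With a compounding loss of about 1% per year, the time $t$ to cross that
% gap satisfies $(1 - 0.01)^t \times 1.22 = 1$, so
\[
  t \;=\; \frac{\ln 1.22}{-\ln(1 - 0.01)}
    \;\approx\; \frac{0.199}{0.0101}
    \;\approx\; 20 \text{ years},
\]
% and with a loss rate a little below 1% per year, $t$ exceeds 20 years,
% consistent with the "more than 20 years" estimate.
```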

AN ONGOING DEBATE ON SCREENING

The report generated a debate about the value and timing of repeated screening.5,6

In their article “More bone density testing is needed, not less,”5 Lewiecki et al criticized the Gourlay analysis because it did not include spine measurements or screen for asymptomatic vertebral fractures, and because it did not include enough clinical risk factors.5,6 They claimed that media attention suggested that dual-energy x-ray absorptiometry (DXA) was overused and expensive, citing three news reports. One of the news reports did misinterpret the Gourlay study and suggested that fewer women should be screened.7 The others, however, accurately reported the finding that many women do not need to undergo DXA every 2 years.8,9

In this issue of the Cleveland Clinic Journal of Medicine, Doshi and colleagues express their opinion that the interval between bone mineral density tests should be guided by an assessment of clinical risk factors and not just T scores.10

Doshi et al are also concerned about erroneous conclusions drawn by the media. However, when I reviewed the news reports that they cited, I thought the reports were well written and conveyed the results appropriately. One report, by Alice Park,11 cautioned: “doctors need to remain flexible in advising women about when to get tested. A patient who has a normal T score but then develops cancer and loses a lot of weight, for example, may be more vulnerable to developing osteoporosis and therefore may need to get screened before the 15-year interval.”11 The other, by Gina Kolata, also explained that those taking high doses of corticosteroids for another medical condition would lose bone rapidly, but the findings “cover most normal women.”9 Neither report discouraged patients from undergoing screening in the first place.

Both Lewiecki et al and Doshi et al say that clinical factors should be considered, but do not specify which factors should be included in addition to the ones already evaluated by Gourlay et al (age, body mass index, estrogen use at baseline, any fracture after 50 years of age, current smoking, current or past use of oral glucocorticoids, and self-reported rheumatoid arthritis). These did not change the estimated time to develop osteoporosis for 90% of the study participants.

Furthermore, Gourlay et al had already noted that “clinicians may choose to reevaluate patients before our estimated screening intervals if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in our analyses.”4 Thus, patients with serious diseases should undergo DXA not for screening but for monitoring disease progression, and the Gourlay study results do not apply to them.

PATIENTS ON GLUCOCORTICOIDS: A SPECIAL SUBSET

Patients who are treated with glucocorticoids deserve further discussion. Consider the example described by Doshi et al of a woman with rheumatoid arthritis, taking prednisone, with a T score of −1.4. She would have to lose about 17% of her bone density to reach a T score at the osteoporosis level. One clinical trial in patients taking glucocorticoids, most of whom had rheumatoid arthritis, reported a loss of 2% after 2 years in the placebo group,12 so it is unlikely that this patient would have bone density in the osteoporosis range for at least several years.

However, clinicians know that these patients suffer fractures, especially in the spine, even with normal bone density. Therefore, vertebral fracture assessment would be more important than bone density screening in this patient. Currently, there is uncertainty about the best time to initiate treatment in patients taking glucocorticoids, as well as about the choice of initial medication. More research on the long-term benefits of treatment is especially needed in this population.

VERTEBRAL FRACTURES: NO FIRM RECOMMENDATIONS

Doshi et al state that the Gourlay study was biased toward longer screening intervals because it included women with asymptomatic vertebral fractures. This does not make sense, because women who have untreated asymptomatic fractures would not be expected to lose bone at a slower rate. That is not to say that asymptomatic fractures are trivial.

Instead of getting more frequent bone density measurements, I think it would be more logical to evaluate vertebral fractures using radiographs or vertebral fracture assessment, but we can’t make a firm recommendation without studies of the effectiveness of screening for vertebral fractures.

WHAT ABOUT OSTEOPENIA?

Critics of the Gourlay study point out that most fractures occur in the osteopenic population. This is true, but it does not mean that bone density should be measured more frequently. The bisphosphonates are not effective at preventing a first fracture unless the T score is lower than −2.5.13 Patients who have risk factors in addition to osteopenia may have a higher risk of fracture, but it is not clear that this risk can be reduced with medication. For example, rodeo riders have a high fracture risk, but they would not benefit from taking alendronate. In some cases, such as people who smoke or drink alcohol to excess, treating the risk factor would be more appropriate.

As Doshi et al and others have noted, the study by Gourlay et al has limitations, and of course clinical judgment must be used in implementing the findings of any study. But doctors should not order unnecessary and expensive tests, and physicians who perform bone densitometry should not recommend frequent repeat testing that does not benefit the patient.

References
  1. US Preventive Services Task Force. Screening for osteoporosis: US Preventive Services Task Force recommendation statement. Ann Intern Med 2011; 154:356–364.
  2. Hillier TA, Stone KL, Bauer DC, et al. Evaluating the value of repeat bone mineral density measurement and prediction of fractures in older women: the study of osteoporotic fractures. Arch Intern Med 2007; 167:155–160.
  3. Leslie WD, Morin SN, Lix LM; Manitoba Bone Density Program. Rate of bone density change does not enhance fracture prediction in routine clinical practice. J Clin Endocrinol Metab 2012; 97:1211–1218.
  4. Gourlay ML, Fine JP, Preisser JS, et al; Study of Osteoporotic Fractures Research Group. Bone-density testing interval and transition to osteoporosis in older women. N Engl J Med 2012; 366:225–233.
  5. Lewiecki EM, Laster AJ, Miller PD, Bilezikian JP. More bone density testing is needed, not less. J Bone Miner Res 2012; 27:739–742.
  6. Yu EW, Finkelstein JS. Bone density screening intervals for osteoporosis: one size does not fit all. JAMA 2012; 307:2591–2592.
  7. Frier S. Women receive bone tests too often for osteoporosis, study finds. Bloomberg News; 2012. http://www.bloomberg.com/news/2012-01-18/many-women-screened-for-osteoporosis-don-t-need-it-researchers-report.html. Accessed January 3, 2013.
  8. Knox R. Many older women may not need frequent bone scans. National Public Radio; 2012. http://www.npr.org/blogs/health/2012/01/19/145419138/many-older-women-may-not-need-frequent-bone-scans?ps=sh_sthdl. Accessed January 3, 2013.
  9. Kolata G. Patients with normal bone density can delay retests, study suggests. The New York Times; 2012. http://www.nytimes.com/2012/01/19/health/bone-density-tests-for-osteoporosis-can-wait-study-says.html. Accessed January 3, 2013.
  10. Doshi KB, Khan LZ, Williams SE, Licata AA. Bone mineral density testing interval and transition to osteoporosis in older women: Is a T-score enough to determine a screening interval? Cleve Clin J Med 2013; 80:234–239.
  11. Park A. How often do women really need bone density tests? Time Healthland; 2012. http://healthland.time.com/2012/01/19/most-women-may-be-getting-too-many-bone-density-tests/. Accessed January 3, 2013.
  12. Adachi JD, Saag KG, Delmas PD, et al. Two-year effects of alendronate on bone mineral density and vertebral fracture in patients receiving glucocorticoids: a randomized, double-blind, placebo-controlled extension trial. Arthritis Rheum 2001; 44:202–211.
  13. Cummings SR, Black DM, Thompson DE, et al. Effect of alendronate on risk of fracture in women with low bone density but without vertebral fractures: results from the Fracture Intervention Trial. JAMA 1998; 280:2077–2082.
Author and Disclosure Information

Susan M. Ott, MD
Professor, University of Washington, Department of Medicine, Seattle, WA

Address: Susan M. Ott, MD, Department of Medicine, University of Washington, Box 356426, Seattle, WA 98195; e-mail: [email protected]

Issue: Cleveland Clinic Journal of Medicine - 80(4), pages 240-242
Author and Disclosure Information

Susan M. Ott, MD
Professor, University of Washington, Department of Medicine, Seattle, WA

Address: Susan M. Ott, MD, Department of Medicine, University of Washington, Box 356426, Seattle, WA 98195; e-mail: [email protected]

Author and Disclosure Information

Susan M. Ott, MD
Professor, University of Washington, Department of Medicine, Seattle, WA

Address: Susan M. Ott, MD, Department of Medicine, University of Washington, Box 356426, Seattle, WA 98195; e-mail: [email protected]

Article PDF
Article PDF

In 2010, the United States Preventive Services Task Force recommended screening for osteoporosis by measuring bone mineral density in women age 65 and older and also in younger women if their fracture risk is equal to or greater than that of a 65-year-old white woman who has no additional risk factors.

See related article

But what should be the interval between screenings? The Task Force stated that evidence on the optimum screening interval is lacking, that 2 years may be the minimum interval due to precision error, but that longer intervals may be necessary to improve fracture risk prediction.1 They also cited a study showing that repeating the test up to 8 years after an initial test did not improve the ability of screening to predict fractures.2 This was recently confirmed in a study from Canada.3

GOURLAY ET AL: TEST AGAIN IN 1 TO 15 YEARS

In response to this information void, Gourlay and colleagues4 analyzed data from the Study of Osteoporotic Fractures. Because these investigators were interested in the interval between screening measurements of bone mineral density, they included only women who did not already have osteoporosis or take medication for osteoporosis. They wanted to know how long it took for 10% of women to develop osteoporosis, and found that this interval varied from 1 to 15 years depending on the initial bone density.

I did not think these results were surprising. The durations in which osteoporosis developed were similar to what one would predict from cross-sectional reference ranges. The average woman loses a little less than 1% of bone density per year after age 65. A T score of −1.0 is 22% higher than a T score of −2.5, so on average it would take more than 20 years to go from early osteopenia to osteoporosis.

AN ONGOING DEBATE ON SCREENING

The report generated a debate about the value and timing of repeated screening.5,6

In their article “More bone density testing is needed, not less,”5 Lewiecki et al criticized the Gourlay analysis because it did not include spine measurements or screen for asymptomatic vertebral fractures, and because it did not include enough clinical risk factors.5,6 They claimed that media attention suggested that dual-energy x-ray absorptiometry (DXA) was overused and expensive, citing three news reports. One of the news reports did misinterpret the Gourlay study and suggested that fewer women should be screened.7 The others, however, accurately described the findings that many women did not need to undergo DXA every 2 years.8,9

In this issue of the Cleveland Clinic Journal of Medicine, Doshi and colleagues express their opinion that the interval between bone mineral density tests should be guided by an assessment of clinical risk factors and not just T scores.10

Doshi et al are also concerned about erroneous conclusions drawn by the media. However, when I reviewed the news reports that they cited, I thought the reports were well written and conveyed the results appropriately. One report, by Alice Park,11 cautioned: “doctors need to remain flexible in advising women about when to get tested. A patient who has a normal T score but then develops cancer and loses a lot of weight, for example, may be more vulnerable to developing osteoporosis and therefore may need to get screened before the 15-year interval.”11 The other, by Gina Kolata, also explained that those taking high doses of corticosteroids for another medical condition would lose bone rapidly, but the findings “cover most normal women.”9 Neither report discouraged patients from getting screening in the first place.

Both Lewiecki et al and Doshi et al say that clinical factors should be considered, but do not specify which factors should be included in addition to the ones already evaluated by Gourlay et al (age, body mass index, estrogen use at baseline, any fracture after 50 years of age, current smoking, current or past use of oral glucocorticoids, and self-reported rheumatoid arthritis). These did not change the estimated time to develop osteoporosis for 90% of the study participants.

Furthermore, Gourlay et al had already noted that “clinicians may choose to reevaluate patients before our estimated screening intervals if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in our analyses.”4 Thus, patients with serious diseases should undergo DXA not for screening but for monitoring disease progression, and the Gourlay study results do not apply to them.

PATIENTS ON GLUCOCORTICOIDS: A SPECIAL SUBSET

Patients who are treated with glucocorticoids deserve further discussion. Consider the example described by Doshi et al of a woman with rheumatoid arthritis, taking prednisone, with a T score of −1.4. She would have to lose about 17% of her bone density to reach a T score at the osteoporosis level. One clinical trial in patients taking glucocorticoids, most of whom had rheumatoid arthritis, reported a loss of 2% after 2 years in the placebo group,12 so this patient would be unlikely to reach the osteoporosis range for at least several years.
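A rough check of that timeline: if the trial's placebo-group rate (2% over 2 years, treated here as a compounding 1% per year) is taken at face value, the sketch below asks how long a 17% decline would take. The exponential-decline model is an illustrative assumption; individual trajectories vary.

```python
import math

required_loss = 0.17  # ~17% drop needed to go from T = -1.4 to the osteoporosis threshold
annual_loss = 0.01    # 2% over 2 years in the trial's placebo group, compounded yearly

# Years of compounding 1% annual loss needed to lose 17% of bone density
years = math.log(1 - required_loss) / math.log(1 - annual_loss)
print(f"Years to lose {required_loss:.0%} at {annual_loss:.0%}/yr: {years:.1f}")
```

At that rate the decline takes well over a decade, which supports the "at least several years" conclusion in the text.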

However, clinicians know that these patients get fractures, especially in the spine, even with a normal bone density. Therefore, vertebral fracture assessment would be more important than bone density screening in this patient. Currently, there is uncertainty about the best time to initiate treatment in patients taking glucocorticoids, as well as about the choice of initial medication. More research about the long-term benefits of treatment is especially needed in this population.

VERTEBRAL FRACTURES: NO FIRM RECOMMENDATIONS

Doshi et al state that the Gourlay study was biased towards longer screening intervals because it included women with asymptomatic vertebral fractures. This does not make sense, because women who have untreated asymptomatic fractures would not be expected to lose bone at a slower rate. This does not mean that the asymptomatic fractures are trivial.

Instead of getting more frequent bone density measurements, I think it would be more logical to evaluate vertebral fractures using radiographs or vertebral fracture assessment, but we can’t make a firm recommendation without studies of the effectiveness of screening for vertebral fractures.

WHAT ABOUT OSTEOPENIA?

Critics of the Gourlay study point out that most fractures occur in the osteopenic population. This is true, but it does not mean that bone density should be measured more frequently. The bisphosphonates are not effective at preventing a first fracture unless the T score is lower than −2.5.13 Patients who have risk factors in addition to osteopenia may have a higher risk of fracture, but it is not clear if this can be treated with medication. For example, rodeo riders have a high fracture risk, but they would not benefit from taking alendronate. In some cases, such as people who smoke or drink alcohol to excess, treating the risk factor would be more appropriate.

As Doshi et al and others have noted, the study by Gourlay et al has limitations, and of course clinical judgment must be used in implementing the findings of any study. But doctors should not order unnecessary and expensive tests, and physicians who perform bone densitometry should not recommend frequent repeat testing that does not benefit the patient.

References
  1. US Preventive Services Task Force. Screening for osteoporosis: US Preventive Services Task Force recommendation statement. Ann Intern Med 2011; 154:356–364.
  2. Hillier TA, Stone KL, Bauer DC, et al. Evaluating the value of repeat bone mineral density measurement and prediction of fractures in older women: the study of osteoporotic fractures. Arch Intern Med 2007; 167:155–160.
  3. Leslie WD, Morin SN, Lix LM; Manitoba Bone Density Program. Rate of bone density change does not enhance fracture prediction in routine clinical practice. J Clin Endocrinol Metab 2012; 97:1211–1218.
  4. Gourlay ML, Fine JP, Preisser JS, et al; Study of Osteoporotic Fractures Research Group. Bone-density testing interval and transition to osteoporosis in older women. N Engl J Med 2012; 366:225–233.
  5. Lewiecki EM, Laster AJ, Miller PD, Bilezikian JP. More bone density testing is needed, not less. J Bone Miner Res 2012; 27:739–742.
  6. Yu EW, Finkelstein JS. Bone density screening intervals for osteoporosis: one size does not fit all. JAMA 2012; 307:2591–2592.
  7. Frier S. Women receive bone tests too often for osteoporosis, study finds. Bloomberg News; 2012. http://www.bloomberg.com/news/2012-01-18/many-women-screened-for-osteoporosis-don-t-need-it-researchers-report.html. Accessed January 3, 2013.
  8. Knox R. Many older women may not need frequent bone scans. National Public Radio; 2012. http://www.npr.org/blogs/health/2012/01/19/145419138/many-older-women-may-not-need-frequent-bone-scans?ps=sh_sthdl. Accessed January 3, 2013.
  9. Kolata G. Patients with normal bone density can delay retests, study suggests. The New York Times; 2012. http://www.nytimes.com/2012/01/19/health/bone-density-tests-for-osteoporosis-can-wait-study-says.html. Accessed January 3, 2013.
  10. Doshi KB, Khan LZ, Williams SE, Licata AA. Bone mineral density testing interval and transition to osteoporosis in older women: Is a T-score enough to determine a screening interval? Cleve Clin J Med 2013; 80:234–239.
  11. Park A. How often do women really need bone density tests? Time Healthland; 2012. http://healthland.time.com/2012/01/19/most-women-may-be-getting-too-many-bone-density-tests/. Accessed January 3, 2013.
  12. Adachi JD, Saag KG, Delmas PD, et al. Two-year effects of alendronate on bone mineral density and vertebral fracture in patients receiving glucocorticoids: a randomized, double-blind, placebo-controlled extension trial. Arthritis Rheum 2001; 44:202–211.
  13. Cummings SR, Black DM, Thompson DE, et al. Effect of alendronate on risk of fracture in women with low bone density but without vertebral fractures: results from the Fracture Intervention Trial. JAMA 1998; 280:2077–2082.
Issue
Cleveland Clinic Journal of Medicine - 80(4)
Page Number
240-242
Display Headline
What should be the interval between bone density screenings?

Affordable Care Act Implementation

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Affordable care act implementation: Implications for hospital medicine

At the Centers for Medicare and Medicaid Services (CMS), we are charged with implementing many of the major provisions of the Affordable Care Act (ACA). Major policies and programs aimed at transforming the way care is delivered and paid for, testing and scaling innovative delivery system reforms, and expanding the number of Americans with health insurance will now move forward. The healthcare system is moving from paying for volume to paying for value. Hospitals and clinicians will need to manage, and be accountable for, populations of patients and for improving health outcomes. In this article, we highlight 4 broad provisions of the ACA that are either already implemented or under development for implementation in 2014, and are anticipated to have widespread impact on our health system. The potential impacts of each provision on hospitals and hospitalists are outlined in Table 1.

Table 1. Potential Impacts of Each Provision on Hospitals and Hospitalists

Expansion of insurance coverage
  • Care for fewer uninsured patients/fewer unreimbursed services
  • Patients have improved access to services after discharge
  • Shorter lengths of stay due to better access to outpatient services and care

Delivery system transformation
  • Financial incentives aligned between inpatient and outpatient providers to better coordinate care
  • Payment is at risk if performance rates do not meet benchmarks and if costs are not lowered
  • Consolidation of hospitals and health systems within local markets

Value‐based purchasing
  • Medicare FFS reimbursement increased or decreased based on quality and cost measure results
  • Opportunity to align incentives between hospitals and hospitalists

Patient‐centered outcomes research
  • Emerging research on delivery system interventions relevant to hospitalists, such as care transitions
  • Funding for PCOR available for hospitalist researchers interested in delivery systems and outcomes research

NOTE: Abbreviations: FFS, fee for service; PCOR, patient‐centered outcomes research.

EXPANSION OF INSURANCE COVERAGE

The central and perhaps most anticipated provision of the ACA is the expansion of insurance to the currently uninsured through the creation of state‐based health insurance exchanges. The exchanges are a competitive marketplace for purchasing private insurance products by individuals and small and large businesses. The individual mandate that accompanies the exchange provision requires that individuals purchase insurance. For those who cannot afford it, the government provides a subsidy. Any health plan that wishes to participate in an exchange marketplace must include at minimum a package of essential health benefits in each of their insurance products, which include benefits such as ambulatory care services, maternal and newborn services, and prescription drugs.[1] Importantly, health plans are required to implement quality improvement strategies and publicly report quality data. The ACA also requires the Secretary of Health and Human Services (HHS) to develop and administer a quality rating system and an enrollee satisfaction survey system, the results of which will be available to exchange consumers. All of these requirements will promote the delivery of high‐quality healthcare to millions of previously uninsured Americans.

Implementation of the exchanges in combination with the expansion of Medicaid is expected to provide insurance to approximately 30 million people who currently lack coverage. Prior to the Supreme Court ruling in June of 2012, states were required to expand Medicaid eligibility to a minimum of 133% of the federal poverty level. This expansion is subsidized 100% by the federal government through 2016, dropping to 90% by 2020. The Supreme Court ruled that the federal government could not require states to expand their Medicaid rolls, although it is expected that most states will do so given the generous federal subsidy and the significant cost to states, hospitals, and society to provide healthcare to the uninsured.

TRANSFORMATION OF HEALTHCARE DELIVERY

In addition to the expansion of insurance coverage, the ACA initiates a transformation in the way that healthcare will be delivered through the testing and implementation of innovative payment and care delivery models. The ACA authorized the creation of the Center for Medicare and Medicaid Innovation (CMMI, or The Innovation Center) within CMS. Payment and care delivery demonstrations or pilots that demonstrate a high quality of care at lower costs can be scaled up nationally at the discretion of the Secretary, rather than requiring authorization by Congress. The Innovation Center has already launched initiatives that test a variety of new models of care, all of which incentivize care coordination, provision of team‐based care, and use of data and quality metrics to drive systems‐based improvement. These programs include pilots that bundle payments to hospitals, physician group practices, and post‐acute care facilities for episodes of care across settings. This allows providers to innovate and redesign systems to deliver equivalent or higher quality of care at lower costs. Another CMMI model, called the comprehensive primary care initiative, involves CMS partnering with private insurers to provide payment to primary care practices for the delivery of chronic disease management and coordinated care to their entire population of patients, regardless of payer. Of great relevance to all hospitalists, CMMI and CMS, in partnership with other HHS agencies, launched the Partnership for Patients program in 2011. To date, approximately 4000 hospitals have signed on to the Partnership in a collective effort to significantly reduce hospital readmissions and hospital‐acquired conditions. Hospitalists are leading the charge related to Partnership for Patients in many hospitals. 
The Innovation Center is concurrently launching and rapidly evaluating current pilots, while considering what other new pilots might be needed to further test models aimed at the delivery of better healthcare and health outcomes at lower costs.

Perhaps the delivery system initiative that has received the most attention is the implementation of the Medicare Shared Savings Program (MSSP), or Accountable Care Organizations (ACO). Under the MSSP, ACOs are groups of providers (which may include hospitals) and suppliers of services who work together to coordinate care for the patients they serve. Participating ACOs must achieve performance benchmarks while lowering costs to share in the cost savings with CMS. Although this program is focused on Medicare fee‐for‐service (FFS) beneficiaries, it is expected that all patients will benefit from the infrastructure redesign and care coordination that is required under this program. The Pioneer ACOs are large integrated health systems or other providers that have higher levels of shared risk in addition to shared savings. Hospitals that are a part of a participating ACO have greater financial incentives to work with their primary care and other outpatient providers to reduce readmissions and other adverse events and achieve quality benchmarks. With the degree of savings as well as financial risk that is on the table, it is possible that over time, hospitals and health systems may consolidate to capture a larger share of the market. Such a consequence could have a parallel effect on job opportunities and financial incentives and risk for hospitalists in local markets.

VALUE‐BASED PURCHASING

Improvement in the quality of care delivered to all patients is another central purpose of the Affordable Care Act. The law requires that the Secretary develop a National Quality Strategy that must be updated annually; the first version of this strategy was published in April of 2011.[2] The strategy identifies 3 aims for the nation: better healthcare for individuals, better health for populations and communities, and lower costs for all. One of the levers that CMS uses to achieve these 3 aims is value‐based purchasing (VBP). VBP is a way to link the National Quality Strategy with Medicare FFS payments on a national scale by adjusting payments based on performance. VBP rewards providers and health systems that deliver better outcomes in health and healthcare at lower cost to the beneficiaries and communities they serve, rather than rewarding them for the volume of services they provide. The ACA authorizes implementation of the Hospital Value‐Based Purchasing (HVBP) program as well as the Physician Value Modifier (PVM). The HVBP program began in 2011, and currently includes process, outcome, and patient experience quality metrics as well as a total cost metric, which includes 30 days postdischarge for beneficiaries admitted to the hospital. Hospitals are rewarded on either their improvement from baseline or achievement of a benchmark, whichever is higher.[3] The PVM program adjusts providers' Medicare FFS payments up or down beginning in 2015, based on quality metrics reported on care provided in 2013. In the first year of the program, groups of 100 or more physicians are eligible for the program, and are given a choice on metrics to report and whether to elect for quality tiering and the potential for payment adjustment[4]; by payment year 2017, all physicians must participate. 
To participate, physicians must report on quality metrics that they choose through the Physician Quality Reporting System (PQRS) or elect to have their quality assessed based on administrative claim measures. Measures currently in the PQRS program may not always be relevant for hospitalists; CMS is working to define and include metrics that would be most meaningful to hospitalists' scope of practice and is seeking comment on whether to allow hospital‐based physicians to align with and accept hospital quality measures to count as their performance metrics.

PATIENT‐CENTERED OUTCOMES RESEARCH

Building on the down payment on Comparative Effectiveness Research (CER) funded under the American Recovery and Reinvestment Act of 2009, the ACA authorized the creation of the Patient‐Centered Outcomes Research Institute (PCORI) and allocated funding for CER over 10 years. Rebranded as Patient‐Centered Outcomes Research (PCOR), CER has the potential to improve quality and reduce costs by identifying what works for different populations of patients (eg, children, elderly, patients with multiple chronic conditions, racial and ethnic minorities) in varied settings (eg, ambulatory, hospital, nursing home) under real‐world conditions. The PCORI governance board was created in 2010, and as required by law, developed a national agenda for patient‐centered outcomes research, which includes assessment of prevention, diagnosis, and treatment options; improving healthcare systems; communicating and disseminating research; addressing healthcare disparities; and accelerating PCOR and methodological research. The amount of funding available for research and PCOR infrastructure will ramp up over the next several years, eventually reaching approximately $500 million annually, with increasing funding opportunities for comparative research questions related to clinical and delivery system interventions using pragmatic, randomized, controlled trials; implementation science; and other novel research methodologies. Hospitalists have many roles within this realm, whether as researchers comparing delivery system or clinical interventions, as educators of students or healthcare professionals on the results of PCOR and their implications for practice, or as hospital leaders responsible for implementation of evidence‐based practices.[5]

CONCLUSION

The Affordable Care Act is a transformative piece of legislation, and our healthcare system is changing rapidly. Many of the ACA's provisions will change how care is delivered in the United States and will have a direct effect on practicing physicians, hospitals, and patients. Although CMS plays a major role in the implementation of the law, the government cannot be, and should not be, the primary force in transforming health care in this country. Through the provisions highlighted here as well as others, CMS can create a supportive environment, be a catalyst, and provide incentives for change; however, true transformation must occur on the front lines. For hospitalists, this means partnering with the hospital administration and other hospital personnel, local providers, and community organizations to drive systems‐based improvements that will ultimately achieve higher‐quality care at lower costs for all. It also calls for hospitalists to lead change in their local systems focused on better care, better health, and lower costs through improvement.

Disclosure

The views expressed in this manuscript represent the authors and not necessarily the policy or opinions of the Centers for Medicare and Medicaid Services.

References
  1. Department of Health and Human Services. Essential Health Benefits: HHS Informational Bulletin. Available at: http://www.healthcare.gov/news/factsheets/2011/12/essential‐health‐benefits12162011a.html. Accessed December 13, 2012.
  2. Department of Health and Human Services. Report to Congress: National Strategy for Quality Improvement in Healthcare. March 2011. Available at: http://www.healthcare.gov/law/resources/reports/quality03212011a.html. Accessed December 13, 2012.
  3. Centers for Medicare and Medicaid Services. FY 2013 IPPS Final Rule Home Page. August 2012. Available at: http://www.cms.gov/Medicare/Medicare‐Fee‐for‐Service‐Payment/AcuteInpatientPPS/FY‐2013‐IPPS‐Final‐Rule‐Home‐Page.html. Accessed December 13, 2012.
  4. Centers for Medicare and Medicaid Services. Physician Fee Schedule. November 2012. Available at: http://www.cms.gov/Medicare/Medicare‐Fee‐for‐Service‐Payment/PhysicianFeeSched/index.html. Accessed December 13, 2012.
  5. Goodrich KH, Conway PH. Comparative effectiveness research: implications for hospitalists. J Hosp Medicine. 2010;5(5):257260.
Article PDF
Issue
Journal of Hospital Medicine - 8(3)
Publications
Page Number
159-161
Sections
Files
Files
Article PDF
Article PDF

At the Centers for Medicare and Medicaid Services (CMS), we are charged with implementing many of the major provisions of the Affordable Care Act (ACA). Major policies and programs aimed at transforming the way care is delivered and paid for, testing and scaling innovative delivery system reforms, and expanding the number of Americans with health insurance will now move forward. The healthcare system is moving from paying for volume to paying for value. Hospitals and clinicians will need to be able to manage populations of patients and be accountable for improving their health outcomes. In this article, we highlight 4 broad provisions of the ACA that are either already implemented or under development for implementation in 2014, and are anticipated to have widespread impact on our health system. The potential impacts of each provision on hospitals and hospitalists are outlined in Table 1.

Table 1. Potential Impacts of Each Provision on Hospitals and Hospitalists

Expansion of insurance coverage
  • Care for fewer uninsured patients/fewer unreimbursed services
  • Patients have improved access to services after discharge
  • Shorter lengths of stay due to better access to outpatient services and care

Delivery system transformation
  • Financial incentives aligned between inpatient and outpatient providers to better coordinate care
  • Payment is at risk if performance rates do not meet benchmarks and if costs are not lowered
  • Consolidation of hospitals and health systems within local markets

Value‐based purchasing
  • Medicare FFS reimbursement increased or decreased based on quality and cost measure results
  • Opportunity to align incentives between hospitals and hospitalists

Patient‐centered outcomes research
  • Emerging research on delivery system interventions relevant to hospitalists, such as care transitions
  • Funding for PCOR available for hospitalist researchers interested in delivery systems and outcomes research

NOTE: Abbreviations: FFS, fee for service; PCOR, Patient‐Centered Outcomes Research.

EXPANSION OF INSURANCE COVERAGE

The central and perhaps most anticipated provision of the ACA is the expansion of insurance to the currently uninsured through the creation of state‐based health insurance exchanges. The exchanges are competitive marketplaces in which individuals and small and large businesses can purchase private insurance products. The individual mandate that accompanies the exchange provision requires that individuals purchase insurance. For those who cannot afford it, the government provides a subsidy. Any health plan that wishes to participate in an exchange marketplace must include, at minimum, a package of essential health benefits in each of its insurance products, including benefits such as ambulatory care services, maternal and newborn services, and prescription drugs.[1] Importantly, health plans are required to implement quality improvement strategies and publicly report quality data. The ACA also requires the Secretary of Health and Human Services (HHS) to develop and administer a quality rating system and an enrollee satisfaction survey system, the results of which will be available to exchange consumers. All of these requirements will promote the delivery of high‐quality healthcare to millions of previously uninsured Americans.

Implementation of the exchanges in combination with the expansion of Medicaid is expected to provide insurance to approximately 30 million people who currently lack coverage. Prior to the Supreme Court ruling in June of 2012, states were required to expand Medicaid eligibility to a minimum of 133% of the federal poverty level. This expansion is subsidized 100% by the federal government through 2016, dropping to 90% by 2020. The Supreme Court ruled that the federal government could not require states to expand their Medicaid rolls, although it is expected that most states will do so given the generous federal subsidy and the significant cost to states, hospitals, and society to provide healthcare to the uninsured.

TRANSFORMATION OF HEALTHCARE DELIVERY

In addition to the expansion of insurance coverage, the ACA initiates a transformation in the way that healthcare will be delivered through the testing and implementation of innovative payment and care delivery models. The ACA authorized the creation of the Center for Medicare and Medicaid Innovation (CMMI, or The Innovation Center) within CMS. Payment and care delivery demonstrations or pilots that deliver high-quality care at lower costs can be scaled up nationally at the discretion of the Secretary, rather than requiring authorization by Congress. The Innovation Center has already launched initiatives that test a variety of new models of care, all of which incentivize care coordination, provision of team‐based care, and use of data and quality metrics to drive systems‐based improvement. These programs include pilots that bundle payments to hospitals, physician group practices, and post‐acute care facilities for episodes of care across settings. This allows providers to innovate and redesign systems to deliver equivalent or higher quality of care at lower costs. Another CMMI model, the Comprehensive Primary Care initiative, involves CMS partnering with private insurers to provide payment to primary care practices for the delivery of chronic disease management and coordinated care to their entire population of patients, regardless of payer. Of great relevance to all hospitalists, CMMI and CMS, in partnership with other HHS agencies, launched the Partnership for Patients program in 2011. To date, approximately 4000 hospitals have signed on to the Partnership in a collective effort to significantly reduce hospital readmissions and hospital‐acquired conditions. Hospitalists are leading the charge related to Partnership for Patients in many hospitals.
The Innovation Center is concurrently launching and rapidly evaluating current pilots, while considering what other new pilots might be needed to further test models aimed at the delivery of better healthcare and health outcomes at lower costs.

Perhaps the delivery system initiative that has received the most attention is the implementation of the Medicare Shared Savings Program (MSSP), or Accountable Care Organizations (ACOs). Under the MSSP, ACOs are groups of providers (which may include hospitals) and suppliers of services who work together to coordinate care for the patients they serve. Participating ACOs must achieve performance benchmarks while lowering costs to share in the cost savings with CMS. Although this program is focused on Medicare fee‐for‐service (FFS) beneficiaries, it is expected that all patients will benefit from the infrastructure redesign and care coordination that is required under this program. Pioneer ACOs are large integrated health systems or other providers that take on higher levels of shared risk in addition to shared savings. Hospitals that are part of a participating ACO have greater financial incentives to work with their primary care and other outpatient providers to reduce readmissions and other adverse events and achieve quality benchmarks. With the degree of savings as well as financial risk that is on the table, it is possible that over time, hospitals and health systems may consolidate to capture a larger share of the market. Such consolidation could have a parallel effect on job opportunities and on the financial incentives and risk for hospitalists in local markets.
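The shared-savings mechanics described above can be illustrated with a short sketch. The minimum savings rate, sharing rate, and quality gate below are illustrative assumptions for a one-sided model, not the actual MSSP parameters, which vary by track and ACO size:

```python
def shared_savings_payment(benchmark_cost, actual_cost, quality_met,
                           min_savings_rate=0.02, sharing_rate=0.50):
    """Illustrative one-sided shared-savings calculation.

    An ACO shares in savings only if it beats its spending benchmark
    by at least a minimum savings rate AND meets its quality
    benchmarks. All parameter values here are assumptions.
    """
    savings = benchmark_cost - actual_cost
    if not quality_met:
        return 0.0  # quality gate: no shared savings without quality performance
    if savings < min_savings_rate * benchmark_cost:
        return 0.0  # savings below the minimum savings rate do not qualify
    return sharing_rate * savings

# An ACO that spends $97M against a $100M benchmark and meets its quality
# targets would keep half of the $3M in savings under these assumed terms:
payment = shared_savings_payment(100_000_000, 97_000_000, quality_met=True)
```

The quality gate is the key design point: an ACO cannot earn shared savings by cutting costs alone, which is what ties the savings incentive back to the performance benchmarks described above.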

VALUE‐BASED PURCHASING

Improvement in the quality of care delivered to all patients is another central purpose of the Affordable Care Act. The law requires that the Secretary develop a National Quality Strategy that must be updated annually; the first version of this strategy was published in April of 2011.[2] The strategy identifies 3 aims for the nation: better healthcare for individuals, better health for populations and communities, and lower costs for all. One of the levers that CMS uses to achieve these 3 aims is value‐based purchasing (VBP). VBP is a way to link the National Quality Strategy with Medicare FFS payments on a national scale by adjusting payments based on performance. VBP rewards providers and health systems that deliver better outcomes in health and healthcare at lower cost to the beneficiaries and communities they serve, rather than rewarding them for the volume of services they provide. The ACA authorizes implementation of the Hospital Value‐Based Purchasing (HVBP) program as well as the Physician Value Modifier (PVM). The HVBP program began in 2011, and currently includes process, outcome, and patient experience quality metrics as well as a total cost metric, which includes 30 days postdischarge for beneficiaries admitted to the hospital. Hospitals are rewarded on either their improvement from baseline or achievement of a benchmark, whichever is higher.[3] The PVM program adjusts providers' Medicare FFS payments up or down beginning in 2015, based on quality metrics reported on care provided in 2013. In the first year of the program, groups of 100 or more physicians are eligible for the program, and are given a choice on metrics to report and whether to elect for quality tiering and the potential for payment adjustment[4]; by payment year 2017, all physicians must participate. 
To participate, physicians must report on quality metrics that they choose through the Physician Quality Reporting System (PQRS) or elect to have their quality assessed based on administrative claim measures. Measures currently in the PQRS program may not always be relevant for hospitalists; CMS is working to define and include metrics that would be most meaningful to hospitalists' scope of practice and is seeking comment on whether to allow hospital‐based physicians to align with and accept hospital quality measures to count as their performance metrics.
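The HVBP rule of crediting a hospital's improvement from its own baseline or its achievement against a national benchmark, whichever is higher, can be sketched as follows. The 0-to-10 point scale and the specific rates are illustrative assumptions, not the actual CMS scoring methodology:

```python
def hvbp_measure_score(rate, baseline, benchmark, threshold):
    """Score one quality measure as the higher of achievement and
    improvement points (illustrative 0-10 scale).

    rate      -- hospital's performance rate in the current period (0-1)
    baseline  -- the same hospital's rate in the baseline period
    benchmark -- top-performer rate (earns full achievement credit)
    threshold -- national threshold rate (minimum for achievement credit)
    """
    # Achievement: where the hospital sits between the national
    # threshold and the benchmark.
    if rate >= benchmark:
        achievement = 10.0
    elif rate < threshold:
        achievement = 0.0
    else:
        achievement = 10.0 * (rate - threshold) / (benchmark - threshold)

    # Improvement: progress from the hospital's own baseline toward
    # the benchmark.
    if rate <= baseline:
        improvement = 0.0
    else:
        improvement = min(10.0, 10.0 * (rate - baseline) / (benchmark - baseline))

    # The higher of the two counts, so hospitals that start low can
    # still earn credit by improving.
    return max(achievement, improvement)

# A hospital still below the national threshold (zero achievement points)
# but well above its own baseline earns improvement points instead:
score = hvbp_measure_score(rate=0.78, baseline=0.60, benchmark=0.95, threshold=0.80)
```

Taking the maximum of the two scores is what makes the program rewarding for both high performers and improvers, as described above.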

PATIENT‐CENTERED OUTCOMES RESEARCH

Building on the down payment on Comparative Effectiveness Research (CER) funded under the American Recovery and Reinvestment Act of 2009, the ACA authorized the creation of the Patient‐Centered Outcomes Research Institute (PCORI) and allocated funding for CER over 10 years. Rebranded as Patient‐Centered Outcomes Research (PCOR), CER has the potential to improve quality and reduce costs by identifying what works for different populations of patients (eg, children, elderly, patients with multiple chronic conditions, racial and ethnic minorities) in varied settings (eg, ambulatory, hospital, nursing home) under real‐world conditions. The PCORI governance board was created in 2010, and as required by law, developed a national agenda for patient‐centered outcomes research, which includes assessment of prevention, diagnosis, and treatment options; improving healthcare systems; communicating and disseminating research; addressing healthcare disparities; and accelerating PCOR and methodological research. The amount of funding available for research and PCOR infrastructure will ramp up over the next several years, eventually reaching approximately $500 million annually, with increasing funding opportunities for comparative research questions related to clinical and delivery system interventions using pragmatic, randomized, controlled trials; implementation science; and other novel research methodologies. Hospitalists have many roles within this realm, whether as researchers comparing delivery system or clinical interventions, as educators of students or healthcare professionals on the results of PCOR and their implications for practice, or as hospital leaders responsible for implementation of evidence‐based practices.[5]

CONCLUSION

The Affordable Care Act is a transformative piece of legislation, and our healthcare system is changing rapidly. Many of the ACA's provisions will change how care is delivered in the United States and will have a direct effect on practicing physicians, hospitals, and patients. Although CMS plays a major role in the implementation of the law, the government cannot be, and should not be, the primary force in transforming healthcare in this country. Through the provisions highlighted here as well as others, CMS can create a supportive environment, be a catalyst, and provide incentives for change; however, true transformation must occur on the front lines. For hospitalists, this means partnering with hospital administration and other hospital personnel, local providers, and community organizations to drive systems‐based improvements that will ultimately achieve higher‐quality care at lower costs for all. It also calls on hospitalists to lead change in their local systems, focusing on better care, better health, and lower costs.

Disclosure

The views expressed in this manuscript represent those of the authors and not necessarily the policy or opinions of the Centers for Medicare and Medicaid Services.

References
  1. Department of Health and Human Services. Essential Health Benefits: HHS Informational Bulletin. Available at: http://www.healthcare.gov/news/factsheets/2011/12/essential‐health‐benefits12162011a.html. Accessed December 13, 2012.
  2. Department of Health and Human Services. Report to Congress: National Strategy for Quality Improvement in Healthcare. March 2011. Available at: http://www.healthcare.gov/law/resources/reports/quality03212011a.html. Accessed December 13, 2012.
  3. Centers for Medicare and Medicaid Services. FY 2013 IPPS Final Rule Home Page. August 2012. Available at: http://www.cms.gov/Medicare/Medicare‐Fee‐for‐Service‐Payment/AcuteInpatientPPS/FY‐2013‐IPPS‐Final‐Rule‐Home‐Page.html. Accessed December 13, 2012.
  4. Centers for Medicare and Medicaid Services. Physician Fee Schedule. November 2012. Available at: http://www.cms.gov/Medicare/Medicare‐Fee‐for‐Service‐Payment/PhysicianFeeSched/index.html. Accessed December 13, 2012.
  5. Goodrich KH, Conway PH. Comparative effectiveness research: implications for hospitalists. J Hosp Med. 2010;5(5):257–260.
Issue
Journal of Hospital Medicine - 8(3)
Page Number
159-161
Display Headline
Affordable care act implementation: Implications for hospital medicine
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Katherine Goodrich, MD, 7500 Security Blvd., S3‐02‐01, Baltimore, MD 21244; Telephone: 410-786-7828; Fax: 410-786-8532; E‐mail: [email protected]

Functional anatomy of the facial nerve revealed by Ramsay Hunt syndrome

Article Type
Changed
Thu, 09/14/2017 - 15:39
Display Headline
Functional anatomy of the facial nerve revealed by Ramsay Hunt syndrome

Varicella-zoster virus (VZV) is a highly neurotropic and ubiquitous alpha-herpesvirus. Primary infection causes varicella (chickenpox), after which the virus becomes latent in ganglionic neurons along the entire neuraxis. Reactivation decades later usually results in zoster (shingles), pain with a dermatomal distribution, and rash. Unlike herpes simplex virus type 1 (HSV-1), which becomes latent exclusively in cranial nerve ganglia and reactivates to produce recurrent vesicular lesions around the mouth, and unlike HSV type 2, which becomes latent exclusively in sacral ganglia and reactivates to produce genital herpes, VZV may reactivate from any ganglia to cause zoster anywhere on the body.

See related article

Reactivation of VZV from the geniculate (facial nerve) ganglion leads to the Ramsay Hunt syndrome, ie, facial paralysis accompanied by a rash around the ear (zoster oticus). The syndrome is the second most common cause of atraumatic facial paralysis after Bell palsy (idiopathic facial paralysis). Importantly, virus reactivation from the geniculate ganglion may also be accompanied by zoster rash on the hard palate or on the anterior two-thirds of the tongue (Figure 1).1 A rash in any of these three skin or mucosal sites in a patient with facial paralysis indicates geniculate ganglionitis. To his credit, Dr. J. Ramsay Hunt recognized that although there is no somatic sensory facial branch to the oropharynx or tongue, virus can still spread from a seventh cranial nerve element to the pharynx or, via special sensory fibers, to the tongue. This spread provides an anatomic explanation for zoster rash in patients with facial paralysis (geniculate zoster) not only around the ear, but also on the hard palate or on the anterior two-thirds of the tongue.2

Reprinted from Sweeney CJ, et al. Ramsay Hunt syndrome. J Neurol Neurosurg Psychiatry 2001; 71:149–154. With permission from BMJ Publishing Group Ltd.
Figure 1. Clinical features of Ramsay Hunt syndrome. Note peripheral facial weakness characterized by a widened palpebral fissure and decreased forehead wrinkling and smile on the right, associated with vesicles in the ipsilateral ear, on the hard palate, or on the anterior two-thirds of the tongue. Four nuclei are involved in facial nerve function: the motor nucleus of VII, the nucleus of the solitary tract, the superior salivatory nucleus, and the spinal nucleus of V. The solitary tract receives special visceral afferent taste fibers emanating from the anterior two-thirds of the tongue, cell bodies of which are in the geniculate ganglion, ie, the site of varicella-zoster virus reactivation when vesicles erupt on the tongue. The spinal nucleus of V receives general somatic afferent fibers from the geniculate zone of the ear via the chorda tympani. Cell bodies of those neurons are located in the geniculate ganglion and are the site of varicella-zoster virus reactivation in classic Ramsay Hunt syndrome, in which vesicular eruptions in geniculate zones are seen.

In geniculate ganglionitis, a rash is usually seen in one but not all three of these skin and mucosal sites. Yet in this issue of the Cleveland Clinic Journal of Medicine, Grillo et al3 describe a patient with facial palsy and rash in all three sites. This remarkable finding underscores the importance of distinguishing Ramsay Hunt syndrome from Bell palsy by checking for rash on the ear, tongue, and hard palate in any patient with acute unilateral peripheral facial weakness. Ramsay Hunt syndrome results from active VZV replication in the geniculate ganglion and requires treatment with antiviral drugs, whereas Bell palsy is usually treated with steroids. Steroid treatment of Ramsay Hunt syndrome misdiagnosed as Bell palsy can potentiate the viral infection. This may partially explain why the outcome of facial paralysis in Ramsay Hunt syndrome is not as good as in idiopathic Bell palsy, in which more than 70% of patients recover full facial function.

Although only cranial nerve VII (facial) was involved in their patient, Grillo et al correctly noted the frequent involvement of other cranial nerves in Ramsay Hunt syndrome. For example, dizziness, vertigo, or hearing loss indicative of involvement of cranial nerve VIII (acoustic) is most likely due to the close proximity of the geniculate ganglion and facial nerve to the vestibulocochlear nerve in the bony facial canal. Patients with this syndrome may also develop dysarthria or dysphagia indicative of lower cranial nerve involvement, reflecting the shared derivation of the facial, glossopharyngeal, and vagus nerves from the same branchial arch. Magnetic resonance imaging, not usually performed in patients with Ramsay Hunt syndrome, may show enhancement in the geniculate ganglion as well as in the intracanalicular and tympanic segments of the facial nerve during its course through the facial canal.

The report by Grillo et al comes at an auspicious time, 100 years after an enlightening series of papers by Dr. Hunt from 1907 to 1915 in which he described herpetic inflammation of the geniculate ganglion,4 the sensory system of the facial nerve,5 and ultimately the syndrome that bears his name.2,6 Dr. Hunt received his doctorate from the University of Pennsylvania in 1893 and later became instructor at Cornell University School of Medicine. In 1924, he became full professor at Columbia University School of Medicine. A clinician of Olympian stature, he is also credited with describing two additional syndromes (clinical features produced by carotid artery occlusion and dyssynergia cerebellaris progressiva), although the best known is zoster oticus with peripheral facial palsy.

Importantly, some patients develop peripheral facial paralysis without any rash but with a fourfold rise in antibody to VZV or in association with the presence of VZV DNA in auricular skin, blood mononuclear cells, middle ear fluid, or saliva, indicating that a proportion of patients with Bell palsy have “Ramsay Hunt syndrome zoster sine herpete” or, more accurately, “geniculate zoster sine herpete.” Treatment of such patients with acyclovir-prednisone within 7 days of onset has been shown to improve the outcome of facial palsy.

Because it is now clear that geniculate ganglionitis may present with facial palsy and zoster rash in any or all of three sites, it may be time to call peripheral facial paralysis associated with zoster rash on the ear, tongue, or palate exactly what it is: geniculate zoster. After all, zoster rash on the face is called trigeminal zoster, and zoster rash on the chest is called thoracic zoster. Most important, however, is the recognition that facial paralysis in association with rash on the ear, tongue, or hard palate reflects geniculate zoster and requires immediate antiviral treatment.

References
  1. Sweeney CJ, Gilden DH. Ramsay Hunt syndrome. J Neurol Neurosurg Psychiatry 2001; 71:149–154.
  2. Hunt JR. The symptom-complex of the acute posterior poliomyelitis of the geniculate, auditory, glossopharyngeal and pneumogastric ganglia. Arch Intern Med 1910; 5:631–675.
  3. Grillo E, Miguel-Morrondo A, Vano-Galvan S, Jaen P. A 54-year-old woman with odynophagia, peripheral facial nerve paralysis and mucocutaneous lesions. Cleve Clin J Med 2013; 80:76–77.
  4. Hunt JR. On herpetic inflammations of the geniculate ganglion: a new syndrome and its complications. J Nerv Ment Dis 1907; 34:73–96.
  5. Hunt JR. The sensory system of the facial nerve and its symptomatology. J Nerv Ment Dis 1909; 36:321–350.
  6. Hunt JR. The sensory field of the facial nerve: a further contribution to the symptomatology of the geniculate ganglion. Brain 1915; 38:418–446.
Author and Disclosure Information

Don Gilden, MD
Louise Baum Endowed Chair and Professor, Department of Neurology and Microbiology, University of Colorado School of Medicine, Aurora, CO

Address: Don Gilden, MD, Department of Neurology, University of Colorado School of Medicine, 12700 E. 19th Avenue, Box B182, Aurora, CO 80045; e-mail [email protected].

Issue
Cleveland Clinic Journal of Medicine - 80(2)
Page Number
78-79

New Strategies to Combat an Old Foe

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Looking for new strategies to combat an old foe

In the early part of the 20th century, pneumonia was a leading cause of death, particularly among older adults, prompting Osler to term it the Captain of the Men of Death.[1] Mortality rates from severe (bacteremic) pneumonia were typically 80% to 90%, and the introduction of antibacterial therapy in the 1940s reduced that mortality to 10% to 20%. However, as Austrian and Gold pointed out in a landmark paper published in 1964, deaths occurring within the first 4 to 5 days of illness were not reduced in the postantibiotic era.[2] Survival rates for patients with severe community‐acquired pneumonia improved only minimally over the ensuing 50 years, despite the introduction of numerous new antimicrobial drugs and other medical interventions.

One promising area for therapeutic intervention relates to the potential adverse effects of the host inflammatory response in the setting of pneumonia. A growing body of literature supports the conclusion that the window of optimal host response may be relatively narrow. Too little response and patients quickly succumb to overwhelming sepsis. Too much response and a patient's hyperactivated inflammatory system can set off a cascade of secondary events, such as acute lung injury or ischemic heart disease.[3] Studies have also established that the level of inflammation, as measured by biomarkers such as C‐reactive protein, tumor necrosis factor, and interleukins, can identify patients at increased risk of adverse outcomes.[4] Thus, it is logical to ask whether immune‐modulating therapies can improve outcomes for these patients.

In this issue of the Journal of Hospital Medicine, Shafiq and colleagues completed a systematic review and meta‐analysis of corticosteroid therapy for patients with pneumonia.[5] Updating prior reviews, they included 8 randomized controlled trials, all of which used low‐dose systemic steroid therapy as the intervention and standard care as the control arm. The overall quality of the included studies was judged moderate, and the pooled data comprised only 1119 patients. In their analysis, adjunctive steroid therapy did not reduce in‐hospital mortality, with 4 studies demonstrating effect sizes suggesting benefit, 3 studies demonstrating no benefit or harm, and 1 study favoring the nonsteroid arm. In situations with such grossly heterogeneous study results, it seems prudent to avoid overinterpreting pooled results, even if statistical tests for heterogeneity are nonsignificant. The investigators also reported a range of secondary outcomes, noting that hospital length of stay was significantly reduced in the pooled steroid‐treated arms.

The overall negative finding is clearly disappointing at a time when clinicians are looking for new treatments to improve outcomes for these patients. Pneumonia is a heterogeneous disorder, representing a wide range of microbial pathogens and underlying host risk factors. Current treatment guidelines for patients with community‐acquired pneumonia are largely empirical and do not focus on pathogen identification, host risk factor analysis, or biomarker distributions to select antimicrobial therapy.[6] In this regard, despite being 1 of the oldest conditions for which we have published treatment guidelines, pneumonia is still treated in a quite antiquated fashion, ignoring recent advances in personalized treatment strategies for other illnesses. We may have reached the limits of one‐size‐fits‐all treatment strategies for hospitalized adults with community‐acquired pneumonia. To improve outcomes further, we need to understand the heterogeneity of the disorder and tailor therapies at an individual level. Rapid point‐of‐care tests for pathogens and host response offer the most promising approach toward this strategy.

It is notable that the majority of studies focus on in‐hospital mortality, even though the impact of steroid therapy may be observed over a longer period of follow‐up. Moreover, although mortality is clearly a relevant outcome, it is not the only patient‐centered outcome of importance. However, other outcomes that are typically assessed, such as length of hospitalization and cost, are not patient‐centered outcomes. These are process measures that reflect physician judgment as much as any patient response to treatment. We need to move the field forward by embracing patient outcomes beyond mortality to optimally evaluate new treatment strategies, particularly because the majority of patients will survive hospitalization for the illness. These outcomes would include time to resolution of major symptoms, such as cough and fatigue, and functional outcomes, including return to work and usual activities. Future comparative efficacy and effectiveness studies in pneumonia need to consider a much wider range of true patient outcomes.[7]

It is increasingly fashionable to adopt cross‐disease approaches toward optimizing patient care, particularly in the hospital. Important initiatives that aim to reduce hospital injuries and improve transitions of care are relatively agnostic to specific disease states. Much of the research agenda of hospital medicine avoids a disease‐specific focus, assuming such disease‐specific approaches are the domain of specialists. Yet, it is worth remembering that much of the progress for medical care can be traced to traditional considerations of disease pathophysiology and empirical studies of risk factors and treatments for specific disease. Hospitalists remain at the front line in dealing with most of the common illnesses that afflict patients. Battling those conditions 1 at a time should be an important component of the broader hospitalist research agenda. One hundred years after Osler charged the medical community to identify new strategies for treating an old enemy, we are still struggling to win the battle.

Disclosure

This work was supported in part by K24‐AI073957 (JPM) from the National Institute of Allergy and Infectious Diseases, National Institutes of Health. The author has no conflicts of interest to report.

References
  1. Osler W. The Principles and Practice of Medicine. 7th ed. New York, London: D. Appleton and Co.; 1909.
  2. Austrian R, Gold J. Pneumococcal bacteremia with special reference to bacteremic pneumococcal pneumonia. Ann Intern Med. 1964;60:759–776.
  3. Corrales‐Medina VF, Serpa J, Rueda AM, et al. Acute bacterial pneumonia is associated with the occurrence of acute coronary syndromes. Medicine (Baltimore). 2009;88(3):154–159.
  4. Kellum JA, Kong L, Fink MP, et al. Understanding the inflammatory cytokine response in pneumonia and sepsis: results of the Genetic and Inflammatory Markers of Sepsis (GenIMS) Study. Arch Intern Med. 2007;167(15):1655–1663.
  5. Shafiq M, Mansoor M, Khan A, Sohail M, Murad M. Adjuvant steroid therapy in community‐acquired pneumonia: a systematic review and meta‐analysis. J Hosp Med. 2013.
  6. Mandell LA, Wunderink RG, Anzueto A, et al. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community‐acquired pneumonia in adults. Clin Infect Dis. 2007;44(suppl 2):S27–S72.
  7. Powers JH. Reassessing the design, conduct, and analysis of clinical trials of therapy for community‐acquired pneumonia. Clin Infect Dis. 2008;46(8):1152–1156.
Issue
Journal of Hospital Medicine - 8(2)
Page Number
59-60

In the early part of the 20th century, pneumonia was a leading causes of death, particularly among older adults, for whom Osler termed it the Captain of the Men of Death.[1] Mortality rates from severe (bacteremic) pneumonia were typically 80% to 90%, and the introduction of antibacterial therapy in the 1940s reduced that mortality to 10% to 20%. However, as pointed out by Austrian and Gold in a landmark paper in the 1950s, mortality for patients dying within the first 4 to 5 days was not reduced in the postantibiotic era.[2] The survival rates for patients with severe community‐acquired pneumonia minimally improved over the ensuing 50 years, despite the introduction of numerous new antimicrobial drugs and other medical interventions.

One promising area for therapeutic intervention relates to the potential adverse effects of the host inflammatory response in the setting of pneumonia. A growing body of literature supports the conclusion that the window of optimal host response may be relatively narrow. Too little response and patients quickly succumb to overwhelming sepsis. Too much response and a patient's hyperactivated inflammatory system can set off a cascade of secondary events, triggering events such as acute lung injury or ischemic heart disease.[3] Studies have also established that the level of inflammation, as measured by biomarkers such as C‐reactive protein, tumor necrosis factor, and interleukins, can identify patients at increased risk of adverse outcomes.[4] Thus, it is logical to ask whether immune modulating therapies can improve outcomes for these patients.

In this issue of the Journal of Hospital Medicine, Shafiq and colleagues completed a systematic review and meta‐analysis of corticosteroid therapy for patients with pneumonia.[5] Updating prior reviews, they included 8 randomized controlled trials, all of which consisted of low‐dose, systemic, steroid therapy as the intervention and standard care as the control arm. The overall quality of the included studies was judged moderate, and the overall size of the pooled data was only 1119 patients. In their analysis, adjunctive steroid therapy did not reduce in‐hospital mortality, with 4 studies demonstrating effect sizes suggesting benefit, 3 studies demonstrating no benefit or harm, and 1 study favoring the nonsteroid arm. In these situations with grossly heterogeneous study results, it seems prudent to avoid overly interpreting pooled results, even if statistical tests for heterogeneity are nonsignificant. The investigators also reported a range of secondary outcomes, noting that hospital length of stay was significantly reduced in the pooled steroid treated arms.

The overall negative finding is clearly disappointing at a time when clinicians are looking for new treatments to improve outcomes for these patients. Pneumonia is a heterogeneous disorder, representing a wide range of microbial pathogens and underlying host risk factors. Current treatment guidelines for patients with community‐acquired pneumonia are largely empirical and do not use pathogen identification, host risk factor analysis, or biomarker distributions to select antimicrobial therapy.[6] In this regard, despite pneumonia being 1 of the oldest conditions for which treatment guidelines have been published, its management remains quite antiquated, ignoring recent advances in personalized treatment strategies for other illnesses. We may have reached the limits of one‐size‐fits‐all treatment strategies for hospitalized adults with community‐acquired pneumonia. To improve outcomes further, we need to understand the heterogeneity of the disorder and tailor therapies at the individual level. Rapid point‐of‐care tests for pathogens and host response offer the most promising path toward this strategy.

It is notable that the majority of studies focused on in‐hospital mortality, even though the impact of steroid therapy may be observed over a longer period of follow‐up. Although mortality is clearly relevant, it is not the only patient‐centered outcome of importance, and other outcomes that are typically assessed, such as length of hospitalization and cost, are not patient‐centered at all: they are process measures that reflect physician judgment as much as any patient response to treatment. We need to move the field forward by embracing patient outcomes beyond mortality to optimally evaluate new treatment strategies, particularly because the majority of patients will survive hospitalization for the illness. These outcomes would include time to resolution of major symptoms, such as cough and fatigue, and functional outcomes, including return to work and usual activities. Future comparative efficacy and effectiveness studies in pneumonia need to consider a much wider range of true patient outcomes.[7]

It is increasingly fashionable to adopt cross‐disease approaches toward optimizing patient care, particularly in the hospital. Important initiatives that aim to reduce hospital injuries and improve transitions of care are relatively agnostic to specific disease states. Much of the research agenda of hospital medicine avoids a disease‐specific focus, assuming such approaches are the domain of specialists. Yet it is worth remembering that much of the progress of medical care can be traced to traditional considerations of disease pathophysiology and empirical studies of risk factors and treatments for specific diseases. Hospitalists remain at the front line in dealing with most of the common illnesses that afflict patients. Battling those conditions 1 at a time should be an important component of the broader hospitalist research agenda. One hundred years after Osler charged the medical community to identify new strategies for treating an old enemy, we are still struggling to win the battle.

Disclosure

This work was supported in part by K24‐AI073957 (JPM) from the National Institute of Allergy and Infectious Diseases, National Institutes of Health. The author has no conflicts of interest to report.

References
  1. Osler W. The Principles and Practices of Medicine. 7th ed. New York, London: D. Appleton and Co.; 1909.
  2. Austrian R, Gold J. Pneumococcal bacteremia with special reference to bacteremic pneumococcal pneumonia. Ann Intern Med. 1964;60:759-776.
  3. Corrales‐Medina VF, Serpa J, Rueda AM, et al. Acute bacterial pneumonia is associated with the occurrence of acute coronary syndromes. Medicine (Baltimore). 2009;88(3):154-159.
  4. Kellum JA, Kong L, Fink MP, et al. Understanding the inflammatory cytokine response in pneumonia and sepsis: results of the Genetic and Inflammatory Markers of Sepsis (GenIMS) Study. Arch Intern Med. 2007;167(15):1655-1663.
  5. Shafiq M, Mansoor M, Khan A, Sohail M, Murad M. Adjuvant steroid therapy in community‐acquired pneumonia: a systematic review and meta‐analysis. J Hosp Med. 2013.
  6. Mandell LA, Wunderink RG, Anzueto A, et al. Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community‐acquired pneumonia in adults. Clin Infect Dis. 2007;44(suppl 2):S27-S72.
  7. Powers JH. Reassessing the design, conduct, and analysis of clinical trials of therapy for community‐acquired pneumonia. Clin Infect Dis. 2008;46(8):1152-1156.
Issue
Journal of Hospital Medicine - 8(2)
Page Number
59-60
Display Headline
Looking for new strategies to combat an old foe
Article Source
Copyright © 2012 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Joshua P. Metlay, MD, PhD, Perelman School of Medicine, University of Pennsylvania, 1232 Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104; Telephone: 215‐898‐1484; Fax: 215‐573‐0198; E‐mail: [email protected]

Mild cognitive impairment: Challenges in research and in practice

Article Type
Changed
Wed, 10/04/2017 - 08:27
Display Headline
Mild cognitive impairment: Challenges in research and in practice

The integrity of cognitive function is a reliable indicator of healthy aging. But the progression of cognitive changes from normal aging to dementia is often insidious and easily underrecognized. Consequently, mild cognitive impairment (MCI)—the entity that characterizes this transition—has become an area of intense research. Since 1999, the number of research publications related to MCI has exploded, with more than 1,000 peer-reviewed studies in 2010 alone.

Controversy remains over the definition, diagnosis, prognosis, and management of MCI. However, in an evidence-based review of the literature,1 the American Academy of Neurology concluded that MCI is a useful clinical entity and that patients with MCI should be identified and monitored because of the increased risk of progression to dementia.

See related article

Early studies appeared to indicate that patients with MCI were at high risk of further cognitive decline and progression to Alzheimer dementia.1 But subsequent research found that not all were, leading to the recognition of two subtypes of MCI: amnestic, which mainly involves memory loss, and nonamnestic, which involves impairment of other cognitive domains. Patients with the amnestic type were determined to be more likely to eventually develop Alzheimer disease.2 The amnestic subtype is being considered for inclusion in the next revision of the Diagnostic and Statistical Manual of Mental Disorders, ie, the fifth edition (DSM-V).3

MCI varies with each person affected. Neither its clinical nor its neuropathologic course follows a predictable, linear path, making its study especially challenging. The pathologic and molecular mechanisms of MCI are not well established. In the amnestic type, the distribution of cortical amyloid deposits appears transitional to the pathologic changes seen in Alzheimer disease.4 But postmortem brain tissues5 and clinical imaging studies6 reveal that some normal controls have a degree of amyloid deposition similar to that in patients with MCI. These findings limit the use of amyloid lesions as a robust pathologic marker for distinguishing normal aging from MCI.

MCI is diagnosed clinically, and clinicians should be able to diagnose most cases of MCI in the office. The first step is cognitive concern (ie, a change from the patient’s baseline cognitive status) raised by the patient, by an informant, or by a clinician. Often, in amnestic MCI, the earliest symptom is memory loss. Once persistent memory loss is documented, the patient is assessed for the ability to perform activities of daily living. To fulfill the criteria for the diagnosis of MCI, patients need to have intact function in the activities of daily living and no features of neurologic and psychiatric diseases that affect cognition. Further office-based cognitive testing helps to determine whether MCI is the amnestic or the nonamnestic type. A brief neuropsychological test such as the Montreal Cognitive Assessment often supports the diagnosis of MCI, although accurate characterization of cognitive dysfunction is enhanced with thorough neuropsychological testing.

MCI remains a clinical diagnosis with an imprecise prognosis. Although the amnestic MCI criteria are reasonably specific, they do not always predict progression to Alzheimer disease. Growing evidence suggests that neuropsychiatric symptoms, including depression, apathy, and anxiety, are clinical predictors of the progression of MCI to Alzheimer disease, and that the added risk can be substantial. For example, in one study, the risk of incident dementia was seven times higher if apathy was present.7 As such, a careful psychiatric evaluation of patients with MCI is strongly recommended and should be part of a comprehensive workup.

The study of MCI touches on almost all aspects of aging and dementia investigation. A great deal of research is focusing on the development of cerebrospinal fluid or imaging biomarkers of amyloid deposition, structural magnetic resonance imaging markers of neuronal loss, and genetic predisposition to detect the earliest signs of the disease in people who may be at risk. The rationale for the intense study of MCI is that the sooner the intervention in a degenerative process is started, the more likely that further cognitive and functional decline can be prevented: early diagnosis is paramount in trying to prevent subsequent disability. Clinical trials are needed to determine whether early detection of MCI or the detection of biomarkers in asymptomatic individuals alters the incidence of dementia or its prognosis.

In this issue of the Cleveland Clinic Journal of Medicine, Patel and Holland8 present a comprehensive overview of MCI and highlight the issues related to its diagnosis and management. The treatment of MCI is another area that is unclear. At this time, prescription of cognition-enhancing medications is not indicated. No pharmacologic agent is approved by the US Food and Drug Administration for treating MCI, although cholinesterase inhibitors have been studied. At the pathologic level, there is no clear consensus on whether presynaptic or postsynaptic (or both) cholinergic receptors are defective in MCI.9 There is some evidence of increased choline acetyltransferase activity in the hippocampus and the superior frontal cortex.10 Selected hippocampal and cortical cholinergic systems may be capable of compensatory responses in MCI. This may help explain why cholinesterase inhibitors are ineffective in preventing dementia in patients with MCI in therapeutic trials.

Patel and Holland recommend a reasonable multidisciplinary approach for managing MCI, although supporting evidence for such recommendations from clinical trials is lacking. Realizing that not all patients with MCI progress to Alzheimer disease and that some cases are reversible is cause for recommending close follow-up and monitoring of neuropsychiatric and cognitive symptoms in older patients.

MCI is now a clinical reality for all physicians dealing with older patients. Thus, MCI is of more than merely research interest to clinicians, who will come to recognize and diagnose this condition frequently in the aging population.

References
  1. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild cognitive impairment: clinical characterization and outcome. Arch Neurol 1999; 56:303-308.
  2. Winblad B, Palmer K, Kivipelto M, et al. Mild cognitive impairment—beyond controversies, towards a consensus: report of the International Working Group on Mild Cognitive Impairment. J Intern Med 2004; 256:240-246.
  3. Petersen RC, O’Brien J. Mild cognitive impairment should be considered for DSM-V. J Geriatr Psychiatry Neurol 2006; 19:147-154.
  4. Markesbery WR. Neuropathologic alterations in mild cognitive impairment: a review. J Alzheimers Dis 2010; 19:221-228.
  5. Price JL, McKeel DW, Buckles VD, et al. Neuropathology of nondemented aging: presumptive evidence for pre-clinical Alzheimer disease. Neurobiol Aging 2009; 30:1026-1036.
  6. Aizenstein HJ, Nebes RD, Saxton JA, et al. Frequent amyloid deposition without significant cognitive impairment among the elderly. Arch Neurol 2008; 65:1509-1517.
  7. Palmer K, Di Iulio F, Varsi AE, et al. Neuropsychiatric predictors of progression from amnestic-mild cognitive impairment to Alzheimer’s disease: the role of depression and apathy. J Alzheimers Dis 2010; 20:175-183.
  8. Patel BB, Holland NW. Mild cognitive impairment: hope for stability, plan for progression. Cleve Clin J Med 2012; 79:857-864.
  9. Mufson EJ, Binder L, Counts SE, et al. Mild cognitive impairment: pathology and mechanisms. Acta Neuropathol 2012; 123:13-30.
  10. DeKosky ST, Ikonomovic MD, Styren SD, et al. Upregulation of choline acetyltransferase activity in hippocampus and frontal cortex of elderly subjects with mild cognitive impairment. Ann Neurol 2002; 51:145-155.
Author and Disclosure Information

Hamid R. Okhravi, MD
Assistant Professor, Eastern Virginia Medical School, and The Glennan Center for Geriatrics and Gerontology, Norfolk, VA

Robert M. Palmer, MD
Professor of Medicine, and Director, The Glennan Center for Geriatrics and Gerontology, Norfolk, VA

Address: Hamid R. Okhravi, MD, The Glennan Center for Geriatrics and Gerontology, 825 Fairfax Avenue, Suite 201, Norfolk, VA 23507; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 79(12)
Page Number
853-854

MCI is now a clinical reality for all physicians dealing with older patients. Thus, MCI is of more than merely research interest to clinicians, who will come to recognize and diagnose this condition frequently in the aging population.

References
  1. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild cognitive impairment: clinical characterization and outcome. Arch Neurol 1999; 56:303–308.
  2. Winblad B, Palmer K, Kivipelto M, et al. Mild cognitive impairment—beyond controversies, towards a consensus: report of the International Working Group on Mild Cognitive Impairment. J Intern Med 2004; 256:240–246.
  3. Petersen RC, O’Brien J. Mild cognitive impairment should be considered for DSM-V. J Geriatr Psychiatry Neurol 2006; 19:147–154.
  4. Markesbery WR. Neuropathologic alterations in mild cognitive impairment: a review. J Alzheimers Dis 2010; 19:221–228.
  5. Price JL, McKeel DW, Buckles VD, et al. Neuropathology of nondemented aging: presumptive evidence for pre-clinical Alzheimer disease. Neurobiol Aging 2009; 30:1026–1036.
  6. Aizenstein HJ, Nebes RD, Saxton JA, et al. Frequent amyloid deposition without significant cognitive impairment among the elderly. Arch Neurol 2008; 65:1509–1517.
  7. Palmer K, Di Iulio F, Varsi AE, et al. Neuropsychiatric predictors of progression from amnestic-mild cognitive impairment to Alzheimer’s disease: the role of depression and apathy. J Alzheimers Dis 2010; 20:175–183.
  8. Patel BB, Holland NW. Mild cognitive impairment: hope for stability, plan for progression. Cleve Clin J Med 2012; 79:857–864.
  9. Mufson EJ, Binder L, Counts SE, et al. Mild cognitive impairment: pathology and mechanisms. Acta Neuropathol 2012; 123:13–30.
  10. DeKosky ST, Ikonomovic MD, Styren SD, et al. Upregulation of choline acetyltransferase activity in hippocampus and frontal cortex of elderly subjects with mild cognitive impairment. Ann Neurol 2002; 51:145–155.
Issue
Cleveland Clinic Journal of Medicine - 79(12)
Page Number
853-854
Display Headline
Mild cognitive impairment: Challenges in research and in practice

Appreciating Asperger syndrome: Implications for better care and outcomes

Article Type
Changed
Wed, 10/04/2017 - 08:28
Display Headline
Appreciating Asperger syndrome: Implications for better care and outcomes

In this issue of the Cleveland Clinic Journal of Medicine, Prayson and Franco paint a comprehensive picture of the key medical and therapeutic issues faced by patients with Asperger syndrome.1 They offer a refreshing optimism about contemporary treatments aimed at enhancing independence and quality of life, while being realistic about the challenges for these patients, such as making the transition from pediatric care to adult care. Importantly, their overview offers practical suggestions for improving medical care through a greater understanding of the syndrome, along with strategies for how to relate to patients who have a difficult interpersonal style.

See related article

In this editorial, I focus on lessons learned in our practice that help identify the problems that people with Asperger syndrome have, and I build on the advice of Prayson and Franco on how to improve patient experiences in the adult medical setting, particularly by diminishing confusion and uncertainty in doctor-patient interactions and by supporting ongoing functioning.

PEOPLE WITH ASPERGER SYNDROME HAVE ALWAYS LIVED AMONG US

Asperger syndrome is being diagnosed more frequently as its criteria become recognized by a greater number of professionals. This diagnostic distinction offers a clearer understanding of a group of people who have always lived among us—often standing out because of their appearance, behavior, and communication style, even before a common label existed for their condition.

In less-informed communities, they might be described by neighbors or peers as eccentric or odd, even when they present no obvious dysmorphic or other distinguishing physical features. In fact, some may stand out more because of their accomplishments. The behaviors reported for some innovative scientists (Einstein), inventors (Ford, Edison), musicians (Beethoven), and others might lead to a diagnosis of Asperger syndrome today, while an obsessive nature also characteristic of Asperger syndrome might well have enabled them to think and create in astonishing ways.

As we have come to understand this syndrome better, we have recognized that it is a spectrum. Some patients are highly functioning, for example, and different patients have different needs.

Steve Silberman,2 writing for Wired magazine, coined the term “geek syndrome” and suggested that geeks marrying geeks may help account for the comparatively high prevalence of autism and Asperger syndrome in “tech-heavy” communities such as Silicon Valley in California and Route 128 in Massachusetts. “At clinics and schools in the Valley, the observation that most parents of autistic kids are engineers and programmers who themselves display autistic behavior is not news.”2 Temple Grandin, arguably the best-known person with an autism spectrum condition, has characterized the NASA Space Center in Houston, TX, as a similar community.

Given this correlation, it follows that colleges and universities offering engineering, computer science, and other technical programs or degrees should have a relatively high prevalence of students with Asperger syndrome. The Massachusetts Institute of Technology, where such a pattern is often observed, offers a course entitled “Charm School,” and its online course description is suggestive of the unique needs of this population3:

“How do I ask for a date? Which bread plate is mine? At what point in a job interview can I ask about salary? Should I use a cell phone while on the T or the elevator? How can a student network to find the perfect position? Join us for MIT's 19th Annual Charm School to find out these answers and more.”

COMMUNICATION DISTURBANCES

The challenges a person with Asperger syndrome experiences are often difficult for others to understand. While these people may look normal and demonstrate average to above-average intellectual functioning, their sometimes-peculiar behaviors and deficits in social skills are often difficult for peers to interpret, and to forgive. People with Asperger syndrome want to get along with peers, develop relationships, and succeed in the workplace, and they feel perplexed that others sometimes seem put off by their behavior.

At the core of this discomfort are a range of communication disorders that negatively affect interactions with others. One practical indication of a communication disorder is whether more attention is paid to how something is said than what is being said. This may present to the physician in different ways.

Language

Difficulty with introspection and description may render a patient incapable of describing symptoms and related historical information. In addition, the idiomatic and figurative nature of English may lead Asperger syndrome patients to misunderstand what the physician is saying—even common nonliteral expressions such as “Hop up on the table,” “You’re as fit as a fiddle,” “Are you feeling under the weather?” and “I’m all ears.”

Speech and voice

For the person with Asperger syndrome, speech is often marked by prosodic disturbances, including problems with varying and atypical intonation and stress and, less commonly, unusual fluency patterns and residual articulation issues (l, r, and s sounds). These characteristics can be addressed in therapy.

Conversational style

When people with Asperger syndrome engage in conversation, it is usually brief, or they tend to monopolize it with topics of high interest to themselves or topics of a perseverative or obsessive nature. The patient also tends to have limited perspective and experiences difficulty with higher-order language (including inference and reasoning).

Nonverbal language

A host of nonverbal communication problems include the use of unacceptable social distance and the unintentional messages conveyed nonverbally by unusual clothing choices and poor grooming and hygiene.

WHAT CAN BE DONE IN THE OFFICE VISIT

The key to a successful visit with such patients is to help them anticipate and make sense of their experience. In the visit, predictability should be emphasized and “chaos” avoided. Try to schedule the patient with Asperger syndrome during less-busy days and times, and avoid surprises during medical examinations or procedures, as the unexpected often triggers an extreme reaction. Examinations and procedures should be conducted in a deliberate and slow manner, as rushing through the examination raises the risk of complicating the outcome. Care should also be taken to simplify communications to accommodate the language constraints of the patient.

ONGOING TREATMENT: THE PROMISE OF TECHNOLOGY

Access to support services is critical—especially as people with Asperger syndrome move into adulthood—while the apparent rise in the prevalence of Asperger syndrome and other forms of autism spectrum disorder calls for an expansion of current service models. Typically eager to address areas of social deficit, people with Asperger syndrome could benefit from ongoing social-skills support.

Mobile devices such as tablets and smartphones are a transformative technology that shows great promise in supporting treatment innovation. I believe they will have the greatest impact on quality of life for patients with Asperger syndrome by enhancing their potential to live completely independently or semi-independently. These devices can function as personal assistants for those who experience difficulty with time management, human connectivity, way-finding, and other tasks. We have observed, for example, that visual connectivity with caregivers (and others) through a cell phone, messaging, or video chatting, or the provision of electronic reminders for medications or appointments, can reduce the anxiety of a child with Asperger syndrome living outside the parental home. It can also help the physician better ensure that treatment regimens are being followed. Finally, feature-rich mobile devices afford an endless supply of entertainment “apps,” along with robust search engines to suit every interest.

Armed with these gadgets, therapists now tailor support to meet the patient’s individual needs, which can range from basic social-skills development and social-cue reminders to higher-level conversational and organizational supports. New tools and techniques, along with better understanding of the condition, portend far more innovative and improved treatments for the future.

References
  1. Prayson B, Franco K. Is an adult with Asperger syndrome sitting in your waiting room? Cleve Clin J Med 2012; 79:875–882.
  2. Silberman S. The Geek syndrome. Autism—and its milder cousin Asperger’s syndrome—is surging among the children of Silicon Valley. Are math-and-tech genes to blame? Wired. http://www.wired.com/wired/archive/9.12/aspergers_pr.html. Accessed October 11, 2012.
  3. MIT Student Activities Office. The MIT Student Activities Office presents Charm School. http://studentlife.mit.edu/sao/charm. Accessed October 11, 2012.
Author and Disclosure Information

Howard C. Shane, PhD
Director, Center for Communication Enhancement, and Director, Autism Language Program, Boston Children’s Hospital; Associate Professor of Otology and Laryngology, Harvard Medical School, Boston, MA; and Monarch Center for Autism, a division of Bellefaire JCB, Shaker Heights, OH

Address: Howard C. Shane, PhD, Boston Children’s Hospital, 9 Hope Avenue, 2nd Floor West, Waltham, MA 02143; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 79(12)
Page Number
872-874

In this issue of the Cleveland Clinic Journal of Medicine, Prayson and Franco paint a comprehensive picture of the key medical and therapeutic issues faced by patients with Asperger syndrome.1 They offer a refreshing optimism about contemporary treatments aimed at enhancing independence and quality of life, while being realistic about the challenges for these patients, such as making the transition from pediatric care to adult care. Importantly, their overview offers practical suggestions for improving medical care through a greater understanding of the syndrome, along with strategies for how to relate to patients who have a difficult interpersonal style.

See related article

In this editorial, I focus on lessons learned in our practice that help identify the problems that people with Asperger syndrome have, and I build on the advice of Prayson and Franco on how to improve patient experiences in the adult medical setting, particularly by diminishing confusion and uncertainty in doctor-patient interactions and by supporting ongoing functioning.

PEOPLE WITH ASPERGER SYNDROME HAVE ALWAYS LIVED AMONG US

Asperger syndrome is being diagnosed more frequently, using criteria recognized by a greater number of professionals. This diagnostic distinction offers a clearer understanding of a group of people who have always lived among us—often standing out because of their appearance, behavior, and communication style, even before a common label existed for their condition.

In less-informed communities, they might be described by neighbors or peers as eccentric or odd, even when they present no obvious dysmorphic or other distinguishing physical features. In fact, some may stand out more because of their accomplishments. The behaviors reported for some innovative scientists (Einstein), inventors (Ford, Edison), musicians (Beethoven), and others might lead to a diagnosis of Asperger syndrome today, while an obsessive nature also characteristic of Asperger syndrome might well have enabled them to think and create in astonishing ways.

As we have come to understand this syndrome better, we have recognized that it is a spectrum. Some patients are highly functioning, for example, and different patients have different needs.

Steve Silberman,2 writing for Wired magazine, coined the term “geek syndrome” and suggested that geeks marrying geeks may help account for the comparatively high prevalence of autism and Asperger syndrome in “techheavy” communities such as Silicon Valley in California and Route 128 in Massachusetts. “At clinics and schools in the Valley, the observation that most parents of autistic kids are engineers and programmers who themselves display autistic behavior is not news.”2 Temple Grandin, arguably the best-known person with an autism spectrum condition, has characterized the NASA Space Center in Houston, TX, as a similar community.

Given this correlation, it follows that colleges and universities offering engineering, computer science, and other technical programs or degrees should have a relatively high prevalence of students with Asperger syndrome. The Massachusetts Institute of Technology, where such a pattern is often observed, offers a course entitled “Charm School,” and its online course description is suggestive of the unique needs of this population3:

“How do I ask for a date? Which bread plate is mine? At what point in a job interview can I ask about salary? Should I use a cell phone while on the T or the elevator? How can a student network to find the perfect position? Join us for MIT's 19th Annual Charm School to find out these answers and more.”

 

 

COMMUNICATION DISTURBANCES

The challenges a person with Asperger syndrome may be experiencing are often very difficult to understand. While these people may look normal and demonstrate average to above-average intellectual functioning, their sometimes-peculiar behaviors and deficits in social skills are often difficult for peers to interpret— and to forgive. People with Asperger syndrome want to get along with peers, develop relationships, and succeed in the workplace, and they feel perplexed that others sometimes seem put off by their behavior.

At the core of this discomfort are a range of communication disorders that negatively affect interactions with others. One practical indication of a communication disorder is whether more attention is paid to how something is said than what is being said. This may present to the physician in different ways.

Language

Difficulty with introspection and description may render a patient incapable of describing symptoms and related historical information. In addition, the idiomatic and figurative nature of English may lead Asperger syndrome patients to misunderstand what the physician is saying—even common nonliteral expressions such as “Hop up on the table,” “You’re as fit as a fiddle,” “Are you feeling under the weather?” and “I’m all ears.”

Speech and voice

For the person with Asperger syndrome, speech is often marked by prosodic disturbances, including problems with varying and atypical intonation and stress and, less commonly, unusual fluency patterns and residual articulation issues (l, r, and s sounds). These characteristics can be addressed in therapy.

Conversational style

When people with Asperger syndrome engage in conversation, it is usually brief, or they tend to monopolize it with topics of high interest to themselves or topics of a perseverative or obsessive nature. The patient also tends to have limited perspective and experiences difficulty with higher-order language (including inference and reasoning).

Nonverbal language

A host of nonverbal communication problems include the use of unacceptable social distance and the unintentional messages conveyed nonverbally by unusual clothing choices and poor grooming and hygiene.

WHAT CAN BE DONE IN THE OFFICE VISIT

The key to a successful visit with such patients is to help them anticipate and make sense of their experience. In the visit, predictability should be emphasized and “chaos” avoided. Try to schedule the patient with Asperger syndrome during less-busy days and times, and avoid surprises during medical examinations or procedures, as the unexpected often triggers an extreme reaction. Examinations and procedures should be conducted in a deliberate and slow manner, as rushing through the examination raises the risk of complicating the outcome. Care should also be taken to simplify communications to accommodate the language constraints of the patient.

ONGOING TREATMENT: THE PROMISE OF TECHNOLOGY

Access to support services is critical—especially as people with Asperger syndrome move into adulthood—while the apparent rise in the prevalence of Asperger syndrome and other forms of autism spectrum disorder call for an expansion of current service models. Typically eager to address areas of social deficit, people with Asperger syndrome could benefit from ongoing social-skills support.

Mobile devices such as tablets and smart phones are a transformative technology that shows great promise in supporting treatment innovation. I believe they will have the greatest impact on quality of life for patients with Asperger syndrome by enhancing the potential to live completely independently or semi-independently. These devices can function as personal assistants for those who experience difficulty with time management, human connectivity, way-finding, and other tasks. We have observed, for example, that visual connectivity with caregivers (and others) through a cell phone, messaging, or video chatting, or the provision of electronic reminders for medications or appointments, can reduce the anxiety of a child with Asperger syndrome living outside the parental home. It can also help the physician better ensure that treatment regimens are being followed. Finally, an endless supply of entertainment “apps” along with robust search engines to suit every interest is afforded by feature-rich mobile devices.

Armed with these gadgets, therapists now tailor support to meet the patient’s individual needs, which can range from basic social-skills development and social-cue reminders to higher-level conversational and organizational supports. New tools and techniques, along with better understanding of the condition, portend far more innovative and improved treatments for the future.

In this issue of the Cleveland Clinic Journal of Medicine, Prayson and Franco paint a comprehensive picture of the key medical and therapeutic issues faced by patients with Asperger syndrome.1 They offer a refreshing optimism about contemporary treatments aimed at enhancing independence and quality of life, while being realistic about the challenges for these patients, such as making the transition from pediatric care to adult care. Importantly, their overview offers practical suggestions for improving medical care through a greater understanding of the syndrome, along with strategies for how to relate to patients who have a difficult interpersonal style.

See related article

In this editorial, I focus on lessons learned in our practice that help identify the problems that people with Asperger syndrome have, and I build on the advice of Prayson and Franco on how to improve patient experiences in the adult medical setting, particularly by diminishing confusion and uncertainty in doctor-patient interactions and by supporting ongoing functioning.

PEOPLE WITH ASPERGER SYNDROME HAVE ALWAYS LIVED AMONG US

Asperger syndrome is being diagnosed more frequently, using criteria recognized by a greater number of professionals. This diagnostic distinction offers a clearer understanding of a group of people who have always lived among us—often standing out because of their appearance, behavior, and communication style, even before a common label existed for their condition.

In less-informed communities, they might be described by neighbors or peers as eccentric or odd, even when they present no obvious dysmorphic or other distinguishing physical features. In fact, some may stand out more because of their accomplishments. The behaviors reported for some innovative scientists (Einstein), inventors (Ford, Edison), musicians (Beethoven), and others might lead to a diagnosis of Asperger syndrome today, while an obsessive nature also characteristic of Asperger syndrome might well have enabled them to think and create in astonishing ways.

As we have come to understand this syndrome better, we have recognized that it is a spectrum. Some patients are highly functioning, for example, and different patients have different needs.

Steve Silberman,2 writing for Wired magazine, coined the term “geek syndrome” and suggested that geeks marrying geeks may help account for the comparatively high prevalence of autism and Asperger syndrome in “techheavy” communities such as Silicon Valley in California and Route 128 in Massachusetts. “At clinics and schools in the Valley, the observation that most parents of autistic kids are engineers and programmers who themselves display autistic behavior is not news.”2 Temple Grandin, arguably the best-known person with an autism spectrum condition, has characterized the NASA Space Center in Houston, TX, as a similar community.

Given this correlation, it follows that colleges and universities offering engineering, computer science, and other technical programs or degrees should have a relatively high prevalence of students with Asperger syndrome. The Massachusetts Institute of Technology, where such a pattern is often observed, offers a course entitled “Charm School,” and its online course description is suggestive of the unique needs of this population3:

“How do I ask for a date? Which bread plate is mine? At what point in a job interview can I ask about salary? Should I use a cell phone while on the T or the elevator? How can a student network to find the perfect position? Join us for MIT's 19th Annual Charm School to find out these answers and more.”

 

 

COMMUNICATION DISTURBANCES

The challenges a person with Asperger syndrome may be experiencing are often very difficult to understand. While these people may look normal and demonstrate average to above-average intellectual functioning, their sometimes-peculiar behaviors and deficits in social skills are often difficult for peers to interpret— and to forgive. People with Asperger syndrome want to get along with peers, develop relationships, and succeed in the workplace, and they feel perplexed that others sometimes seem put off by their behavior.

At the core of this discomfort are a range of communication disorders that negatively affect interactions with others. One practical indication of a communication disorder is whether more attention is paid to how something is said than what is being said. This may present to the physician in different ways.

Language

Difficulty with introspection and description may render a patient incapable of describing symptoms and related historical information. In addition, the idiomatic and figurative nature of English may lead Asperger syndrome patients to misunderstand what the physician is saying—even common nonliteral expressions such as “Hop up on the table,” “You’re as fit as a fiddle,” “Are you feeling under the weather?” and “I’m all ears.”

Speech and voice

For the person with Asperger syndrome, speech is often marked by prosodic disturbances, including problems with varying and atypical intonation and stress and, less commonly, unusual fluency patterns and residual articulation issues (l, r, and s sounds). These characteristics can be addressed in therapy.

Conversational style

When people with Asperger syndrome engage in conversation, it is usually brief, or they tend to monopolize it with topics of high interest to themselves or topics of a perseverative or obsessive nature. The patient also tends to have limited perspective and experiences difficulty with higher-order language (including inference and reasoning).

Nonverbal language

A host of nonverbal communication problems include the use of unacceptable social distance and the unintentional messages conveyed nonverbally by unusual clothing choices and poor grooming and hygiene.

WHAT CAN BE DONE IN THE OFFICE VISIT

The key to a successful visit with such patients is to help them anticipate and make sense of their experience. In the visit, predictability should be emphasized and “chaos” avoided. Try to schedule the patient with Asperger syndrome during less-busy days and times, and avoid surprises during medical examinations or procedures, as the unexpected often triggers an extreme reaction. Examinations and procedures should be conducted in a deliberate and slow manner, as rushing through the examination raises the risk of complicating the outcome. Care should also be taken to simplify communications to accommodate the language constraints of the patient.

ONGOING TREATMENT: THE PROMISE OF TECHNOLOGY

Access to support services is critical—especially as people with Asperger syndrome move into adulthood—while the apparent rise in the prevalence of Asperger syndrome and other forms of autism spectrum disorder call for an expansion of current service models. Typically eager to address areas of social deficit, people with Asperger syndrome could benefit from ongoing social-skills support.

Mobile devices such as tablets and smartphones are a transformative technology that shows great promise in supporting treatment innovation. I believe they will have the greatest impact on quality of life for patients with Asperger syndrome by enhancing their potential to live independently or semi-independently. These devices can function as personal assistants for those who have difficulty with time management, human connectivity, way-finding, and other tasks. We have observed, for example, that visual connectivity with caregivers (and others) through a cell phone, messaging, or video chatting, or the provision of electronic reminders for medications or appointments, can reduce the anxiety of a child with Asperger syndrome living outside the parental home. It can also help the physician better ensure that treatment regimens are being followed. Finally, feature-rich mobile devices afford an endless supply of entertainment “apps” and robust search engines to suit every interest.

Armed with these gadgets, therapists now tailor support to meet the patient’s individual needs, which can range from basic social-skills development and social-cue reminders to higher-level conversational and organizational supports. New tools and techniques, along with better understanding of the condition, portend far more innovative and improved treatments for the future.

References
  1. Prayson B, Franco K. Is an adult with Asperger syndrome sitting in your waiting room? Cleve Clin J Med 2012; 79:875–882.
  2. Silberman S. The Geek syndrome. Autism—and its milder cousin Asperger’s syndrome—is surging among the children of Silicon Valley. Are math-and-tech genes to blame? Wired. http://www.wired.com/wired/archive/9.12/aspergers_pr.html. Accessed October 11, 2012.
  3. MIT Student Activities Office. The MIT Student Activities Office presents Charm School. http://studentlife.mit.edu/sao/charm. Accessed October 11, 2012.
Issue
Cleveland Clinic Journal of Medicine - 79(12)
Page Number
872-874
Display Headline
Appreciating Asperger syndrome: Implications for better care and outcomes