ONLINE EXCLUSIVE: Hospitalists discuss how HM can improve patient satisfaction

Click here to listen to Dr. Sliwka.

Click here to listen to Dr. Cumbler.

Issue
The Hospitalist - 2011(10)

ONLINE EXCLUSIVE: The Pros and Cons of a Super-Commuter Lifestyle

Click here to listen to Alan Pisarski.

Click here to listen to Mark Hamm.

Issue
The Hospitalist - 2011(10)

ONLINE EXCLUSIVE: A Discharge Solution—or Problem?

Counterintuitively, an empty discharge lounge might be the most successful kind.

Christine Collins, executive director of patient access services at Brigham and Women’s Hospital in Boston, says the lounge should be a service for discharged patients who have completed medical treatment but who for some reason remain unable to leave the institution. Such cases include patients waiting for a prescription from the pharmacy, or simply waiting for a relative or friend to arrive with transportation.

Whether you have a discharge lounge or not, you need to improve your systems so that the patients leave when they leave.

—Christine Collins, executive director, patient access services, Brigham and Women’s Hospital, Boston

She does not view Brigham’s discharge lounge, a room with lounge chairs and light meals that is staffed by a registered nurse, as the answer to the throughput conundrum hospitals across the country face every day. So when the lounge is empty, it means patients have been discharged without any hang-ups.

“It’s not a patient-care area,” Collins says. “They’re people that should be home.”

Some view discharge lounges as a potential aid in smoothing out the discharge process. In theory, patients ready to be medically discharged but unable to leave the hospital have a place to go. But keeping the patients in the building, and under the eye of a nurse, could create liability issues, says Ken Simone, DO, SFHM, president of Hospitalist and Practice Solutions in Veazie, Maine, and a member of Team Hospitalist. Dr. Simone also wonders how the lounge concept impacts patient satisfaction, as some could view it negatively if they’re told they have to sit in what could be construed as a back-end waiting room.

“People need to assess what they’re doing it for and is it really accomplishing what they want it to accomplish,” Collins says.

Discharge lounges “can’t be another nursing unit because a patient is supposed to be discharged. ... Whether you have a discharge lounge or not, you need to improve your systems so that the patients leave when they leave.”

Richard Quinn is a freelance writer based in New Jersey.

Issue
The Hospitalist - 2011(10)

ONLINE EXCLUSIVE: Experts discuss strategies to improve early discharges

Issue
The Hospitalist - 2011(10)

A new ICU paradigm: Intensivists as primary critical care physicians

After nearly a half-century, the subspecialty of critical care medicine—uniquely trained physicians caring for critically ill or injured patients in specialized, discrete nursing units—continues to suffer from an identity crisis.

Too often, the role of the intensivist in caring for the patient is unclear to the patient, to the family, and to other physicians. Is the intensivist merely a consultant, or does he or she have a larger role?

The time has come to end the identity crisis with a fundamental paradigm shift, to identify intensivists as the principal caregivers of critically ill patients, ie, the “primary critical care physicians,” or PCCPs. We think this is necessary based not only on evidence from clinical studies, but also on our decades of experience as intensivist caregivers in a high-intensity, closed-staffing model.

REASONS FOR THE IDENTITY CRISIS

The reasons for the continued identity crisis of intensivists are many and complex.

To begin with, other physicians tend to be uncertain about the duties of intensivists, and the general population is mostly unaware of the subspecialty. In contrast to mature subspecialties such as cardiology or gastroenterology, where responsibilities are generally known to physicians and the lay public alike, or in contrast even to recently evolved specialties such as emergency medicine, the enigmatic roles of an intensivist may differ depending on primary specialty (anesthesiology, internal medicine, surgery) and the patient population, or even among intensive care units (ICUs) within the same hospital.

Moreover, that an identity crisis exists is all the more surprising given critical care medicine’s disproportionately large consumption of finite economic resources. One would expect a sector of health care that expends 1% of the GNP1 to have clearly defined roles and responsibilities for its physicians.

Nearly three-quarters of the care by intensivists in the United States is delivered in what is considered an “open” or “low-intensity” ICU staffing model2: an intensivist makes treatment recommendations but otherwise has no overarching authority over patient care. In this model, the admitting physician is not trained in critical care and is not available throughout the day to make decisions concerning the management of the patient. In addition, various consulting physicians and single-organ specialists may not be aware of the overall management plan, resulting in potentially unnecessary or conflicting orders and increased expense.2 What is more, in an open ICU model, critical care nurses are often left to detect and correct a significant change in a patient’s status without the necessary immediate physician availability, resulting not only in a stressful working environment for nursing staff, but also in potential harm associated with individuals providing care outside their scope of practice.3

In only a small percentage of ICUs—mostly medical ICUs and ICUs in teaching hospitals—is critical care provided in a “high-intensity” or “closed” staffing pattern, in which treatment decisions are cohesively managed under the guidance of an intensivist.2

EVIDENCE IN THE MEDICAL LITERATURE

Staffing patterns in the ICU

Several studies have attempted to identify the consequences of these different ICU staffing patterns on patient care.

Hanson et al4 examined two concurrent patient cohorts admitted to a surgical ICU. The study cohort was cared for by an on-site critical care team supervised by an intensivist, while the control cohort received care from a team with patient care responsibilities in multiple sites, supervised by a general surgeon. The results showed that patients cared for by the critical care team spent less time in the ICU, used fewer resources, had fewer complications, and had lower total hospital charges. The difference between the two cohorts was most evident in patients with the worst Acute Physiology and Chronic Health Evaluation (APACHE) II scores.

According to Hanson et al, the lack of an accepted prototype for the delivery of critical care is due to factors such as the relative youth of the discipline, contention over control of individual patient management, and the absence of a single academic advocate.4

Moreover, Pronovost et al5 concluded that high-intensity staffing (mandatory intensivist consultation or closed ICU) was associated with lower ICU mortality rates in 93% of studies and with a reduced ICU length of stay in the high-intensity staffing units when compared with ICUs with low-intensity staffing (no intensivist or elective intensivist consultation).

Critics of our PCCP paradigm may point to a study by Levy et al6 that, using a database of more than 100,000 patients, could not demonstrate any survival benefit with management by critical care physicians. Indeed, the study found that patients managed by intensivists had a higher mortality rate than patients managed by physicians not trained in critical care. However, the authors also showed that patients managed for their entire stay by intensivists more often received interventions such as intravenous drugs, mechanical ventilation, and continuous sedation, and that they had a higher mean severity of illness as measured by the expanded Simplified Acute Physiology Score (SAPS II), as well as higher hospital mortality rates, than patients who were not managed by a critical care team; in other words, the intensivist-managed patients were sicker to begin with.

According to Levy et al, most ICUs in the United States are structured as completely open units in which the admitting physicians retain full clinical and decisional responsibility and thus have the option to care for their patients with or without input from intensivists.6

However, a recent study by Kim et al7 likely rebuts the findings of Levy et al. Kim et al analyzed more than 100,000 ICU admissions and found that the lowest odds of death within 30 days were in ICUs that had high-intensity physician staffing and multidisciplinary care teams, suggesting that the presence of an intensivist confers a survival benefit.

Other studies have also shown that high-intensity staffing improves patient outcomes in the ICU.5,8,9

Issues of cost and use of resources

Issues concerning cost and human resources for staffing ICUs have acquired increasing importance. According to Angus et al,10 intensivists provided care to only 36.8% of all ICU patients. The demand for critical care services will continue to grow rapidly as the population ages. It is this growing demand for the care of the critically ill that requires intensivists to take on the role of the PCCP, so as to provide high-quality, evidence-based critical care and to promote a long-term sustainable model of physician and nursing care.

OUR EXPERIENCE

Our intensivist group has been providing a near-primary-care style of critical care practice for almost 40 years, from its inception in 1977 by one of the authors (A.B.) to our current group of 15 board-certified intensivists. We can easily cite the clinical value of our practice approach, with outcome data showing consistent, better-than-expected standardized mortality ratios from our APACHE IV data (personal communication, Cleveland Clinic Cerner/APACHE IV report), or with reports showing that the presence of a full-time, attending-level, in-house staff physician ensures that patients, surgeons, and consultants have confidence in and respect for the care provided. However, we feel that the intangible components are what make our practice a prototype for the PCCP model.
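
For readers less familiar with this metric, the standardized mortality ratio (SMR) compares the number of deaths observed in a unit with the number predicted by a severity-of-illness model such as APACHE IV:

$$\text{SMR} = \frac{\text{observed deaths}}{\text{deaths predicted by the severity model}}$$

A ratio below 1.0 indicates fewer deaths than the model predicts, which is what "better than expected" refers to here; the exact values depend on the prediction model and the case mix of the unit.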

A dedicated team with a low turnover rate

First, we have a team of anesthesiology- and surgery-based intensivists dedicated to ICU practice, with a very low turnover or burnout rate, in contrast to most ICUs in the United States, where intensivists tend to practice part-time (at other times either providing operating-room-based anesthesia or surgical care or working in a pulmonary- or sleep-lab-based practice). We believe this point should not go unstressed: we have a team of physicians who have dedicated their careers to working in the ICU full-time, some for more than 20 years and even as long as 30 years. It is our opinion that we are able to provide such a highly desirable working environment through a unique daily staffing model that does not follow the conventional practice style of one intensivist on call per week.

We also feel that our model dramatically reduces the risk of burnout by permitting our attending intensivists to break up on-call sequences so that there are days on which work in the ICU is not also associated with on-call responsibilities.

A successful fellowship program

Second, we have an extremely successful fellowship program, which began in 1974 when one of the authors (A.B.) advocated the training of anesthesiology residents as intensivists.11 The American Board of Anesthesiology certifies on average 55 candidates per year in critical care medicine, and our program trains about 10% of the physicians applying for certification. In most years, there are actually more candidates for our program than there are available positions, which is atypical for anesthesiology-based critical care training programs. This wealth of young, talented candidates interested in critical care as a career is, again, in contrast to most anesthesiology-based programs, which find it difficult to enroll even one fellow per year.

Critical care programs grounded in anesthesiology typically struggle because of the realities of economics.12 The financial rewards of operating-room-based anesthesiology practice generally outshine those of critical care, yet we already have three times as many candidates as available positions for our training program over the next 2 years. We feel that candidates are attracted to our program simply because our environment (dedicated staffing, equal clinical footing with surgeons, low burnout rates) is seen as an exciting, positively charged role-modeling atmosphere for young physicians whose career interests extend beyond their original base specialty.

A collegial working relationship

Third, we have a thriving, collegial working relationship—including daily bedside and weekly bioethics rounds with our nursing staff—which has fueled a high degree of professional satisfaction among nurses. This is evidenced by the extremely low turnover rate of nurses (less than 5% per year in the last 5 years) and by national recognition for nursing excellence (Beacon Award for Critical Care Excellence, American Association of Critical Care Nurses) (personal communication, S. Wilson, Nurse Manager). In 2009, only four of our 174 nurses left, and they did so to further their careers.

While low turnover rates among nurses and award-winning practices are surely a testament to a highly motivated and skilled nursing team, there is no question that a constructive collegiality among the physicians and nurses has provided an environment to allow these positive aspects to flourish.

OVERCOMING ROADBLOCKS

Obviously, although in theory it is easy to proclaim a PCCP paradigm, in reality the roadblocks are many.

For example, standardization of education and credentialing would be an essential hurdle to overcome. The current educational arrangement of the various adult specialties (anesthesiology, internal medicine, surgery), each offering disparate subspecialty critical care training and certification, is deeply rooted in interdisciplinary politics, but without any demonstration of improved patient care.13 As described recently by Kaplan and Shaw,14 an all-encompassing training and credentialing standard for critical care is essential for 21st century medicine and would go a long way toward development of the PCCP paradigm.

Another major roadblock is the shortage of intensivists in the United States.13 There are many reasons why physicians opt not to pursue critical care as a career, such as a convoluted training pathway (as described above), recognition that the 24-hour-a-day, 7-day-a-week nature of critical care affects lifestyle, and inconsistent physician compensation.13

However, technological and personnel advances, including the use of electronic (e-ICU)15 and mid-level practitioner models, have led to creative approaches to extend critical care coverage.13

Additionally, the multitude of physician specialty stakeholders and the general uncertainty about the future of medical care in the United States would add to the difficulty of prioritizing implementation of the PCCP concept. Also, our practice style—a large intensivist group working in an ostensibly closed surgical ICU in a tertiary-care hospital—is one possible model, as is the even more highly evolved Cleveland Clinic medical ICU, where medical intensivists are already essentially PCCPs. But these models of care may not be generalizable, given the local care patterns and medical politics that vary across hospitals and ICUs.

Based on the described successes of our practice model, coupled with evidence in the literature, we have proposed a paradigm shift toward the concept of a PCCP. To be sure, paradigm shifts nearly always require time, effort, and wherewithal. In the end, however, we feel that embracing the PCCP paradigm would result in a concise, discrete understanding of the intensivist’s role, eliminate the specialty’s identity crisis, and ultimately improve patient care.

References
  1. Bloomfield EL. The impact of economics on changing medical technology with reference to critical care medicine in the United States. Anesth Analg 2003; 96:418–425.
  2. Gajic O, Afessa B. Physician staffing models and patient safety in the ICU. Chest 2009; 135:1038–1044.
  3. Baggs JG, Schmitt MH, Mushlin AI, et al. Association between nurse-physician collaboration and patient outcomes in three intensive care units. Crit Care Med 1999; 27:1991–1998.
  4. Hanson CW, Deutschman CS, Anderson HL, et al. Effects of an organized critical care service on outcomes and resource utilization: a cohort study. Crit Care Med 1999; 27:270–274.
  5. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA 2002; 288:2151–2162.
  6. Levy MM, Rapoport J, Lemeshow S, Chalfin DB, Phillips G, Danis M. Association between critical care physician management and patient mortality in the intensive care unit. Ann Intern Med 2008; 148:801–809.
  7. Kim MM, Barnato AE, Angus DC, Fleisher LA, Kahn JM. The effect of multidisciplinary care teams on intensive care unit mortality. Arch Intern Med 2010; 170:369–376.
  8. Carson SS, Stocking C, Podsadecki T, et al. Effects of organizational change in the medical intensive care unit of a teaching hospital: a comparison of ‘open’ and ‘closed’ formats. JAMA 1996; 276:322–328.
  9. Treggiari MM, Martin DP, Yanez ND, Caldwell E, Hudson LD, Rubenfeld GD. Effect of intensive care unit organizational model and structure on outcomes in patients with acute lung injury. Am J Respir Crit Care Med 2007; 176:685–690.
  10. Angus DC, Kelley MA, Schmitz RJ, White A, Popovich J; Committee on Manpower for Pulmonary and Critical Care Societies (COMPACCS). Caring for the critically ill patient. Current and projected workforce requirements for care of the critically ill and patients with pulmonary disease: can we meet the requirements of an aging population? JAMA 2000; 284:2762–2770.
  11. Boutros AR. Anesthesiology and intensive care (editorial). Anesthesiology 1974; 41:319–320.
  12. Boyle WA. A critical time for anesthesiology? American Society of Anesthesiologists (ASA) Newsletter, September 2009:10–11. http://viewer.zmags.com/publication/9960917c#/9960917c/12. Accessed July 13, 2011.
  13. Ewart GW, Marcus L, Gaba MM, Bradner RH, Medina JL, Chandler EB. The critical care medicine crisis: a call for federal action: a white paper from the critical care professional societies. Chest 2004; 125:1518–1521.
  14. Kaplan LJ, Shaw AD. Standards for education and credentialing in critical care medicine. JAMA 2011; 305:296–297.
  15. Leong JR, Sirio CA, Rotondi AJ. eICU program favorably affects clinical and economic outcomes. Crit Care 2005. http://ccforum.com/content/9/5/E22. Accessed July 13, 2011.

Author and Disclosure Information

Marc J. Popovich, MD
Medical Director, Surgical Intensive Care Unit, Anesthesiology Institute, Cleveland Clinic

Shahpour Esfandiari, MD
Director Emeritus, Surgical Intensive Care Unit, Anesthesiology Institute, Cleveland Clinic

Azmy Boutros, MD, FRCA
Chairman Emeritus, Anesthesiology Institute, Cleveland Clinic

Address: Marc J. Popovich, MD, Anesthesiology Institute, G58, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 78(10)
Publications
Topics
Page Number
697-700
Sections
Author and Disclosure Information

Marc J. Popovich, MD
Medical Director, Surgical Intensive Care Unit, Anesthesiology Institute, Cleveland Clinic

Shahpour Esfandiari, MD
Director Emeritus, Surgical Intensive Care Unit, Anesthesiology Institute, Cleveland Clinic

Azmy Boutros, MD, FRCA
Chairman Emeritus, Anesthesiology Institute, Cleveland Clinic

Address: Marc J. Popovich, MD, Anesthesiology Institute, G58, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail [email protected]

Author and Disclosure Information

Marc J. Popovich, MD
Medical Director, Surgical Intensive Care Unit, Anesthesiology Institute, Cleveland Clinic

Shahpour Esfandiari, MD
Director Emeritus, Surgical Intensive Care Unit, Anesthesiology Institute, Cleveland Clinic

Azmy Boutros, MD, FRCA
Chairman Emeritus, Anesthesiology Institute, Cleveland Clinic

Address: Marc J. Popovich, MD, Anesthesiology Institute, G58, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail [email protected]

Article PDF
Article PDF

After nearly a half-century, the subspecialty of critical care medicine—uniquely trained physicians caring for critically ill or injured patients in specialized, discrete nursing units—continues to suffer from an identity crisis.

Too often, the role of the intensivist in caring for the patient is unclear, to the patient, to the family, and to other physicians. Is the intensivist merely a consultant, or does he or she have a larger role?

The time has come to end the identity crisis with a fundamental paradigm shift, to identify intensivists as the principal caregivers of critically ill patients, ie, the “primary critical care physicians,” or PCCPs. We think this is necessary based not only on evidence from clinical studies, but also on our decades of experience as intensivist caregivers in a high-intensity, closed-staffing model.

REASONS FOR THE IDENTITY CRISIS

The reasons for the continued identity crisis of intensivists are many and complex.

To begin with, other physicians tend to be ambiguous about the duties of intensivists, and the general population is mostly unaware of the subspecialty. In contrast to mature subspecialties such as cardiology or gastroenterology, where responsibilities are generally known to physicians and the lay public alike, or in contrast even to recently evolved specialties such as emergency medicine, the enigmatic roles of an intensivist may differ depending on primary specialty (anesthesiology, internal medicine, surgery) and the patient population, or even among intensive care units (ICUs) within the same hospital.

Moreover, that an identity crisis exists is even more surprising given the disproportionately large consumption by critical care medicine of finite economic resources. One would expect that a sector of health care that expends 1% of the GNP1 would have clearly explicit roles and responsibilities for its physicians.

Nearly three-quarters of the care by intensivists in the United States is delivered in what is considered an “open” or “low-intensity” ICU staffing model2: an intensivist makes treatment recommendations but otherwise has no overarching authority over patient care. In this model, the admitting physician is not trained in critical care and is not available throughout the day to make decisions concerning the management of the patient. In addition, various consulting physicians and single-organ specialists may not be aware of the overall management plan, resulting in potentially unnecessary or conflicting orders and increased expense.2 What is more, in an open ICU model, critical care nurses are often left to detect and correct a significant change in a patient’s status without the necessary immediate physician availability, resulting not only in a stressful working environment for nursing staff, but also in potential harm associated with individuals providing care outside their scope of practice.3

In only a small percentage of ICUs—mostly medical ICUs and ICUs in teaching hospitals—is critical care provided in a “high-intensity” or “closed” staffing pattern, in which treatment decisions are cohesively managed under the guidance of an intensivist.2

EVIDENCE IN THE MEDICAL LITERATURE

Staffing patterns in the ICU

Several studies have attempted to identify the consequences of these different ICU staffing patterns on patient care.

Hanson et al4 examined two concurrent patient cohorts admitted to a surgical ICU. The study cohort was cared for by an on-site critical care team supervised by an intensivist, while the control cohort received care from a team with patient care responsibilities in multiple sites, supervised by a general surgeon. The results showed that patients cared for by the critical care team spent less time in the ICU, used fewer resources, had fewer complications, and had lower total hospital charges. The difference between the two cohorts was most evident in patients with the worst Acute Physiology and Chronic Health Evaluation (APACHE) II scores.

According to Hanson et al, the lack of an accepted prototype for the delivery of critical care is due to factors such as the relative youth of the discipline, contention over control of individual patient management, and the absence of a single academic advocate.4

Moreover, Pronovost et al5 concluded that high-intensity staffing (mandatory intensivist consultation or closed ICU) was associated with lower ICU mortality rates in 93% of studies and with a reduced ICU length of stay in the high-intensity staffing units when compared with ICUs with low-intensity staffing (no intensivist or elective intensivist consultation).

Critics of our PCCP paradigm may point to a study by Levy et al6 that, using a database of more than 100,000 patients, could not demonstrate any survival benefit with management by critical care physicians. Indeed the study found that patients managed by intensivists had a higher mortality rate than patients managed by physicians not trained in critical care. However, they also showed that more patients managed for the entire stay by intensivists received interventions such as intravenous drugs, mechanical ventilation, and continuous sedation and that they had a higher mean severity of illness as measured by the expanded Simplified Acute Physiology Score (SAPS II) and higher hospital mortality rates than patients who were not managed by a critical care team.

According to Levy et al, most ICUs in the United States are structured as completely open units in which the admitting physicians retain full clinical and decisional responsibility and thus have the option to care for their patients with or without input from intensivists.6

However, a recent study by Kim et al7 likely rebuts the findings of Levy et al. Kim et al analyzed more than 100,000 ICU admissions and found that the lowest odds of death within 30 days were in ICUs that had high-intensity physician staffing and multidisciplinary care teams, suggesting that the presence of an intensivist confers a survival benefit.

Other studies have also shown that high-intensity staffing improves patient outcomes in the ICU.5,8,9

Issues of cost and use of resources

Issues concerning cost and human resources for staffing ICUs have acquired increasing importance. According to Angus et al,10 intensivists provided care to only 36.8% of all ICU patients. The demand for critical care services will continue to grow rapidly as the population ages. It is this shift in the care of the critically ill that requires intensivists to take on the role of the PCCP, so as to provide high-quality, evidence-based critical care and to promote a long-term sustainable model of physician and nursing care.

 

 

OUR EXPERIENCE

Our intensivist group has been providing a near-primary-care style of critical care practice for almost 40 years, from its inception in 1977 by one of the authors (A.B.), to our current group of 15 board-certified intensivists. We can easily cite the clinical value of our practice approach, with outcome data showing consistent and better-than-expected Standardized Mortality Ratio accounts from our APACHE IV data (personal communication, Cleveland Clinic Cerner/APACHE IV report), or with reports showing that the presence of a full-time, attending-level, in-house staff physician ensures that patients, surgeons, and consultants have confidence and respect for the care provided. However, we feel that the intangible components are what make our practice a prototype for the PCCP model.

A dedicated team with a low turnover rate

First, we have a team of anesthesiology- and surgery-based intensivists dedicated to ICU practice, with a very low turnover or burnout rate, in contrast to most ICUs in the United States, where intensivists tend to practice part-time (at other times either providing operating-room-based anesthesia or surgical care or working in a pulmonary- or sleep-lab-based practice). We believe this point should not go unstressed: we have a team of physicians who have dedicated their career to working in the ICU full-time, and some have done so in excess of 20 years, even as long as 30 years! It is our opinion that we are able to provide such a highly desirable working environment by a unique daily staffing model that does not utilize the conventional practice style of one intensivist on-call per week.

We also feel that our model dramatically reduces the risk of burnout by permitting our attending intensivists to break up on-call sequences so that there are days on which work in the ICU is not also associated with on-call responsibilities.

A successful fellowship program

Second, we have an extremely successful fellowship program, which began in 1974 when one of the authors (A.B.) advocated the training of anesthesiology residents as intensivists.11 The American Board of Anesthesiology certifies on average 55 candidates per year in critical care medicine, and our program trains about 10% of the physicians applying for certification. In most years, there are actually more candidates for our program than there are available positions, which is atypical for anesthesiology-based critical care training programs. This wealth of young, talented candidates interested in critical care as a career is, again, in contrast to most anesthesiology-based programs, which find it difficult to enroll even one fellow per year.

Critical care programs grounded in anesthesiology typically struggle because of the realities of economics.12 The payoff of operating-room-based anesthesiology practices generally outshines those in critical care, yet we already have three times as many candidates as there are positions to start our training program in the next 2 years. We feel that candidates are attracted to our program simply because our environment (dedicated staffing, equal clinical footing with surgeons, low burnout rates) is seen as an exciting, positively charged role-modeling atmosphere for young physicians who may have a career interest that involves more than just their original base specialty.

A collegial working relationship

Third, we have a thriving, collegial working relationship—including daily bedside and weekly bioethics rounds with our nursing staff—which has fueled a high degree of professional satisfaction among nurses. This is evidenced by the extremely low turnover rate of nurses (less than 5% per year in the last 5 years) and by national recognition for nursing excellence (Beacon Award for Critical Care Excellence, American Association of Critical Care Nurses) (personal communication, S. Wilson, Nurse Manager). In 2009, the four nurses out of 174 who left did so to further their careers.

While low turnover rates among nurses and award-winning practices are surely a testament to a highly motivated and skilled nursing team, there is no question that a constructive collegiality among the physicians and nurses has provided an environment to allow these positive aspects to flourish.

OVERCOMING ROADBLOCKS

Obviously, although in theory it is easy to proclaim a PCCP paradigm, in reality the roadblocks are many.

For example, standardization of education and credentialing would be an essential hurdle to overcome. The current educational arrangement of the various adult specialties (anesthesiology, internal medicine, surgery), each offering disparate subspecialty critical care training and certification, is deeply rooted in interdisciplinary politics, but without any demonstration of improved patient care.13 As described recently by Kaplan and Shaw,14 an all-encompassing training and credentialing standard for critical care is essential for 21st century medicine and would go a long way toward development of the PCCP paradigm.

Another major roadblock is the shortage of intensivists in the United States.13 There are many reasons why physicians opt not to select critical care as a career, such as a non-straight-forward training pathway (as described above), recognition that the 24-hours per day, 7-days-per-week nature of critical care affects lifestyle issues, and inconsistent physician compensation.13

However, technological and personnel advances, including the use of electronic (e-ICU)15 and mid-level practitioner models, have led to creative approaches to extend critical care coverage.13

Additionally, the multitude of physician specialty stakeholders and the overall flux of the future of medical care in the United States all would contribute to the difficulties of prioritizing the implementation of the PCCP concept. Also, our practice style—a large intensivist group working in an ostensibly closed surgical ICU in a tertiary-care hospital—is one possible model, as is the even more highly evolved Cleveland Clinic medical ICU, where medical intensivists are already essentially PCCPs. But these models of care may not be generalizable among the local care patterns and medical politics across hospitals or ICUs.

Based on the described successes of our practice model, coupled with evidence in the literature, we have proposed a paradigm shift toward the concept of a PCCP. To be sure, paradigm shifts nearly always require time, effort, and wherewithal. In the end, however, we feel that embracement of the PCCP paradigm would result in a concise, discrete understanding of the role of intensivist, eliminate the specialty’s identity crisis, and ultimately improve patient care.

After nearly a half-century, the subspecialty of critical care medicine—uniquely trained physicians caring for critically ill or injured patients in specialized, discrete nursing units—continues to suffer from an identity crisis.

Too often, the role of the intensivist in caring for the patient is unclear, to the patient, to the family, and to other physicians. Is the intensivist merely a consultant, or does he or she have a larger role?

The time has come to end the identity crisis with a fundamental paradigm shift, to identify intensivists as the principal caregivers of critically ill patients, ie, the “primary critical care physicians,” or PCCPs. We think this is necessary based not only on evidence from clinical studies, but also on our decades of experience as intensivist caregivers in a high-intensity, closed-staffing model.

REASONS FOR THE IDENTITY CRISIS

The reasons for the continued identity crisis of intensivists are many and complex.

To begin with, other physicians tend to be ambiguous about the duties of intensivists, and the general population is mostly unaware of the subspecialty. In contrast to mature subspecialties such as cardiology or gastroenterology, where responsibilities are generally known to physicians and the lay public alike, or in contrast even to recently evolved specialties such as emergency medicine, the enigmatic roles of an intensivist may differ depending on primary specialty (anesthesiology, internal medicine, surgery) and the patient population, or even among intensive care units (ICUs) within the same hospital.

Moreover, that an identity crisis exists is even more surprising given the disproportionately large consumption by critical care medicine of finite economic resources. One would expect that a sector of health care that expends 1% of the GNP1 would have clearly explicit roles and responsibilities for its physicians.

Nearly three-quarters of the care by intensivists in the United States is delivered in what is considered an “open” or “low-intensity” ICU staffing model2: an intensivist makes treatment recommendations but otherwise has no overarching authority over patient care. In this model, the admitting physician is not trained in critical care and is not available throughout the day to make decisions concerning the management of the patient. In addition, various consulting physicians and single-organ specialists may not be aware of the overall management plan, resulting in potentially unnecessary or conflicting orders and increased expense.2 What is more, in an open ICU model, critical care nurses are often left to detect and correct a significant change in a patient’s status without the necessary immediate physician availability, resulting not only in a stressful working environment for nursing staff, but also in potential harm associated with individuals providing care outside their scope of practice.3

In only a small percentage of ICUs—mostly medical ICUs and ICUs in teaching hospitals—is critical care provided in a “high-intensity” or “closed” staffing pattern, in which treatment decisions are cohesively managed under the guidance of an intensivist.2

EVIDENCE IN THE MEDICAL LITERATURE

Staffing patterns in the ICU

Several studies have attempted to identify the consequences of these different ICU staffing patterns on patient care.

Hanson et al4 examined two concurrent patient cohorts admitted to a surgical ICU. The study cohort was cared for by an on-site critical care team supervised by an intensivist, while the control cohort received care from a team with patient care responsibilities in multiple sites, supervised by a general surgeon. The results showed that patients cared for by the critical care team spent less time in the ICU, used fewer resources, had fewer complications, and had lower total hospital charges. The difference between the two cohorts was most evident in patients with the worst Acute Physiology and Chronic Health Evaluation (APACHE) II scores.

According to Hanson et al, the lack of an accepted prototype for the delivery of critical care is due to factors such as the relative youth of the discipline, contention over control of individual patient management, and the absence of a single academic advocate.4

Moreover, Pronovost et al5 concluded that high-intensity staffing (mandatory intensivist consultation or closed ICU) was associated with lower ICU mortality rates in 93% of studies and with a reduced ICU length of stay in the high-intensity staffing units when compared with ICUs with low-intensity staffing (no intensivist or elective intensivist consultation).

Critics of our PCCP paradigm may point to a study by Levy et al6 that, using a database of more than 100,000 patients, could not demonstrate any survival benefit with management by critical care physicians. Indeed the study found that patients managed by intensivists had a higher mortality rate than patients managed by physicians not trained in critical care. However, they also showed that more patients managed for the entire stay by intensivists received interventions such as intravenous drugs, mechanical ventilation, and continuous sedation and that they had a higher mean severity of illness as measured by the expanded Simplified Acute Physiology Score (SAPS II) and higher hospital mortality rates than patients who were not managed by a critical care team.

According to Levy et al, most ICUs in the United States are structured as completely open units in which the admitting physicians retain full clinical and decisional responsibility and thus have the option to care for their patients with or without input from intensivists.6

However, a recent study by Kim et al7 likely rebuts the findings of Levy et al. Kim et al analyzed more than 100,000 ICU admissions and found that the lowest odds of death within 30 days were in ICUs that had high-intensity physician staffing and multidisciplinary care teams, suggesting that the presence of an intensivist confers a survival benefit.

Other studies have also shown that high-intensity staffing improves patient outcomes in the ICU.5,8,9

Issues of cost and use of resources

Issues concerning cost and human resources for staffing ICUs have acquired increasing importance. According to Angus et al,10 intensivists provided care to only 36.8% of all ICU patients. The demand for critical care services will continue to grow rapidly as the population ages. It is this shift in the care of the critically ill that requires intensivists to take on the role of the PCCP, so as to provide high-quality, evidence-based critical care and to promote a long-term sustainable model of physician and nursing care.

 

 

OUR EXPERIENCE

Our intensivist group has been providing a near-primary-care style of critical care practice for almost 40 years, from its inception in 1977 by one of the authors (A.B.), to our current group of 15 board-certified intensivists. We can easily cite the clinical value of our practice approach, with outcome data showing consistent and better-than-expected Standardized Mortality Ratio accounts from our APACHE IV data (personal communication, Cleveland Clinic Cerner/APACHE IV report), or with reports showing that the presence of a full-time, attending-level, in-house staff physician ensures that patients, surgeons, and consultants have confidence and respect for the care provided. However, we feel that the intangible components are what make our practice a prototype for the PCCP model.

A dedicated team with a low turnover rate

First, we have a team of anesthesiology- and surgery-based intensivists dedicated to ICU practice, with a very low turnover or burnout rate, in contrast to most ICUs in the United States, where intensivists tend to practice part-time (at other times either providing operating-room-based anesthesia or surgical care or working in a pulmonary- or sleep-lab-based practice). We believe this point should not go unstressed: we have a team of physicians who have dedicated their career to working in the ICU full-time, and some have done so in excess of 20 years, even as long as 30 years! It is our opinion that we are able to provide such a highly desirable working environment by a unique daily staffing model that does not utilize the conventional practice style of one intensivist on-call per week.

We also feel that our model dramatically reduces the risk of burnout by permitting our attending intensivists to break up on-call sequences so that there are days on which work in the ICU is not also associated with on-call responsibilities.

A successful fellowship program

Second, we have an extremely successful fellowship program, which began in 1974 when one of the authors (A.B.) advocated the training of anesthesiology residents as intensivists.11 The American Board of Anesthesiology certifies on average 55 candidates per year in critical care medicine, and our program trains about 10% of the physicians applying for certification. In most years, there are actually more candidates for our program than there are available positions, which is atypical for anesthesiology-based critical care training programs. This wealth of young, talented candidates interested in critical care as a career is, again, in contrast to most anesthesiology-based programs, which find it difficult to enroll even one fellow per year.

Critical care programs grounded in anesthesiology typically struggle because of the realities of economics.12 The payoff of operating-room-based anesthesiology practices generally outshines those in critical care, yet we already have three times as many candidates as there are positions to start our training program in the next 2 years. We feel that candidates are attracted to our program simply because our environment (dedicated staffing, equal clinical footing with surgeons, low burnout rates) is seen as an exciting, positively charged role-modeling atmosphere for young physicians who may have a career interest that involves more than just their original base specialty.

A collegial working relationship

Third, we have a thriving, collegial working relationship—including daily bedside and weekly bioethics rounds with our nursing staff—which has fueled a high degree of professional satisfaction among nurses. This is evidenced by the extremely low turnover rate of nurses (less than 5% per year in the last 5 years) and by national recognition for nursing excellence (Beacon Award for Critical Care Excellence, American Association of Critical Care Nurses) (personal communication, S. Wilson, Nurse Manager). In 2009, the four nurses out of 174 who left did so to further their careers.

While low turnover rates among nurses and award-winning practices are surely a testament to a highly motivated and skilled nursing team, there is no question that a constructive collegiality among the physicians and nurses has provided an environment to allow these positive aspects to flourish.

OVERCOMING ROADBLOCKS

Obviously, although in theory it is easy to proclaim a PCCP paradigm, in reality the roadblocks are many.

For example, standardization of education and credentialing would be an essential hurdle to overcome. The current educational arrangement of the various adult specialties (anesthesiology, internal medicine, surgery), each offering disparate subspecialty critical care training and certification, is deeply rooted in interdisciplinary politics, but without any demonstration of improved patient care.13 As described recently by Kaplan and Shaw,14 an all-encompassing training and credentialing standard for critical care is essential for 21st century medicine and would go a long way toward development of the PCCP paradigm.

Another major roadblock is the shortage of intensivists in the United States.13 There are many reasons why physicians opt not to select critical care as a career, such as a non-straight-forward training pathway (as described above), recognition that the 24-hours per day, 7-days-per-week nature of critical care affects lifestyle issues, and inconsistent physician compensation.13

However, technological and personnel advances, including the use of electronic (e-ICU)15 and mid-level practitioner models, have led to creative approaches to extend critical care coverage.13

Additionally, the multitude of physician specialty stakeholders and the overall flux of the future of medical care in the United States all would contribute to the difficulties of prioritizing the implementation of the PCCP concept. Also, our practice style—a large intensivist group working in an ostensibly closed surgical ICU in a tertiary-care hospital—is one possible model, as is the even more highly evolved Cleveland Clinic medical ICU, where medical intensivists are already essentially PCCPs. But these models of care may not be generalizable among the local care patterns and medical politics across hospitals or ICUs.

Based on the described successes of our practice model, coupled with evidence in the literature, we have proposed a paradigm shift toward the concept of a PCCP. To be sure, paradigm shifts nearly always require time, effort, and wherewithal. In the end, however, we feel that embracement of the PCCP paradigm would result in a concise, discrete understanding of the role of intensivist, eliminate the specialty’s identity crisis, and ultimately improve patient care.

References
  1. Bloomfield EL. The impact of economics on changing medical technology with reference to critical care medicine in the United States. Anesth Analg 2003; 96:418–425.
  2. Gajic O, Afessa B. Physician staffing models and patient safety in the ICU. Chest 2009; 135:1038–1044.
  3. Baggs JG, Schmitt MH, Mushlin AI, et al. Association between nurse-physician collaboration and patient outcomes in three intensive care units. Crit Care Med 1999; 27:1991–1998.
  4. Hanson CW, Deutschman CS, Anderson HL, et al. Effects of an organized critical care service on outcomes and resource utilization: a cohort study. Crit Care Med 1999; 27:270–274.
  5. Pronovost PJ, Angus DC, Dorman T, Robinson KA, Dremsizov TT, Young TL. Physician staffing patterns and clinical outcomes in critically ill patients: a systematic review. JAMA 2002; 288:2151–2162.
  6. Levy MM, Rapoport J, Lemeshow S, Chalfin DB, Phillips G, Danis M. Association between critical care physician management and patient mortality in the intensive care unit. Ann Intern Med 2008; 148:801–809.
  7. Kim MM, Barnato AE, Angus DC, Fleisher LA, Kahn JM. The effect of multidisciplinary care teams on intensive care unit mortality. Arch Intern Med 2010; 170:369–376.
  8. Carson SS, Stocking C, Podsadecki T, et al. Effects of organizational change in the medical intensive care unit of a teaching hospital: a comparison of ‘open’ and ‘closed’ formats. JAMA 1996; 276:322–328.
  9. Treggiari MM, Martin DP, Yanez ND, Caldwell E, Hudson LD, Rubenfeld GD. Effect of intensive care unit organizational model and structure on outcomes in patients with acute lung injury. Am J Respir Crit Care Med 2007; 176:685–690.
  10. Angus DC, Kelley MA, Schmitz RJ, White A, Popovich J; Committee on Manpower for Pulmonary and Critical Care Societies (COMPACCS). Caring for the critically ill patient. Current and projected workforce requirements for care of the critically ill and patients with pulmonary disease: can we meet the requirements of an aging population? JAMA 2000; 284:2762–2770.
  11. Boutros AR. Anesthesiology and intensive care (editorial). Anesthesiology 1974; 41:319–320.
  12. Boyle WA. A critical time for anesthesiology? American Society of Anesthesiologists (ASA) Newsletter, September 2009:10–11. http://viewer.zmags.com/publication/9960917c#/9960917c/12. Accessed July 13, 2011.
  13. Ewart GW, Marcus L, Gaba MM, Bradner RH, Medina JL, Chandler EB. The critical care medicine crisis: a call for federal action: a white paper from the critical care professional societies. Chest 2004; 125:1518–1521.
  14. Kaplan LJ, Shaw AD. Standards for education and credentialing in critical care medicine. JAMA 2011; 305:296–297.
  15. Leong JR, Sirio CA, Rotondi AJ. eICU program favorably affects clinical and economic outcomes. Crit Care 2005. http://ccforum.com/content/9/5/E22. Accessed July 13, 2011.
Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
697-700
Publications
Topics
Article Type
Display Headline
A new ICU paradigm: Intensivists as primary critical care physicians
Sections
Disallow All Ads
Alternative CME
Article PDF Media

Jet lag and shift work sleep disorders: How to help reset the internal clock

Article Type
Changed
Tue, 06/12/2018 - 08:54
Display Headline
Jet lag and shift work sleep disorders: How to help reset the internal clock

For people who must travel long distances east or west by air or who must work the night shift, some relief is possible for the grogginess and disorientation that often ensue. The problems arise from the body’s internal clock being out of sync with the sun. Part of the solution involves helping reset the internal clock, or sometimes, preventing it from resetting itself.

This review will focus on jet lag sleep disorder and shift work sleep disorder, with an emphasis on the causes, the clinical assessment, and evidence-based treatment options.

WHEN THE INTERNAL CLOCK IS OUT OF SYNC WITH THE SUN

Circadian rhythm sleep disorders are the result of dyssynchrony between the body’s internal clock and the external 24-hour light-dark cycle. Patients typically present with insomnia or excessive somnolence. These disorders may represent an intrinsic disorder, such as delayed or advanced sleep-phase disorder, or may be the result of transmeridian air travel or working nonstandard shifts.1

Modified with permission of Elsevier LTD. From Beersma DG, Gordijn MC. Circadian control of the sleep-wake cycle. Physiol Behav 2007; 90:190–195.
Figure 1. The two-process model of sleep regulation. Sleep propensity grows during periods of wakefulness and abates during sleeping periods. The homeostatic process (process S, blue line) is limited to a range of values determined by a clock-like circadian process (process C, red lines) that varies with the biological time of day.
Sleep and wakefulness are conceptually governed by two processes, “process S” and “process C.”2 The homeostatic drive to sleep (process S) is proportional to the duration of sleep restriction, and it becomes maximal at about 40 hours.3 In contrast, process C creates a drive for wakefulness that variably opposes process S and depends on circadian rhythms intrinsic to the organism (Figure 1).4 Coordinating this sleep-wake rhythm (and numerous other behavioral and physiologic processes) are the neurons of the suprachiasmatic nuclei of the hypothalamus.5–8
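
For readers who prefer a formal statement, a minimal mathematical sketch of this two-process framework follows. The exponential buildup and decay of process S and the roughly 24-hour modulation of process C are standard in the modeling literature; the specific time constants and threshold form shown here are illustrative assumptions and are not taken from this article.

\[
\text{Waking:}\quad S(t) = 1 - \left(1 - S_0\right)e^{-t/\tau_r},
\qquad
\text{Sleep:}\quad S(t) = S_0\,e^{-t/\tau_d},
\]
\[
C(t) = A \sin\!\left(\frac{2\pi\,(t-\phi)}{24\ \text{h}}\right),
\qquad \tau_r \approx 18\ \text{h},\quad \tau_d \approx 4\ \text{h},
\]

where, in this sketch, sleep tends to begin as S approaches an upper threshold modulated by C and tends to end as S falls to a lower, similarly modulated threshold.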

The intrinsic human circadian period is typically slightly longer than 24 hours,9 but it is synchronized (“entrained”) to the 24-hour day by various environmental inputs, or zeitgebers (German for “time-givers”), the most important of which is light exposure.10

When the internal clock is out of sync with the sun, the misalignment can result in daytime anergia, alternating complaints of insomnia and hypersomnia, and various other symptoms, including emotional disturbances and gastrointestinal distress. In particular, long-distance air travel or a nocturnal work schedule overwhelms the ability of the intrinsic clock to adjust rapidly enough, and the result is jet lag sleep disorder or shift work sleep disorder.1

TOOLS TO EVALUATE CIRCADIAN RHYTHM DISTURBANCES

A thorough history is the cornerstone of the evaluation for all sleep disorders, and if a circadian rhythm disturbance is suspected, the sleep history is supplemented with specific questions to establish a clear diagnosis.

When assessing for jet lag disorder, ask about:

  • The patient’s degree of sleep deprivation before and during travel
  • His or her innate circadian preference (ie, whether he or she is a “night owl” or “early bird”)
  • Patterns of alcohol and caffeine consumption.

When assessing for shift work disorder, include the above questions and also look for differences in the sleep-wake schedule on working days vs nonworking days, as well as external contributors to poor sleep quality (eg, the degree to which daytime sleep is not “protected”).

The following tools help in acquiring this information.

Sleep diary

In a sleep diary or log, patients record the times that they take naps, maintain consolidated sleep, and subsequently arise. The diary also prompts the patient for information about sleep latency, wakefulness after sleep onset, time in bed, medication and caffeine intake, and the restorative quality of sleep.

While the sleep diary by itself may provide insight into counteractive sleep-related behaviors and misperceptions the patient may have, compliance is often limited. Therefore, the sleep diary is best used in conjunction with actigraphy.

Actigraphy

An actigraph is a wristwatch-size motion detector, typically worn continuously for 7 days or longer. The data it gathers and stores serve as a surrogate measure of various sleep-wake variables.11

Either a sleep diary or actigraphy is required to demonstrate the stability of sleep patterns and circadian preference, but the actigraph typically generates more reliable data.11,12 It is also valuable in assessing the response to treatment of circadian rhythm sleep disorders.13

Are you an early bird or a night owl?

The Morningness-Eveningness questionnaire contains 19 items. Night owls tend to score lower on it than early birds do.14 This information may help some people avoid situations in which they may not do well, such as an early bird going on a permanent night-shift schedule.

Other assessment tools

Polysomnography is used primarily to rule out sleep-disordered breathing; it is not indicated for routine evaluation of circadian rhythm sleep disorders.

The minimum core body temperature and the peak melatonin secretion follow a 24-hour cycle. Although these measures are often used in research, they are not routinely used in clinical practice. (The minimum core body temperature is discussed further below.)


JET LAG SLEEP DISORDER

Jet lag results from air travel across multiple time zones, with a resultant discordance between the internal circadian clock and the destination’s light-dark cycle. Most sufferers report sleeping poorly at night and feeling groggy during the day, and some also experience general malaise and gastrointestinal distress.1

The severity depends on a number of variables.

Going west is easier than going east

Westward travel is normally less taxing than eastward travel, as it requires setting one’s internal clock later rather than earlier. Presumably because the circadian period tends to exceed 24 hours, we can move our internal clock later by about 2 hours per day, but we can move it earlier by only 1 to 1.5 hours per day.15,16

The more time zones crossed, the longer it takes the circadian pacemaker to re-entrain and the longer-lasting and more severe are the symptoms of jet lag. Travel across one or two time zones is only transiently troublesome.
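
A rough way to combine these two factors (direction of travel and number of time zones crossed) is to divide the required shift by the daily re-entrainment rates quoted above; the resulting figures are only back-of-the-envelope estimates, since individual rates vary.

\[
t_{\text{re-entrainment}} \approx \frac{\text{time zones crossed}}{r},
\qquad
r_{\text{westward (delay)}} \approx 2\ \tfrac{\text{h}}{\text{day}},
\quad
r_{\text{eastward (advance)}} \approx 1\text{–}1.5\ \tfrac{\text{h}}{\text{day}}.
\]

For example, after crossing six time zones, a westward traveler might need roughly 3 days to re-entrain, whereas an eastward traveler might need about 4 to 6 days.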

Does age affect jet lag?

Whether age affects the severity of jet lag is not yet known.

In a study of simulated jet lag (requiring a 6-hour advance), middle-aged people (ages 37 to 52) experienced a greater degree of fragmented sleep on polysomnography than younger ones (ages 18 to 25). The older group also had greater impairment in daytime alertness, suggesting that phase tolerance—ie, the ability to sleep at an abnormal time in the circadian cycle17—decreases with age. However, two field studies involving both eastward and westward travel yielded the opposite results, suggesting that older age may actually protect against jet lag.18–20

Methodologic differences preclude direct comparisons of the studies, as do differences in the age groups studied.

Light exposure can help or hurt, depending on the timing

Reprinted with permission of Elsevier LTD. From Burgess HJ, et al. Bright light, dark, and melatonin can promote circadian adaptation in night shift workers. Sleep Med Rev 2002; 6:407–420.
Figure 2. A schematic human phase-response curve to light (blue line) and one to exogenous melatonin (red line). The y axis shows the direction and relative magnitude of the phase shift produced by the administration of light or melatonin at various times, which are shown on the x axis. This graph shows typical times and phase relationships among these rhythms when the circadian clock is entrained to a 24-hour day. For individuals with earlier or later circadian rhythms, the local time axis should be adjusted accordingly. The light phase-response curve is a schematic based on the results of numerous studies. The melatonin curve is based on a single study using 0.5-mg doses of melatonin.22
Light exposure is of primary importance in shifting the circadian clock, and the direction of the shift depends on the timing of the exposure (Figure 2).20–22

Our core body temperature dips to its lowest point about 2 to 3 hours before we habitually awake. Exposure to bright light in the hours leading up to this minimum (the inverted triangle in Figure 2) sets our internal clock later (a phase delay)—desirable, say, for someone travelling from New York City to Los Angeles. Conversely, exposure to bright light after this temperature minimum sets the clock earlier.

Inadvertent shifting of circadian phase in the wrong direction (“antidromic re-entrainment”) is common and delays circadian reacclimation and the dissipation of jet lag symptoms.

Burgess HJ, Eastman CT. Prevention of jet lag. American College of Physicians, 2010. Modified with permission of the American College of Physicians.
Figure 3. Diagram demonstrating a flight from Chicago to Paris, seven time zones east. Times when darkness and light should be sought are denoted by the letters “D” for darkness and “L” for light. The inverted triangles represent the minimum core body temperature. Subsequent to arrival, the depicted light-dark pattern should result in average daily phase shifts of 1 hour.
For example (Figure 3),23 a typical flight from Chicago to Paris (seven time zones to the east) arrives there early in the morning Paris time. Although the clocks at Charles de Gaulle airport say 08:00, the traveler’s internal clock says it is still 01:00. Furthermore, his or her core body temperature will reach its minimum at about 04:00 Chicago time, or 11:00 Paris time. If the traveler decides to go for a walk right away, the light exposure will promote a phase delay rather than the desired phase advance. Therefore, circadian re-entrainment will be relatively prolonged.24

We discuss ways to reduce antidromic re-entrainment in more detail below.

Other factors

Other factors that contribute to travel fatigue include sleep deprivation (before the flight or en route), acute discomfort as the plane ascends to its cruising altitude,25 and excessive alcohol or caffeine intake during the flight. Although the effects of these factors rapidly diminish once one reaches the travel destination, jet lag will persist until circadian re-entrainment occurs.15

NONDRUG THERAPIES FOR JET LAG SLEEP DISORDER

The goal of treatment is to realign the circadian rhythm in the most rapid and efficient way and to minimize symptoms in the meantime. Frequent shifts to different time zones, often required in business travel, are very difficult to accommodate, and business travelers actually may do better if they remain on their home-based schedule.

One study compared keeping home-based sleep hours as opposed to adopting local sleep hours during a 2-day stay after a 9-hour westward flight.26 Travelers who remained on home-based hours were less sleepy and had lower (ie, better) global jet lag ratings than those who adopted local sleep hours, in part because of better sleep quality and duration. Nevertheless, about one-third of the participants said they preferred to adhere to the local schedule.

Strategic avoidance of, and exposure to, light

If the traveler intends to remain at the destination long enough, he or she can adjust better (and avoid an antidromic process) via strategic avoidance of and exposure to light.24

Burgess HJ, Eastman CT. Prevention of jet lag. American College of Physicians, 2010. Modified with permission of the American College of Physicians.
Figure 4. The diagram demonstrates a flight from Los Angeles to Rome, nine time zones east. Times when darkness (letter D) and light (letter L) should be sought are also indicated. The inverted triangles represent the minimum core body temperature. The depicted light-dark pattern should result in average daily phase shifts of 2 hours.
Burgess and Eastman23,27 have devised plans to help in deciding whether a phase delay or phase advance is most desirable, depending on the number of time zones crossed. Generally, shifts earlier in time are required for eastward flights (as in Figure 3), and shifts later in time are required for westward flights. However, advances of 8 hours or more are more readily accomplished by a phase delay (Figure 4).23,28

People travelling east who want to set their clocks ahead (a phase advance) need to stay in darkness during the 3 hours leading up to the time they reach their minimum core body temperature (depicted as “D” in Figure 3), and then expose themselves to light in the 3 hours immediately after (“L” in Figure 3). Thus, the traveler from Chicago to Paris would do better by avoiding light exposure on arrival, either by remaining in darkness in his or her hotel room, or by wearing dark sunglasses when outdoors. Wearing sunglasses during transit to the hotel would also help avoid light exposure.

When attempting to delay circadian rhythms, the opposite light-dark patterns are sought, as depicted in Figure 4. As flight and layover patterns often do not permit strict adherence to these measures, they represent idealized scenarios.

The first step is to make a grid with a concurrent listing of home and destination times. In the example in Figure 3, the person is traveling seven time zones east. On day 0, a rectangle is drawn around the times representing home-based sleep hours.

Next, we mark the time at which we expect the traveler’s core body temperature to reach its minimum (inverted triangle). If the person habitually sleeps no more than 7 hours per night, then we mark this point as 2 hours before his or her habitual wake-up time; if the person sleeps more than 7 hours, then we place it 3 hours before wake-up time.23,29 This process is repeated at the bottom of the grid to represent the desired sleep schedule at the traveler’s destination. The distance between the home and the destination-based minimum core body temperature symbols represents the required degree of circadian realignment.

If a phase advance is required (eg, if travelling from Chicago to Paris), the core body temperature symbol is drawn on day 1 in the same location as day 0. For each subsequent day, the symbol is moved 1 hour earlier (which is about how fast the internal clock can advance),15,27 until a clock time within 1 hour of the desired destination core body temperature time is reached or satisfactory sleep and daytime functioning are achieved (Figure 3). If a phase delay is required (eg, if travelling from New York City to Los Angeles), the symbol is drawn 2 hours later on day 1 than on day 0 (reflecting the greater ease at which delays are achieved),15,27 with subsequent daily shifts in 2-hour increments, again until a clock time within 1 hour of the desired destination minimum core body temperature time is reached or satisfactory sleep and daytime functioning are achieved.

Requirements for darkness can be met with protective eyewear (ie, dark sunglasses), or by remaining in a dark room. Light requirements can be met with outdoor exposure, with a commercial light box, or with a separate apparatus (eg, goggles, visors) portable enough for travel.
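
For readers who want to automate the bookkeeping of this planning grid, the sketch below implements the rules described above: the estimated minimum core body temperature (Tmin) 2 to 3 hours before habitual wake time, advances of about 1 hour per day or delays of about 2 hours per day, and the 3-hour dark and light windows on either side of Tmin. It is an illustrative calculation only, not clinical software; the function names, the stopping rule, and the output format are our assumptions.

```python
# Illustrative sketch of the planning grid described above. The clinical rules
# (Tmin estimated 2-3 hours before habitual wake time, ~1 h/day phase advances,
# ~2 h/day phase delays, darkness in the 3 hours before Tmin and light in the
# 3 hours after it when advancing, and the reverse when delaying) come from the
# text; everything else is an assumption for illustration, not medical advice.

def estimate_tmin(wake_hour, sleep_duration_h):
    """Estimate minimum core body temperature (Tmin), in hours after midnight, home time."""
    offset = 2.0 if sleep_duration_h <= 7 else 3.0
    return (wake_hour - offset) % 24


def plan_schedule(tmin_home, zones_east, max_days=14):
    """List daily Tmin (destination clock) with suggested dark and light windows.

    zones_east > 0: eastward travel (phase advance needed);
    zones_east < 0: westward travel (phase delay needed).
    """
    advance = zones_east > 0
    step = -1.0 if advance else 2.0            # advance ~1 h/day earlier; delay ~2 h/day later
    tmin = (tmin_home + zones_east) % 24       # unshifted body clock, expressed in destination time
    target = tmin_home                         # goal: Tmin back at its usual local clock hour
    schedule = []
    for day in range(1, max_days + 1):
        gap = min((tmin - target) % 24, (target - tmin) % 24)  # circular distance, in hours
        if gap <= 1.0:                         # within 1 hour of the desired destination Tmin
            break
        before = ((tmin - 3) % 24, tmin)       # the 3 hours leading up to Tmin
        after = (tmin, (tmin + 3) % 24)        # the 3 hours just after Tmin
        dark, light = (before, after) if advance else (after, before)
        schedule.append({"day": day, "tmin": tmin, "dark": dark, "light": light})
        tmin = (tmin + step) % 24
    return schedule


# Example: Chicago to Paris (7 zones east), habitual wake 06:00, about 7 hours of sleep,
# so Tmin is roughly 04:00 Chicago time, or 11:00 Paris time, as in Figure 3.
if __name__ == "__main__":
    tmin_home = estimate_tmin(wake_hour=6.0, sleep_duration_h=7.0)
    for row in plan_schedule(tmin_home, zones_east=7):
        print(row)
```

Run on the Chicago-to-Paris example, this sketch yields about 6 daily steps, each moving the dark and light windows 1 hour earlier, which mirrors the 1-hour-per-day phase shifts depicted in Figure 3.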


DRUGS TO TREAT JET LAG SLEEP DISORDER

Melatonin appears safe

Most field studies have found that nightly doses of melatonin (2–8 mg) improve the quality of sleep30–32 or alleviate daytime symptoms of jet lag, or both.20,30,31,33–36 Immediate-release preparations appear to be more effective than slow-release ones.31 Although most studies looked exclusively at adaptation to eastward travel,30–32,35,36 one studied westward travel,33 and another assessed melatonin’s effects during both departure and return trips that traversed 11 time zones.34

In studies of preflight dosing, melatonin was scheduled for up to 3 days before departure (and en route in two instances),30,34 at clock hours corresponding to the nocturnal sleep period at the travel destination (consistent times daily), and then for a subsequent 3 to 4 days between a destination time of 22:00 and 00:00 hours (ie, at bedtime).30,31,34–36 Several other studies further simplified this regimen, with participants taking nocturnal melatonin only on arrival at the destination, either for eastward31,32 or for westward travel.33

The study involving solely westward travel (Los Angeles to New Zealand) was the only positive study that allowed a comparison between participants who received melatonin before departure (5-mg doses for 3 days, taken between 07:00 and 08:00 Los Angeles time, and continued for 5 days after arrival at 22:00 to 00:00 New Zealand time) and those who began melatonin only on arrival.33 Jet lag outcomes were significantly better in the latter group.

An important caveat is that melatonin is sold over the counter as a nutritional supplement and is not regulated by the United States Food and Drug Administration (FDA), so verification of purity of the product is difficult.

A comprehensive review by the National Academy of Sciences stated that, given the available data, short-term use of melatonin in total daily doses of 10 mg or less in healthy adults appears to be safe.37

Benzodiazepine receptor agonists improve sleep, but maybe not sleepiness

The use of standard hypnotics during periods of circadian realignment appears to be commonplace but has not been well studied.20 Trials of the newer benzodiazepine receptor agonists—three studies of zolpidem (Ambien) 10 mg30,38,39 and two of zopiclone 5 to 7.5 mg32,40—found consistently favorable subjective30,38 and objective32,39,40 outcomes in counteracting jet-lag-induced insomnia (for both eastward and westward travel). (Note: Zopiclone is not available in the United States, but its enantiomer eszopiclone [Lunesta] is.) However, the evidence is less clear for daytime symptoms of jet lag, with outcomes reported as favorable,30 equivocal,40 or inaccessible.32,38,39

The discrepancy between studies incorporating systematic daytime assessments may be due to differential medication effects (zolpidem vs zopiclone).

In two studies that compared these standard hypnotics to oral melatonin, one found that zopiclone 5 mg and melatonin 2 mg were equally beneficial with respect to sleep variables (other jet lag symptoms were not assessed).32 In another study, zolpidem 10 mg was superior to melatonin 5 mg for sleep and other jet lag symptoms, and the combination of zolpidem and melatonin was no better than zolpidem alone.30

Importantly, however, adverse effects were more frequent in those taking zolpidem and included nausea, vomiting, and confusion.30 Although these effects were not deemed serious, 14 participants (10%) withdrew from the study.

Stimulants

Caffeine is commonly used to combat the sleepiness of jet lag, but only two controlled field studies have assessed its efficacy.41,42 Both used slow-release preparations at a daily dosage of 300 mg.

In one study, after an eastward flight traversing seven time zones, participants took the pill at 08:00 destination time every day for 5 days.41 Curiously, alertness and other jet lag symptoms were not assessed, but circadian rhythms (determined by levels of cortisol in saliva) were re-entrained at a more rapid rate with caffeine than with placebo, and to a degree comparable with that achieved by exogenous melatonin.

In a follow-up study by the same group, those receiving caffeine were objectively less sleepy (as assessed by multiple sleep latency tests) than those taking melatonin or placebo, but subjective differences between groups were not identified.42 Furthermore, those taking caffeine had significantly more nocturnal sleep complaints, as assessed both objectively and subjectively.

A recent randomized, double-blind, placebo-controlled trial of the stimulant armodafinil (Nuvigil) found less sleepiness on multiple sleep latency testing and a decrease in jet lag symptoms with a dosage of 150 mg compared with placebo.43

SHIFT WORK SLEEP DISORDER: DEFINITION, PREDISPOSING FACTORS

Shift work refers to nonstandard work schedules, including on-call duty, rotating shifts, and permanent night work. In the United States, one in five workers works a nonstandard shift.20

While shift work presents obvious difficulties, the diagnosis of shift work sleep disorder is reserved for those who have chronic insomnia or sleepiness at times that are not conducive to the externally demanded sleep-wake schedule, despite having the opportunity for sufficient daytime sleep.1 When defined in such a fashion, this disorder may afflict nearly a third of workers,44 with potential adverse effects on safety, health, and quality of life.

Older age is considered a risk factor for intolerance to shift work.20 In a study of physiologic phase shifts in response to night work, older workers were less able to recover after several night shifts.45 A large survey of police officers working the night shift supported the finding of more sleep disruption and on-duty sleepiness in older people.46


TREATMENT OF SHIFT WORK SLEEP DISORDER

Bright light at work, sunglasses on the way home

Various field studies have described hastening of circadian adaptation (and immediate alerting effects) during night shifts with the use of bright light.20

Boivin and James47 found that workers who received 6 hours of intermittent bright light during their shifts experienced significantly greater phase delays than those who received no such intervention. Those receiving bright light also wore sunglasses during the commute home (to protect against an undesired phase advance), a measure that has also shown favorable effects as an independent intervention.48

Drug treatment of shift work sleep disorder

Melatonin: Mixed results. Two field studies found that taking melatonin (5–6 mg) before the daytime sleep period had a favorable impact on subjective sleep quality.49,50 However, two other studies found no such benefit with doses ranging from 6 to 10 mg.51,52 Differences between these studies—eg, shift schedules, dosages, and the time the melatonin was taken—preclude definitive comparisons.

Effects of melatonin on workplace alertness are indeterminate because of inconsistent measurements of this variable. Importantly, a simulated shift work study found no phase-shifting advantages of melatonin in those who concomitantly used bright light during their work shift with or without morning protective eyewear.48

Hypnotic drugs. In simulation studies and field studies, people taking benzodiazepine receptor agonists have consistently said they sleep better.53–58 A simulation study noted additional benefit in the ability to stay alert during the night shift (assessed by maintenance of wakefulness testing),55 but two other studies saw no changes in manifest sleepiness (assessed with multiple sleep latency tests).53,54 These divergent findings may represent different effects on these two dimensions of sleepiness.

The only field study to assess post-sleep psychomotor performance found no impairments after taking 7.5 mg of zopiclone, a relatively long-acting nonbenzodiazepine hypnotic.57

Stimulants. In the largest trial to date of shift work sleep disorder, modafinil 200 mg (the only drug currently FDA-approved for shift work sleep disorder) had significant benefits compared with placebo with respect to objective measurements of workplace sleepiness, reaction time performance testing, and self-rated improvement of symptoms.59 Perhaps because of the low dose studied, both treated and untreated patients continued to manifest sleepiness within the pathologic range on objective testing.

Although the efficacy of caffeine is well documented as a countermeasure for sleepiness during experimentally induced sleep deprivation,20 very few field trials have specifically addressed impairments associated with shift work sleep disorder. In one study, caffeine at a dose of 4 mg/kg taken 30 minutes before starting a night shift provided objective improvement in both performance and alertness.60

Strategic napping is an additional practical intervention to promote alertness during night shifts, and cumulative data indicate that it provides objective and subjective improvements in alertness and performance.61,62 Earlier timed naps (ie, before or during the early portion of a shift) of short duration (ie, 20 minutes or less) are likely to produce maximal benefit, because they avoid sleep inertia (the grogginess or sleepiness that may follow a long nap), and also because they have no effect on the subsequent daytime sleep bout.61,63

Interventions may also be used in combination. For example, napping in conjunction with caffeine results in a greater degree of increased objective alertness than either intervention alone.60

How about days off?

The recommendations described here presume that shift workers maintain the workday sleep-wake schedule continuously, including when they are not at work. This is likely not a real-world scenario.

Smith et al64 developed a “compromise” phase position, whereby internal rhythms are optimized to facilitate alertness during work and sleepiness during the day, while allowing one to adopt a non-workday sleep schedule that maintains accessibility to family and social activities. In brief, non-workday sleep starts about 5.5 hours earlier than workday sleep; all sleep bouts are followed by brief exposure to bright light (to avoid excessive phase delay); and, as described previously, both workplace bright light and protection from morning light are implemented.

Although further studies are needed to determine whether this regimen is practical in real life, study participants who achieved desired partial phase shifts had performance ratings on a par with baseline levels, and comparable to those in a group that achieved complete re-entrainment.64
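
As a concrete illustration of the arithmetic behind this compromise schedule, a minimal sketch follows; the example times and the 8-hour sleep length are hypothetical assumptions used only to show the 5.5-hour offset, not data from the study.

```python
# Hypothetical illustration of the "compromise" schedule's 5.5-hour offset only;
# the example times below are assumptions, not data from Smith et al.
def compromise_non_workday_sleep(workday_sleep_start_h, sleep_length_h=8.0):
    """Return (start, end) of non-workday sleep, which begins ~5.5 h earlier than workday sleep."""
    start = (workday_sleep_start_h - 5.5) % 24
    return start, (start + sleep_length_h) % 24


# A night worker sleeping 08:30-16:30 after work would aim for roughly 03:00-11:00 on days off.
print(compromise_non_workday_sleep(8.5))  # -> (3.0, 11.0)
```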

Finally, all shift workers need to be encouraged to protect the daytime bedroom environment just as daytime workers protect their nighttime environment. Sleep should be sought in an appropriately darkened and quiet environment, phones and doorbells silenced, and appointments scheduled accordingly.

References
  1. International Classification of Sleep Disorders: Diagnostic and Coding Manual/American Academy of Sleep Medicine. 2nd ed. Westchester, IL: American Academy of Sleep Medicine; 2005.
  2. Borbély AA, Achermann P. Concepts and models of sleep regulation: an overview. J Sleep Res 1992; 1:63–79.
  3. Carskadon MA, Dement WC. Effects of total sleep loss on sleep tendency. Percept Mot Skills 1979; 48:495–506.
  4. Beersma DG, Gordijn MC. Circadian control of the sleep-wake cycle. Physiol Behav 2007; 90:190–195.
  5. Moore RY, Eichler VB. Loss of a circadian adrenal corticosterone rhythm following suprachiasmatic lesions in the rat. Brain Res 1972; 42:201–206.
  6. Stephan FK, Zucker I. Circadian rhythms in drinking behavior and locomotor activity of rats are eliminated by hypothalamic lesions. Proc Natl Acad Sci U S A 1972; 69:1583–1586.
  7. Welsh DK, Logothetis DE, Meister M, Reppert SM. Individual neurons dissociated from rat suprachiasmatic nucleus express independently phased circadian firing rhythms. Neuron 1995; 14:697–706.
  8. Ralph MR, Foster RG, Davis FC, Menaker M. Transplanted suprachiasmatic nucleus determines circadian period. Science 1990; 247:975–978.
  9. Czeisler CA, Duffy JF, Shanahan TL, et al. Stability, precision, and near-24-hour period of the human circadian pacemaker. Science 1999; 284:2177–2181.
  10. Waterhouse JM, DeCoursey PJ. Human circadian organization. In: Dunlap JC, Loros JJ, DeCoursey PJ, editors. Chronobiology: Biological Timekeeping. Sunderland, MA: Sinauer Associates; 2004:291–324.
  11. Morgenthaler T, Alessi C, Friedman L, et al; Standards of Practice Committee; American Academy of Sleep Medicine. Practice parameters for the use of actigraphy in the assessment of sleep and sleep disorders: an update for 2007. Sleep 2007; 30:519–529.
  12. Bradshaw DA, Yanagi MA, Pak ES, Peery TS, Ruff GA. Nightly sleep duration in the 2-week period preceding multiple sleep latency testing. J Clin Sleep Med 2007; 3:613–619.
  13. Morgenthaler TI, Lee-Chiong T, Alessi C, et al; Standards of Practice Committee of the American Academy of Sleep Medicine. Practice parameters for the clinical evaluation and treatment of circadian rhythm sleep disorders. An American Academy of Sleep Medicine report. Sleep 2007; 30:1445–1459.
  14. Horne JA, Ostberg O. A self-assessment questionnaire to determine morningness-eveningness in human circadian rhythms. Int J Chronobiol 1976; 4:97–110.
  15. Waterhouse J, Reilly T, Atkinson G, Edwards B. Jet lag: trends and coping strategies. Lancet 2007; 369:1117–1129.
  16. Eastman CI, Gazda CJ, Burgess HJ, Crowley SJ, Fogg LF. Advancing circadian rhythms before eastward flight: a strategy to prevent or reduce jet lag. Sleep 2005; 28:33–44.
  17. Moline ML, Pollak CP, Monk TH, et al. Age-related differences in recovery from simulated jet lag. Sleep 1992; 15:28–40.
  18. Waterhouse J, Edwards B, Nevill A, et al. Identifying some determinants of “jet lag” and its symptoms: a study of athletes and other travellers. Br J Sports Med 2002; 36:54–60.
  19. Tresguerres JA, Ariznavarreta C, Granados B, et al. Circadian urinary 6-sulphatoxymelatonin, cortisol excretion and locomotor activity in airline pilots during transmeridian flights. J Pineal Res 2001; 31:16–22.
  20. Sack RL, Auckley D, Auger RR, et al; American Academy of Sleep Medicine. Circadian rhythm sleep disorders: part I, basic principles, shift work and jet lag disorders. An American Academy of Sleep Medicine review. Sleep 2007; 30:1460–1483.
  21. Burgess HJ, Sharkey KM, Eastman CI. Bright light, dark and melatonin can promote circadian adaptation in night shift workers. Sleep Med Rev 2002; 6:407–420.
  22. Lewy AJ, Bauer VK, Saeeduddin A, et al. The human phase response curve (PRC) to melatonin is about 12 hours out of phase with the PRC to light. Chronobiol Int 1998; 15:71–83.
  23. Burgess HJ, Eastman CT. Prevention of Jet Lag. 2010. http://pier.acponline.org/physicians/screening/prev1015/prev1015.html. Accessed June 25, 2010.
  24. Daan S, Lewy AJ. Scheduled exposure to daylight: a potential strategy to reduce “jet lag” following transmeridian flight. Psychopharmacol Bull 1984; 20:566–568.
  25. Muhm JM, Rock PB, McMullin DL, et al. Effect of aircraft-cabin altitude on passenger discomfort. N Engl J Med 2007; 357:18–27.
  26. Lowden A, Akerstedt T. Retaining home-base sleep hours to prevent jet lag in connection with a westward flight across nine time zones. Chronobiol Int 1998; 15:365–376.
  27. Eastman CI, Burgess HJ. How to travel the world without jet lag. Sleep Med Clin 2009; 4:241–255.
  28. Revell VL, Eastman CI. How to trick mother nature into letting you fly around or stay up all night. J Biol Rhythms 2005; 20:353–365.
  29. Cagnacci A, Elliott JA, Yen SS. Melatonin: a major regulator of the circadian rhythm of core temperature in humans. J Clin Endocrinol Metab 1992; 75:447–452.
  30. Suhner A, Schlagenhauf P, Höfer I, Johnson R, Tschopp A, Steffen R. Effectiveness and tolerability of melatonin and zolpidem for the alleviation of jet lag. Aviat Space Environ Med 2001; 72:638–646.
  31. Suhner A, Schlagenhauf P, Johnson R, Tschopp A, Steffen R. Comparative study to determine the optimal melatonin dosage form for the alleviation of jet lag. Chronobiol Int 1998; 15:655–666.
  32. Paul MA, Gray G, Sardana TM, Pigeau RA. Melatonin and zopiclone as facilitators of early circadian sleep in operational air transport crews. Aviat Space Environ Med 2004; 75:439–443.
  33. Petrie K, Dawson AG, Thompson L, Brook R. A double-blind trial of melatonin as a treatment for jet lag in international cabin crew. Biol Psychiatry 1993; 33:526–530.
  34. Petrie K, Conaglen JV, Thompson L, Chamberlain K. Effect of melatonin on jet lag after long haul flights. BMJ 1989; 298:705–707.
  35. Arendt J, Aldhous M, Marks V. Alleviation of jet lag by melatonin: preliminary results of controlled double blind trial. Br Med J (Clin Res Ed) 1986; 292:1170.
  36. Claustrat B, Brun J, David M, Sassolas G, Chazot G. Melatonin and jet lag: confirmatory result using a simplified protocol. Biol Psychiatry 1992; 32:705–711.
  37. Committee on the Framework for Evaluating the Safety of Dietary Supplements, Food and Nutrition Board, Board on Life Sciences, Institute of Medicine and National Research Council of the National Academies. Dietary supplements: a framework for evaluating safety. Washington, DC: The National Academies Press; 2005.
  38. Jamieson AO, Zammit GK, Rosenberg RS, Davis JR, Walsh JK. Zolpidem reduces the sleep disturbance of jet lag. Sleep Med 2001; 2:423–430.
  39. Hirschfeld U, Moreno-Reyes R, Akseki E, et al. Progressive elevation of plasma thyrotropin during adaptation to simulated jet lag: effects of treatment with bright light or zolpidem. J Clin Endocrinol Metab 1996; 81:3270–3277.
  40. Daurat A, Benoit O, Buguet A. Effects of zopiclone on the rest/activity rhythm after a westward flight across five time zones. Psychopharmacology (Berl) 2000; 149:241–245.
  41. Piérard C, Beaumont M, Enslen M, et al. Resynchronization of hormonal rhythms after an eastbound flight in humans: effects of slow-release caffeine and melatonin. Eur J Appl Physiol 2001; 85:144–150.
  42. Beaumont M, Batéjat D, Piérard C, et al. Caffeine or melatonin effects on sleep and sleepiness after rapid eastward transmeridian travel. J Appl Physiol 2004; 96:50–58.
  43. Rosenberg RP, Bogan RK, Tiller JM, et al. A phase 3, double-blind, randomized, placebo-controlled study of armodafinil for excessive sleepiness associated with jet lag disorder. Mayo Clin Proc 2010; 85:630–638.
  44. Drake CL, Roehrs T, Richardson G, Walsh JK, Roth T. Shift work sleep disorder: prevalence and consequences beyond that of symptomatic day workers. Sleep 2004; 27:1453–1462.
  45. Härmä MI, Hakola T, Akerstedt T, Laitinen JT. Age and adjustment to night work. Occup Environ Med 1994; 51:568–573.
  46. Smith L, Mason C. Reducing night shift exposure: a pilot study of rota, night shift and age effects on sleepiness and fatigue. J Hum Ergol (Tokyo) 2001; 30:83–87.
  47. Boivin DB, James FO. Circadian adaptation to night-shift work by judicious light and darkness exposure. J Biol Rhythms 2002; 17:556–567.
  48. Crowley SJ, Lee C, Tseng CY, Fogg LF, Eastman CI. Combinations of bright light, scheduled dark, sunglasses, and melatonin to facilitate circadian entrainment to night shift work. J Biol Rhythms 2003; 18:513–523.
  49. Folkard S, Arendt J, Clark M. Can melatonin improve shift workers’ tolerance of the night shift? Some preliminary findings. Chronobiol Int 1993; 10:315–320.
  50. Yoon IY, Song BG. Role of morning melatonin administration and attenuation of sunlight exposure in improving adaptation of nightshift workers. Chronobiol Int 2002; 19:903–913.
  51. James M, Tremea MO, Jones JS, Krohmer JR. Can melatonin improve adaptation to night shift? Am J Emerg Med 1998; 16:367–370.
  52. Jorgensen KM, Witting MD. Does exogenous melatonin improve day sleep or night alertness in emergency physicians working night shifts? Ann Emerg Med 1998; 31:699–704.
  53. Walsh JK, Schweitzer PK, Anch AM, Muehlbach MJ, Jenkins NA, Dickins QS. Sleepiness/alertness on a simulated night shift following sleep at home with triazolam. Sleep 1991; 14:140–146.
  54. Walsh JK, Sugerman JL, Muehlbach MJ, Schweitzer PK. Physiological sleep tendency on a simulated night shift: adaptation and effects of triazolam. Sleep 1988; 11:251–264.
  55. Porcù S, Bellatreccia A, Ferrara M, Casagrande M. Performance, ability to stay awake, and tendency to fall asleep during the night after a diurnal sleep with temazepam or placebo. Sleep 1997; 20:535–541.
  56. Monchesky TC, Billings BJ, Phillips R, Bourgouin J. Zopiclone in insomniac shiftworkers. Evaluation of its hypnotic properties and its effects on mood and work performance. Int Arch Occup Environ Health 1989; 61:255–259.
  57. Moon CA, Hindmarch I, Holland RL. The effect of zopiclone 7.5 mg on the sleep, mood and performance of shift workers. Int Clin Psychopharmacol 1990; 5(suppl 2):79–83.
  58. Puca FM, Perrucci S, Prudenzano MP, et al. Quality of life in shift work syndrome. Funct Neurol 1996; 11:261–268.
  59. Czeisler CA, Walsh JK, Roth T, et al; US Modafinil in Shift Work Sleep Disorder Study Group. Modafinil for excessive sleepiness associated with shift-work sleep disorder. N Engl J Med 2005; 353:476–486.
  60. Schweitzer PK, Randazzo AC, Stone K, Erman M, Walsh JK. Laboratory and field studies of naps and caffeine as practical countermeasures for sleep-wake problems associated with night work. Sleep 2006; 29:39–50.
  61. Sallinen M, Härmä M, Akerstedt T, Rosa R, Lillqvist O. Promoting alertness with a short nap during a night shift. J Sleep Res 1998; 7:240–247.
  62. Garbarino S, Mascialino B, Penco MA, et al. Professional shift-work drivers who adopt prophylactic naps can reduce the risk of car accidents during night work. Sleep 2004; 27:1295–1302.
  63. Purnell MT, Feyer AM, Herbison GP. The impact of a nap opportunity during the night shift on the performance and alertness of 12-h shift workers. J Sleep Res 2002; 11:219–227.
  64. Smith MR, Fogg LF, Eastman CI. A compromise circadian phase position for permanent night work improves mood, fatigue, and performance. Sleep 2009; 32:1481–1489.
Article PDF
Author and Disclosure Information

Bhanu P. Kolla, MBBS
Mayo Center for Sleep Medicine, Department of Psychiatry and Psychology, Mayo Clinic College of Medicine, Rochester, MN

R. Robert Auger, MD
Mayo Center for Sleep Medicine, Department of Psychiatry and Psychology, Mayo Clinic College of Medicine, Rochester, MN

Address: R. Robert Auger, MD, Mayo Center For Sleep Medicine, Mayo Clinic College of Medicine, Gonda Building 17W, 200 First Street SW, Rochester, MN 55905; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 78(10)
Publications
Topics
Page Number
675-684
Sections
Author and Disclosure Information

Bhanu P. Kolla, MBBS
Mayo Center for Sleep Medicine, Department of Psychiatry and Psychology, Mayo Clinic College of Medicine, Rochester, MN

R. Robert Auger, MD
Mayo Center for Sleep Medicine, Department of Psychiatry and Psychology, Mayo Clinic College of Medicine, Rochester, MN

Address: R. Robert Auger, MD, Mayo Center For Sleep Medicine, Mayo Clinic College of Medicine, Gonda Building 17W, 200 First Street SW, Rochester, MN 55905; e-mail [email protected]

Author and Disclosure Information

Bhanu P. Kolla, MBBS
Mayo Center for Sleep Medicine, Department of Psychiatry and Psychology, Mayo Clinic College of Medicine, Rochester, MN

R. Robert Auger, MD
Mayo Center for Sleep Medicine, Department of Psychiatry and Psychology, Mayo Clinic College of Medicine, Rochester, MN

Address: R. Robert Auger, MD, Mayo Center For Sleep Medicine, Mayo Clinic College of Medicine, Gonda Building 17W, 200 First Street SW, Rochester, MN 55905; e-mail [email protected]

Article PDF
Article PDF

For people who must travel long distances east or west by air or who must work the night shift, some relief is possible for the grogginess and disorientation that often ensue. The problems arise from the body’s internal clock being out of sync with the sun. Part of the solution involves helping reset the internal clock, or sometimes, preventing it from resetting itself.

This review will focus on jet lag sleep disorder and shift work sleep disorder, with an emphasis on the causes, the clinical assessment, and evidence-based treatment options.

WHEN THE INTERNAL CLOCK IS OUT OF SYNC WITH THE SUN

Circadian rhythm sleep disorders are the result of dyssynchrony between the body’s internal clock and the external 24-hour light-dark cycle. Patients typically present with insomnia or excessive somnolence. These disorders may represent an intrinsic disorder, such as delayed or advanced sleep-phase disorder, or may be the result of transmeridian air travel or working nonstandard shifts.1

Modified with permission of Elsevier LTD. From Beersma DG, Gordijn MC. Circadian control of the sleep-wake cycle. Physiol Behav 2007; 90:190–195.
Figure 1. The two-process model of sleep regulation. Sleep propensity grows during periods of wakefulness and abates during sleeping periods. The homeostatic process (process S, blue line) is limited to a range of values determined by a clock-like circadian process (process C, red lines) that varies with the biological time of day.
Sleep and wakefulness are conceptually governed by two processes, “process S” and “process C.”2 The homeostatic drive to sleep (process S) is proportional to the duration of sleep restriction, and it becomes maximal at about 40 hours.3 In contrast, process C creates a drive for wakefulness that variably opposes process S and depends on circadian rhythms intrinsic to the organism (Figure 1).4 Coordinating this sleep-wake rhythm (and numerous other behavioral and physiologic processes) are the neurons of the suprachiasmatic nuclei of the hypothalamus.5–8

The intrinsic human circadian period is typically slightly longer than 24 hours,9 but it is synchronized (“entrained”) to the 24-hour day by various environmental inputs, or zeitgebers (German for “time-givers”), the most important of which is light exposure.10

When the internal clock is out of sync with the sun, the misalignment can result in daytime anergia, alternating complaints of insomnia and hypersomnia, and various other symptoms, including emotional disturbances and gastrointestinal distress. In particular, long-distance air travel or a nocturnal work schedule overwhelms the ability of the intrinsic clock to adjust rapidly enough, and the result is jet lag sleep disorder or shift work sleep disorder.1

TOOLS TO EVALUATE CIRCADIAN RHYTHM DISTURBANCES

A thorough history is the cornerstone of the evaluation for all sleep disorders, and if a circadian rhythm disturbance is suspected, the sleep history is supplemented with specific questions to establish a clear diagnosis.

When assessing for jet lag disorder, ask about:

  • The patient’s degree of sleep deprivation before and during travel
  • His or her innate circadian preference (ie, whether he or she is a “night owl” or “early bird”)
  • Patterns of alcohol and caffeine consumption.

When assessing for shift work disorder, include the above questions and also look for differences in the sleep-wake schedule on working days vs nonworking days, as well as external contributors to poor sleep quality (eg, the degree to which daytime sleep is not “protected”).

The following tools help in acquiring this information.

Sleep diary

In a sleep diary or log, patients record the times that they take naps, maintain consolidated sleep, and subsequently arise. The diary also prompts the patient for information about sleep latency, wakefulness after sleep onset, time in bed, medication and caffeine intake, and the restorative quality of sleep.

While the sleep diary by itself may provide insight into counteractive sleep-related behaviors and misperceptions the patient may have, compliance is often limited. Therefore, the sleep diary is best used in conjunction with actigraphy.

Actigraphy

An actigraph is a wristwatch-size motion detector, typically worn continuously for 7 days or longer. The data it gathers and stores serve as a surrogate measure of various sleep-wake variables.11

Either a sleep diary or actigraphy is required to demonstrate the stability of sleep patterns and circadian preference, but the actigraph typically generates more reliable data.11,12 It is also valuable in assessing the response to treatment of circadian rhythm sleep disorders.13

Are you an early bird or a night owl?

The Morningness-Eveningness questionnaire contains 19 items. Night owls tend to score lower on it than early birds do.14 This information may help some people avoid situations in which they may not do well, such as an early bird going on a permanent night-shift schedule.

Other assessment tools

Polysomnography is used primarily to rule out sleep-disordered breathing; it is not indicated for routine evaluation of circadian rhythm sleep disorders.

The minimum core body temperature and the peak melatonin secretion follow a 24-hour cycle. Although these measures are often used in research, they are not routinely used in clinical practice. (The minimum core body temperature is discussed further below.)

 

 

JET LAG SLEEP DISORDER

Jet lag results from air travel across multiple time zones, with a resultant discordance between the internal circadian clock and the destination’s light-dark cycle. Most sufferers report sleeping poorly at night and feeling groggy during the day, and some also experience general malaise and gastrointestinal distress.1

The severity depends on a number of variables.

Going west is easier than going east

Westward travel is normally less taxing than eastward travel, as it requires setting one’s internal clock later rather than earlier. Presumably, because the circadian period tends to exceed 24 hours, we can move our internal clock later by about 2 hours per day, but we can move it earlier by only 1 to 1.5 hours.15,16

The more time zones crossed, the longer it takes the circadian pacemaker to re-entrain and the longer-lasting and more severe are the symptoms of jet lag. Travel across one or two time zones is only transiently troublesome.

Does age affect jet lag?

Whether age affects the severity of jet lag is not yet known.

In a study of simulated jet lag (requiring a 6-hour advance), middle-aged people (ages 37 to 52) experienced a greater degree of fragmented sleep on polysomnography than younger ones (ages 18 to 25). The older group also had greater impairment in daytime alertness, suggesting that phase tolerance—ie, the ability to sleep at an abnormal time in the circadian cycle17—decreases with age. However, two field studies involving both eastward and westward travel yielded the opposite results, suggesting that older age may actually protect against jet lag.18–20

Methodologic differences preclude direct comparisons of the studies, as do differences in the age groups studied.

Light exposure can help or hurt, depending on the timing

Reprinted with permission of Elsevier LTD. From Burgess HJ, et al. Bright light, dark, and melatonin can promote circadian adaptation in night shift workers. Sleep Med Rev 2002; 6:407–420.
Figure 2. A schematic human phase-response curve to light (blue line) and a one to exogenous melatonin (red line). The y axis shows the direction and relative magnitude of the phase shift produced by the administration of light or melatonin at various times, which are shown on the x axis. This graph shows typical times and phase relationships among these rhythms when the circadian clock is entrained to a 24-hour day. For individuals with earlier or later circadian rhythms, the local time axis should be adjusted accordingly. The light phase-response curve is a schematic based on the results of numerous studies. The melatonin curve is based on a single study using 0.5-mg doses of melatonin.22
Light exposure is of primary importance in shifting the circadian clock, and the direction of the shift depends on the timing of the exposure (Figure 2).20–22

Our core body temperature dips to its lowest point about 2 to 3 hours before we habitually awake. Exposure to bright light in the hours leading up to this minimum (the inverted triangle in Figure 2) sets our internal clock later (a phase delay)—desirable, say, for someone travelling from New York City to Los Angeles. Conversely, exposure to bright light after this temperature minimum sets the clock earlier.

Inadvertent shifting of circadian phase in the wrong direction (“antidromic re-entrainment”) is common and delays circadian reacclimation and the dissipation of jet lag symptoms.

Burgess HJ, Eastman CT. Prevention of jet lag. American College of Physicians, 2010. Modified with permission of the American College of Physicians.
Figure 3. Diagram demonstrating a flight from Chicago to Paris, seven time zones east. Times when darkness and light should be sought are denoted by the letters “D” for darkness and “L” for light. The inverted triangles represent the minimum core body temperature. Subsequent to arrival, the depicted light-dark pattern should result in average daily phase shifts of 1 hour.
For example (Figure 3),23 a typical flight from Chicago to Paris (seven time zones to the east) arrives there early in the morning Paris time. Although the clocks at Charles de Gaulle airport say 08:00, the traveler’s internal clock says it is still 01:00. Furthermore, his or her core body temperature will reach its minimum at about 04:00 Chicago time, or 11:00 Paris time. If the traveller decides to go for a walk right away, the light exposure will promote a phase delay rather than the desired phase advance. Therefore, circadian re-entrainment will be relatively prolonged.24

We discuss ways to reduce antidromic reentrainment in more detail further below.

Other factors

Other factors that contribute to travel fatigue include sleep deprivation (before the flight or en route), acute discomfort as the plane ascends to its cruising altitude,25 and excessive alcohol or caffeine intake during the flight. Although the effects of these factors rapidly diminish once one reaches the travel destination, jet lag will persist until circadian re-entrainment occurs.15

NONDRUG THERAPIES FOR JET LAG SLEEP DISORDER

The goal of treatment is to realign the circadian rhythm in the most rapid and efficient way and to minimize symptoms in the meantime. Frequent shifts to different time zones, often required in business travel, are very difficult to accommodate, and business travelers actually may do better if they remain on their home-based schedule.

One study compared keeping home-based sleep hours as opposed to adopting local sleep hours during a 2-day stay after a 9-hour westward flight.26 Travelers who remained on home-based hours were less sleepy and had lower (ie, better) global jet lag ratings than those who adopted local sleep hours, in part because of better sleep quality and duration. Nevertheless, about one-third of the participants said they preferred to adhere to the local schedule.

Strategic avoidance of, and exposure to, light

If the traveler intends to remain at the destination long enough, he or she can adjust better (and avoid an antidromic process) via strategic avoidance of and exposure to light.24

Burgess HJ, Eastman CT. Prevention of jet lag. American College of Physicians, 2010. Modified with permission of the American College of Physicians.
Figure 4. The diagram demonstrates a flight from Los Angeles to Rome, nine time zones east. Times when darkness (letter D) and light (letter L) should be sought are also indicated. The inverted triangles represent the minimum core body temperature. The depicted light-dark pattern should result in average daily phase shifts of 2 hours.
Burgess and Eastman23,27 have devised plans to help in deciding whether a phase delay or phase advance is most desirable, depending on the number of time zones crossed. Generally, shifts earlier in time are required for eastward flights (as in Figure 3), and shifts later in time are required for westward flights. However, advances of 8 hours or more are more readily accomplished by a phase delay (Figure 4).23,28

People travelling east, who want to set their clocks ahead (a phase advance), need to keep to the dark in the 3 hours leading up to the time they reach their minimum core body temperature (depicted as “D” in Figure 3), and then expose themselves to light in the 3 hours immediately after (“L” in Figure 3). Thus, the traveler from Chicago to Paris would do better by avoiding light exposure on arrival, either by remaining in darkness in his or her hotel room, or by wearing dark sunglasses when outdoors. Wearing sunglasses during transit to the hotel would also help avoid light exposure.

When attempting to delay circadian rhythms, the opposite light-dark patterns are sought, as depicted in Figure 4. As flight and layover patterns often do not permit strict adherence to these measures, they represent idealized scenarios.

The first step is to make a grid with a concurrent listing of home and destination times. In the example in Figure 3, the person is traveling seven time zones east. On day 0, a rectangle is drawn around the times representing home-based sleep hours.

Next, we mark the time at which we expect the traveler’s core body temperature to reach its minimum (inverted triangle). If the person habitually sleeps no more than 7 hours per night, then we mark this point as 2 hours before his or her habitual wake-up time; if the person sleeps more than 7 hours, then we place it 3 hours before wake-up time.23,29 This process is repeated at the bottom of the grid to represent the desired sleep schedule at the traveler’s destination. The distance between the home and the destination-based minimum core body temperature symbols represents the required degree of circadian realignment.

If a phase advance is required (eg, if traveling from Chicago to Paris), the core body temperature symbol is drawn on day 1 in the same location as day 0. For each subsequent day, the symbol is moved 1 hour earlier (which is about how fast the internal clock can advance),15,27 until a clock time within 1 hour of the desired destination core body temperature time is reached or satisfactory sleep and daytime functioning are achieved (Figure 3). If a phase delay is required (eg, if traveling from New York City to Los Angeles), the symbol is drawn 2 hours later on day 1 than on day 0 (reflecting the greater ease with which delays are achieved),15,27 with subsequent daily shifts in 2-hour increments, again until a clock time within 1 hour of the desired destination minimum core body temperature time is reached or satisfactory sleep and daytime functioning are achieved.

Requirements for darkness can be met with protective eyewear (ie, dark sunglasses), or by remaining in a dark room. Light requirements can be met with outdoor exposure, with a commercial light box, or with a separate apparatus (eg, goggles, visors) portable enough for travel.
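
To illustrate how such a plan can be computed, the sketch below encodes the rules described above: the temperature minimum (Tmin) is placed 2 or 3 hours before habitual wake-up, advances move it 1 hour earlier per day, delays move it 2 hours later per day, and darkness and light are scheduled in the 3 hours around it. This is a simplified illustration under those stated assumptions, not the published Burgess and Eastman tool; the function names and the example itinerary are ours.

```python
# Sketch of a phase-shifting plan based on the rules in the text.  Assumptions:
# Tmin sits 2 h before habitual wake-up (3 h for sleepers of more than 7 h),
# an advance moves Tmin 1 h earlier per day, a delay moves it 2 h later per day
# (starting on day 1), and darkness/light are sought in the 3 h before/after
# Tmin.  The itinerary and times are illustrative only.

def tmin_hour(wake_hour: int, sleep_duration_h: float) -> int:
    """Estimated clock hour of the core body temperature minimum."""
    offset = 2 if sleep_duration_h <= 7 else 3
    return (wake_hour - offset) % 24

def plan(zones_east: int, wake_hour: int, sleep_duration_h: float, days: int):
    """Yield (day, Tmin, dark window, light window), all in destination time."""
    # Eastward trips of fewer than 8 zones are treated as a phase advance;
    # westward trips and advances of 8 h or more are treated as a delay.
    advance = 0 < zones_east < 8
    step = -1 if advance else 2
    tmin = (tmin_hour(wake_hour, sleep_duration_h) + zones_east) % 24
    for day in range(1, days + 1):
        if not (advance and day == 1):   # an advance keeps the home Tmin on day 1
            tmin = (tmin + step) % 24
        before = ((tmin - 3) % 24, tmin)
        after = (tmin, (tmin + 3) % 24)
        dark, light = (before, after) if advance else (after, before)
        yield day, tmin, dark, light

# Example: Chicago to Paris, 7 zones east, habitual wake-up 06:00, 7-hour sleeper.
# The published plans continue until Tmin is within about 1 hour of its target.
for day, tmin, dark, light in plan(7, wake_hour=6, sleep_duration_h=7, days=5):
    print(f"Day {day}: Tmin {tmin:02d}:00, "
          f"dark {dark[0]:02d}:00-{dark[1]:02d}:00, "
          f"light {light[0]:02d}:00-{light[1]:02d}:00")
```

For the Chicago-to-Paris example this prints a Tmin of 11:00 destination time on day 1, moving 1 hour earlier each day, with darkness sought in the 3 hours before it and light in the 3 hours after, matching the pattern depicted in Figure 3.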


DRUGS TO TREAT JET LAG SLEEP DISORDER

Melatonin appears safe

Most field studies have found that nightly doses of melatonin (2–8 mg) improve the quality of sleep30–32 or alleviate daytime symptoms of jet lag, or both.20,30,31,33–36 Immediate-release preparations appear to be more effective than slow-release ones.31 Although most studies looked exclusively at adaptation to eastward travel,30–32,35,36 one studied westward travel,33 and another assessed melatonin’s effects during both departure and return trips that traversed 11 time zones.34

In studies of preflight dosing, melatonin was scheduled for up to 3 days before departure (and en route in two instances),30,34 at clock hours corresponding to the nocturnal sleep period at the travel destination (consistent times daily), and then for a subsequent 3 to 4 days between a destination time of 22:00 and 00:00 hours (ie, at bedtime).30,31,34–36 Several other studies further simplified this regimen, with participants taking nocturnal melatonin only on arrival at the destination, either for eastward31,32 or for westward travel.33
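
As a concrete reading of that timing rule, the short sketch below converts an assumed destination bedtime into the home clock hour at which a preflight dose would be taken; the bedtime and route are examples of ours, not parameters from the cited trials.

```python
# Illustrative timing for preflight melatonin: doses are taken at the home-clock
# hour that corresponds to night-time at the destination.  The 23:00 bedtime and
# the 7-zone eastward route are assumptions for the example, not study values.

DEST_BEDTIME = 23   # assumed bedtime at the destination
ZONES_EAST = 7      # eg, Chicago to Paris

preflight_hour = (DEST_BEDTIME - ZONES_EAST) % 24
print(f"Preflight doses: {preflight_hour:02d}:00 home time, at the same time daily")
print("After arrival: 22:00 to 00:00 destination time (ie, at bedtime) for 3-4 days")
```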

The study involving solely westward travel (Los Angeles to New Zealand) was the only one of the positive studies that allowed a comparison between participants who started melatonin before departure (5 mg daily for 3 days, taken between 07:00 and 08:00 Los Angeles time) and continued it for 5 days after arrival (at 22:00 to 00:00 New Zealand time), and participants who began melatonin only on arrival.33 Jet lag outcomes were significantly better in the latter group.

An important caveat is that melatonin is sold over the counter as a nutritional supplement and is not regulated by the United States Food and Drug Administration (FDA), so verification of purity of the product is difficult.

A comprehensive review by the National Academy of Sciences stated that, given the available data, short-term use of melatonin in total daily doses of 10 mg or less in healthy adults appears to be safe.37

Benzodiazepine receptor agonists improve sleep, but maybe not sleepiness

The use of standard hypnotics during periods of circadian realignment appears to be commonplace but has not been well studied.20 Trials of the newer benzodiazepine receptor agonists—three studies of zolpidem (Ambien) 10 mg30,38,39 and two of zopiclone 5 to 7.5 mg32,40—found consistently favorable subjective30,38 and objective32,39,40 outcomes in counteracting jet-lag-induced insomnia (for both eastward and westward travel). (Note: Zopiclone is not available in the United States, but its enantiomer eszopiclone [Lunesta] is.) However, the evidence is less clear for daytime symptoms of jet lag, with outcomes reported as favorable,30 equivocal,40 or inaccessible.32,38,39

The discrepancy between studies incorporating systematic daytime assessments may be due to differential medication effects (zolpidem vs zopiclone).

Two studies compared these standard hypnotics with oral melatonin. One found that zopiclone 5 mg and melatonin 2 mg were equally beneficial with respect to sleep variables (other jet lag symptoms were not assessed).32 The other found that zolpidem 10 mg was superior to melatonin 5 mg for sleep and other jet lag symptoms, and that the combination of zolpidem and melatonin was no better than zolpidem alone.30

Importantly, however, adverse effects were more frequent in those taking zolpidem and included nausea, vomiting, and confusion.30 Although these effects were not deemed serious, 14 participants (10%) withdrew from the study.

Stimulants

Caffeine is commonly used to combat the sleepiness of jet lag, but only two controlled field studies have assessed its efficacy.41,42 Both used slow-release preparations at a daily dosage of 300 mg.

In one study, after an eastward flight traversing seven time zones, participants took the pill at 08:00 destination time every day for 5 days.41 Curiously, alertness and other jet lag symptoms were not assessed, but circadian rhythms (determined by levels of cortisol in saliva) were re-entrained at a more rapid rate with caffeine than with placebo, and to a degree comparable with that achieved by exogenous melatonin.

In a follow-up study by the same group, those receiving caffeine were objectively less sleepy (as assessed by multiple sleep latency tests) than those taking melatonin or placebo, but subjective differences between groups were not identified.42 Furthermore, those taking caffeine had significantly more nocturnal sleep complaints, as assessed both objectively and subjectively.

A recent randomized, double-blind, placebo-controlled trial of the stimulant armodafinil (Nuvigil) found that a dosage of 150 mg produced less sleepiness on multiple sleep latency testing and fewer jet lag symptoms than placebo.43

SHIFT WORK SLEEP DISORDER: DEFINITION, PREDISPOSING FACTORS

Shift work refers to nonstandard work schedules, including on-call duty, rotating shifts, and permanent night work. In the United States, one in five workers works a nonstandard shift.20

While shift work presents obvious difficulties, the diagnosis of shift work sleep disorder is reserved for those who have chronic insomnia or sleepiness at times that are not conducive to the externally demanded sleep-wake schedule, despite having the opportunity for sufficient daytime sleep.1 When defined in such a fashion, this disorder may afflict nearly a third of workers,44 with potential adverse effects on safety, health, and quality of life.

Older age is considered a risk factor for intolerance to shift work.20 In a study of physiologic phase shifts in response to night work, older workers were less able to recover after several night shifts.45 A large survey of police officers working the night shift supported the finding of more sleep disruption and on-duty sleepiness in older people.46


TREATMENT OF SHIFT WORK SLEEP DISORDER

Bright light at work, sunglasses on the way home

Various field studies have described hastening of circadian adaptation (and immediate alerting effects) during night shifts with the use of bright light.20

Boivin and James47 found that workers who received 6 hours of intermittent bright light during their shifts experienced significantly greater phase delays than those who received no such intervention. Those receiving bright light also wore sunglasses during the commute home (to protect from an undesired phase advance), and this has demonstrated favorable effects as an independent intervention.48

Drug treatment of shift work sleep disorder

Melatonin: Mixed results. Two field studies found that taking melatonin (5–6 mg) before the daytime sleep period had a favorable impact on subjective sleep quality.49,50 However, two other studies found no such benefit with doses ranging from 6 to 10 mg.51,52 Differences between these studies—eg, shift schedules, dosages, and the time the melatonin was taken—preclude definitive comparisons.

Effects of melatonin on workplace alertness are indeterminate because of inconsistent measurements of this variable. Importantly, a simulated shift work study found no phase-shifting advantages of melatonin in those who concomitantly used bright light during their work shift with or without morning protective eyewear.48

Hypnotic drugs. In simulation studies and field studies, people taking benzodiazepine receptor agonists have consistently said they sleep better.53–58 A simulation study noted additional benefit in the ability to stay alert during the night shift (assessed by maintenance of wakefulness testing),55 but two other studies saw no changes in manifest sleepiness (assessed with multiple sleep latency tests).53,54 These divergent findings may represent different effects on these two dimensions of sleepiness.

The only field study to assess post-sleep psychomotor performance found no impairments after taking 7.5 mg of zopiclone, a relatively long-acting nonbenzodiazepine hypnotic.57

Stimulants. In the largest trial to date of shift work sleep disorder, modafinil 200 mg (the only drug currently FDA-approved for shift work sleep disorder) had significant benefits compared with placebo with respect to objective measurements of workplace sleepiness, reaction time performance testing, and self-rated improvement of symptoms.59 Perhaps because of the low dose studied, both treated and untreated patients continued to manifest sleepiness within the pathologic range on objective testing.

Although the efficacy of caffeine is well documented as a countermeasure for sleepiness during experimentally induced sleep deprivation,20 very few field trials have specifically addressed impairments associated with shift work sleep disorder. In one study, caffeine at a dose of 4 mg/kg taken 30 minutes before starting a night shift provided objective improvement in both performance and alertness.60

Strategic napping is an additional practical intervention to promote alertness during night shifts, and cumulative data indicate that it provides objective and subjective improvements in alertness and performance.61,62 Earlier timed naps (ie, before or during the early portion of a shift) of short duration (ie, 20 minutes or less) are likely to produce maximal benefit, because they avoid sleep inertia (the grogginess or sleepiness that may follow a long nap), and also because they have no effect on the subsequent daytime sleep bout.61,63

Interventions may also be used in combination. For example, napping in conjunction with caffeine results in a greater degree of increased objective alertness than either intervention alone.60

How about days off?

The recommendations described here presume that shift workers maintain the workday sleep-wake schedule continuously, including when they are not at work. This is likely not a real-world scenario.

Smith et al64 developed a “compromise” phase position, whereby internal rhythms are optimized to facilitate alertness during work and sleepiness during the day, while allowing one to adopt a non-workday sleep schedule that maintains accessibility to family and social activities. In brief, non-workday sleep starts about 5.5 hours earlier than workday sleep; all sleep bouts are followed by brief exposure to bright light (to avoid excessive phase delay); and, as described previously, both workplace bright light and protection from morning light are implemented.
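
A minimal sketch of the schedule arithmetic follows; the 08:00 workday bedtime is an illustrative assumption, and the code is our reading of the description above rather than the protocol of Smith et al.

```python
# Illustrative "compromise" schedule for a permanent night worker: sleep on days
# off starts about 5.5 h earlier than workday sleep.  The 08:00 workday sleep
# onset is an assumption for the example.

WORKDAY_SLEEP_ONSET_H = 8.0   # bedtime after the night shift and commute home
EARLIER_ON_DAYS_OFF_H = 5.5

off_day_onset = (WORKDAY_SLEEP_ONSET_H - EARLIER_ON_DAYS_OFF_H) % 24

def fmt(hour: float) -> str:
    return f"{int(hour):02d}:{int(round((hour % 1) * 60)):02d}"

print(f"Workday sleep starts at {fmt(WORKDAY_SLEEP_ONSET_H)}")
print(f"Day-off sleep starts at {fmt(off_day_onset)} (about 5.5 hours earlier)")
# Per the description above, each sleep bout is followed by brief bright light,
# bright light is used during the shift, and morning light is avoided after work.
```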

Although further studies are needed to determine whether this regimen is practical in real life, study participants who achieved desired partial phase shifts had performance ratings on a par with baseline levels, and comparable to those in a group that achieved complete re-entrainment.64

Finally, all shift workers need to be encouraged to protect the daytime bedroom environment just as daytime workers protect their nighttime environment. Sleep should be sought in an appropriately darkened and quiet environment, phones and doorbells silenced, and appointments scheduled accordingly.


References
  1. International Classification of Sleep Disorders: Diagnostic and Coding Manual/American Academy of Sleep Medicine. 2nd ed. Westchester, IL: American Academy of Sleep Medicine; 2005.
  2. Borbély AA, Achermann P. Concepts and models of sleep regulation: an overview. J Sleep Res 1992; 1:63–79.
  3. Carskadon MA, Dement WC. Effects of total sleep loss on sleep tendency. Percept Mot Skills 1979; 48:495–506.
  4. Beersma DG, Gordijn MC. Circadian control of the sleep-wake cycle. Physiol Behav 2007; 90:190–195.
  5. Moore RY, Eichler VB. Loss of a circadian adrenal corticosterone rhythm following suprachiasmatic lesions in the rat. Brain Res 1972; 42:201–206.
  6. Stephan FK, Zucker I. Circadian rhythms in drinking behavior and locomotor activity of rats are eliminated by hypothalamic lesions. Proc Natl Acad Sci U S A 1972; 69:1583–1586.
  7. Welsh DK, Logothetis DE, Meister M, Reppert SM. Individual neurons dissociated from rat suprachiasmatic nucleus express independently phased circadian firing rhythms. Neuron 1995; 14:697–706.
  8. Ralph MR, Foster RG, Davis FC, Menaker M. Transplanted suprachiasmatic nucleus determines circadian period. Science 1990; 247:975–978.
  9. Czeisler CA, Duffy JF, Shanahan TL, et al. Stability, precision, and near-24-hour period of the human circadian pacemaker. Science 1999; 284:2177–2181.
  10. Waterhouse JM, DeCoursey PJ. Human circadian organization. In: Dunlap JC, Loros JJ, DeCoursey PJ, editors. Chronobiology: Biological Timekeeping. Sunderland, MA: Sinauer Associates; 2004:291–324.
  11. Morgenthaler T, Alessi C, Friedman L, et al; Standards of Practice Committee; American Academy of Sleep Medicine. Practice parameters for the use of actigraphy in the assessment of sleep and sleep disorders: an update for 2007. Sleep 2007; 30:519–529.
  12. Bradshaw DA, Yanagi MA, Pak ES, Peery TS, Ruff GA. Nightly sleep duration in the 2-week period preceding multiple sleep latency testing. J Clin Sleep Med 2007; 3:613–619.
  13. Morgenthaler TI, Lee-Chiong T, Alessi C, et al; Standards of Practice Committee of the American Academy of Sleep Medicine. Practice parameters for the clinical evaluation and treatment of circadian rhythm sleep disorders. An American Academy of Sleep Medicine report. Sleep 2007; 30:1445–1459.
  14. Horne JA, Ostberg O. A self-assessment questionnaire to determine morningness-eveningness in human circadian rhythms. Int J Chronobiol 1976; 4:97–110.
  15. Waterhouse J, Reilly T, Atkinson G, Edwards B. Jet lag: trends and coping strategies. Lancet 2007; 369:1117–1129.
  16. Eastman CI, Gazda CJ, Burgess HJ, Crowley SJ, Fogg LF. Advancing circadian rhythms before eastward flight: a strategy to prevent or reduce jet lag. Sleep 2005; 28:33–44.
  17. Moline ML, Pollak CP, Monk TH, et al. Age-related differences in recovery from simulated jet lag. Sleep 1992; 15:28–40.
  18. Waterhouse J, Edwards B, Nevill A, et al. Identifying some determinants of “jet lag” and its symptoms: a study of athletes and other travellers. Br J Sports Med 2002; 36:54–60.
  19. Tresguerres JA, Ariznavarreta C, Granados B, et al. Circadian urinary 6-sulphatoxymelatonin, cortisol excretion and locomotor activity in airline pilots during transmeridian flights. J Pineal Res 2001; 31:16–22.
  20. Sack RL, Auckley D, Auger RR, et al; American Academy of Sleep Medicine. Circadian rhythm sleep disorders: part I, basic principles, shift work and jet lag disorders. An American Academy of Sleep Medicine review. Sleep 2007; 30:1460–1483.
  21. Burgess HJ, Sharkey KM, Eastman CI. Bright light, dark and melatonin can promote circadian adaptation in night shift workers. Sleep Med Rev 2002; 6:407–420.
  22. Lewy AJ, Bauer VK, Saeeduddin A, et al. The human phase response curve (PRC) to melatonin is about 12 hours out of phase with the PRC to light. Chronobiol Int 1998; 15:71–83.
  23. Burgess HJ, Eastman CT. Prevention of Jet Lag. 2010. http://pier.acponline.org/physicians/screening/prev1015/prev1015.html. Accessed June 25, 2010.
  24. Daan S, Lewy AJ. Scheduled exposure to daylight: a potential strategy to reduce “jet lag” following transmeridian flight. Psychopharmacol Bull 1984; 20:566–568.
  25. Muhm JM, Rock PB, McMullin DL, et al. Effect of aircraft-cabin altitude on passenger discomfort. N Engl J Med 2007; 357:18–27.
  26. Lowden A, Akerstedt T. Retaining home-base sleep hours to prevent jet lag in connection with a westward flight across nine time zones. Chronobiol Int 1998; 15:365–376.
  27. Eastman CI, Burgess HJ. How to travel the world without jet lag. Sleep Med Clin 2009; 4:241–255.
  28. Revell VL, Eastman CI. How to trick mother nature into letting you fly around or stay up all night. J Biol Rhythms 2005; 20:353–365.
  29. Cagnacci A, Elliott JA, Yen SS. Melatonin: a major regulator of the circadian rhythm of core temperature in humans. J Clin Endocrinol Metab 1992; 75:447–452.
  30. Suhner A, Schlagenhauf P, Höfer I, Johnson R, Tschopp A, Steffen R. Effectiveness and tolerability of melatonin and zolpidem for the alleviation of jet lag. Aviat Space Environ Med 2001; 72:638–646.
  31. Suhner A, Schlagenhauf P, Johnson R, Tschopp A, Steffen R. Comparative study to determine the optimal melatonin dosage form for the alleviation of jet lag. Chronobiol Int 1998; 15:655–666.
  32. Paul MA, Gray G, Sardana TM, Pigeau RA. Melatonin and zopiclone as facilitators of early circadian sleep in operational air transport crews. Aviat Space Environ Med 2004; 75:439–443.
  33. Petrie K, Dawson AG, Thompson L, Brook R. A double-blind trial of melatonin as a treatment for jet lag in international cabin crew. Biol Psychiatry 1993; 33:526–530.
  34. Petrie K, Conaglen JV, Thompson L, Chamberlain K. Effect of melatonin on jet lag after long haul flights. BMJ 1989; 298:705–707.
  35. Arendt J, Aldhous M, Marks V. Alleviation of jet lag by melatonin: preliminary results of controlled double blind trial. Br Med J (Clin Res Ed) 1986; 292:1170.
  36. Claustrat B, Brun J, David M, Sassolas G, Chazot G. Melatonin and jet lag: confirmatory result using a simplified protocol. Biol Psychiatry 1992; 32:705–711.
  37. Committee on the Framework for Evaluating the Safety of Dietary Supplements, Food and Nutrition Board, Board on Life Sciences, Institute of Medicine and National Research Council of the National Academies. Dietary supplements: a framework for evaluating safety. Washington, DC: The National Academies Press; 2005.
  38. Jamieson AO, Zammit GK, Rosenberg RS, Davis JR, Walsh JK. Zolpidem reduces the sleep disturbance of jet lag. Sleep Med 2001; 2:423–430.
  39. Hirschfeld U, Moreno-Reyes R, Akseki E, et al. Progressive elevation of plasma thyrotropin during adaptation to simulated jet lag: effects of treatment with bright light or zolpidem. J Clin Endocrinol Metab 1996; 81:3270–3277.
  40. Daurat A, Benoit O, Buguet A. Effects of zopiclone on the rest/activity rhythm after a westward flight across five time zones. Psychopharmacology (Berl) 2000; 149:241–245.
  41. Piérard C, Beaumont M, Enslen M, et al. Resynchronization of hormonal rhythms after an eastbound flight in humans: effects of slow-release caffeine and melatonin. Eur J Appl Physiol 2001; 85:144–150.
  42. Beaumont M, Batéjat D, Piérard C, et al. Caffeine or melatonin effects on sleep and sleepiness after rapid eastward transmeridian travel. J Appl Physiol 2004; 96:50–58.
  43. Rosenberg RP, Bogan RK, Tiller JM, et al. A phase 3, double-blind, randomized, placebo-controlled study of armodafinil for excessive sleepiness associated with jet lag disorder. Mayo Clin Proc 2010; 85:630–638.
  44. Drake CL, Roehrs T, Richardson G, Walsh JK, Roth T. Shift work sleep disorder: prevalence and consequences beyond that of symptomatic day workers. Sleep 2004; 27:1453–1462.
  45. Härmä MI, Hakola T, Akerstedt T, Laitinen JT. Age and adjustment to night work. Occup Environ Med 1994; 51:568–573.
  46. Smith L, Mason C. Reducing night shift exposure: a pilot study of rota, night shift and age effects on sleepiness and fatigue. J Hum Ergol (Tokyo) 2001; 30:83–87.
  47. Boivin DB, James FO. Circadian adaptation to night-shift work by judicious light and darkness exposure. J Biol Rhythms 2002; 17:556–567.
  48. Crowley SJ, Lee C, Tseng CY, Fogg LF, Eastman CI. Combinations of bright light, scheduled dark, sunglasses, and melatonin to facilitate circadian entrainment to night shift work. J Biol Rhythms 2003; 18:513–523.
  49. Folkard S, Arendt J, Clark M. Can melatonin improve shift workers’ tolerance of the night shift? Some preliminary findings. Chronobiol Int 1993; 10:315–320.
  50. Yoon IY, Song BG. Role of morning melatonin administration and attenuation of sunlight exposure in improving adaptation of nightshift workers. Chronobiol Int 2002; 19:903–913.
  51. James M, Tremea MO, Jones JS, Krohmer JR. Can melatonin improve adaptation to night shift? Am J Emerg Med 1998; 16:367–370.
  52. Jorgensen KM, Witting MD. Does exogenous melatonin improve day sleep or night alertness in emergency physicians working night shifts? Ann Emerg Med 1998; 31:699–704.
  53. Walsh JK, Schweitzer PK, Anch AM, Muehlbach MJ, Jenkins NA, Dickins QS. Sleepiness/alertness on a simulated night shift following sleep at home with triazolam. Sleep 1991; 14:140–146.
  54. Walsh JK, Sugerman JL, Muehlbach MJ, Schweitzer PK. Physiological sleep tendency on a simulated night shift: adaptation and effects of triazolam. Sleep 1988; 11:251–264.
  55. Porcù S, Bellatreccia A, Ferrara M, Casagrande M. Performance, ability to stay awake, and tendency to fall asleep during the night after a diurnal sleep with temazepam or placebo. Sleep 1997; 20:535–541.
  56. Monchesky TC, Billings BJ, Phillips R, Bourgouin J. Zopiclone in insomniac shiftworkers. Evaluation of its hypnotic properties and its effects on mood and work performance. Int Arch Occup Environ Health 1989; 61:255–259.
  57. Moon CA, Hindmarch I, Holland RL. The effect of zopiclone 7.5 mg on the sleep, mood and performance of shift workers. Int Clin Psychopharmacol 1990; 5(suppl 2):79–83.
  58. Puca FM, Perrucci S, Prudenzano MP, et al. Quality of life in shift work syndrome. Funct Neurol 1996; 11:261–268.
  59. Czeisler CA, Walsh JK, Roth T, et al; US Modafinil in Shift Work Sleep Disorder Study Group. Modafinil for excessive sleepiness associated with shift-work sleep disorder. N Engl J Med 2005; 353:476–486.
  60. Schweitzer PK, Randazzo AC, Stone K, Erman M, Walsh JK. Laboratory and field studies of naps and caffeine as practical countermeasures for sleep-wake problems associated with night work. Sleep 2006; 29:39–50.
  61. Sallinen M, Härmä M, Akerstedt T, Rosa R, Lillqvist O. Promoting alertness with a short nap during a night shift. J Sleep Res 1998; 7:240–247.
  62. Garbarino S, Mascialino B, Penco MA, et al. Professional shift-work drivers who adopt prophylactic naps can reduce the risk of car accidents during night work. Sleep 2004; 27:1295–1302.
  63. Purnell MT, Feyer AM, Herbison GP. The impact of a nap opportunity during the night shift on the performance and alertness of 12-h shift workers. J Sleep Res 2002; 11:219–227.
  64. Smith MR, Fogg LF, Eastman CI. A compromise circadian phase position for permanent night work improves mood, fatigue, and performance. Sleep 2009; 32:1481–1489.
  61. Sallinen M, Härmä M, Akerstedt T, Rosa R, Lillqvist O. Promoting alertness with a short nap during a night shift. J Sleep Res 1998; 7:240247.
  62. Garbarino S, Mascialino B, Penco MA, et al. Professional shift-work drivers who adopt prophylactic naps can reduce the risk of car accidents during night work. Sleep 2004; 27:12951302.
  63. Purnell MT, Feyer AM, Herbison GP. The impact of a nap opportunity during the night shift on the performance and alertness of 12-h shift workers. J Sleep Res 2002; 11:219227.
  64. Smith MR, Fogg LF, Eastman CI. A compromise circadian phase position for permanent night work improves mood, fatigue, and performance. Sleep 2009; 32:14811489.
Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
675-684
Publications
Topics
Article Type
Display Headline
Jet lag and shift work sleep disorders: How to help reset the internal clock
Sections
Inside the Article

KEY POINTS

  • Symptoms include daytime anergia, alternating complaints of insomnia and hypersomnia, emotional disturbances, and gastrointestinal distress. The severity depends on the degree and the duration of dyssynchrony, as well as on innate factors such as age and whether the patient is an “early bird” or a “night owl.”
  • Drug treatment addresses sleep-related symptoms (eg, somnolence, insomnia) and attempts to hasten circadian reacclimation.
  • Exposure to bright light in the hours leading up to the patient’s minimum core body temperature tends to push the internal clock later in time, whereas bright light in the hours immediately afterward pushes the clock earlier in time.
Disallow All Ads
Alternative CME
Use ProPublica
Article PDF Media

Update in intensive care medicine: Studies that challenged our practice in the last 5 years

Article Type
Changed
Fri, 11/10/2017 - 08:42
Display Headline
Update in intensive care medicine: Studies that challenged our practice in the last 5 years

We have seen significant growth in clinical research in critical care medicine in the last decade. Advances have been made in many important areas in this field; of these, advances in treating septic shock and acute respiratory distress syndrome (ARDS), and also in supportive therapies for critically ill patients (eg, sedatives, insulin), have perhaps received the most attention.

Of note, several once-established therapies in these areas have failed the test of time, as the result of evidence from more-recent clinical trials. For example, recent studies have shown that a pulmonary arterial catheter does not improve outcomes in patients with ARDS. Similarly, what used to be “optimal” fluid management in patients with ARDS is no longer considered appropriate.

In this review, we summarize eight major studies in critical care medicine published in the last 5 years, studies that have contributed to changes in our practice in the intensive care unit (ICU).

FLUID MANAGEMENT IN ARDS

Key points

  • In patients with acute lung injury (ALI) and ARDS, fluid restriction is associated with better outcomes than a liberal fluid policy.
  • A pulmonary arterial catheter is not necessary and, compared with a central venous catheter, may result in more complications in patients with ALI and ARDS.

Background

Fluid management practices in patients with ARDS have been extremely variable. Two different approaches are commonly used: the liberal or “wet” approach to optimize tissue perfusion and the “dry” approach, which focuses on reducing lung edema. Given that most deaths attributed to ARDS result from extrapulmonary organ failure, aggressive fluid restriction has been the less popular approach.

Additionally, although earlier studies and meta-analyses suggested that the use of a pulmonary arterial catheter was not associated with better outcomes in critically ill patients,1 controversy remained regarding the value of a pulmonary arterial catheter compared with a central venous catheter in guiding fluid management in patients with ARDS, and data were insufficient to prove one strategy better than the other.

The Fluids and Catheter Treatment Trial (FACTT)

NATIONAL HEART, LUNG, AND BLOOD INSTITUTE ACUTE RESPIRATORY DISTRESS SYNDROME (ARDS) CLINICAL TRIALS NETWORK; WIEDEMANN HP, WHEELER AP, BERNARD GR, ET AL. COMPARISON OF TWO FLUID-MANAGEMENT STRATEGIES IN ACUTE LUNG INJURY. N ENGL J MED 2006; 354:2564–2575.

NATIONAL HEART, LUNG, AND BLOOD INSTITUTE ACUTE RESPIRATORY DISTRESS SYNDROME (ARDS) CLINICAL TRIALS NETWORK; WHEELER AP, BERNARD GR, THOMPSON BT, ET AL. PULMONARY-ARTERY VERSUS CENTRAL VENOUS CATHETER TO GUIDE TREATMENT OF ACUTE LUNG INJURY. N ENGL J MED 2006; 354:2213–2224.

The Fluids and Catheter Treatment Trial (FACTT) compared two fluid strategies2 and also the utility of a pulmonary arterial catheter vs a central venous catheter3 in patients with ALI or ARDS.

This two-by-two factorial trial randomized 1,000 patients to be treated according to either a conservative (fluid-restrictive or “dry”) or a liberal (“wet”) fluid management strategy for 7 days. Additionally, they were randomly assigned to receive either a central venous catheter or a pulmonary arterial catheter. The trial thus had four treatment groups:

  • Fluid-restricted and a central venous catheter, with a goal of keeping the central venous pressure below 4 mm Hg
  • Fluid-restricted and a pulmonary arterial catheter: fluids were restricted and diuretics were given to keep the pulmonary artery occlusion pressure below 8 mm Hg
  • Fluid-liberal and a central venous catheter: fluids were given to keep the central venous pressure between 10 and 14 mm Hg
  • Fluid-liberal and a pulmonary arterial catheter: fluids were given to keep the pulmonary artery occlusion pressure between 14 and 18 mm Hg.
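
As a rough illustration of the protocol logic only, the sketch below (in Python, not taken from the FACTT study documents; the function and variable names are ours) maps each of the four study arms to the filling-pressure target described above.

    # Illustrative sketch only: filling-pressure targets for the four FACTT arms.
    def factt_target(strategy, catheter):
        """strategy: 'dry' (conservative) or 'wet' (liberal);
        catheter: 'cvc' (central venous) or 'pac' (pulmonary arterial)."""
        targets = {
            ('dry', 'cvc'): ('central venous pressure', 'below 4 mm Hg'),
            ('dry', 'pac'): ('pulmonary artery occlusion pressure', 'below 8 mm Hg'),
            ('wet', 'cvc'): ('central venous pressure', '10-14 mm Hg'),
            ('wet', 'pac'): ('pulmonary artery occlusion pressure', '14-18 mm Hg'),
        }
        return targets[(strategy, catheter)]

    # Example: the conservative arm guided by a pulmonary arterial catheter
    print(factt_target('dry', 'pac'))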

The primary end point was the mortality rate at 60 days. Secondary end points included the number of ventilator-free days and organ-failure-free days and parameters of lung physiology. All patients were managed with a low-tidal-volume strategy.

The ‘dry’ strategy was better

The cumulative fluid balance was −136 mL ± 491 mL in the “dry” group and 6,992 mL ± 502 mL in the “wet” group, a difference of more than 7 L (P < .0001). Of note, before randomization, the patients were already fluid-positive, with a mean total fluid balance of +2,700 mL.2

At 60 days, no statistically significant difference in mortality rate was seen between the fluid-management groups (25.5% in the dry group vs 28.4% in the wet group; P = .30). Nevertheless, patients in the dry group had better oxygenation indices and lung injury scores (including lower plateau airway pressure), resulting in more ventilator-free days (14.6 ± 0.5 vs 12.1 ± 0.5; P = .0002) and ICU-free days (13.4 ± 0.4 vs 11.2 ± 0.4; P = .0003).2

Although those in the dry-strategy group had a slightly lower cardiac index and mean arterial pressure, they did not have a higher incidence of shock.

More importantly, the dry group did not have a higher rate of nonpulmonary organ failure. Serum creatinine and blood urea nitrogen concentrations were slightly higher in this group, but this was not associated with a higher incidence of renal failure or the use of dialysis (10% in the dry-strategy group vs 14% in the wet-strategy group; P = .0642).2

No advantage with a pulmonary arterial catheter

The mortality rate did not differ between the catheter groups. However, the patients who received a pulmonary arterial catheter stayed in the ICU 0.2 days longer and had twice as many nonfatal cardiac arrhythmias as those who received a central venous catheter.3

Comments

The liberal fluid-strategy group had fluid balances similar to those seen in previous National Institutes of Health ARDS Network trials in which fluid management was not controlled. This suggests that the liberal fluid strategy reflects usual clinical practice.

Although the goals used in this study (central venous pressure < 4 mm Hg or pulmonary artery occlusion pressure < 8 mm Hg) could be difficult to achieve in clinical practice, a conservative strategy of fluid management is preferred in patients with ALI or ARDS, given the benefits observed in this trial.

A pulmonary arterial catheter is not indicated to guide hemodynamic management of patients with ARDS.

 

 

CORTICOSTEROID USE IN ARDS

Key points

  • In selected patients with ARDS, the prolonged use of corticosteroids may result in better oxygenation and a shorter duration of mechanical ventilation.
  • Late use of corticosteroids in patients with ARDS (> 14 days after diagnosis) is not indicated and may increase the risk of death.
  • The role of corticosteroids in early ARDS (< 7 days after diagnosis) remains controversial.

Background

Systemic corticosteroid therapy was commonly used in ARDS patients in the 1970s and 1980s. However, a single-center study published in the late 1980s showed that a corticosteroid in high doses (methylprednisolone 30 mg/kg) resulted in more complications and was not associated with a lower mortality rate.4 On the other hand, a small study that included only patients with persistent ARDS (defined as ARDS lasting for more than 7 days) subsequently showed that oxygenation was significantly better and that fewer patients died while in the hospital with the use of methylprednisolone 2 mg/kg for 32 days.5

In view of these divergent findings, the ARDS Network decided to perform a study to help understand the role of corticosteroids in ARDS.

The Late Steroid Rescue Study (LaSRS)

STEINBERG KP, HUDSON LD, GOODMAN RB, ET AL; NATIONAL HEART, LUNG, AND BLOOD INSTITUTE ACUTE RESPIRATORY DISTRESS SYNDROME (ARDS) CLINICAL TRIALS NETWORK. EFFICACY AND SAFETY OF CORTICOSTEROIDS FOR PERSISTENT ACUTE RESPIRATORY DISTRESS SYNDROME. N ENGL J MED 2006; 354:1671–1684.

The Late Steroid Rescue Study (LaSRS),6 a double-blind, multicenter trial, randomly assigned 180 patients with persistent ARDS (defined as ongoing disease 7–28 days after its onset) to receive methylprednisolone or placebo for 21 days.

Methylprednisolone was given in an initial dose of 2 mg/kg of predicted body weight followed by a dose of 0.5 mg/kg every 6 hours for 14 days and then a dose of 0.5 mg/kg every 12 hours for 7 days, and then it was tapered over 2 to 4 days and discontinued. It could be discontinued if 21 days of treatment were completed or if the patient was able to breathe without assistance.
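
To make the dosing schedule concrete, the short calculation below works it out for a hypothetical patient with a predicted body weight of 70 kg; the weight, variable names, and layout are ours and are not part of the trial protocol.

    # Illustrative only: the LaSRS methylprednisolone schedule described above,
    # computed for a hypothetical predicted body weight (PBW) of 70 kg.
    pbw_kg = 70
    loading_dose_mg = 2.0 * pbw_kg            # single loading dose, 2 mg/kg
    daily_dose_days_1_14 = 0.5 * pbw_kg * 4   # 0.5 mg/kg every 6 hours
    daily_dose_days_15_21 = 0.5 * pbw_kg * 2  # 0.5 mg/kg every 12 hours

    print(f"Loading dose: {loading_dose_mg:.0f} mg")
    print(f"Days 1-14: {daily_dose_days_1_14:.0f} mg/day in divided doses")
    print(f"Days 15-21: {daily_dose_days_15_21:.0f} mg/day, then taper over 2-4 days")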

The primary end point was the mortality rate at 60 days. Secondary end points included the number of ventilator-free days, organ-failure-free days, and complications and the levels of biomarkers of inflammation.

No reduction in mortality rates with steroids

The mortality rates did not differ significantly in the corticosteroid group vs the placebo group at 60 days:

  • 29.2% with methylprednisolone (95% confidence interval [CI] 20.8–39.4)
  • 28.6% with placebo (95% CI 20.3–38.6, P = 1.0).

Mortality rates at 180 days were also similar between the groups:

  • 31.5% with methylprednisolone (95% CI 22.8–41.7)
  • 31.9% with placebo (95% CI 23.2–42.0, P = 1.0).

In patients randomized between 7 and 13 days after the onset of ARDS, the mortality rates were lower in the methylprednisolone group than in the placebo group but the differences were not statistically significant. The mortality rate in this subgroup was 27% vs 36% (P = .26) at 60 days and was 27% vs 39% (P = .14) at 180 days.

However, in patients randomized more than 14 days after the onset of ARDS, the mortality rate was significantly higher in the methylprednisolone group than in the placebo group at 60 days (35% vs 8%, P = .02) and at 180 days (44% vs 12%, P = .01).

Some benefit in secondary outcomes

At day 28, methylprednisolone was associated with:

  • More ventilator-free days (11.2 ± 9.4 vs 6.8 ± 8.5, P < .001)
  • More shock-free days (20.7 ± 8.9 vs 17.9 ± 10.2, P = .04)
  • More ICU-free days (8.9 ± 8.2 vs 6.7 ± 7.8, P = .02).

Similarly, pulmonary physiologic indices were better with methylprednisolone, specifically:

  • The ratio of Pao2 to the fraction of inspired oxygen at days 3, 4, and 14 (P < .05)
  • Plateau pressure at days 4, 5, and 7 (P < .05)
  • Static compliance at days 7 and 14 (P < .05).

In terms of side effects, methylprednisolone was associated with more events of myopathy or neuropathy (9 vs 0, P = .001), but there were no differences in the number of serious infections or in glycemic control.

Comments

Although other recent studies suggested that corticosteroid use may be associated with a reduction in mortality rates,7–9 LaSRS did not confirm this effect. Although the doses and length of therapy were similar in these studies, LaSRS was much larger and included patients from the ARDS Network.

Nevertheless, LaSRS was criticized because of strict exclusion criteria and poor enrollment (only 5% of eligible patients were included). Additionally, it was conducted over a period of time when some ICU practices varied significantly (eg, low vs high tidal volume ventilation, tight vs loose glucose control).

The role of corticosteroids in early ARDS (< 7 days after diagnosis) remains controversial at best. Table 1 summarizes recent studies that evaluated the use of corticosteroids in patients with ARDS.

INTERRUPTING SEDATION DURING MECHANICAL VENTILATION

Key points

  • Daily awakening of mechanically ventilated patients is safe.
  • Daily interruption of sedation in mechanically ventilated patients is associated with a shorter length of mechanical ventilation.

Background

Sedatives are a central component of critical care. Continuous infusions of narcotics, benzodiazepines, and anesthetic agents are frequently used to promote comfort in patients receiving mechanical ventilation.

Despite its widespread use in the ICU, there is little evidence that such sedation improves outcomes. Observational and randomized trials10–12 have shown that patients who receive continuous infusions of sedatives need to be on mechanical ventilation longer than those who receive intermittent dosing. Additionally, an earlier randomized controlled trial13 showed that daily interruption of sedative drug infusions decreased the duration of mechanical ventilation by almost 50% and resulted in a reduction in the length of stay in the ICU.

Despite these findings, many ICU physicians remain skeptical of the value of daily interruption of sedative medications and question the safety of this practice.

The Awakening and Breathing Controlled (ABC) trial

GIRARD TD, KRESS JP, FUCHS BD, ET AL. EFFICACY AND SAFETY OF A PAIRED SEDATION AND VENTILATOR WEANING PROTOCOL FOR MECHANICALLY VENTILATED PATIENTS IN INTENSIVE CARE (AWAKENING AND BREATHING CONTROLLED TRIAL): A RANDOMISED CONTROLLED TRIAL. LANCET 2008; 371:126–134.

The Awakening and Breathing Controlled (ABC) trial14 was a multicenter, randomized controlled trial that included 336 patients who required at least 12 consecutive hours of mechanical ventilation. All patients had to be receiving patient-targeted sedation.

Those in the intervention group (n = 168) had their sedation interrupted every day, followed by a clinical assessment to determine whether they could be allowed to try breathing spontaneously. The control group (n = 168) also received a clinical assessment for a trial of spontaneous breathing, while their sedation was continued as usual.

In patients in the intervention group who failed the screening for a spontaneous breathing trial, the sedatives were resumed at half the previous dose. Criteria for failure on the spontaneous breathing trial included any of the following: anxiety, agitation, respiratory rate more than 35 breaths per minute for 5 minutes or longer, cardiac arrhythmia, oxygen saturation less than 88% for 5 minutes or longer, or two or more signs of respiratory distress (tachycardia, bradycardia, paradoxical breathing, accessory muscle use, diaphoresis, or marked dyspnea).
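
The sketch below encodes the failure criteria just listed as a simple screening function; the function name, argument names, and data structure are ours, added only to make the logic explicit, and are not taken from the trial's case-report forms.

    # Illustrative sketch of the spontaneous-breathing-trial failure criteria above.
    def sbt_failed(anxiety, agitation, rr_gt_35_for_5min, arrhythmia,
                   spo2_lt_88_for_5min, distress_signs):
        """distress_signs: set of observed signs, e.g. {'tachycardia', 'diaphoresis'}."""
        if anxiety or agitation or rr_gt_35_for_5min or arrhythmia or spo2_lt_88_for_5min:
            return True
        return len(distress_signs) >= 2  # two or more signs of respiratory distress

    # Example: a single sign of respiratory distress alone does not meet the definition
    print(sbt_failed(False, False, False, False, False, {'diaphoresis'}))  # False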

 

 

Interrupting sedation was superior

The combination of sedation interruption and a spontaneous breathing trial was superior to a spontaneous breathing trial alone. The mean number of ventilator-free days:

  • 14.7 ± 0.9 with sedation interruption
  • 11.6 ± 0.9 days with usual care (P = .02).

The median time to ICU discharge:

  • 9.1 days with sedation interruption (interquartile range 5.1 to 17.8)
  • 12.9 days with usual care (interquartile range 6.0 to 24.2, P = .01).

The mortality rate at 28 days:

  • 28% with sedation interruption
  • 35% with usual care (P = .21).

The mortality rate at 1 year:

  • 44% with sedation interruption
  • 58% with usual care (hazard ratio [HR] in the intervention group 0.68, 95% CI 0.50–0.92, P = .01).

Of note, patients in the intervention group had a higher rate of self-extubation (9.6% vs 3.6%, P = .03), but the rate of reintubation was similar between the groups (14% vs 13%, P = .47).

Comments

The addition of daily awakenings to spontaneous breathing trials results in a further reduction in the number of ICU days and increases the number of ventilator-free days.

Of note, the protocol allowed patients in the control group to undergo a spontaneous breathing trial while on sedatives (69% of the patients were receiving sedation at the time). Therefore, a bias effect in favor of the intervention group cannot be excluded. However, both groups had to meet criteria for readiness for spontaneous breathing.

The study demonstrates the safety of daily awakenings and confirms previous findings suggesting that a daily trial of spontaneous breathing results in better ICU outcomes.

GLUCOSE CONTROL IN THE ICU

Key points

  • Although earlier studies suggested that intensive insulin therapy might be beneficial in critically ill patients, new findings show that strict glucose control can lead to complications without improving outcomes.

Background

A previous study15 found that intensive insulin therapy to maintain a blood glucose level between 80 and 110 mg/dL (compared with 180–200 mg/dL) reduced the mortality rate in surgical critical care patients. The mortality rate in the ICU was 4.6% with intensive insulin therapy vs 8.0% with conventional therapy (P < .04), and the effect was more robust for patients who remained longer than 5 days in the ICU (10.6% vs 20.2%).

Importantly, however, hypoglycemia (defined as blood glucose ≤ 40 mg/dL) occurred in 39 patients in the intensive-treatment group vs 6 patients in the conventional-treatment group.

The NICE-SUGAR trial

NICE-SUGAR STUDY INVESTIGATORS; FINFER S, CHITTOCK DR, SU SY, ET AL. INTENSIVE VERSUS CONVENTIONAL GLUCOSE CONTROL IN CRITICALLY ILL PATIENTS. N ENGL J MED 2009; 360:1283–1297.

The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) trial16 randomized 6,104 patients in medical and surgical ICUs to receive either intensive glucose control (blood glucose 81–108 mg/dL) with insulin therapy or conventional glucose control (blood glucose < 180 mg/dL). In the conventional-control group, insulin was discontinued if the blood glucose level dropped below 144 mg/dL.
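
The sketch below encodes only the thresholds quoted above; the actual trial used a detailed computerized dosing algorithm, so this is a simplified illustration with names of our own choosing, not the study protocol.

    # Simplified illustration of the two NICE-SUGAR glucose-control arms described above.
    def insulin_action(glucose_mg_dl, arm):
        if arm == 'intensive':                 # target range 81-108 mg/dL
            return 'titrate insulin to keep glucose 81-108 mg/dL'
        if arm == 'conventional':              # target < 180 mg/dL
            if glucose_mg_dl >= 180:
                return 'start or continue insulin'
            if glucose_mg_dl < 144:
                return 'discontinue insulin'
            return 'continue current management'
        raise ValueError('unknown study arm')

    print(insulin_action(200, 'conventional'))  # start or continue insulin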

A higher mortality rate with intensive glucose control

As expected, the intensive-control group achieved lower blood glucose levels: 115 vs 144 mg/dL.

Nevertheless, intensive glucose control was associated with a higher incidence of severe hypoglycemia, defined as a blood glucose level lower than 40 mg/dL: 6.8% vs 0.5%.

More importantly, compared with conventional insulin therapy, intensive glucose control was associated with a higher 90-day mortality rate: 27.5% vs 24.9% (odds ratio 1.14, 95% CI 1.02–1.28). These findings were similar in the subgroup of surgical patients (24.4% vs 19.8%, odds ratio 1.31, 95% CI 1.07–1.61).
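
The reported odds ratio can be reproduced directly from the two 90-day mortality proportions; the short calculation below is ours and is included only to show where the 1.14 comes from.

    # Reproducing the reported odds ratio from the 90-day mortality proportions.
    p_intensive, p_conventional = 0.275, 0.249
    odds_intensive = p_intensive / (1 - p_intensive)
    odds_conventional = p_conventional / (1 - p_conventional)
    print(round(odds_intensive / odds_conventional, 2))  # ~1.14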

Comments

Of note, the conventional-control group had more patients who discontinued the treatment protocol prematurely. Additionally, more patients in this group received corticosteroids.

These results widely differ from those of a previous study by van den Berghe et al,15 which showed that tight glycemic control is associated with a survival benefit. The differences in outcomes are probably largely related to different patient populations, as van den Berghe et al included patients who had undergone cardiac surgery, who were more likely to benefit from strict blood glucose control.

The VISEP trial

BRUNKHORST FM, ENGEL C, BLOOS F, ET AL; GERMAN COMPETENCE NETWORK SEPSIS (SEPNET). INTENSIVE INSULIN THERAPY AND PENTASTARCH RESUSCITATION IN SEVERE SEPSIS. N ENGL J MED 2008; 358:125–139.

The Volume Substitution and Insulin Therapy in Severe Sepsis (VISEP) trial was a multicenter study designed to compare intensive insulin therapy (target blood glucose level 80–110 mg/dL) and conventional glucose control (target blood glucose level 180–200 mg/dL) in patients with severe sepsis.17 It also compared two fluids for volume resuscitation: 10% pentastarch vs modified Ringer's lactate. It included both medical and surgical patients.

Trial halted early for safety reasons

The mean morning blood glucose level was significantly lower in the intensive insulin group (112 vs 151 mg/dL).

Severe hypoglycemia (blood glucose ≤ 40 mg/dL) was more common in the group that received intensive insulin therapy (17% vs 4.1%, P < .001).

Mortality rates at 28 days did not differ significantly: 24.7% with intensive control vs 26.0% with conventional glucose control. The mortality rate at 90 days was 39.7% in the intensive therapy group and 35.4% in the conventional therapy group, but the difference was not statistically significant.

The intensive insulin arm of the trial was stopped after 488 patients were enrolled because of a higher rate of hypoglycemia (12.1% vs 2.1%) and of serious adverse events (10.9% vs 5.2%).

Additionally, the fluid resuscitation arm of the study was suspended at the first planned interim analysis because of a higher risk of organ failure in the 10% pentastarch group.

 

 

CORTICOSTEROID THERAPY IN SEPTIC SHOCK

Key points

  • Corticosteroid therapy improves hemodynamic outcomes in patients with severe septic shock.
  • Although meta-analyses suggest the mortality rate is lower with corticosteroid therapy, there is not enough evidence from randomized controlled trials to prove that the use of low-dose corticosteroids lowers the mortality rate in patients with septic shock.
  • The corticotropin (ACTH) stimulation test should not be used to determine the need for corticosteroids in patients with septic shock.

Background

A previous multicenter study,18 performed in France, found that the use of corticosteroids in patients with septic shock resulted in lower rates of death at 28 days, in the ICU, and in the hospital and a shorter time to vasopressor withdrawal. Nevertheless, the beneficial effects were not observed in patients with adequate adrenal reserve (based on an ACTH stimulation test).

This study was criticized because of a high mortality rate in the placebo group.

The CORTICUS study

SPRUNG CL, ANNANE D, KEH D, ET AL; CORTICUS STUDY GROUP. HYDROCORTISONE THERAPY FOR PATIENTS WITH SEPTIC SHOCK. N ENGL J MED 2008; 358:111–124.

The Corticosteroid Therapy of Septic Shock (CORTICUS) study was a multicenter trial that randomly assigned 499 patients with septic shock to receive hydrocortisone (50 mg intravenously every 6 hours for 5 days, followed by a 6-day taper period) or placebo.19

Patients were eligible to be enrolled within 72 hours of onset of shock. Similar to previous studies, the CORTICUS trial classified patients on the basis of an ACTH stimulation test as having inadequate adrenal reserve (a cortisol increase of ≤ 9 μg/dL) or adequate adrenal reserve (a cortisol increase of > 9 μg/dL).
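
For clarity, the threshold used for this classification can be written as a one-line rule; the function below is our own illustration of the cutoff quoted above, not part of the study procedures.

    # Illustrative cutoff from the text: adrenal reserve by cortisol increase after ACTH.
    def adrenal_reserve(cortisol_increase_ug_dl):
        return 'adequate (responder)' if cortisol_increase_ug_dl > 9 else 'inadequate (nonresponder)'

    print(adrenal_reserve(6))   # inadequate (nonresponder)
    print(adrenal_reserve(12))  # adequate (responder)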

Faster reversal of shock with steroids

At baseline, the mean Simplified Acute Physiologic Score II (SAPS II) was 49 (the range of possible scores is 0 to 163; the higher the score, the worse the organ dysfunction).

Hydrocortisone use resulted in a shorter duration of vasopressor use and a faster reversal of shock (3.3 days vs 5.8 days, P < .001).

This association was the same when patients were divided according to response to ACTH stimulation test. Time to reversal of shock in responders:

  • 2.8 days with hydrocortisone
  • 5.8 days with placebo (P < .001).

Time to reversal of shock in nonresponders:

  • 3.9 days with hydrocortisone
  • 6.0 days with placebo (P = .06).

Nevertheless, the treatment did not reduce the mortality rate at 28 days overall (34.3% vs 31.5%, P = .51), in the subgroups based on response to ACTH, or at any other time point. A post hoc analysis suggested that patients who had a systolic blood pressure of less than 90 mm Hg within 30 minutes of enrollment had a greater benefit in terms of mortality rate, but the effect was not statistically significant: the absolute difference was −11.2% (P = .28). In addition, post hoc analyses revealed a higher rate of death at 28 days, in both treatment groups, in patients who received etomidate (Amidate) before randomization (P = .03).

Importantly, patients who received corticosteroids had a higher incidence of superinfections, including new episodes of sepsis or septic shock, with a combined odds ratio of 1.37 (95% CI 1.05–1.79).

Length of stay in the hospital or in the ICU was similar in patients who received corticosteroids and in those who received placebo. The ICU length of stay was 19 ± 31 days with hydrocortisone vs 18 ± 17 days with placebo (P = .51).

Comments

The CORTICUS trial showed that low-dose corticosteroid therapy results in faster reversal of shock in patients with severe septic shock. The hemodynamic benefits are present in all patients regardless of response to the ACTH stimulation test.

Nevertheless, contrary to previous findings,18 corticosteroid use was not associated with an improvement in mortality rates. Important differences exist between these two studies:

  • The mortality rates in the placebo groups were significantly different (> 50% in the French study vs 30% in CORTICUS).
  • The SAPS II scores were different in these two trials (55 vs 49), suggesting a greater severity of illness in the French study.
  • The criteria for enrollment were different: the French study included patients who had a systolic blood pressure lower than 90 mm Hg for more than 1 hour despite fluid administration and vasopressor use, whereas the CORTICUS trial included patients who had a systolic blood pressure lower than 90 mm Hg for more than 1 hour despite fluid administration or vasopressor use.
  • The time of enrollment was different: patients were enrolled much faster in the French study (within 8 hours) than in the CORTICUS trial (within 72 hours).

A recent meta-analysis of 17 randomized trials (including the CORTICUS study) found that, compared with those who received placebo, patients who received corticosteroids had a small reduction in the 28-day mortality rate (HR 0.84, 95% CI 0.71–1.00, P < .05).20 Of note, this meta-analysis has been criticized for possible publication bias and also for a large degree of heterogeneity in its results.21

 

 

VASOPRESSOR THERAPY IN SHOCK

Key points

  • Vasopressin use in patients with severe septic shock is not associated with an improvement in mortality rates.
  • Vasopressin should not be used as a first-line agent in patients with septic shock.
  • Norepinephrine should be considered a first-line agent in patients with shock.
  • Compared with norepinephrine, the use of dopamine in patients with shock is associated with similar mortality rates, although its use may result in a greater number of cardiac adverse events.

Background

Vasopressin gained popularity in critical care in the last 10 years because several small studies showed that adding it improves hemodynamics and results in a reduction in the doses of catecholamines in patients with refractory septic shock.22 Furthermore, the Surviving Sepsis Campaign guidelines recommended the use of vasopressin in patients who have refractory shock despite fluid resuscitation and the use of other “conventional” vasopressors.23

Despite these positive findings, it remained unknown if the use of vasopressin increases the survival rate in patients with septic shock.

The Vasopressin and Septic Shock Trial (VASST)

RUSSELL JA, WALLEY KR, SINGER J, ET AL; VASST INVESTIGATORS. VASOPRESSIN VERSUS NOREPINEPHRINE INFUSION IN PATIENTS WITH SEPTIC SHOCK. N ENGL J MED 2008; 358:877–887.

The Vasopressin and Septic Shock Trial (VASST)24 was a multicenter randomized, double-blind, controlled trial that included 778 patients with refractory septic shock. Refractory shock was defined as the lack of a response to a normal saline fluid bolus of 500 mL or the need for vasopressors (norepinephrine in doses of at least 5 μg/minute or its equivalent for 6 hours or more in the 24 hours before randomization).

Two subgroups were identified: those with severe septic shock (requiring norepinephrine in doses of 15 μg/minute or higher) and those with less-severe septic shock (needing norepinephrine in doses of 5 to 14 μg/minute). Patients with unstable coronary artery disease (acute myocardial infarction, angina) and severe congestive heart failure were excluded.
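
The dose-based subgrouping described above can be summarized in a short classification rule; the sketch below is our own illustration of the cutoffs quoted in the text, not the trial's randomization logic.

    # Illustrative sketch of the VASST severity subgroups by norepinephrine
    # (or equivalent) infusion rate at enrollment.
    def vasst_subgroup(norepi_ug_per_min):
        if norepi_ug_per_min >= 15:
            return 'severe septic shock'
        if 5 <= norepi_ug_per_min < 15:
            return 'less-severe septic shock'
        return 'outside the two dose-based subgroups described above'

    print(vasst_subgroup(10))  # less-severe septic shock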

Patients were randomized to receive an intravenous infusion of vasopressin (0.01–0.03 U/minute) or norepinephrine (5–15 μg/minute) in addition to open-label vasopressors (excluding vasopressin). The primary outcome was the all-cause mortality rate at 28 days.

Results

At 28 days, fewer patients had died in the vasopressin group than in the norepinephrine group (35.4% vs 39.3%), but the difference was not statistically significant (P = .26). The trend was the same at 90 days (mortality rate 43.9% vs 49.6%, P = .11).

Subgroup analysis showed that in patients with less-severe septic shock, those who received vasopressin had a lower mortality rate at 28 days (26.5% vs 35.7%, P = .05; relative risk 0.74; 95% CI 0.55–1.01) and at 90 days (35.8% vs 46.1%, P = .04; relative risk 0.78, 95% CI 0.61–0.99).

There were no statistically significant differences in any of the other secondary outcomes or in serious adverse events.

Comments

The study has been criticized for several reasons:

  • The mean arterial blood pressure at baseline before initiation of vasopressin was 72 mm Hg (and some argue that vasopressin was therefore not needed by the time it was started).
  • The time from screening to infusion of the study drug was very long (12 hours).
  • The observed mortality rate was lower than expected (37%).

Despite these considerations, the VASST trial showed that vasopressin is not associated with an increased number of adverse events in patients without active cardiovascular disease. The possible benefit in terms of the mortality rate in the subgroup of patients with less-severe septic shock requires further investigation.

Is dopamine equivalent to norepinephrine?

Previously, the Sepsis Occurrence in Acutely Ill Patients (SOAP) study, a multicenter, observational cohort study, found that dopamine use was associated with a higher all-cause mortality rate in the ICU compared with no dopamine.25 This finding had not been reproduced, as few well-designed studies had compared the effects of dopamine and norepinephrine.

The SOAP II study

DE BACKER D, BISTON P, DEVRIENDT J, ET AL; SOAP II INVESTIGATORS. COMPARISON OF DOPAMINE AND NOREPINEPHRINE IN THE TREATMENT OF SHOCK. N ENGL J MED 2010; 362:779–789.

The SOAP II study,26 a multicenter, randomized trial, compared dopamine vs norepinephrine as first-line vasopressor therapy. In patients with refractory shock despite use of dopamine 20 μg/kg/minute or norepinephrine 0.19 μg/kg/minute, open-label norepinephrine, epinephrine, or vasopressin was added.
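
To put these weight-based escalation thresholds in absolute terms, the short calculation below converts them to μg/minute for a hypothetical 80-kg patient; the weight is our illustrative assumption, and the equipotence of the two thresholds is the study's own definition (discussed in the comments that follow).

    # Converting the SOAP II escalation thresholds to absolute infusion rates
    # for a hypothetical 80-kg patient (weight chosen only for illustration).
    weight_kg = 80
    dopamine_max_ug_min = 20 * weight_kg    # 20 ug/kg/min -> 1,600 ug/min
    norepi_max_ug_min = 0.19 * weight_kg    # 0.19 ug/kg/min -> 15.2 ug/min
    print(dopamine_max_ug_min, norepi_max_ug_min)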

The primary outcome was the mortality rate at 28 days after randomization; secondary end points included the number of days without need for organ support and the occurrence of adverse events.

Results

A total of 1,679 patients were included; 858 were assigned to dopamine and 821 to norepinephrine. Most (1,044, 62%) of the patients had a diagnosis of septic shock.

No significant difference in mortality rates was noted at 28 days: 52.5% with dopamine vs 48.5% with norepinephrine (P = .10).

However, there were more arrhythmias in the patients treated with dopamine: 207 events (24.1%) vs 102 events (12.4%) (P < .001). The number of other adverse events such as renal failure, myocardial infarction, arterial occlusion, or skin necrosis was not different between the groups.

A subgroup analysis showed that dopamine was associated with more deaths at 28 days in patients with cardiogenic shock (P = .03) but not in patients with septic shock (P = .19) or with hypovolemic shock (P = .84).

Comments

The study was criticized because the patients may not have received adequate fluid resuscitation (the study considered adequate resuscitation to be equivalent to 1 L of crystalloids or 500 mL of colloids), as different degrees of volume depletion among patients make direct comparisons of vasopressor effects difficult.

Additionally, the study defined dopamine 20 μg/kg/minute as being equipotent with norepinephrine 0.19 μg/kg/minute. Comparisons of potency between drugs are difficult to establish, as there are no available data.

Nevertheless, this study further confirms previous findings suggesting that norepinephrine is not associated with more end-organ damage (such as renal failure or skin ischemia), and shows that dopamine may increase the number of adverse events, particularly in patients with cardiac disease.

References
  1. Shah MR, Hasselblad V, Stevenson LW, et al. Impact of the pulmonary artery catheter in critically ill patients: meta-analysis of randomized clinical trials. JAMA 2005; 294:1664–1670.
  2. National Heart, Lung, and Blood Institute Acute Respiratory Distress Syndrome (ARDS) Clinical Trials Network; Wiedemann HP, Wheeler AP, Bernard GR, et al. Comparison of two fluid-management strategies in acute lung injury. N Engl J Med 2006; 354:2564–2575.
  3. National Heart, Lung, and Blood Institute Acute Respiratory Distress Syndrome (ARDS) Clinical Trials Network; Wheeler AP, Bernard GR, Thompson BT, et al. Pulmonary-artery versus central venous catheter to guide treatment of acute lung injury. N Engl J Med 2006; 354:2213–2224.
  4. Bernard GR, Luce JM, Sprung CL, et al. High-dose corticosteroids in patients with the adult respiratory distress syndrome. N Engl J Med 1987; 317:1565–1570.
  5. Meduri GU, Headley AS, Golden E, et al. Effect of prolonged methylprednisolone therapy in unresolving acute respiratory distress syndrome: a randomized controlled trial. JAMA 1998; 280:159–165.
  6. Steinberg KP, Hudson LD, Goodman RB, et al; National Heart, Lung, and Blood Institute Acute Respiratory Distress Syndrome (ARDS) Clinical Trials Network. Efficacy and safety of corticosteroids for persistent acute respiratory distress syndrome. N Engl J Med 2006; 354:1671–1684.
  7. Meduri GU, Golden E, Freire AX, et al. Methylprednisolone infusion in early severe ARDS: results of a randomized controlled trial. Chest 2007; 131:954–963.
  8. Meduri GU, Golden E, Freire AX, et al. Methylprednisolone infusion in early severe ARDS results of a randomized controlled trial. 2007. Chest 2009; 136(suppl 5):e30.
  9. Annane D, Sébille V, Bellissant E; Ger-Inf-05 Study Group. Effect of low doses of corticosteroids in septic shock patients with or without early acute respiratory distress syndrome. Crit Care Med 2006; 34:22–30.
  10. Kollef MH, Levy NT, Ahrens TS, Schaiff R, Prentice D, Sherman G. The use of continuous i.v. sedation is associated with prolongation of mechanical ventilation. Chest 1998; 114:541–548.
  11. Carson SS, Kress JP, Rodgers JE, et al. A randomized trial of intermittent lorazepam versus propofol with daily interruption in mechanically ventilated patients. Crit Care Med 2006; 34:1326–1332.
  12. Brook AD, Ahrens TS, Schaiff R, et al. Effect of a nursing-implemented sedation protocol on the duration of mechanical ventilation. Crit Care Med 1999; 27:2609–2615.
  13. Kress JP, Pohlman AS, O’Connor MF, Hall JB. Daily interruption of sedative infusions in critically ill patients undergoing mechanical ventilation. N Engl J Med 2000; 342:1471–1477.
  14. Girard TD, Kress JP, Fuchs BD, et al. Efficacy and safety of a paired sedation and ventilator weaning protocol for mechanically ventilated patients in intensive care (Awakening and Breathing Controlled trial): a randomised controlled trial. Lancet 2008; 371:126–134.
  15. van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in the critically ill patients. N Engl J Med 2001; 345:1359–1367.
  16. NICE-SUGAR Study Investigators; Finfer S, Chittock DR, Su SY, et al. Intensive versus conventional glucose control in critically ill patients. N Engl J Med 2009; 360:1283–1297.
  17. Brunkhorst FM, Engel C, Bloos F, et al; German Competence Network Sepsis (SepNet). Intensive insulin therapy and pentastarch resuscitation in severe sepsis. N Engl J Med 2008; 358:125–139.
  18. Annane D, Sébille V, Charpentier C, et al. Effect of treatment with low doses of hydrocortisone and fludrocortisone on mortality in patients with septic shock. JAMA 2002; 288:862–871.
  19. Sprung CL, Annane D, Keh D, et al; CORTICUS Study Group. Hydrocortisone therapy for patients with septic shock. N Engl J Med 2008; 358:111–124.
  20. Annane D, Bellissant E, Bollaert PE, et al. Corticosteroids in the treatment of severe sepsis and septic shock in adults: a systematic review. JAMA 2009; 301:2362–2375.
  21. Minneci PC, Deans KJ, Natanson C. Corticosteroid therapy for severe sepsis and septic shock [letter]. JAMA 2009; 302:1643–1644.
  22. Kampmeier TG, Rehberg S, Westphal M, Lange M. Vasopressin in sepsis and septic shock. Minerva Anestesiol 2010; 76:844–850.
  23. Dellinger RP, Levy MM, Carlet JM, et al; International Surviving Sepsis Campaign Guidelines Committee. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med 2008; 36:296–327.
  24. Russell JA, Walley KR, Singer J, et al; VASST Investigators. Vasopressin versus norepinephrine infusion in patients with septic shock. N Engl J Med 2008; 358:877–887.
  25. Sakr Y, Reinhart K, Vincent JL, et al. Does dopamine administration in shock influence outcome? Results of the Sepsis Occurrence in Acutely Ill Patients (SOAP) Study. Crit Care Med 2006; 34:589–597.
  26. De Backer D, Biston P, Devriendt J, et al; SOAP II Investigators. Comparison of dopamine and norepinephrine in the treatment of shock. N Engl J Med 2010; 362:779–789.
Article PDF
Author and Disclosure Information

Enrique Diaz-Guzman, MD
Assistant Professor of Medicine, Chief, Pulmonary Section, Lexington Veterans Affairs Medical Center, Division of Pulmonary & Critical Care Medicine, University of Kentucky, Lexington

Juan Sanchez, MD
Assistant Professor of Medicine, Division of Pulmonary & Critical Care Medicine, Scott & White Health Center, and Texas A&M College of Medicine, Temple, TX

Alejandro C. Arroliga, MD, FCCP
Chairman and Professor, Dr. A. Ford Wolf and Brooksie Nell Boyd Wolf Centennial Chair of Medicine, Department of Internal Medicine, Scott & White Health Center, and Texas A&M Health Science Center College of Medicine, Temple, TX

Address: Enrique Diaz-Guzman, MD, University of Kentucky, L543 Kentucky Clinic, 740 S. Limestone Street, Lexington, KY 40536-0284; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 78(10)
Publications
Topics
Page Number
665-674
Sections
Author and Disclosure Information

Enrique Diaz-Guzman, MD
Assistant Professor of Medicine, Chief, Pulmonary Section, Lexington Veterans Affairs Medical Center, Division of Pulmonary & Critical Care Medicine, University of Kentucky, Lexington

Juan Sanchez, MD
Assistant Professor of Medicine, Division of Pulmonary & Critical Care Medicine, Scott & White Health Center, and Texas A&M College of Medicine, Temple, TX

Alejandro C. Arroliga, MD, FCCP
Chairman and Professor, Dr. A. Ford Wolf and Brooksie Nell Boyd Wolf Centennial Chair of Medicine, Department of Internal Medicine, Scott & White Health Center, and Texas A&M Health Science Center College of Medicine, Temple, TX

Address: Enrique Diaz-Guzman, MD, University of Kentucky, L543 Kentucky Clinic, 740 S. Limestone Street, Lexington, KY 40536-0284; e-mail [email protected]

Author and Disclosure Information

Enrique Diaz-Guzman, MD
Assistant Professor of Medicine, Chief, Pulmonary Section, Lexington Veterans Affairs Medical Center, Division of Pulmonary & Critical Care Medicine, University of Kentucky, Lexington

Juan Sanchez, MD
Assistant Professor of Medicine, Division of Pulmonary & Critical Care Medicine, Scott & White Health Center, and Texas A&M College of Medicine, Temple, TX

Alejandro C. Arroliga, MD, FCCP
Chairman and Professor, Dr. A. Ford Wolf and Brooksie Nell Boyd Wolf Centennial Chair of Medicine, Department of Internal Medicine, Scott & White Health Center, and Texas A&M Health Science Center College of Medicine, Temple, TX

Address: Enrique Diaz-Guzman, MD, University of Kentucky, L543 Kentucky Clinic, 740 S. Limestone Street, Lexington, KY 40536-0284; e-mail [email protected]

Article PDF
Article PDF

We have seen significant growth in clinical research in critical care medicine in the last decade. Advances have been made in many important areas in this field; of these, advances in treating septic shock and acute respiratory distress syndrome (ARDS), and also in supportive therapies for critically ill patients (eg, sedatives, insulin), have perhaps received the most attention.

Of note, several once-established therapies in these areas have failed the test of time, as the result of evidence from more-recent clinical trials. For example, recent studies have shown that a pulmonary arterial catheter does not improve outcomes in patients with ARDS. Similarly, what used to be “optimal” fluid management in patients with ARDS is no longer considered appropriate.

In this review, we summarize eight major studies in critical care medicine published in the last 5 years, studies that have contributed to changes in our practice in the intensive care unit (ICU).

FLUID MANAGEMENT IN ARDS

Key points

  • In patients with acute lung injury (ALI) and ARDS, fluid restriction is associated with better outcomes than a liberal fluid policy.
  • A pulmonary arterial catheter is not necessary and, compared with a central venous catheter, may result in more complications in patients with ALI and ARDS.

Background

Fluid management practices in patients with ARDS have been extremely variable. Two different approaches are commonly used: the liberal or “wet” approach to optimize tissue perfusion and the “dry” approach, which focuses on reducing lung edema. Given that most deaths attributed to ARDS result from extrapulmonary organ failure, aggressive fluid restriction has been the less popular approach.

Additionally, although earlier studies and meta-analyses suggested that the use of a pulmonary arterial catheter was not associated with better outcomes in critically ill patients,1 controversy remained regarding the value of a pulmonary arterial catheter compared with a central venous catheter in guiding fluid management in patients with ARDS, and data were insufficient to prove one strategy better than the other.

The Fluids and Catheter Treatment Trial (FACTT)

NATIONAL HEART, LUNG, AND BLOOD INSTITUTE ACUTE RESPIRATORY DISTRESS SYNDROME (ARDS) CLINICAL TRIALS NETWORK; WIEDEMANN HP, WHEELER AP, BERNARD GR, ET AL. COMPARISON OF TWO FLUID-MANAGEMENT STRATEGIES IN ACUTE LUNG INJURY. N ENGL J MED 2006; 354:2564–2575.

NATIONAL HEART, LUNG, AND BLOOD INSTITUTE ACUTE RESPIRATORY DISTRESS SYNDROME (ARDS) CLINICAL TRIALS NETWORK; WHEELER AP, BERNARD GR, THOMPSON BT, ET AL. PULMONARY-ARTERY VERSUS CENTRAL VENOUS CATHETER TO GUIDE TREATMENT OF ACUTE LUNG INJURY. N ENGL J MED 2006; 354:2213–2224.

The Fluids and Catheter Treatment Trial (FACTT) compared two fluid strategies2 and also the utility of a pulmonary arterial catheter vs a central venous catheter3 in patients with ALI or ARDS.

This two-by-two factorial trial randomized 1,000 patients to be treated according to either a conservative (fluid-restrictive or “dry”) or a liberal (“wet”) fluid management strategy for 7 days. Additionally, they were randomly assigned to receive either a central venous catheter or a pulmonary arterial catheter. The trial thus had four treatment groups:

  • Fluid-restricted and a central venous catheter, with a goal of keeping the central venous pressure below 4 mm Hg
  • Fluid-restricted and a pulmonary arterial catheter: fluids were restricted and diuretics were given to keep the pulmonary artery occlusion pressure below 8 mm Hg
  • Fluid-liberal and a central venous catheter: fluids were given to keep the central venous pressure between 10 and 14 mm Hg
  • Fluid-liberal and a pulmonary arterial catheter: fluids were given to keep the pulmonary artery occlusion pressure between 14 and 18 mm Hg.

The primary end point was the mortality rate at 60 days. Secondary end points included the number of ventilator-free days and organ-failure-free days and parameters of lung physiology. All patients were managed with a low-tidal-volume strategy.

The ‘dry’ strategy was better

The cumulative fluid balance was −136 mL ± 491 mL in the “dry” group and 6,992 mL ± 502 mL in the “wet” group, a difference of more than 7 L (P < .0001). Of note, before randomization, the patients were already fluid-positive, with a mean total fluid balance of +2,700 mL).2

At 60 days, no statistically significant difference in mortality rate was seen between the fluid-management groups (25.5% in the dry group vs 28.4% in the wet group (P = .30). Nevertheless, patients in the dry group had better oxygenation indices and lung injury scores (including lower plateau airway pressure), resulting in more ventilator-free days (14.6 ± 0.5 vs 12.1 ± 0.5; P = .0002) and ICU-free days (13.4 ± 0.4 vs 11.2 ± 0.4; P = .0003).2

Although those in the dry-strategy group had a slightly lower cardiac index and mean arterial pressure, they did not have a higher incidence of shock.

More importantly, the dry group did not have a higher rate of nonpulmonary organ failure. Serum creatinine and blood urea nitrogen concentrations were slightly higher in this group, but this was not associated with a higher incidence of renal failure or the use of dialysis: 10% in the dry-strategy group vs 14% in the wet-strategy group; P = .0642).2

No advantage with a pulmonary arterial catheter

The mortality rate did not differ between the catheter groups. However, the patients who received a pulmonary arterial catheter stayed in the ICU 0.2 days longer and had twice as many nonfatal cardiac arrhythmias as those who received a central venous catheter.3

Comments

The liberal fluid-strategy group had fluid balances similar to those seen in previous National Institutes of Health ARDS Network trials in which fluid management was not controlled. This suggests that the liberal fluid strategy reflects usual clinical practice.

Although the goals used in this study (central venous pressure < 4 mm Hg or pulmonary artery occlusion pressure < 8 mm Hg) could be difficult to achieve in clinical practice, a conservative strategy of fluid management is preferred in patients with ALI or ARDS, given the benefits observed in this trial.

A pulmonary arterial catheter is not indicated to guide hemodynamic management of patients with ARDS.

 

 

CORTICOSTEROID USE IN ARDS

Key points

  • In selected patients with ARDS, the prolonged use of corticosteroids may result in better oxygenation and a shorter duration of mechanical ventilation.
  • Late use of corticosteroids in patients with ARDS (> 14 days after diagnosis) is not indicated and may increase the risk of death.
  • The role of corticosteroids in early ARDS (< 7 days after diagnosis) remains controversial.

Background

Systemic corticosteroid therapy was commonly used in ARDS patients in the 1970s and 1980s. However, a single-center study published in the late 1980s showed that a corticosteroid in high doses (methylprednisolone 30 mg/kg) resulted in more complications and was not associated with a lower mortality rate.4 On the other hand, a small study that included only patients with persistent ARDS (defined as ARDS lasting for more than 7 days) subsequently showed that oxygenation was significantly better and that fewer patients died while in the hospital with the use of methylprednisolone 2 mg/kg for 32 days.5

In view of these divergent findings, the ARDS Network decided to perform a study to help understand the role of corticosteroids in ARDS.

The Late Steroid Rescue Study (LaSRS)

STEINBERG KP, HUDSON LD, GOODMAN RB, ET AL; NATIONAL HEART, LUNG, AND BLOOD INSTITUTE ACUTE RESPIRATORY DISTRESS SYNDROME (ARDS) CLINICAL TRIALS NETWORK. EFFICACY AND SAFETY OF CORTICOSTEROIDS FOR PERSISTENT ACUTE RESPIRATORY DISTRESS SYNDROME. N ENGL J MED 2006; 354:1671–1684.

The Late Steroid Rescue Study (LaSRS),6 a double-blind, multicenter trial, randomly assigned 180 patients with persistent ARDS (defined as ongoing disease 7–28 days after its onset) to receive methylprednisolone or placebo for 21 days.

Methylprednisolone was given in an initial dose of 2 mg/kg of predicted body weight followed by a dose of 0.5 mg/kg every 6 hours for 14 days and then a dose of 0.5 mg/kg every 12 hours for 7 days, and then it was tapered over 2 to 4 days and discontinued. It could be discontinued if 21 days of treatment were completed or if the patient was able to breathe without assistance.

The primary end point was the mortality rate at 60 days. Secondary end points included the number of ventilator-free days, organ-failure-free days, and complications and the levels of biomarkers of inflammation.

No reduction in mortality rates with steroids

The mortality rates did not differ significantly in the corticosteroid group vs the placebo group at 60 days:

  • 29.2% with methylprednisolone (95% confidence interval [CI] 20.8–39.4)
  • 28.6% with placebo (95% CI 20.3–38.6, P = 1.0).

Mortality rates at 180 days were also similar between the groups:

  • 31.5% with methylprednisolone (95% CI 22.8–41.7)
  • 31.9% with placebo (95% CI 23.2–42.0, P = 1.0).

In patients randomized between 7 and 13 days after the onset of ARDS, the mortality rates were lower in the methylprednisolone group than in the placebo group but the differences were not statistically significant. The mortality rate in this subgroup was 27% vs 36% (P = .26) at 60 days and was 27% vs 39% (P = .14) at 180 days.

However, in patients randomized more than 14 days after the onset of ARDS, the mortality rate was significantly higher in the methylprednisolone group than in the placebo group at 60 days (35% vs 8%, P = .02) and at 180 days (44% vs 12%, P = .01).

Some benefit in secondary outcomes

At day 28, methylprednisolone was associated with:

  • More ventilator-free days (11.2 ± 9.4 vs 6.8 ± 8.5, P < .001)
  • More shock-free days (20.7 ± 8.9 vs 17.9 ± 10.2, P = .04)
  • More ICU-free days (8.9 ± 8.2 vs 6.7 ± 7.8, P = .02).

Similarly, pulmonary physiologic indices were better with methylprednisolone, specifically:

  • The ratio of Pao2 to the fraction of inspired oxygen at days 3, 4, and 14 (P < .05)
  • Plateau pressure at days 4, 5, and 7 (P < .05)
  • Static compliance at days 7 and 14 (P < .05).

In terms of side effects, methylprednisolone was associated with more events associated with myopathy or neuropathy (9 vs 0, P = .001), but there were no differences in the number of serious infections or in glycemic control.

Comments

Although other recent studies suggested that corticosteroid use may be associated with a reduction in mortality rates,7–9 LaSRS did not confirm this effect. Although the doses and length of therapy were similar in these studies, LaSRS was much larger and included patients from the ARDS Network.

Nevertheless, LaSRS was criticized because of strict exclusion criteria and poor enrollment (only 5% of eligible patients were included). Additionally, it was conducted over a period of time when some ICU practices varied significantly (eg, low vs high tidal volume ventilation, tight vs loose glucose control).

The role of corticosteroids in early ARDS (< 7 days after diagnosis) remains controversial at best. Table 1 summarizes recent studies that evaluated the use of corticosteroids in patients with ARDS.

INTERRUPTING SEDATION DURING MECHANICAL VENTILATION

Key points

  • Daily awakening of mechanically ventilated patients is safe.
  • Daily interruption of sedation in mechanically ventilated patients is associated with a shorter length of mechanical ventilation.

Background

Sedatives are a central component of critical care. Continuous infusions of narcotics, benzodiazepines, and anesthetic agents are frequently used to promote comfort in patients receiving mechanical ventilation.

Despite the widespread use of continuous sedation in the ICU, there is little evidence that it improves outcomes. Observational and randomized trials10–12 have shown that patients who receive continuous infusions of sedatives need to be on mechanical ventilation longer than those who receive intermittent dosing. Additionally, an earlier randomized controlled trial13 showed that daily interruption of sedative drug infusions decreased the duration of mechanical ventilation by almost 50% and resulted in a reduction in the length of stay in the ICU.

Despite these findings, many ICU physicians remain skeptical of the value of daily interruption of sedative medications and question the safety of this practice.

The Awakening and Breathing Controlled (ABC) trial

GIRARD TD, KRESS JP, FUCHS BD, ET AL. EFFICACY AND SAFETY OF A PAIRED SEDATION AND VENTILATOR WEANING PROTOCOL FOR MECHANICALLY VENTILATED PATIENTS IN INTENSIVE CARE (AWAKENING AND BREATHING CONTROLLED TRIAL): A RANDOMISED CONTROLLED TRIAL. LANCET 2008; 371:126–134.

The Awakening and Breathing Controlled (ABC) trial14 was a multicenter, randomized controlled trial that included 336 patients who required at least 12 consecutive hours of mechanical ventilation. All patients had to be receiving patient-targeted sedation.

Those in the intervention group (n = 168) had their sedation interrupted every day, followed by a clinical assessment to determine whether they could be allowed to try breathing spontaneously. The control group (n = 168) also received a clinical assessment for a trial of spontaneous breathing, while their sedation was continued as usual.

In patients in the intervention group who failed the screening for a spontaneous breathing trial, the sedatives were resumed at half the previous dose. Criteria for failure of the spontaneous breathing trial included any of the following: anxiety, agitation, respiratory rate more than 35 breaths per minute for 5 minutes or longer, cardiac arrhythmia, oxygen saturation less than 88% for 5 minutes or longer, or two or more signs of respiratory distress (tachycardia, bradycardia, paradoxical breathing, accessory muscle use, diaphoresis, or marked dyspnea).
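
To make the decision rule above concrete, the sketch below encodes the failure criteria as a simple check. The function name, argument names, and input format are assumptions for illustration only and do not come from the trial protocol or its software.

```python
# Illustrative encoding of the ABC-trial criteria for failure of a
# spontaneous breathing trial, as summarized in the text above.

def sbt_failed(anxiety_or_agitation: bool,
               resp_rate_over_35_for_5min: bool,
               cardiac_arrhythmia: bool,
               spo2_under_88_for_5min: bool,
               distress_signs: set) -> bool:
    """Return True if any failure criterion is met.

    distress_signs may contain: 'tachycardia', 'bradycardia',
    'paradoxical breathing', 'accessory muscle use', 'diaphoresis',
    'marked dyspnea'. Two or more such signs count as failure.
    """
    return (anxiety_or_agitation
            or resp_rate_over_35_for_5min
            or cardiac_arrhythmia
            or spo2_under_88_for_5min
            or len(distress_signs) >= 2)

# Example: oxygenation is adequate, but two distress signs are present.
print(sbt_failed(False, False, False, False,
                 {"diaphoresis", "accessory muscle use"}))  # True
```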

 

 

Interrupting sedation was superior

The combination of sedation interruption and a spontaneous breathing trial was superior to a spontaneous breathing trial alone. The mean number of ventilator-free days:

  • 14.7 ± 0.9 days with sedation interruption
  • 11.6 ± 0.9 days with usual care (P = .02).

The median time to ICU discharge:

  • 9.1 days with sedation interruption (interquartile range 5.1 to 17.8)
  • 12.9 days with usual care (interquartile range 6.0 to 24.2, P = .01).

The mortality rate at 28 days:

  • 28% with sedation interruption
  • 35% with usual care (P = .21).

The mortality rate at 1 year:

  • 44% with sedation interruption
  • 58% with usual care (hazard ratio [HR] in the intervention group 0.68, 95% CI 0.50–0.92, P = .01).

Of note, patients in the intervention group had a higher rate of self-extubation (9.6% vs 3.6%, P = .03), but the rate of reintubation was similar between the groups (14% vs 13%, P = .47).

Comments

The addition of daily awakenings to spontaneous breathing trials results in a further reduction in the number of ICU days and increases the number of ventilator-free days.

Of note, the protocol allowed patients in the control group to undergo a spontaneous breathing trial while on sedatives (69% of the patients were receiving sedation at the time). Therefore, a bias effect in favor of the intervention group cannot be excluded. However, both groups had to meet criteria for readiness for spontaneous breathing.

The study demonstrates the safety of daily awakenings and confirms previous findings suggesting that a daily trial of spontaneous breathing results in better ICU outcomes.

GLUCOSE CONTROL IN THE ICU

Key points

  • Although earlier studies suggested that intensive insulin therapy might be beneficial in critically ill patients, new findings show that strict glucose control can lead to complications without improving outcomes.

Background

A previous study15 found that intensive insulin therapy to maintain a blood glucose level between 80 and 110 mg/dL (compared with 180–200 mg/dL) reduced the mortality rate in surgical critical care patients. The mortality rate in the ICU was 4.6% with intensive insulin therapy vs 8.0% with conventional therapy (P < .04), and the effect was more robust for patients who remained longer than 5 days in the ICU (10.6% vs 20.2%).

Importantly, however, hypoglycemia (defined as blood glucose ≤ 40 mg/dL) occurred in 39 patients in the intensive-treatment group vs 6 patients in the conventional-treatment group.

The NICE-SUGAR trial

NICE-SUGAR STUDY INVESTIGATORS; FINFER S, CHITTOCK DR, SU SY, ET AL. INTENSIVE VERSUS CONVENTIONAL GLUCOSE CONTROL IN CRITICALLY ILL PATIENTS. N ENGL J MED 2009; 360:1283–1297.

The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) trial16 randomized 6,104 patients in medical and surgical ICUs to receive either intensive glucose control (blood glucose 81–108 mg/dL) with insulin therapy or conventional glucose control (blood glucose < 180 mg/dL). In the conventional-control group, insulin was discontinued if the blood glucose level dropped below 144 mg/dL.
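
A minimal sketch of the two titration targets is shown below, assuming a simple threshold rule; the actual NICE-SUGAR insulin algorithm was considerably more detailed, and the function only illustrates the ranges quoted above.

```python
# Illustrative threshold rules for the two NICE-SUGAR study arms described above.
def insulin_action(glucose_mg_dl: float, arm: str) -> str:
    if arm == "intensive":
        # target range 81-108 mg/dL
        if glucose_mg_dl > 108:
            return "titrate insulin up"
        if glucose_mg_dl < 81:
            return "reduce or hold insulin"
        return "maintain current rate"
    if arm == "conventional":
        # treat only when glucose reaches 180 mg/dL; stop insulin below 144 mg/dL
        if glucose_mg_dl >= 180:
            return "start or titrate insulin"
        if glucose_mg_dl < 144:
            return "discontinue insulin"
        return "continue current management"
    raise ValueError("arm must be 'intensive' or 'conventional'")

print(insulin_action(160, "intensive"))     # titrate insulin up
print(insulin_action(160, "conventional"))  # continue current management
```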

A higher mortality rate with intensive glucose control

As expected, the intensive-control group achieved lower blood glucose levels: 115 vs 144 mg/dL.

Nevertheless, intensive glucose control was associated with a higher incidence of severe hypoglycemia, defined as a blood glucose level lower than 40 mg/dL: 6.8% vs 0.5%.

More importantly, compared with conventional insulin therapy, intensive glucose control was associated with a higher 90-day mortality rate: 27.5% vs 24.9% (odds ratio 1.14, 95% CI 1.02–1.28). These findings were similar in the subgroup of surgical patients (24.4% vs 19.8%, odds ratio 1.31, 95% CI 1.07–1.61).
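
Expressed in absolute rather than relative terms, these 90-day mortality rates correspond to roughly one additional death for every 38 patients managed with intensive control. The brief calculation below is an illustration derived from the quoted point estimates, not a figure reported by the investigators.

```python
# Illustrative arithmetic from the 90-day mortality rates quoted above.
mortality_intensive = 0.275
mortality_conventional = 0.249

ari = mortality_intensive - mortality_conventional  # absolute risk increase
nnh = 1 / ari                                       # number needed to harm

print(f"Absolute risk increase: {ari:.1%}")  # 2.6%
print(f"Number needed to harm:  {nnh:.0f}")  # ~38
```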

Comments

Of note, the conventional-control group had more patients who discontinued the treatment protocol prematurely. Additionally, more patients in this group received corticosteroids.

These results differ widely from those of a previous study by van den Berghe et al,15 which showed that tight glycemic control is associated with a survival benefit. The differences in outcomes are probably largely related to different patient populations, as van den Berghe et al included patients who had undergone cardiac surgery, who were more likely to benefit from strict blood glucose control.

The VISEP trial

BRUNKHORST FM, ENGEL C, BLOOS F, ET AL; GERMAN COMPETENCE NETWORK SEPSIS (SEPNET). INTENSIVE INSULIN THERAPY AND PENTASTARCH RESUSCITATION IN SEVERE SEPSIS. N ENGL J MED 2008; 358:125–139.

The Volume Substitution and Insulin Therapy in Severe Sepsis (VISEP) trial was a multicenter study designed to compare intensive insulin therapy (target blood glucose level 80–110 mg/dL) and conventional glucose control (target blood glucose level 180–200 mg/dL) in patients with severe sepsis.17 It also compared two fluids for volume resuscitation: 10% pentastarch vs modified Ringer's lactate. It included both medical and surgical patients.

Trial halted early for safety reasons

The mean morning blood glucose level was significantly lower in the intensive insulin group (112 vs 151 mg/dL).

Severe hypoglycemia (blood glucose ≤ 40 mg/dL) was more common in the group that received intensive insulin therapy (17% vs 4.1%, P < .001).

Mortality rates at 28 days did not differ significantly: 24.7% with intensive control vs 26.0% with conventional glucose control. The mortality rate at 90 days was 39.7% in the intensive therapy group and 35.4% in the conventional therapy group, but the difference was not statistically significant.

The intensive insulin arm of the trial was stopped after 488 patients were enrolled because of a higher rate of hypoglycemia (12.1% vs 2.1%) and of serious adverse events (10.9% vs 5.2%).

Additionally, the fluid resuscitation arm of the study was suspended at the first planned interim analysis because of a higher risk of organ failure in the 10% pentastarch group.

 

 

CORTICOSTEROID THERAPY IN SEPTIC SHOCK

Key points

  • Corticosteroid therapy improves hemodynamic outcomes in patients with severe septic shock.
  • Although meta-analyses suggest the mortality rate is lower with corticosteroid therapy, there is not enough evidence from randomized controlled trials to prove that the use of low-dose corticosteroids lowers the mortality rate in patients with septic shock.
  • The corticotropin (ACTH) stimulation test should not be used to determine the need for corticosteroids in patients with septic shock.

Background

A previous multicenter study,18 performed in France, found that the use of corticosteroids in patients with septic shock resulted in lower rates of death at 28 days, in the ICU, and in the hospital and a shorter time to vasopressor withdrawal. Nevertheless, the beneficial effects were not observed in patients with adequate adrenal reserve (based on an ACTH stimulation test).

This study was criticized because of a high mortality rate in the placebo group.

The CORTICUS study

SPRUNG CL, ANNANE D, KEH D, ET AL; CORTICUS STUDY GROUP. HYDROCORTISONE THERAPY FOR PATIENTS WITH SEPTIC SHOCK. N ENGL J MED 2008; 358:111–124.

The Corticosteroid Therapy of Septic Shock (CORTICUS) study was a multicenter trial that randomly assigned 499 patients with septic shock to receive hydrocortisone (50 mg intravenously every 6 hours for 5 days, followed by a 6-day taper period) or placebo.19

Patients were eligible to be enrolled within 72 hours of onset of shock. Similar to previous studies, the CORTICUS trial classified patients on the basis of an ACTH stimulation test as having inadequate adrenal reserve (a cortisol increase of ≤ 9 μg/dL) or adequate adrenal reserve (a cortisol increase of > 9 μg/dL).
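
A minimal sketch of that classification rule is shown below; the function and variable names are illustrative and are not taken from the study protocol.

```python
# Illustrative ACTH stimulation test classification as used in CORTICUS:
# a cortisol rise of 9 μg/dL or less marks inadequate adrenal reserve.
def adrenal_reserve(baseline_cortisol_ug_dl: float,
                    post_acth_cortisol_ug_dl: float) -> str:
    rise = post_acth_cortisol_ug_dl - baseline_cortisol_ug_dl
    return "inadequate (nonresponder)" if rise <= 9 else "adequate (responder)"

print(adrenal_reserve(15, 20))  # rise of 5 μg/dL  -> inadequate (nonresponder)
print(adrenal_reserve(15, 30))  # rise of 15 μg/dL -> adequate (responder)
```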

Faster reversal of shock with steroids

At baseline, the mean Simplified Acute Physiologic Score II (SAPS II) was 49 (the range of possible scores is 0 to 163; the higher the score, the worse the organ dysfunction).

Hydrocortisone use resulted in a shorter duration of vasopressor use and a faster reversal of shock (3.3 days vs 5.8 days, P < .001).

This association was the same when patients were divided according to their response to the ACTH stimulation test. Time to reversal of shock in responders:

  • 2.8 days with hydrocortisone
  • 5.8 days with placebo (P < .001).

Time to reversal of shock in nonresponders:

  • 3.9 days with hydrocortisone
  • 6.0 days with placebo (P = .06).

Nevertheless, the treatment did not reduce the mortality rate at 28 days overall (34.3% vs 31.5%, P = .51), in the subgroups based on response to ACTH, or at any other time point. A post hoc analysis suggested that patients who had a systolic blood pressure of less than 90 mm Hg within 30 minutes of enrollment had a greater benefit in terms of mortality rate, but the effect was not statistically significant: the absolute difference was −11.2% (P = .28). Post hoc analyses also revealed a higher rate of death at 28 days in patients in both groups who had received etomidate (Amidate) before randomization (P = .03).

Importantly, patients who received corticosteroids had a higher incidence of superinfections, including new episodes of sepsis or septic shock, with a combined odds ratio of 1.37 (95% CI 1.05–1.79).

Length of stay in the hospital or in the ICU was similar in patients who received corticosteroids and in those who received placebo. The ICU length of stay was 19 ± 31 days with hydrocortisone vs 18 ± 17 days with placebo (P = .51).

Comments

The CORTICUS trial showed that low-dose corticosteroid therapy results in faster reversal of shock in patients with severe septic shock. The hemodynamic benefits are present in all patients regardless of response to the ACTH stimulation test.

Nevertheless, contrary to previous findings,18 corticosteroid use was not associated with an improvement in mortality rates. Important differences exist between these two studies:

  • The mortality rates in the placebo groups were significantly different (> 50% in the French study vs 30% in CORTICUS).
  • The SAPS II scores were different in these two trials (55 vs 49), suggesting a greater severity of illness in the French study.
  • The criteria for enrollment were different: the French study included patients who had a systolic blood pressure lower than 90 mm Hg for more than 1 hour despite fluid administration and vasopressor use, whereas the CORTICUS trial included patients who had a systolic blood pressure lower than 90 mm Hg for more than 1 hour despite fluid administration or vasopressor use.
  • The time of enrollment was different: patients were enrolled much faster in the French study (within 8 hours) than in the CORTICUS trial (within 72 hours).

A recent meta-analysis of 17 randomized trials (including the CORTICUS study) found that, compared with those who received placebo, patients who received corticosteroids had a small reduction in the 28-day mortality rate (HR 0.84, 95% CI 0.71–1.00, P < .05).20 Of note, this meta-analysis has been criticized for possible publication bias and also for a large degree of heterogeneity in its results.21

 

 

VASOPRESSOR THERAPY IN SHOCK

Key points

  • Vasopressin use in patients with severe septic shock is not associated with an improvement in mortality rates.
  • Vasopressin should not be used as a first-line agent in patients with septic shock.
  • Norepinephrine should be considered a first-line agent in patients with shock.
  • Compared with norepinephrine, the use of dopamine in patients with shock is associated with similar mortality rates, although its use may result in a greater number of cardiac adverse events.

Background

Vasopressin gained popularity in critical care in the last 10 years because several small studies showed that adding it improves hemodynamics and results in a reduction in the doses of catecholamines in patients with refractory septic shock.22 Furthermore, the Surviving Sepsis Campaign guidelines recommended the use of vasopressin in patients who have refractory shock despite fluid resuscitation and the use of other “conventional” vasopressors.23

Despite these positive findings, it remained unknown if the use of vasopressin increases the survival rate in patients with septic shock.

The Vasopressin and Septic Shock Trial (VASST)

RUSSELL JA, WALLEY KR, SINGER J, ET AL; VASST INVESTIGATORS. VASOPRESSIN VERSUS NOREPINEPHRINE INFUSION IN PATIENTS WITH SEPTIC SHOCK. N ENGL J MED 2008; 358:877–887.

The Vasopressin and Septic Shock Trial (VASST)24 was a multicenter randomized, double-blind, controlled trial that included 778 patients with refractory septic shock. Refractory shock was defined as the lack of a response to a normal saline fluid bolus of 500 mL or the need for vasopressors (norepinephrine in doses of at least 5 μg/minute or its equivalent for 6 hours or more in the 24 hours before randomization).

Two subgroups were identified: those with severe septic shock (requiring norepinephrine in doses of 15 μg/minute or higher) and those with less-severe septic shock (needing norepinephrine in doses of 5 to 14 μg/minute). Patients with unstable coronary artery disease (acute myocardial infarction, angina) and severe congestive heart failure were excluded.
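
The severity strata can be written as a simple dose threshold, as in the sketch below; the function name and the handling of doses below 5 μg/minute are assumptions for the example.

```python
# Illustrative classification of shock severity by baseline norepinephrine
# requirement, following the VASST definitions quoted above.
def vasst_severity(norepinephrine_ug_per_min: float) -> str:
    if norepinephrine_ug_per_min >= 15:
        return "severe septic shock"
    if norepinephrine_ug_per_min >= 5:
        return "less-severe septic shock"
    return "below the trial's vasopressor threshold"

print(vasst_severity(20))  # severe septic shock
print(vasst_severity(8))   # less-severe septic shock
```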

Patients were randomized to receive an intravenous infusion of vasopressin (0.01–0.03 U/minute) or norepinephrine (5–15 μg/minute) in addition to open-label vasopressors (excluding vasopressin). The primary outcome was the all-cause mortality rate at 28 days.

Results

At 28 days, fewer patients had died in the vasopressin group than in the norepinephrine group (35.4% vs 39.3%), but the difference was not statistically significant (P = .26). The trend was the same at 90 days (mortality rate 43.9% vs 49.6%, P = .11).

Subgroup analysis showed that in patients with less-severe septic shock, those who received vasopressin had a lower mortality rate at 28 days (26.5% vs 35.7%, P = .05; relative risk 0.74; 95% CI 0.55–1.01) and at 90 days (35.8% vs 46.1%, P = .04; relative risk 0.78, 95% CI 0.61–0.99).
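
As a quick consistency check, the relative risks quoted above can be recovered directly from the mortality percentages; the two-line calculation below is illustrative only.

```python
# Recovering the reported relative risks from the quoted mortality rates.
rr_28_day = 26.5 / 35.7
rr_90_day = 35.8 / 46.1

print(f"28-day relative risk: {rr_28_day:.2f}")  # ~0.74
print(f"90-day relative risk: {rr_90_day:.2f}")  # ~0.78
```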

There were no statistically significant differences in any of the other secondary outcomes or in serious adverse events.

Comments

The study has been criticized for several reasons:

  • The mean arterial blood pressure at baseline before initiation of vasopressin was 72 mm Hg (and some argue that vasopressin was therefore not needed by the time it was started).
  • The time from screening to infusion of the study drug was very long (12 hours).
  • The observed mortality rate was lower than expected (37%).

Despite these considerations, the VASST trial showed that vasopressin is not associated with an increased number of adverse events in patients without active cardiovascular disease. The possible benefit in terms of the mortality rate in the subgroup of patients with less-severe septic shock requires further investigation.

Is dopamine equivalent to norepinephrine?

Previously, the Sepsis Occurrence in Acutely Ill Patients (SOAP) study, a multicenter, observational cohort study, found that dopamine use was associated with a higher all-cause mortality rate in the ICU compared with no dopamine.25 This finding had not been reproduced, as few well-designed studies had compared the effects of dopamine and norepinephrine.

The SOAP II study

DE BACKER D, BISTON P, DEVRIENDT J, ET AL; SOAP II INVESTIGATORS. COMPARISON OF DOPAMINE AND NOREPINEPHRINE IN THE TREATMENT OF SHOCK. N ENGL J MED 2010; 362:779–789.

The SOAP II study,26 a multicenter, randomized trial, compared dopamine vs norepinephrine as first-line vasopressor therapy. In patients with refractory shock despite use of dopamine 20 μg/kg/minute or norepinephrine 0.19 μg/kg/minute, open-label norepinephrine, epinephrine, or vasopressin was added.
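
To make the weight-based dose ceilings concrete, the arithmetic below converts them to absolute infusion rates for a hypothetical 80-kg patient; the body weight is an assumed example and does not come from the study.

```python
# Illustrative conversion of the SOAP II dose ceilings quoted above
# (dopamine 20 μg/kg/minute, norepinephrine 0.19 μg/kg/minute).
weight_kg = 80  # hypothetical patient weight

dopamine_ug_per_min = 20 * weight_kg   # 1,600 μg/min
norepi_ug_per_min = 0.19 * weight_kg   # 15.2 μg/min

print(f"Dopamine ceiling:       {dopamine_ug_per_min:.0f} μg/min "
      f"(about {dopamine_ug_per_min * 60 / 1000:.0f} mg/h)")
print(f"Norepinephrine ceiling: {norepi_ug_per_min:.1f} μg/min "
      f"(about {norepi_ug_per_min * 60 / 1000:.2f} mg/h)")
```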

The primary outcome was the mortality rate at 28 days after randomization; secondary end points included the number of days without need for organ support and the occurrence of adverse events.

Results

A total of 1,679 patients were included; 858 were assigned to dopamine and 821 to norepinephrine. Most (1,044, 62%) of the patients had a diagnosis of septic shock.

No significant difference in mortality rates was noted at 28 days: 52.5% with dopamine vs 48.5% with norepinephrine (P = .10).

However, there were more arrhythmias in the patients treated with dopamine: 207 events (24.1%) vs 102 events (12.4%) (P < .001). The number of other adverse events such as renal failure, myocardial infarction, arterial occlusion, or skin necrosis was not different between the groups.

A subgroup analysis showed that dopamine was associated with more deaths at 28 days in patients with cardiogenic shock (P = .03) but not in patients with septic shock (P = .19) or with hypovolemic shock (P = .84).

Comments

The study was criticized because the patients may not have received adequate fluid resuscitation (the study considered adequate resuscitation to be equivalent to 1 L of crystalloids or 500 mL of colloids), as different degrees of volume depletion among patients make direct comparisons of vasopressor effects difficult.

Additionally, the study defined dopamine 20 μg/kg/minute as being equipotent with norepinephrine 0.19 μg/kg/minute. Comparisons of potency between drugs are difficult to establish, as there are no available data.

Nevertheless, this study further confirms previous findings suggesting that norepinephrine is not associated with more end-organ damage (such as renal failure or skin ischemia), and shows that dopamine may increase the number of adverse events, particularly in patients with cardiac disease.


References
  1. Shah MR, Hasselblad V, Stevenson LW, et al. Impact of the pulmonary artery catheter in critically ill patients: meta-analysis of randomized clinical trials. JAMA 2005; 294:1664–1670.
  2. National Heart, Lung, and Blood Institute Acute Respiratory Distress Syndrome (ARDS) Clinical Trials Network; Wiedemann HP, Wheeler AP, Bernard GR, et al. Comparison of two fluid-management strategies in acute lung injury. N Engl J Med 2006; 354:2564–2575.
  3. National Heart, Lung, and Blood Institute Acute Respiratory Distress Syndrome (ARDS) Clinical Trials Network; Wheeler AP, Bernard GR, Thompson BT, et al. Pulmonary-artery versus central venous catheter to guide treatment of acute lung injury. N Engl J Med 2006; 354:2213–2224.
  4. Bernard GR, Luce JM, Sprung CL, et al. High-dose corticosteroids in patients with the adult respiratory distress syndrome. N Engl J Med 1987; 317:1565–1570.
  5. Meduri GU, Headley AS, Golden E, et al. Effect of prolonged methylprednisolone therapy in unresolving acute respiratory distress syndrome: a randomized controlled trial. JAMA 1998; 280:159–165.
  6. Steinberg KP, Hudson LD, Goodman RB, et al; National Heart, Lung, and Blood Institute Acute Respiratory Distress Syndrome (ARDS) Clinical Trials Network. Efficacy and safety of corticosteroids for persistent acute respiratory distress syndrome. N Engl J Med 2006; 354:1671–1684.
  7. Meduri GU, Golden E, Freire AX, et al. Methylprednisolone infusion in early severe ARDS: results of a randomized controlled trial. Chest 2007; 131:954–963.
  8. Meduri GU, Golden E, Freire AX, et al. Methylprednisolone infusion in early severe ARDS: results of a randomized controlled trial. 2007. Chest 2009; 136(suppl 5):e30.
  9. Annane D, Sébille V, Bellissant E; Ger-Inf-05 Study Group. Effect of low doses of corticosteroids in septic shock patients with or without early acute respiratory distress syndrome. Crit Care Med 2006; 34:22–30.
  10. Kollef MH, Levy NT, Ahrens TS, Schaiff R, Prentice D, Sherman G. The use of continuous i.v. sedation is associated with prolongation of mechanical ventilation. Chest 1998; 114:541–548.
  11. Carson SS, Kress JP, Rodgers JE, et al. A randomized trial of intermittent lorazepam versus propofol with daily interruption in mechanically ventilated patients. Crit Care Med 2006; 34:1326–1332.
  12. Brook AD, Ahrens TS, Schaiff R, et al. Effect of a nursing-implemented sedation protocol on the duration of mechanical ventilation. Crit Care Med 1999; 27:2609–2615.
  13. Kress JP, Pohlman AS, O’Connor MF, Hall JB. Daily interruption of sedative infusions in critically ill patients undergoing mechanical ventilation. N Engl J Med 2000; 342:1471–1477.
  14. Girard TD, Kress JP, Fuchs BD, et al. Efficacy and safety of a paired sedation and ventilator weaning protocol for mechanically ventilated patients in intensive care (Awakening and Breathing Controlled trial): a randomised controlled trial. Lancet 2008; 371:126–134.
  15. van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in the critically ill patients. N Engl J Med 2001; 345:1359–1367.
  16. NICE-SUGAR Study Investigators; Finfer S, Chittock DR, Su SY, et al. Intensive versus conventional glucose control in critically ill patients. N Engl J Med 2009; 360:1283–1297.
  17. Brunkhorst FM, Engel C, Bloos F, et al; German Competence Network Sepsis (SepNet). Intensive insulin therapy and pentastarch resuscitation in severe sepsis. N Engl J Med 2008; 358:125–139.
  18. Annane D, Sébille V, Charpentier C, et al. Effect of treatment with low doses of hydrocortisone and fludrocortisone on mortality in patients with septic shock. JAMA 2002; 288:862–871.
  19. Sprung CL, Annane D, Keh D, et al; CORTICUS Study Group. Hydrocortisone therapy for patients with septic shock. N Engl J Med 2008; 358:111–124.
  20. Annane D, Bellissant E, Bollaert PE, et al. Corticosteroids in the treatment of severe sepsis and septic shock in adults: a systematic review. JAMA 2009; 301:2362–2375.
  21. Minneci PC, Deans KJ, Natanson C. Corticosteroid therapy for severe sepsis and septic shock [letter]. JAMA 2009; 302:1643–1644.
  22. Kampmeier TG, Rehberg S, Westphal M, Lange M. Vasopressin in sepsis and septic shock. Minerva Anestesiol 2010; 76:844–850.
  23. Dellinger RP, Levy MM, Carlet JM, et al; International Surviving Sepsis Campaign Guidelines Committee. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock: 2008. Crit Care Med 2008; 36:296–327.
  24. Russell JA, Walley KR, Singer J, et al; VASST Investigators. Vasopressin versus norepinephrine infusion in patients with septic shock. N Engl J Med 2008; 358:877–887.
  25. Sakr Y, Reinhart K, Vincent JL, et al. Does dopamine administration in shock influence outcome? Results of the Sepsis Occurrence in Acutely Ill Patients (SOAP) Study. Crit Care Med 2006; 34:589–597.
  26. De Backer D, Biston P, Devriendt J, et al; SOAP II Investigators. Comparison of dopamine and norepinephrine in the treatment of shock. N Engl J Med 2010; 362:779–789.
Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
665-674
Display Headline
Update in intensive care medicine: Studies that challenged our practice in the last 5 years
Inside the Article

KEY POINTS

  • In patients with acute respiratory distress syndrome (ARDS), fluid restriction is associated with better outcomes. A pulmonary arterial catheter is not indicated in the routine management of ARDS. Corticosteroid use can result in improved oxygenation but may be associated with worse outcomes if treatment is started late, ie, more than 14 days after the onset of the disease.
  • Intensive insulin therapy is associated with hypoglycemia and may be associated with complications in medical patients.
  • In patients with septic shock, corticosteroid therapy is associated with faster shock reversal, but its effects on mortality rates remain controversial. Vasopressin improves hemodynamic variables but is not associated with a lower mortality rate.
  • Daily interruption of sedation and early awakening of mechanically ventilated patients result in better outcomes.
  • Compared with norepinephrine, dopamine is associated with more cardiac adverse events in patients with shock.

Dabigatran: Will it change clinical practice?

Article Type
Changed
Fri, 11/10/2017 - 08:17
Display Headline
Dabigatran: Will it change clinical practice?

Dabigatran etexilate (Pradaxa) is a new oral anticoagulant that has distinct advantages over warfarin (Coumadin) in terms of its ease of administration, efficacy, and safety.

In the Randomized Evaluation of Long-Term Anticoagulation Therapy (RE-LY trial),1 in patients with nonvalvular atrial fibrillation, dabigatran 110 mg twice a day was found to be as good as warfarin in preventing systemic embolization and stroke (the primary outcome of the study), and at 150 mg twice a day it was superior.1 It has also shown efficacy in treating acute deep vein thrombosis and pulmonary embolism and in preventing these complications in orthopedic surgical patients.2–4

Dabigatran has been approved in 75 countries. It carries the trade name Pradaxa in Europe and the United States and Pradax in Canada. In October 2010, the US Food and Drug Administration (FDA) Cardiovascular and Renal Drugs Advisory Committee endorsed two twice-daily doses (75 mg and 150 mg) of dabigatran for the prevention of systemic embolization and stroke in patients with nonvalvular atrial fibrillation.

However, dabigatran is relatively expensive, and its current high cost might be a barrier to its wider use.

MANY PATIENTS NEED ANTICOAGULATION

Anticoagulation plays a vital role in the primary and secondary prevention of stroke in patients with atrial fibrillation and of pulmonary embolism in patients with venous thromboembolism. It is also used during cardiothoracic and vascular surgery, endovascular procedures, and dialysis and in patients with mechanical heart valves and hypercoagulable conditions.

Atrial fibrillation affects 3.03 million people in the United States (2005 figures), and this number is predicted to be as high as 7.56 million by 2050.5 More than 10% of people over the age of 80 years have it, and the lifetime risk of developing it is approximately 25%.6,7 Its most serious complications are ischemic stroke (the risk of which increases with age) and systemic embolization.5,8

Until the recent introduction of dabigatran, the only oral anticoagulant available in the United States for treating patients with atrial fibrillation was warfarin. Although warfarin has a number of disadvantages (see below), it is actually very effective for preventing ischemic stroke, reducing the incidence by as much as 65%.9,10

Venous thromboembolism is the third most common cardiovascular disorder after myocardial infarction and stroke.11 Although its exact incidence is unknown, nearly 1 million cases of it (incident or recurrent, fatal and nonfatal events) occur in the United States each year.12 Many patients with venous thromboembolism need oral anticoagulation long-term, and currently warfarin remains the only option for them as well.

NEEDED: A BETTER ANTICOAGULANT

Warfarin has been the most commonly prescribed oral anticoagulant in the United States for more than 60 years. As of 2004, more than 30 million outpatient prescriptions for it were filled annually in this country alone.13 However, warfarin has several important limitations.

Warfarin has a narrow therapeutic index. Patients taking it require monitoring of their international normalized ratio (INR) and frequent dose adjustments, and this is time-consuming and inconvenient. The target INR for patients with venous thromboembolism and atrial fibrillation is 2.0 to 3.0, whereas patients with a mechanical heart valve need a higher INR (2.5 to 3.5). If the INR is below these ranges, warfarin is less effective, with a risk of new thrombosis. On the other hand, if the INR is too high, there is a risk of bleeding.14 In fact, the most important side effect of warfarin is the risk of major and minor bleeding.13 However, even in well-designed clinical trials in which patients are closely managed, only 55% to 60% of patients regularly achieve their therapeutic target INR.1,2,14,15

Warfarin also interacts with many drugs and with some foods. Compliance is difficult. It has a slow onset of action. Genetic variations require dose adjustments. When switching from a parenteral anticoagulant, overlapping is required. Skin necrosis is a possible side effect. And warfarin is teratogenic.

Despite these limitations, the American College of Chest Physicians endorses warfarin to prevent or treat venous thromboembolism, and to prevent stroke in patients with atrial fibrillation.16

Recently, a number of new oral and parenteral anticoagulants have been developed (Table 1) with the aim of overcoming some of the drawbacks of warfarin and the other currently available agents, and to improve the prevention and treatment of thromboembolic disorders.

DABIGATRAN, A THROMBIN INHIBITOR

Dabigatran, developed by Boehringer Ingelheim, is a potent, competitive, and reversible inhibitor of both free and clot-bound thrombin, inhibiting both thrombin activity and generation (Table 2).17,18

A prodrug, dabigatran is rapidly absorbed and converted to its active form. Its plasma concentration reaches a peak 1.5 to 3 hours after an oral dose, and it has an elimination half-life of 12 to 14 hours. About 80% of its excretion is by the kidneys and the remaining 20% is through bile.

Dabigatran is not metabolized by cytochrome P450 isoenzymes, and therefore it has few major interactions with other drugs. An exception is rifampin, a P-glycoprotein inducer that blocks dabigatran’s absorption in the gut, so this combination should be avoided. Another is quinidine, a strong P-glycoprotein inhibitor that is contraindicated for use with dabigatran. Also, amiodarone (Cordarone), another P-glycoprotein inhibitor, increases blood levels of dabigatran, and therefore a lower dose of dabigatran is recommended if these drugs are given together.18–20

 

 

DOES DABIGATRAN NEED MONITORING? CAN IT EVEN BE MONITORED?

Dabigatran has a predictable pharmacodynamic effect, and current data indicate it does not need regular monitoring.18–20 However, one may need to measure the drug’s activity in certain situations, such as suspected overdose, bleeding, the need for emergency surgery, impaired renal function, pregnancy, or obesity, and in children.20

Dabigatran has little effect on the prothrombin time or the INR, even at therapeutic concentrations.19 Further, its effect on the activated partial thromboplastin time (aPTT) is neither linear nor dose-dependent, and the aPTT reaches a plateau and becomes less sensitive at very high concentrations. Therefore, the aPTT does not appear to be an appropriate test to monitor dabigatran’s therapeutic anticoagulant effect, although it does provide a qualitative indication of anticoagulant activity.18,19

The thrombin time is a very sensitive method for determining if dabigatran is present, but the test lacks standardization; the ecarin clotting time provides better evidence of the dose but is not readily available at most institutions.18,19,21

EVALUATED IN CLINICAL TRIALS

Dabigatran has been evaluated in a number of trials for its ability to prevent ischemic stroke and systemic embolization in patients with atrial fibrillation and to prevent and treat venous thromboembolism in surgical orthopedic patients, and in patients with acute coronary syndrome (Table 3).1–4,22–25

DABIGATRAN IS EXPENSIVE BUT MAY BE COST-EFFECTIVE

The estimated price of dabigatran 150 mg twice a day in the United States is about $6.75 to $8.00 per day.26,27

Warfarin, in contrast, costs as little as $50 per year.28 However, this low price does not include the cost of monitoring the INR (office visits and laboratory testing), and these combined expenses are much higher than the price of the warfarin itself.29 In addition, warfarin requires time-consuming management when bridging to a parenteral anticoagulant (for reversal of its anticoagulant action) before routine health maintenance procedures such as dental work and colonoscopy, as well as before interventional procedures and surgery. Any bleeding complication will also add to its cost and will be associated with a decrease in the patient’s perceived health and quality of life, but this is true for both drugs.30
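
To put the drug-acquisition gap in perspective, a simple back-of-the-envelope annualization of the prices quoted above (our own arithmetic, not a published estimate, and ignoring monitoring and bleeding-related costs) gives

$$\text{annual dabigatran cost} \approx \$6.75\text{–}\$8.00/\text{day} \times 365 \approx \$2{,}460\text{–}\$2{,}920,$$

compared with roughly $50 per year for warfarin tablets alone, before monitoring costs are added.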

In today’s health care environment, controlling costs is a universal priority, but it may be unfair to compare the cost of dabigatran with that of warfarin alone. The expense and morbidity associated with stroke and intracranial bleeding are high, and if patients on dabigatran have fewer strokes (as seen in the RE-LY trial with dabigatran 150 mg twice a day) and no added expense of monitoring, then dabigatran may be cost-effective.

Freeman et al31 analyzed the cost-effectiveness of dabigatran, using an estimated cost of $13.70 per day and data from the RE-LY trial. They concluded that dabigatran may be a cost-effective alternative to warfarin in preventing ischemic stroke in patients considered at higher risk for ischemic stroke or intracranial hemorrhage, ie, those with a CHADS2 score of 1 or higher or equivalent. (The CHADS2 score is calculated as 1 point each for congestive heart failure, hypertension, age 75 or older, and diabetes mellitus; 2 points for prior stroke or transient ischemic attack.)
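
For readers who prefer to see the scoring rule written out explicitly, the short Python sketch below implements the CHADS2 definition exactly as stated in the preceding paragraph; the function name and argument names are our own illustrative choices, not part of any validated software.

```python
def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2: 1 point each for congestive heart failure, hypertension,
    age 75 or older, and diabetes; 2 points for prior stroke or TIA."""
    score = 0
    if chf:
        score += 1
    if hypertension:
        score += 1
    if age >= 75:
        score += 1
    if diabetes:
        score += 1
    if prior_stroke_or_tia:
        score += 2
    return score

# Example: a 78-year-old with hypertension and diabetes but no prior stroke scores 3.
print(chads2_score(chf=False, hypertension=True, age=78,
                   diabetes=True, prior_stroke_or_tia=False))
```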

As more new-generation oral anticoagulants become available (see below), the price of dabigatran will undoubtedly decrease. Until then, warfarin will remain a cost-effective and cost-saving drug that cannot yet be considered obsolete.

WHO SHOULD RECEIVE DABIGATRAN?

The ideal patient for dabigatran treatment is not yet defined. The decision to convert a patient’s treatment from warfarin to dabigatran will likely depend on several factors, including the patient’s response to warfarin and the physician’s comfort with this new drug.

Many patients do extremely well with warfarin, requiring infrequent monitoring to maintain a therapeutic INR and having no bleeding complications. For them, it would be more practical to continue warfarin. Another reason to stay with warfarin would be if twice-daily dosing poses a problem.

Dabigatran would be a reasonable choice for a patient whose INR is erratic, who requires more frequent monitoring, for whom cost is not an issue, and for whom there is concern about dietary or drug interactions.

Another consideration is whether the patient has access to a health care facility for warfarin monitoring: this is difficult for those who cannot drive, who depend on others for transportation, and who live in rural areas.

Additionally, dabigatran may be a cost-effective alternative to warfarin for a patient with a high CHADS2 score who is considered at a higher risk for stroke.31

In all cases, the option should be considered only after an open discussion with the patient about the risks and benefits of this new drug.

WHO SHOULD NOT RECEIVE IT?

Dabigatran is a twice-daily drug with a short half-life. No patient with a history of poor compliance will be a good candidate for dabigatran. Since there are no practical laboratory tests for monitoring compliance, one will have to reinforce at every visit the importance of taking this medication according to instructions.

Patients with underlying kidney disease will need close monitoring of their creatinine clearance, with dose adjustment if renal function deteriorates.

Additionally, one should use caution when prescribing dabigatran to obese patients, pregnant women, or children until more is known about its use in these populations.

ADVANTAGES AND DISADVANTAGES OF DABIGATRAN

In addition to its pharmacologic advantages, dabigatran demonstrated two other major advantages over warfarin in the RE-LY trial in patients with atrial fibrillation (Table 4). First, the rate of intracranial bleeding, a major devastating complication of warfarin, was 60% lower with dabigatran 150 mg twice a day than with warfarin—and lower still with dabigatran 110 mg twice a day.1 Second, the rate of stroke or systemic embolism was 34% lower in the group that got dabigatran 150 mg twice a day than in the group that got warfarin.

A reason may be that patients with atrial fibrillation and poor INR control have higher rates of death, stroke, myocardial infarction, and major bleeding.14 In most clinical trials, only 55% to 60% of patients achieve a therapeutic INR on warfarin, leaving them at risk of thrombosis or, conversely, bleeding.1,2,15,32 Dabigatran has predictable pharmacokinetics, and its twice-daily dosing allows for less variability in its anticoagulant effect, making it more consistently therapeutic with less potential for bleeding or thrombosis.1

The Canadian Cardiovascular Society included dabigatran in its 2010 guidelines on atrial fibrillation, recommending it or warfarin.33 The American College of Cardiology, the American Heart Association, and the Heart Rhythm Society now give dabigatran a class I B recommendation (benefit greater than risk, but limited populations studied) in secondary stroke prevention.34

On the other hand, major concerns are the lack of an antidote for dabigatran and the lack of experience in treating bleeding complications. Since dabigatran is not monitored, physicians may be uncertain whether they are overdosing or undertreating. As experience accumulates, we will learn how to treat bleeding complications. Until then, it will be important to anticipate this problem and to develop an algorithm, based on the best available evidence, for managing this complication.

Although the overall rates of bleeding in the RE-LY trial were lower with dabigatran than with warfarin, there were more gastrointestinal bleeding events with the 150-mg dose of dabigatran, which was not readily explained.

Further, the rate of dyspepsia was almost twice as high with dabigatran as with warfarin, regardless of the dabigatran dose. There were also more dropouts in the second year of follow-up in the dabigatran groups, with gastrointestinal intolerance being one of the major reasons. Therefore, dyspepsia may cause intolerance and noncompliance.1

Dabigatran must be taken twice a day and has a relatively short half-life. For a noncompliant patient, missing one or two doses will cause loss of its anticoagulant effect, leaving the patient susceptible to thrombosis. In comparison, warfarin has a longer half-life and is taken once a day, so missing a dose is less likely to result in a similar loss of anticoagulant effect.
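
As a rough illustration of why a missed dose matters (our own arithmetic, assuming simple first-order elimination with the 12-to-14-hour half-life quoted earlier), the fraction of drug remaining t hours after the last dose is

$$\left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad \text{so at } t = 24\text{ h (one missed twice-daily dose): } \left(\tfrac{1}{2}\right)^{24/12} = 0.25 \text{ to } \left(\tfrac{1}{2}\right)^{24/14} \approx 0.30.$$

In other words, only about one-quarter to one-third of the drug remains a full dosing interval after a missed dose.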

 

 

SPECIAL CONDITIONS

Switching from other anticoagulants to dabigatran

When making the transition from a subcutaneously administered anticoagulant, ie, a low-molecular-weight heparin or the anti-Xa inhibitor fondaparinux (Arixtra), dabigatran should be started 0 to 2 hours before the next subcutaneous dose of the parenteral anticoagulant would have been given.21,35

When switching from unfractionated heparin given by continuous intravenous infusion, the first dose of dabigatran should be given at the time the infusion is stopped.

When switching from warfarin, dabigatran should be started once the patient’s INR is less than 2.0.
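
The three switching rules above can be condensed into a small decision helper. The Python sketch below simply restates the timing rules given in this section (and in the prescribing information they cite); the function and parameter names are hypothetical, and it is not intended as clinical decision software.

```python
from typing import Optional

def dabigatran_start_timing(prior_agent: str, inr: Optional[float] = None) -> str:
    """Return when to give the first dabigatran dose, per the rules in the text."""
    if prior_agent in ("low-molecular-weight heparin", "fondaparinux"):
        return "0 to 2 hours before the next scheduled subcutaneous dose"
    if prior_agent == "intravenous unfractionated heparin":
        return "at the time the infusion is stopped"
    if prior_agent == "warfarin":
        if inr is None:
            return "check the INR; start once it is below 2.0"
        return "start now" if inr < 2.0 else "wait until the INR falls below 2.0"
    raise ValueError(f"no switching rule given in the text for {prior_agent!r}")

print(dabigatran_start_timing("warfarin", inr=1.8))  # -> "start now"
```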

Switching from dabigatran to a parenteral anticoagulant

When switching from dabigatran back to a parenteral anticoagulant, allow 12 to 24 hours after the last dabigatran dose before starting the parenteral agent.21,35

Elective surgery or invasive procedures

The manufacturer recommends stopping dabigatran 1 to 2 days before elective surgery for patients who have normal renal function and a low risk of bleeding, or 3 to 5 days before surgery for patients who have a creatinine clearance of 50 mL/min or less. Before major surgery or placement of a spinal or epidural catheter, the manufacturer recommends that dabigatran be held even longer.35
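
These preoperative recommendations amount to a simple lookup. The Python sketch below encodes only what the paragraph states (the argument names are ours); where the text gives no specific interval, the function says so rather than inventing one.

```python
def preoperative_hold(creatinine_clearance_ml_min: float,
                      low_bleeding_risk: bool,
                      major_surgery_or_neuraxial: bool) -> str:
    """Suggested dabigatran hold before elective surgery, per the manufacturer
    guidance summarized in the text."""
    if major_surgery_or_neuraxial:
        return "hold longer than the standard interval (see prescribing information)"
    if creatinine_clearance_ml_min <= 50:
        return "hold 3 to 5 days before surgery"
    if low_bleeding_risk:
        return "hold 1 to 2 days before surgery"
    # Normal renal function but higher bleeding risk: no exact figure is given in the text.
    return "individualize; consider holding longer than 1 to 2 days"

print(preoperative_hold(80, low_bleeding_risk=True, major_surgery_or_neuraxial=False))
```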

If emergency surgery is needed

If emergency surgery is needed, the clinician must use his or her judgment as to the risks of bleeding vs those of postponing the surgery.21,35

Overdose or bleeding

No antidote for dabigatran is currently available. It has a short half-life (12–14 hours), and the treatment for overdose or bleeding is to discontinue it immediately, maintain adequate diuresis, and transfuse fresh-frozen plasma or red blood cells as indicated.

The role of activated charcoal given orally to reduce absorption is under evaluation, but the charcoal must be given within 1 to 2 hours after the overdose is taken.21

Dabigatran does not bind very much to plasma proteins and hence is dialyzable—an approach that may be necessary in cases of persistent or life-threatening bleeding.

Recombinant activated factor VII or prothrombin complex concentrates may be additional options in cases of severe bleeding.18,21

TOPICS OF FUTURE RESEARCH

A limitation of the dabigatran trials was that they did not enroll patients who had renal or liver impairment, cancer, or other comorbidities; pregnant women; or children. Other topics for future research include its use in patients weighing less than 48 kg or more than 110 kg, its efficacy in patients with thrombophilia or with mechanical heart valves, its performance in long-term follow-up, and the use of thrombolytics in patients who have an acute stroke while taking dabigatran.

WILL DABIGATRAN CHANGE CLINICAL PRACTICE?

Despite some of the challenges listed above, we believe that dabigatran is likely to change medical practice in patients requiring anticoagulation.

Dabigatran’s biggest use will most likely be in patients with atrial fibrillation, mainly because this is the largest group of people receiving anticoagulation. In addition, the incidence of atrial fibrillation rises with age, the US population is living longer, and patients generally require life-long anticoagulation once this condition develops.

Dabigatran may be approved for additional indications in the near future. It has already shown efficacy in primary and secondary prevention of venous thromboembolism. Other important areas to be studied include its use in patients with mechanical heart valves and thrombophilia.

Whether dabigatran will be a worthy substitute for the parenteral anticoagulants (heparin, low-molecular-weight heparins, or factor Xa inhibitors) is not yet known, but it will have an enormous impact on anticoagulation management if proved efficacious.

If dabigatran becomes a major substitute for warfarin, it will affect the anticoagulation clinics, with their well-trained staff, that are currently monitoring millions of patients in the United States. These clinics would no longer be needed, and laboratory and technical costs could be saved. A downside is that patients on dabigatran will not be as closely supervised and reminded to take their medication as patients on warfarin are now at these clinics. Instead, they will likely be supervised by their own physician (or assistants), who will need to become familiar with this anticoagulant. This may affect compliance with dabigatran.

OTHER NEW ORAL ANTICOAGULANTS ARE ON THE WAY

Other oral anticoagulants, including rivaroxaban (Xarelto) and apixaban (Eliquis), have been under study and show promise in preventing both thrombotic stroke and venous thromboembolism. They will likely compete with dabigatran once they are approved.

Rivaroxaban, an oral direct factor Xa inhibitor, is being investigated for stroke prevention in patients with atrial fibrillation. It has also been shown to be noninferior to (and less expensive than) enoxaparin in treating and preventing venous thromboembolism in patients undergoing hip or knee arthroplasty.32,36,37 Rivaroxaban has recently been approved by the FDA for this indication.

Apixaban, another direct factor Xa inhibitor, is also being studied for the prevention of stroke and systemic embolism in patients with nonvalvular atrial fibrillation. To date, there are no head-to-head trials comparing dabigatran with either of these new oral anticoagulants.

References
  1. Connolly SJ, Ezekowitz MD, Yusuf S, et al. Dabigatran versus warfarin in patients with atrial fibrillation. N Engl J Med 2009; 361:1139–1151.
  2. Schulman S, Kearon C, Kakkar AK, et al. Dabigatran versus warfarin in the treatment of acute venous thromboembolism. N Engl J Med 2009; 361:2342–2352.
  3. RE-MOBILIZE Writing Committee; Ginsberg JS, Davidson BL, Comp PC, et al. Oral thrombin inhibitor dabigatran etexilate vs North American enoxaparin regimen for prevention of venous thromboembolism after knee arthroplasty surgery. J Arthroplasty 2009; 24:1–9.
  4. Wolowacz SE, Roskell NS, Plumb JM, Caprini JA, Eriksson BI. Efficacy and safety of dabigatran etexilate for the prevention of venous thromboembolism following total hip or knee arthroplasty. A meta-analysis. Thromb Haemost 2009; 101:77–85.
  5. Naccarelli GV, Varker H, Lin J, Schulman KL. Increasing prevalence of atrial fibrillation and flutter in the United States. Am J Cardiol 2009; 104:1534–1539.
  6. Krahn AD, Manfreda J, Tate RB, Mathewson FA, Cuddy TE. The natural history of atrial fibrillation: incidence, risk factors, and prognosis in the Manitoba follow-up study. Am J Med 1995; 98:476–484.
  7. Lloyd-Jones DM, Wang TJ, Leip EP, et al. Lifetime risk for development of atrial fibrillation: the Framingham Heart Study. Circulation 2004; 110:1042–1046.
  8. Wolf PA, Abbott RD, Kannel WB. Atrial fibrillation as an independent risk factor for stroke: the Framingham study. Stroke 1991; 22:983–988.
  9. Go AS, Hylek EM, Chang Y, et al. Anticoagulation therapy for stroke prevention in atrial fibrillation: how well do randomized trials translate into clinical practice? JAMA 2003; 290:2685–2692.
  10. Singer DE, Chang Y, Fang MC, et al. The net clinical benefit of warfarin anticoagulation in atrial fibrillation. Ann Intern Med 2009; 151:297–305.
  11. Goldhaber SZ. Pulmonary embolism thrombolysis: a clarion call for international collaboration. J Am Coll Cardiol 1992; 19:246–247.
  12. Heit JA. The epidemiology of venous thromboembolism in the community. Arterioscler Thromb Vasc Biol 2008; 28:370–372.
  13. Wysowski DK, Nourjah P, Swartz L. Bleeding complications with warfarin use: a prevalent adverse effect resulting in regulatory action. Arch Intern Med 2007; 167:1414–1419.
  14. White HD, Gruber M, Feyzi J, et al. Comparison of outcomes among patients randomized to warfarin therapy according to anticoagulant control: results from SPORTIF III and V. Arch Intern Med 2007; 167:239–245.
  15. ACTIVE Writing Group of the ACTIVE Investigators; Connolly S, Pogue J, Hart R, et al. Clopidogrel plus aspirin versus oral anticoagulation for atrial fibrillation in the Atrial Fibrillation Clopidogrel Trial with Irbesartan for prevention of Vascular Events (ACTIVE W): a randomised controlled trial. Lancet 2006; 367:1903–1912.
  16. Geerts WH, Bergqvist D, Pineo GF, et al. Prevention of venous thromboembolism: American College of Chest Physicians evidence-based clinical practice guidelines (8th edition). Chest 2008; 133(6 suppl):381S–453S.
  17. Mungall D. BIBR-1048 Boehringer Ingelheim. Curr Opin Investig Drugs 2002; 3:905–907.
  18. Stangier J, Clemens A. Pharmacology, pharmacokinetics, and pharmacodynamics of dabigatran etexilate, an oral direct thrombin inhibitor. Clin Appl Thromb Hemost 2009; 15(suppl 1):9S–16S.
  19. Eisert WG, Hauel N, Stangier J, Wienen W, Clemens A, van Ryn J. Dabigatran: an oral novel potent reversible nonpeptide inhibitor of thrombin. Arterioscler Thromb Vasc Biol 2010; 30:1885–1889.
  20. Bounameaux H, Reber G. New oral antithrombotics: a need for laboratory monitoring. Against. J Thromb Haemost 2010; 8:627–630.
  21. van Ryn J, Stangier J, Haertter S, et al. Dabigatran etexilate—a novel, reversible, oral direct thrombin inhibitor: interpretation of coagulation assays and reversal of anticoagulant activity. Thromb Haemost 2010; 103:1116–1127.
  22. Eriksson BI, Dahl OE, Buller HR, et al. A new oral direct thrombin inhibitor, dabigatran etexilate, compared with enoxaparin for prevention of thromboembolic events following total hip or knee replacement: the BISTRO II randomized trial. J Thromb Haemost 2005; 3:103–111.
  23. Eriksson BI, Dahl OE, Rosencher N, et al. Dabigatran etexilate versus enoxaparin for prevention of venous thromboembolism after total hip replacement: a randomised, double-blind, non-inferiority trial. Lancet 2007; 370:949–956.
  24. Eriksson BI, Dahl OE, Rosencher N, et al. Oral dabigatran etexilate vs. subcutaneous enoxaparin for the prevention of venous thromboembolism after total knee replacement: the RE-MODEL randomized trial. J Thromb Haemost 2007; 5:2178–2185.
  25. Ezekowitz MD, Reilly PA, Nehmiz G, et al. Dabigatran with or without concomitant aspirin compared with warfarin alone in patients with nonvalvular atrial fibrillation (PETRO study). Am J Cardiol 2007; 100:1419–1426.
  26. Burger L. Bayer rival Boehringer prices blood pill at $6.75. Reuters, October 26, 2010. Available at http://www.reuters.com. Accessed September 12, 2011.
  27. Drugstore.com. Pradaxa. http://www.drugstore.com/pradaxa/bottle-60-150mg-capsules/qxn00597013554. Accessed September 10, 2011.
  28. Wal-Mart Stores, Inc. Retail Prescription Program Drug List. http://i.walmartimages.com/i/if/hmp/fusion/customer_list.pdf. Accessed September 10, 2011.
  29. Teachey DT. Dabigatran versus warfarin for venous thromboembolism (letter). N Engl J Med 2010; 362:1050; author reply 1050–1051.
  30. Lancaster TR, Singer DE, Sheehan MA, et al. The impact of long-term warfarin therapy on quality of life. Evidence from a randomized trial. Boston Area Anticoagulation Trial for Atrial Fibrillation Investigators. Arch Intern Med 1991; 151:1944–1949.
  31. Freeman JV, Zhu RP, Owens DK, et al. Cost-effectiveness of dabigatran compared with warfarin for stroke prevention in atrial fibrillation. Ann Intern Med 2011; 154:1–11.
  32. EINSTEIN Investigators; Bauersachs R, Berkowitz SD, Brenner B, et al. Oral rivaroxaban for symptomatic venous thromboembolism. N Engl J Med 2010; 363:2499–2510.
  33. Cairns JA, Connolly S, McMurtry S, Stephenson M, Talajic M; CCS Atrial Fibrillation Guidelines Committee. Canadian Cardiovascular Society atrial fibrillation guidelines 2010: prevention of stroke and systemic embolization in atrial fibrillation and flutter. Can J Cardiol 2011; 27:74–90.
  34. Wann LS, Curtis AB, January CT, et al. 2011 ACCF/AHA/HRS focused update on the management of patients with atrial fibrillation (update on dabigatran): a Report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation 2011; 123:1144–1150.
  35. Boehringer Ingelheim. Pradaxa prescribing information. http://www.pradaxa.com. Accessed September 8, 2011.
  36. Huisman MV, Quinlan DJ, Dahl OE, Schulman S. Enoxaparin versus dabigatran or rivaroxaban for thromboprophylaxis after hip or knee arthroplasty: results of separate pooled analyses of phase III multicenter randomized trials. Circ Cardiovasc Qual Outcomes 2010; 3:652–660.
  37. McCullagh L, Tilson L, Walsh C, Barry M. A cost-effectiveness model comparing rivaroxaban and dabigatran etexilate with enoxaparin sodium as thromboprophylaxis after total hip and total knee replacement in the Irish healthcare setting. Pharmacoeconomics 2009; 27:829–846.
Author and Disclosure Information

Siddharth A. Wartak, MD
Section of Vascular Medicine, Department of Cardiovascular Medicine, Cleveland Clinic

John R. Bartholomew, MD, FACC
Professor of Medicine, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, OH; Head, Section of Vascular Medicine, Department of Cardiovascular Medicine, Heart and Vascular Institute, Cleveland Clinic

Address: John R. Bartholomew, MD, FACC, Department of Cardiovascular Medicine, J3-5, Cleveland Clinic, 9500 Euclid Avenue, Cleveland, OH 44195; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
657-664

Dabigatran has little effect on the prothrombin time or the INR, even at therapeutic concentrations.19 Further, its effect on the activated partial thromboplastin time (aPTT) is neither linear nor dose-dependent, and the aPTT reaches a plateau and becomes less sensitive at very high concentrations. Therefore, the aPTT does not appear to be an appropriate test to monitor dabigatran’s therapeutic anticoagulant effect, although it does provide a qualitative indication of anticoagulant activity.18,19

The thrombin time is a very sensitive method for determining if dabigatran is present, but the test lacks standardization; the ecarin clotting time provides better evidence of the dose but is not readily available at most institutions.18,19,21

EVALUATED IN CLINICAL TRIALS

Dabigatran has been evaluated in a number of trials for its ability to prevent ischemic stroke and systemic embolization in patients with atrial fibrillation and to prevent and treat venous thromboembolism in surgical orthopedic patients, and in patients with acute coronary syndrome (Table 3).1–4,22–25

DABIGATRAN IS EXPENSIVE BUT MAY BE COST-EFFECTIVE

The estimated price of dabigatran 150 mg twice a day in the United States is about $6.75 to $8.00 per day.26,27

Warfarin, in contrast, costs as little as $50 per year.28 However, this low price does not include the cost of INR monitoring (office visits and laboratory testing), and these combined expenses far exceed the price of the warfarin itself.29 In addition, warfarin requires time-consuming management when its effect must be temporarily interrupted and bridged with a parenteral anticoagulant before routine health maintenance procedures such as dental work or colonoscopy, as well as before interventional procedures and surgery. Any bleeding complication adds further cost and is associated with a decrease in the patient's perceived health and quality of life, although this is true for both drugs.30
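To put these figures side by side, the back-of-the-envelope calculation below simply annualizes the quoted drug prices; the INR-monitoring figure is a hypothetical placeholder of our own, not a number from this article or its references.

```python
# Back-of-the-envelope annual drug-cost comparison using the prices quoted above.
# ASSUMED_INR_MONITORING_PER_YEAR is a hypothetical placeholder, not a figure from
# this article; substitute local costs before drawing any conclusions.

DABIGATRAN_COST_PER_DAY = (6.75, 8.00)    # USD, 150 mg twice daily (quoted range)
WARFARIN_COST_PER_YEAR = 50.00            # USD, drug alone
ASSUMED_INR_MONITORING_PER_YEAR = 300.00  # USD, hypothetical (office visits + laboratory tests)

dabigatran_low, dabigatran_high = (round(cost * 365) for cost in DABIGATRAN_COST_PER_DAY)
warfarin_total = WARFARIN_COST_PER_YEAR + ASSUMED_INR_MONITORING_PER_YEAR

print(f"Dabigatran drug cost: ${dabigatran_low}-${dabigatran_high} per year")
print(f"Warfarin drug cost plus assumed monitoring: ${warfarin_total:.0f} per year")
```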

In today’s health care environment, controlling costs is a universal priority, but it may be unfair to compare the cost of dabigatran with that of warfarin alone. The expense and morbidity associated with stroke and intracranial bleeding are high, and if patients on dabigatran have fewer strokes (as seen in the RE-LY trial with dabigatran 150 mg twice a day) and no added expense of monitoring, then dabigatran may be cost-effective.

Freeman et al31 analyzed the cost-effectiveness of dabigatran, using an estimated cost of $13.70 per day and data from the RE-LY trial. They concluded that dabigatran may be a cost-effective alternative to warfarin in preventing ischemic stroke in patients considered at higher risk for ischemic stroke or intracranial hemorrhage, ie, those with a CHADS2 score of 1 or higher or equivalent. (The CHADS2 score is calculated as 1 point each for congestive heart failure, hypertension, age 75 or older, and diabetes mellitus; 2 points for prior stroke or transient ischemic attack.)
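As an illustration of that scoring rule, a minimal sketch follows (the function and variable names are ours, not from a published implementation):

```python
def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2: 1 point each for congestive heart failure, hypertension,
    age 75 or older, and diabetes mellitus; 2 points for prior stroke or TIA."""
    score = sum([chf, hypertension, age >= 75, diabetes])
    if prior_stroke_or_tia:
        score += 2
    return score

# Example: a 78-year-old with hypertension and diabetes, no heart failure, no prior stroke
print(chads2_score(chf=False, hypertension=True, age=78,
                   diabetes=True, prior_stroke_or_tia=False))  # prints 3
```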

As more new-generation oral anticoagulants become available (see below), the price of dabigatran will undoubtedly decrease. Until then, warfarin will remain a cost-effective and cost-saving drug that cannot yet be considered obsolete.

WHO SHOULD RECEIVE DABIGATRAN?

The ideal patient for dabigatran treatment is not yet defined. The decision to convert a patient’s treatment from warfarin to dabigatran will likely depend on several factors, including the patient’s response to warfarin and the physician’s comfort with this new drug.

Many patients do extremely well with warfarin, requiring infrequent monitoring to maintain a therapeutic INR and having no bleeding complications. For them, it would be more practical to continue warfarin. Another reason for staying with warfarin would be if twice-a-day dosing would pose a problem.

Dabigatran would be a reasonable choice for a patient whose INR is erratic, who requires more frequent monitoring, for whom cost is not an issue, and for whom there is concern about dietary or drug interactions.

Another consideration is whether the patient has access to a health care facility for warfarin monitoring: this is difficult for those who cannot drive, who depend on others for transportation, and who live in rural areas.

Additionally, dabigatran may be a cost-effective alternative to warfarin for a patient with a high CHADS2 score who is considered at a higher risk for stroke.31

In all cases, the option should be considered only after an open discussion with the patient about the risks and benefits of this new drug.

WHO SHOULD NOT RECEIVE IT?

Dabigatran is a twice-daily drug with a short half-life. No patient with a history of poor compliance will be a good candidate for dabigatran. Since there are no practical laboratory tests for monitoring compliance, one will have to reinforce at every visit the importance of taking this medication according to instructions.

Patients with underlying kidney disease will need close monitoring of their creatinine clearance, with dose adjustment if renal function deteriorates.
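Creatinine clearance is usually estimated rather than measured directly; a sketch of the familiar Cockcroft-Gault estimate is shown below (the equation is standard background knowledge rather than something given in this article, and the variable names are ours):

```python
def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Example: an 80-year-old woman weighing 60 kg with a serum creatinine of 1.4 mg/dL
print(round(cockcroft_gault_crcl(80, 60, 1.4, female=True)))  # about 30 mL/min
```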

Additionally, one should use caution when prescribing dabigatran to obese patients, pregnant women, or children until more is known about its use in these populations.

ADVANTAGES AND DISADVANTAGES OF DABIGATRAN

In addition to its pharmacologic advantages, dabigatran demonstrated two other major advantages over warfarin in the RE-LY trial in patients with atrial fibrillation (Table 4). First, the rate of intracranial bleeding, a major devastating complication of warfarin, was 60% lower with dabigatran 150 mg twice a day than with warfarin—and lower still with dabigatran 110 mg twice a day.1 Second, the rate of stroke or systemic embolism was 34% lower in the group that got dabigatran 150 mg twice a day than in the group that got warfarin.

A reason may be that patients with atrial fibrillation and poor INR control have higher rates of death, stroke, myocardial infarction, and major bleeding.14 In most clinical trials, only 55% to 60% of patients achieve a therapeutic INR on warfarin, leaving them at risk of thrombosis or, conversely, bleeding.1,2,15,32 Dabigatran has predictable pharmacokinetics, and its twice-daily dosing allows for less variability in its anticoagulant effect, making it more consistently therapeutic with less potential for bleeding or thrombosis.1

The Canadian Cardiovascular Society included dabigatran in its 2010 guidelines on atrial fibrillation, recommending it or warfarin.33 The American College of Cardiology, the American Heart Association, and the Heart Rhythm Society now give dabigatran a class I B recommendation (benefit greater than risk, but limited populations studied) in secondary stroke prevention.34

On the other hand, major concerns are the lack of an antidote for dabigatran and the lack of experience in treating its bleeding complications. Because dabigatran is not routinely monitored, physicians may be uncertain whether a patient is being overdosed or undertreated. As experience accumulates, we will learn how to treat bleeding complications; until then, it will be important to anticipate this problem and to develop an algorithm for managing it based on the best available evidence.

Although the overall rates of bleeding in the RE-LY trial were lower with dabigatran than with warfarin, there were more gastrointestinal bleeding events with the 150-mg dose of dabigatran, which was not readily explained.

Further, the rate of dyspepsia was almost twice as high with dabigatran as with warfarin, regardless of the dabigatran dose. There were also more dropouts during the second year of follow-up in the dabigatran groups, with gastrointestinal intolerance among the major reasons. Dyspepsia may therefore lead to intolerance and noncompliance.1

Dabigatran must be taken twice a day and has a relatively short half-life. In a noncompliant patient, missing one or two doses will lead to loss of the anticoagulant effect, leaving the patient susceptible to thrombosis. In comparison, warfarin has a longer half-life and is taken once a day, so missing a dose is less likely to result in a similar loss of anticoagulant effect.
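A simple first-order elimination estimate illustrates the point. The dabigatran half-life comes from the pharmacokinetic data above, whereas the warfarin half-life of roughly 40 hours is an assumed, typical literature value not stated in this article:

```python
def fraction_remaining(hours_since_last_dose: float, half_life_hours: float) -> float:
    """Fraction of the last dose still present, assuming simple first-order elimination."""
    return 0.5 ** (hours_since_last_dose / half_life_hours)

# Dabigatran, half-life about 13 h (from the 12-14 h range above):
# skipping one twice-daily dose means roughly 24 h since the last dose.
print(round(fraction_remaining(24, 13), 2))   # about 0.28 of the last dose remains

# Warfarin, half-life assumed to be about 40 h (a typical literature value, not from this
# article); its clinical effect also outlasts the drug because clotting factors regenerate slowly.
print(round(fraction_remaining(48, 40), 2))   # about 0.44 remains 48 h after a daily dose
```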

 

 

SPECIAL CONDITIONS

Switching from other anticoagulants to dabigatran

When making the transition from a subcutaneously administered anticoagulant, ie, a low-molecular-weight heparin or the anti-Xa inhibitor fondaparinux (Arixtra), dabigatran should be started 0 to 2 hours before the time the next subcutaneous dose would have been given.21,35

When switching from unfractionated heparin given by continuous intravenous infusion, the first dose of dabigatran should be given at the time the infusion is stopped.

When switching from warfarin, dabigatran should be started once the patient’s INR is less than 2.0.
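A minimal sketch that encodes these three transition rules as written (our own illustrative encoding, not a published protocol; timing should always be checked against current labeling):

```python
from typing import Optional

def when_to_start_dabigatran(prior_agent: str, inr: Optional[float] = None) -> str:
    """Timing rules for starting dabigatran, as stated in the text above."""
    if prior_agent == "subcutaneous":   # low-molecular-weight heparin or fondaparinux
        return "Start 0 to 2 hours before the next scheduled subcutaneous dose."
    if prior_agent == "iv_heparin":     # unfractionated heparin by continuous infusion
        return "Start at the time the infusion is stopped."
    if prior_agent == "warfarin":
        if inr is None:
            return "Check the INR; start once it is below 2.0."
        return "Start now." if inr < 2.0 else "Wait until the INR falls below 2.0."
    raise ValueError(f"Unrecognized prior agent: {prior_agent!r}")

print(when_to_start_dabigatran("warfarin", inr=1.8))  # Start now.
```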

Switching from dabigatran to a parenteral anticoagulant

When switching from dabigatran back to a parenteral anticoagulant, allow 12 to 24 hours after the last dabigatran dose before starting the parenteral agent.21,35

Elective surgery or invasive procedures

The manufacturer recommends stopping dabigatran 1 to 2 days before elective surgery for patients who have normal renal function and a low risk of bleeding, or 3 to 5 days before surgery for patients who have a creatinine clearance of 50 mL/min or less. Before major surgery or placement of a spinal or epidural catheter, the manufacturer recommends that dabigatran be held even longer.35
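These timing rules can be captured in a small decision sketch (an illustration of the manufacturer's statements as quoted above; where the text gives no specific interval, the sketch says so rather than inventing one):

```python
def dabigatran_preoperative_hold(crcl_ml_min: float, low_bleeding_risk: bool,
                                 major_or_neuraxial: bool) -> str:
    """Pre-procedure hold per the manufacturer guidance quoted above."""
    if major_or_neuraxial:
        return "Hold even longer than the intervals below (no specific interval is given)."
    if crcl_ml_min <= 50:
        return "Hold 3 to 5 days before surgery."
    if low_bleeding_risk:
        return "Hold 1 to 2 days before surgery."
    return "Not specified for this combination; individualize."

print(dabigatran_preoperative_hold(crcl_ml_min=45, low_bleeding_risk=True,
                                   major_or_neuraxial=False))  # Hold 3 to 5 days before surgery.
```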

If emergency surgery is needed

If emergency surgery is needed, the clinician must use his or her judgment as to the risks of bleeding vs those of postponing the surgery.21,35

Overdose or bleeding

No antidote for dabigatran is currently available. It has a short half-life (12–14 hours), and the treatment for overdose or bleeding is to discontinue it immediately, maintain adequate diuresis, and transfuse fresh-frozen plasma or red blood cells as indicated.

The role of activated charcoal given orally to reduce absorption is under evaluation, but the charcoal must be given within 1 to 2 hours after the overdose is taken.21

Dabigatran does not bind very much to plasma proteins and hence is dialyzable—an approach that may be necessary in cases of persistent or life-threatening bleeding.

Recombinant activated factor VII or prothrombin complex concentrates may be additional options in cases of severe bleeding.18,21

TOPICS OF FUTURE RESEARCH

A limitation of the dabigatran trials was that they did not enroll patients who had renal or liver impairment, cancer, or other comorbidities; pregnant women; or children. Other topics for future research include its use in patients weighing less than 48 kg or more than 110 kg, its efficacy in patients with thrombophilia and in those with mechanical heart valves, its performance in long-term follow-up, and the use of thrombolytics in patients with acute stroke who are taking dabigatran.

WILL DABIGATRAN CHANGE CLINICAL PRACTICE?

Despite some of the challenges listed above, we believe that dabigatran is likely to change medical practice in patients requiring anticoagulation.

Dabigatran’s biggest use will most likely be in patients with atrial fibrillation, mainly because this is the largest group of people receiving anticoagulation. In addition, the incidence of atrial fibrillation rises with age, the US population is living longer, and patients generally require life-long anticoagulation once this condition develops.

Dabigatran may be approved for additional indications in the near future. It has already shown efficacy in primary and secondary prevention of venous thromboembolism. Other important areas to be studied include its use in patients with mechanical heart valves and thrombophilia.

Whether dabigatran will be a worthy substitute for the parenteral anticoagulants (heparin, low-molecular-weight heparins, or factor Xa inhibitors) is not yet known, but it will have an enormous impact on anticoagulation management if proved efficacious.

If dabigatran becomes a major substitute for warfarin, it will affect the anticoagulation clinics, with their well-trained staff, that are currently monitoring millions of patients in the United States. These clinics would no longer be needed, and laboratory and technical costs could be saved. A downside is that patients on dabigatran will not be as closely supervised and reminded to take their medication as patients on warfarin are now at these clinics. Instead, they will likely be supervised by their own physician (or assistants), who will need to become familiar with this anticoagulant. This may affect compliance with dabigatran.

OTHER NEW ORAL ANTICOAGULANTS ARE ON THE WAY

Other oral anticoagulants, including rivaroxaban (Xarelto) and apixaban (Eliquis), have been under study and show promise in preventing both thrombotic stroke and venous thromboembolism. They will likely compete with dabigatran once they are approved.

Rivaroxaban, an oral direct factor Xa inhibitor, is being investigated for stroke prevention in patients with atrial fibrillation. It has also been shown to be not inferior to (and to be less expensive than) enoxaparin in treating and preventing venous thromboembolism in patients undergoing hip or knee arthroplasty.32,36,37 Rivaroxaban has recently been approved by the FDA for this indication.

Apixaban, another direct factor Xa inhibitor, is also being studied for the prevention of stroke and systemic embolism in patients with nonvalvular atrial fibrillation. To date, there are no head-to-head trials comparing dabigatran with either of these new oral anticoagulants.

References
  1. Connolly SJ, Ezekowitz MD, Yusuf S, et al. Dabigatran versus warfarin in patients with atrial fibrillation. N Engl J Med 2009; 361:1139–1151.
  2. Schulman S, Kearon C, Kakkar AK, et al. Dabigatran versus warfarin in the treatment of acute venous thromboembolism. N Engl J Med 2009; 361:2342–2352.
  3. RE-MOBILIZE Writing Committee; Ginsberg JS, Davidson BL, Comp PC, et al. Oral thrombin inhibitor dabigatran etexilate vs North American enoxaparin regimen for prevention of venous thromboembolism after knee arthroplasty surgery. J Arthroplasty 2009; 24:1–9.
  4. Wolowacz SE, Roskell NS, Plumb JM, Caprini JA, Eriksson BI. Efficacy and safety of dabigatran etexilate for the prevention of venous thromboembolism following total hip or knee arthroplasty. A meta-analysis. Thromb Haemost 2009; 101:77–85.
  5. Naccarelli GV, Varker H, Lin J, Schulman KL. Increasing prevalence of atrial fibrillation and flutter in the United States. Am J Cardiol 2009; 104:1534–1539.
  6. Krahn AD, Manfreda J, Tate RB, Mathewson FA, Cuddy TE. The natural history of atrial fibrillation: incidence, risk factors, and prognosis in the Manitoba follow-up study. Am J Med 1995; 98:476–484.
  7. Lloyd-Jones DM, Wang TJ, Leip EP, et al. Lifetime risk for development of atrial fibrillation: the Framingham Heart Study. Circulation 2004; 110:1042–1046.
  8. Wolf PA, Abbott RD, Kannel WB. Atrial fibrillation as an independent risk factor for stroke: the Framingham study. Stroke 1991; 22:983–988.
  9. Go AS, Hylek EM, Chang Y, et al. Anticoagulation therapy for stroke prevention in atrial fibrillation: how well do randomized trials translate into clinical practice? JAMA 2003; 290:2685–2692.
  10. Singer DE, Chang Y, Fang MC, et al. The net clinical benefit of warfarin anticoagulation in atrial fibrillation. Ann Intern Med 2009; 151:297–305.
  11. Goldhaber SZ. Pulmonary embolism thrombolysis: a clarion call for international collaboration. J Am Coll Cardiol 1992; 19:246–247.
  12. Heit JA. The epidemiology of venous thromboembolism in the community. Arterioscler Thromb Vasc Biol 2008; 28:370–372.
  13. Wysowski DK, Nourjah P, Swartz L. Bleeding complications with warfarin use: a prevalent adverse effect resulting in regulatory action. Arch Intern Med 2007; 167:1414–1419.
  14. White HD, Gruber M, Feyzi J, et al. Comparison of outcomes among patients randomized to warfarin therapy according to anticoagulant control: results from SPORTIF III and V. Arch Intern Med 2007; 167:239–245.
  15. ACTIVE Writing Group of the ACTIVE Investigators; Connolly S, Pogue J, Hart R, et al. Clopidogrel plus aspirin versus oral anticoagulation for atrial fibrillation in the Atrial Fibrillation Clopidogrel Trial with Irbesartan for prevention of Vascular Events (ACTIVE W): a randomised controlled trial. Lancet 2006; 367:1903–1912.
  16. Geerts WH, Bergqvist D, Pineo GF, et al. Prevention of venous thromboembolism: American College of Chest Physicians evidence-based clinical practice guidelines (8th edition). Chest 2008; 133(6 suppl):381S–453S.
  17. Mungall D. BIBR-1048 Boehringer Ingelheim. Curr Opin Investig Drugs 2002; 3:905–907.
  18. Stangier J, Clemens A. Pharmacology, pharmacokinetics, and pharmacodynamics of dabigatran etexilate, an oral direct thrombin inhibitor. Clin Appl Thromb Hemost 2009; 15(suppl 1):9S–16S.
  19. Eisert WG, Hauel N, Stangier J, Wienen W, Clemens A, van Ryn J. Dabigatran: an oral novel potent reversible nonpeptide inhibitor of thrombin. Arterioscler Thromb Vasc Biol 2010; 30:1885–1889.
  20. Bounameaux H, Reber G. New oral antithrombotics: a need for laboratory monitoring. Against. J Thromb Haemost 2010; 8:627–630.
  21. van Ryn J, Stangier J, Haertter S, et al. Dabigatran etexilate—a novel, reversible, oral direct thrombin inhibitor: interpretation of coagulation assays and reversal of anticoagulant activity. Thromb Haemost 2010; 103:1116–1127.
  22. Eriksson BI, Dahl OE, Buller HR, et al. A new oral direct thrombin inhibitor, dabigatran etexilate, compared with enoxaparin for prevention of thromboembolic events following total hip or knee replacement: the BISTRO II randomized trial. J Thromb Haemost 2005; 3:103–111.
  23. Eriksson BI, Dahl OE, Rosencher N, et al. Dabigatran etexilate versus enoxaparin for prevention of venous thromboembolism after total hip replacement: a randomised, double-blind, non-inferiority trial. Lancet 2007; 370:949–956.
  24. Eriksson BI, Dahl OE, Rosencher N, et al. Oral dabigatran etexilate vs. subcutaneous enoxaparin for the prevention of venous thromboembolism after total knee replacement: the RE-MODEL randomized trial. J Thromb Haemost 2007; 5:2178–2185.
  25. Ezekowitz MD, Reilly PA, Nehmiz G, et al. Dabigatran with or without concomitant aspirin compared with warfarin alone in patients with nonvalvular atrial fibrillation (PETRO study). Am J Cardiol 2007; 100:1419–1426.
  26. Burger L. Bayer rival Boehringer prices blood pill at $6.75. Reuters, October 26, 2010. Available at http://www.reuters.com. Accessed September 12, 2011.
  27. Drugstore.com. Pradaxa. http://www.drugstore.com/pradaxa/bottle-60-150mg-capsules/qxn00597013554. Accessed September 10, 2011.
  28. Wal-Mart Stores, Inc. Retail Prescription Program Drug List. http://i.walmartimages.com/i/if/hmp/fusion/customer_list.pdf. Accessed September 10, 2011.
  29. Teachey DT. Dabigatran versus warfarin for venous thromboembolism (letter). N Engl J Med 2010; 362:1050; author reply 1050–1051.
  30. Lancaster TR, Singer DE, Sheehan MA, et al. The impact of long-term warfarin therapy on quality of life. Evidence from a randomized trial. Boston Area Anticoagulation Trial for Atrial Fibrillation Investigators. Arch Intern Med 1991; 151:1944–1949.
  31. Freeman JV, Zhu RP, Owens DK, et al. Cost-effectiveness of dabigatran compared with warfarin for stroke prevention in atrial fibrillation. Ann Intern Med 2011; 154:1–11.
  32. EINSTEIN Investigators; Bauersachs R, Berkowitz SD, Brenner B, et al. Oral rivaroxaban for symptomatic venous thromboembolism. N Engl J Med 2010; 363:2499–2510.
  33. Cairns JA, Connolly S, McMurtry S, Stephenson M, Talajic M; CCS Atrial Fibrillation Guidelines Committee. Canadian Cardiovascular Society atrial fibrillation guidelines 2010: prevention of stroke and systemic embolization in atrial fibrillation and flutter. Can J Cardiol 2011; 27:74–90.
  34. Wann LS, Curtis AB, January CT, et al. 2011 ACCF/AHA/HRS focused update on the management of patients with atrial fibrillation (update on dabigatran): a Report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. Circulation 2011; 123:1144–1150.
  35. Boehringer Ingelheim. Pradaxa prescribing information. http://www.pradaxa.com. Accessed September 8, 2011.
  36. Huisman MV, Quinlan DJ, Dahl OE, Schulman S. Enoxaparin versus dabigatran or rivaroxaban for thromboprophylaxis after hip or knee arthroplasty: results of separate pooled analyses of phase III multicenter randomized trials. Circ Cardiovasc Qual Outcomes 2010; 3:652–660.
  37. McCullagh L, Tilson L, Walsh C, Barry M. A cost-effectiveness model comparing rivaroxaban and dabigatran etexilate with enoxaparin sodium as thromboprophylaxis after total hip and total knee replacement in the Irish healthcare setting. Pharmacoeconomics 2009; 27:829–846.
Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
657-664
Display Headline
Dabigatran: Will it change clinical practice?

Inside the Article

KEY POINTS

  • Dabigatran is a potent, reversible, direct thrombin inhibitor. Available only in oral form, it has a rapid onset of action, a predictable anticoagulant response, and few major interactions.
  • Dabigatran does not require dose adjustments (except for renal insufficiency) or monitoring of its effect during treatment.
  • In trials in patients with nonvalvular atrial fibrillation, two different doses of dabigatran were compared with warfarin. Less bleeding occurred with the lower dose than with warfarin, while the higher dose was more effective than warfarin in preventing stroke and systemic embolization.
  • The American College of Cardiology, the American Heart Association, and the Heart Rhythm Society have given dabigatran a class I B recommendation for secondary stroke prevention in patients with nonvalvular atrial fibrillation.

What is the best questionnaire to screen for alcohol use disorder in an office practice?


Popular questionnaires to screen for alcohol misuse include the CAGE, the TWEAK, and the short form of the Alcohol Use Disorder Identification Test (AUDIT-C). Any of these is recommended. The important thing is to be proactive about screening for this very common and underrecognized problem.

A COMMON PROBLEM, NOT OFTEN ADMITTED

Alcohol use disorder, which ranges from hazardous drinking to binge drinking and alcohol dependence, is more common than admitted and often goes undiagnosed. Its personal, societal, and economic consequences cannot be overemphasized. Alcohol use is responsible for 85,000 deaths each year in the United States, and it is linked to substantial medical and psychiatric consequences and injuries, especially motor vehicle accidents. The estimated annual cost of problems attributed to alcohol use is over $185 billion.1

About three in 10 US adults drink at levels that increase their risk for alcohol-related consequences, and about one in four adults currently abuses alcohol or is dependent on it.2 In 2009, 6.8% of the US population age 12 and above reported heavy drinking, with highest rates in those ages 21 to 29.3 The rate of alcohol use was higher in men than in women, but about 10% of pregnant women ages 15 to 44 reported current alcohol use.3

The prevalence of alcohol use disorder ranges from 2% to 29% in a typical ambulatory primary care medical practice.4 And only one-third of people with alcohol use disorder are diagnosed.

Studies and experience have shown that problem drinkers tend to not seek help until they have advanced dependence, often with associated medical and sociolegal complications. It is also well established that the earlier the diagnosis is made and appropriate intervention is offered, the better the prognosis.

WHAT IS THE GOAL OF SCREENING?

The goals of screening for alcohol use disorder are to estimate the patient’s risk level, to identify those at risk because they exceed defined limits, and to identify those with evidence of an active problem, ie, with adverse consequences related to their drinking. This screening paves the way for further assessment, definitive diagnosis, and a treatment plan.

The US Preventive Services Task Force recommends screening and behavioral counseling interventions (such as a brief intervention) in the primary care setting to reduce alcohol misuse by adults, including pregnant women.5 In addition, most primary care patients who screen positive for heavy drinking or alcohol use disorder show motivation and readiness to change, and those with the most severe symptoms tend to be the most ready.6

THE IDEAL QUESTIONNAIRE: SENSITIVE, SPECIFIC, AND SHORT

The ideal alcohol screening questionnaire for a busy practice should be brief and highly sensitive and specific for identifying the spectrum of alcohol misuse. Also, it should be easy to recall so it can be part of routine face-to-face discussion with the patient during an office visit.

Further, it should include questions that focus on the consequences of drinking as well as on quantity and frequency. It should also take into account factors such as the patient’s age, sex, race or ethnicity, and pregnancy status, as these can influence the effectiveness of the screening method.

Problems with focusing on quantity alone

“Risky use” is defined (in a non-alcohol-dependent person or one with no alcohol-related consequences) as more than seven standard drinks per week or more than three per occasion for women, and more than 14 standard drinks per week or more than four per occasion for men.2

A standard drink in the United States contains about 12 to 14 g of ethanol: a 12-oz can or bottle of beer, a 5-oz glass of wine, or about 1.5 oz of 80-proof liquor.2
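For illustration, these sex-specific limits translate into a simple classifier (a sketch of the thresholds as stated; it says nothing about dependence or alcohol-related consequences):

```python
def exceeds_risky_use_limits(drinks_per_week: float, max_drinks_per_occasion: float,
                             female: bool) -> bool:
    """True if intake exceeds the weekly or per-occasion limits quoted above."""
    weekly_limit, occasion_limit = (7, 3) if female else (14, 4)
    return drinks_per_week > weekly_limit or max_drinks_per_occasion > occasion_limit

print(exceeds_risky_use_limits(10, 2, female=True))   # True: more than 7 drinks per week for a woman
print(exceeds_risky_use_limits(12, 4, female=False))  # False: within the limits for a man
```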

The common single-item screening test asks, “How many times in the past year have you had more than four drinks (for women) or five drinks (for men) in a day?” This is recommended by the National Institute on Alcohol Abuse and Alcoholism for brief screening in primary care. However, a positive answer (ie, one or more times in the past year) has a sensitivity of only 82% and a specificity of only 79% for detecting unhealthy alcohol use, and an even lower specificity (67%) for detecting current alcohol use disorder.7

The CAGE questionnaire

The four-item CAGE questionnaire8 focuses on the consequences of drinking:

  • C: Have you felt the need to cut down on your drinking?
  • A: Have you ever felt annoyed by someone criticizing your drinking?
  • G: Have you ever felt bad or guilty about your drinking?
  • E: Have you ever had an eye-opener—a drink the first thing in the morning to steady your nerves?

A yes to one or more of the questions denotes a need for further assessment.

The CAGE questionnaire is simple, non-threatening, brief, and easy to remember. A yes answer to two or more items has a sensitivity of 75% to 95% and a specificity of 84% to 97% for alcohol dependence.9 However, CAGE is less sensitive for identifying nonalcohol-dependent at-risk drinkers. The patient’s sex and ethnicity have also been found to affect its performance somewhat, with some studies showing a sensitivity as low as 50% in adult white women and as low as 40% in at-risk groups ages 60 and over.

 

 

The TWEAK questionnaire

The TWEAK is a modification of the CAGE and includes a question about tolerance; it has a sensitivity of 87% for harmful drinking and 84% for dependence, especially in trauma-related cases.9 It has also been found to be better than the CAGE for screening pregnant patients.

  • Tolerance: How many drinks can you hold without falling asleep or passing out? (2 points if six drinks or more)
  • Worried: Have friends or relatives worried about your drinking? (2 points if yes)
  • Eye-opener: Do you sometimes take a drink in the morning when you first get up? (1 point if yes)
  • Amnesia: Have friends or relatives told you about things you said or did while drinking that you could not remember? (1 point if yes)
  • Cut down: Do you sometimes feel the need to cut down on your drinking? (1 point if yes)

An answer of ≥ 6 to the first question or a total score of 3 or more denotes a problem with alcohol use and a need for further assessment.10
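A minimal scoring sketch for the TWEAK, using the point weights and cutoffs described above (the function and argument names are ours):

```python
def tweak_positive(drinks_to_pass_out: int, worried: bool, eye_opener: bool,
                   amnesia: bool, cut_down: bool) -> bool:
    """Tolerance (six or more drinks) and 'worried' score 2 points each; the other
    items score 1 point each. The screen is positive if the tolerance answer is
    six or more or the total score is 3 or more."""
    tolerance_points = 2 if drinks_to_pass_out >= 6 else 0
    total = tolerance_points + 2 * worried + eye_opener + amnesia + cut_down
    return drinks_to_pass_out >= 6 or total >= 3

# Example: reports holding 7 drinks, denies the other items: positive on tolerance alone.
print(tweak_positive(7, worried=False, eye_opener=False, amnesia=False, cut_down=False))  # True
```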

The AUDIT-C

The AUDIT-C, a shorter form of the 10-item AUDIT developed by the World Health Organization, uses only the first three questions of the full-length AUDIT. The three-item AUDIT-C has a sensitivity ranging from 85% in Hispanic women to 95% in white men.9,11 The questions center on the quantity and frequency of alcohol use:

  • How often do you have a drink containing alcohol? Answer choices: never; monthly or less often; 2 to 4 times a month; 2 to 3 times a week; 4 or more times a week.
  • How many standard drinks containing alcohol do you have on a typical day when you are drinking? Answer choices: one or two; three or four; five or six; seven to nine; 10 or more.
  • How often do you have six or more drinks on one occasion? Answer choices: never; less than monthly; monthly; weekly; daily or almost daily.

Each question is scored 0 for the first answer choice and 1, 2, 3, or 4 for each subsequent choice.

The cut-off score for the AUDIT-C is usually a total of 3 points for women and 4 for men: ie, a score of 3 or higher for women and a score of 4 or higher for men indicate alcohol use disorder and the need for further assessment.
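A sketch of AUDIT-C scoring with these sex-specific cutoffs (responses are passed as the 0-to-4 index of the chosen answer; this is our own illustrative encoding, not an official implementation):

```python
def audit_c_positive(q1: int, q2: int, q3: int, female: bool) -> bool:
    """Each item is scored 0 to 4 (the index of the chosen answer); the total ranges
    from 0 to 12. A positive screen is a total of 3 or more for women, 4 or more for men."""
    for item in (q1, q2, q3):
        if not 0 <= item <= 4:
            raise ValueError("Each AUDIT-C item must be scored 0 to 4.")
    total = q1 + q2 + q3
    return total >= (3 if female else 4)

# Example: drinks 2 to 4 times a month (1), three or four drinks on a typical day (1),
# six or more drinks less than monthly (1): total 3, positive for a woman, negative for a man.
print(audit_c_positive(1, 1, 1, female=True), audit_c_positive(1, 1, 1, female=False))  # True False
```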

The AUDIT questionnaire has been found not only to have a high sensitivity (83%) and specificity (90%) for identifying alcohol dependence, but also to be more sensitive than the CAGE questionnaire (85% vs 75%) for identifying harmful drinking, hazardous drinking, and at-risk drinking. (Note: The full version of AUDIT performed similarly to the three-item AUDIT-C for detecting heavy drinking and active abuse or dependence.12) Furthermore, it has performed well as a screening test in many multinational trials of alcohol brief intervention. The questions about quantity of alcohol consumed may be even more suitable for adolescents and young adults, who tend to fall into the harmful-hazardous drinking category rather than the dependent category. In some studies, patients tended to reveal less with the CAGE questionnaire when it was preceded by direct and close-ended questions about the quantity and frequency of alcohol use, thus reducing its sensitivity.13

The AUDIT and TWEAK questionnaires showed greater sensitivity in both men and women than the CAGE questionnaire and were equally sensitive in African Americans.14

HOW TO FIT ALCOHOL SCREENING INTO AN OFFICE VISIT

A practical way to fit alcohol screening into an office visit is to include a questionnaire in the assessment papers the patient completes in the waiting room. In other settings, these questions may be asked by trained nursing staff as part of the initial assessment, ie, while obtaining the patient’s weight and vital signs. The responses can then be briefly reviewed by the physician during the face-to-face history and physical examination.

A concerted effort is needed to proactively screen for alcohol use. A combination of questions about the effect, the quantity, and the frequency of alcohol use is the best way to screen for the many different aspects of alcohol use disorder—many of which can be managed in the primary care setting through brief interventions without referral to a specialist.

When screening for alcohol misuse, it is also important to consider factors such as age, sex, race or ethnicity, pregnancy, and history of recent trauma or surgery.

References
  1. Saitz R. Clinical practice. Unhealthy alcohol use. N Engl J Med 2005; 352:596–607.
  2. National Institute on Alcohol Abuse and Alcoholism (NIAAA). Helping patients who drink too much: A clinician’s guide and related professional support resources. http://pubs.niaaa.nih.gov/publications/practitioner/cliniciansguide2005/clinicians_guide.htm. Accessed July 29, 2011.
  3. Substance Abuse and Mental Health Services Administration (SAMHSA). Results from the 2009 National Survey on Drug Use and Health: Volume I. Summary of National Findings. http://www.oas.samhsa.gov/NSDUH/2k9NSDUH/2k9ResultsP.pdf. Accessed July 29, 2011.
  4. Fiellin DA, Reid MC, O’Connor PG. Screening for alcohol problems in primary care: a systematic review. Arch Intern Med 2000; 160:1977–1989.
  5. US Preventive Services Task Force (USPSTF). Screening and behavioral counseling interventions in primary care to reduce alcohol misuse. Release date: April 2004. http://www.uspreventiveservicestaskforce.org/uspstf/uspsdrin.htm. Accessed July 29, 2011.
  6. Williams EC, Kivlahan DR, Saitz R, et al. Readiness to change in primary care patients who screened positive for alcohol misuse. Ann Fam Med 2006; 4:213–220.
  7. Smith PC, Schmidt SM, Allensworth-Davies D, Saitz R. Primary care validation of a single-question alcohol screening test. J Gen Intern Med 2009; 24:783–788.
  8. Ewing JA. Detecting alcoholism. The CAGE questionnaire. JAMA 1984; 252:1905–1907.
  9. Cherpitel CJ. Screening for alcohol problems in the emergency department. Ann Emerg Med 1995; 26:158–166.
  10. Russell M, Martier SS, Sokol RJ, et al. Screening for pregnancy risk-drinking. Alcohol Clin Exp Res 1994; 18:1156–1161.
  11. Frank D, DeBenedetti AF, Volk RJ, Williams EC, Kivlahan DR, Bradley KA. Effectiveness of the AUDIT-C as a screening test for alcohol misuse in three race/ethnic groups. J Gen Intern Med 2008; 23:781–787.
  12. Bush K, Kivlahan DR, McDonell MB, Fihn SD, Bradley KA. The AUDIT alcohol consumption questions (AUDIT-C): an effective brief screening test for problem drinking. Ambulatory Care Quality Improvement Project (ACQUIP). Alcohol Use Disorders Identification Test. Arch Intern Med 1998; 158:1789–1795.
  13. Steinweg DL, Worth H. Alcoholism: the keys to the CAGE. Am J Med 1993; 94:520–523.
  14. Cherpitel CJ. Brief screening instruments for alcoholism. Alcohol Health Res World 1997; 21:348–351.
Author and Disclosure Information

Keji Fagbemi, MD
Unit Chief, In-patient Detoxification Unit, Addiction Service, Department of Psychiatry, Bronx Lebanon Hospital, Bronx, NY, affiliated with Albert Einstein College of Medicine, New York, NY

Address: Keji Fagbemi, MD, In-Patient Detoxification Unit, Addiction Services, Department of Psychiatry, Bronx Lebanon Hospital, 1276 Fulton Avenue, Bronx, NY 10456; e-mail [email protected]

Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
649-651

Popular questionnaires to screen for alcohol misuse include the CAGE, the TWEAK, and the short form of the Alcohol Use Disorder Identification Test (AUDIT-C). Any of these is recommended. The important thing is to be proactive about screening for this very common and underrecognized problem.

A COMMON PROBLEM, NOT OFTEN ADMITTED

Alcohol use disorder, which ranges from hazardous drinking to binge drinking and alcohol dependence, is more common than admitted and often goes undiagnosed. Its personal, societal, and economic consequences cannot be overemphasized. Alcohol use is responsible for 85,000 deaths each year in the United States, and it is linked to substantial medical and psychiatric consequences and injuries, especially motor vehicle accidents. The estimated annual cost of problems attributed to alcohol use is over $185 billion.1

About three in 10 US adults drink at levels that increase their risk for alcohol-related consequences, and about one in four adults currently abuses alcohol or is dependent on it.2 In 2009, 6.8% of the US population age 12 and above reported heavy drinking, with highest rates in those ages 21 to 29.3 The rate of alcohol use was higher in men than in women, but about 10% of pregnant women ages 15 to 44 reported current alcohol use.3

The prevalence of alcohol use disorder ranges from 2% to 29% in a typical ambulatory primary care medical practice.4 And only one-third of people with alcohol use disorder are diagnosed.

Studies and experience have shown that problem drinkers tend to not seek help until they have advanced dependence, often with associated medical and sociolegal complications. It is also well established that the earlier the diagnosis is made and appropriate intervention is offered, the better the prognosis.

WHAT IS THE GOAL OF SCREENING?

The goals of screening for alcohol use disorder are to estimate the patient’s risk level, to identify those at risk because they exceed defined limits, and to identify those with evidence of an active problem, ie, with adverse consequences related to their drinking. This screening paves the way for further assessment, definitive diagnosis, and a treatment plan.

The US Preventive Services Task Force recommends screening and behavioral counseling interventions (such as a brief intervention) in the primary care setting to reduce alcohol misuse by adults, including pregnant women.5 In addition, most primary care patients who screen positive for heavy drinking or alcohol use disorder show motivation and readiness to change, and those with the most severe symptoms tend to be the most ready.6

THE IDEAL QUESTIONNAIRE: SENSITIVE, SPECIFIC, AND SHORT

The ideal alcohol screening questionnaire for a busy practice should be brief and highly sensitive and specific for identifying the spectrum of alcohol misuse. Also, it should be easy to recall so it can be part of routine face-to-face discussion with the patient during an office visit.

Further, it should include questions that focus on the consequences of drinking as well as on quantity and frequency. It should also take into account factors such as the patient’s age, sex, race or ethnicity, and pregnancy status, as these can influence the effectiveness of the screening method.

Problems with focusing on quantity alone

“Risky use” is defined (in a non-alcohol-dependent person or one with no alcohol-related consequences) as more than seven standard drinks per week or more than three per occasion for women, and more than 14 standard drinks per week or more than four per occasion for men.2

A standard drink in the United States contains about 12 to 14 g of ethanol: a 12-oz can or bottle of beer, a 5-oz glass of wine, or about 1.5 oz of 80-proof liquor.2

The common single-item screening test asks, “How many times in the past year have you had more than four drinks (for women) or five drinks (for men) in a day?” This is recommended by the National Institute on Alcohol Abuse and Alcoholism for brief screening in primary care. However, a positive answer (ie, one or more times in the past year) has a sensitivity of only 82% and a specificity of only 79% for detecting unhealthy alcohol use, and an even lower specificity (67%) for detecting current alcohol use disorder.7

The CAGE questionnaire

The four-item CAGE questionnaire8 focuses on the consequences of drinking:

  • C: Have you felt the need to cut down on your drinking?
  • A: Have you ever felt annoyed by someone criticizing your drinking?
  • G: Have you ever felt bad or guilty about your drinking?
  • E: Have you ever had an eye-opener—a drink the first thing in the morning to steady your nerves?

A yes to one or more of the questions denotes a need for further assessment.

The CAGE questionnaire is simple, non-threatening, brief, and easy to remember. A yes answer to two or more items has a sensitivity of 75% to 95% and a specificity of 84% to 97% for alcohol dependence.9 However, CAGE is less sensitive for identifying nonalcohol-dependent at-risk drinkers. The patient’s sex and ethnicity have also been found to affect its performance somewhat, with some studies showing a sensitivity as low as 50% in adult white women and as low as 40% in at-risk groups ages 60 and over.

 

 

The TWEAK questionnaire

The TWEAK is a modification of the CAGE and includes a question about tolerance; it has a sensitivity of 87% for harmful drinking and 84% for dependence, especially in trauma-related cases.9 It has also been found to be better than the CAGE for screening pregnant patients.

  • Tolerance: How many drinks can you hold without falling asleep or passing out? (2 points if six drinks or more)
  • Worried: Have friends or relatives worried about your drinking? (2 points if yes)
  • Eye-opener: Do you sometimes take a drink in the morning when you first get up? (1 point if yes)
  • Amnesia: Have friends or relatives told you about things you said or did while drinking that you could not remember? (1 point if yes)
  • Cut down: Do you sometimes feel the need to cut down on your drinking? (1 point if yes)

An answer of ≥ 6 to the first question or a total score of 3 or more denotes a problem with alcohol use and a need for further assessment.10

The AUDIT-C

The AUDIT-C, a shorter form of the 10-item AUDIT developed by the World Health Organization, uses only the first three questions of the full-length AUDIT. The three-item AUDIT-C has a sensitivity ranging from 85% in Hispanic women to 95% in white men.9,11 The questions center on the quantity and frequency of alcohol use:

  • How often do you have a drink containing alcohol? Answer choices: never; monthly or less often; 2 to 4 times a month; 2 to 3 times a week; 4 or more times a week.
  • How many standard drinks containing alcohol do you have on a typical day when you are drinking? Answer choices: one or two; three or four; five or six; seven to nine; 10 or more.
  • How often do you have six or more drinks on one occasion? Answer choices: never, less than monthly; monthly; weekly; daily or almost.

Scoring is 0 for never, and 1, 2, 3, or 4 for the subsequent answer choices in each question.

The cut-off score for the AUDIT-C is usually a total of 3 points for women and 4 for men: ie, a score of 3 or higher for women and a score of 4 or higher for men indicate alcohol use disorder and the need for further assessment.

The AUDIT questionnaire has been found not only to have a high sensitivity (83%) and specificity (90%) for identifying alcohol dependence, but also to be more sensitive than the CAGE questionnaire (85% vs 75%) for identifying harmful drinking, hazardous drinking, and at-risk drinking. (Note: The full version of AUDIT performed similarly to the three-item AUDIT-C for detecting heavy drinking and active abuse or dependence.12) Furthermore, it has performed well as a screening test in many multinational trials of alcohol brief intervention. The questions about quantity of alcohol consumed may be even more suitable for adolescents and young adults, who tend to fall into the harmful-hazardous drinking category rather than the dependent category. In some studies, patients tended to reveal less with the CAGE questionnaire when it was preceded by direct and close-ended questions about the quantity and frequency of alcohol use, thus reducing its sensitivity.13

The AUDIT and TWEAK questionnaires showed greater sensitivity in both men and women than the CAGE questionnaire and were equally sensitive in African Americans.14

HOW TO FIT ALCOHOL SCREENING INTO AN OFFICE VISIT

A practical way to fit alcohol screening into an office visit is to include a questionnaire in the assessment papers completed by the patient while in the waiting room. In other settings, these questions may be asked by trained nursing staff as part of the initial assessment, ie, while obtaining the patient’s weight and vital statistics. This can be briefly reviewed by the physician during the face-to-face history and physical examination.

A concerted effort is needed to proactively screen for alcohol use. A combination of questions about the effect, the quantity, and the frequency of alcohol use is the best way to screen for the many different aspects of alcohol use disorder—many of which can be managed in the primary care setting through brief interventions without referral to a specialist.

When screening for alcohol misuse, it is also important to consider factors such as age, sex, race or ethnicity, pregnancy, and history of recent trauma or surgery.

Popular questionnaires to screen for alcohol misuse include the CAGE, the TWEAK, and the short form of the Alcohol Use Disorder Identification Test (AUDIT-C). Any of these is recommended. The important thing is to be proactive about screening for this very common and underrecognized problem.

A COMMON PROBLEM, NOT OFTEN ADMITTED

Alcohol use disorder, which ranges from hazardous drinking to binge drinking and alcohol dependence, is more common than admitted and often goes undiagnosed. Its personal, societal, and economic consequences cannot be overemphasized. Alcohol use is responsible for 85,000 deaths each year in the United States, and it is linked to substantial medical and psychiatric consequences and injuries, especially motor vehicle accidents. The estimated annual cost of problems attributed to alcohol use is over $185 billion.1

About three in 10 US adults drink at levels that increase their risk of alcohol-related consequences, and about one in four of these at-risk drinkers currently abuses alcohol or is dependent on it.2 In 2009, 6.8% of the US population age 12 and older reported heavy drinking, with the highest rates in those ages 21 to 29.3 The rate of alcohol use was higher in men than in women, but about 10% of pregnant women ages 15 to 44 reported current alcohol use.3

The prevalence of alcohol use disorder in a typical ambulatory primary care practice ranges from 2% to 29%,4 yet only one-third of people with the disorder are diagnosed.

Studies and experience have shown that problem drinkers tend not to seek help until dependence is advanced, often with associated medical and sociolegal complications. It is also well established that the earlier the diagnosis is made and appropriate intervention is offered, the better the prognosis.

WHAT IS THE GOAL OF SCREENING?

The goals of screening for alcohol use disorder are to estimate the patient’s risk level, to identify those at risk because they exceed defined limits, and to identify those with evidence of an active problem, ie, with adverse consequences related to their drinking. This screening paves the way for further assessment, definitive diagnosis, and a treatment plan.

The US Preventive Services Task Force recommends screening and behavioral counseling interventions (such as a brief intervention) in the primary care setting to reduce alcohol misuse by adults, including pregnant women.5 In addition, most primary care patients who screen positive for heavy drinking or alcohol use disorder show motivation and readiness to change, and those with the most severe symptoms tend to be the most ready.6

THE IDEAL QUESTIONNAIRE: SENSITIVE, SPECIFIC, AND SHORT

The ideal alcohol screening questionnaire for a busy practice should be brief and highly sensitive and specific for identifying the spectrum of alcohol misuse. Also, it should be easy to recall so it can be part of routine face-to-face discussion with the patient during an office visit.

Further, it should include questions that focus on the consequences of drinking as well as on quantity and frequency. It should also take into account factors such as the patient’s age, sex, race or ethnicity, and pregnancy status, as these can influence the effectiveness of the screening method.

Problems with focusing on quantity alone

“Risky use” is defined (in a non-alcohol-dependent person or one with no alcohol-related consequences) as more than seven standard drinks per week or more than three per occasion for women, and more than 14 standard drinks per week or more than four per occasion for men.2

A standard drink in the United States contains about 12 to 14 g of ethanol: a 12-oz can or bottle of beer, a 5-oz glass of wine, or about 1.5 oz of 80-proof liquor.2
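
For readers who want to check the arithmetic, the 12- to 14-g figure follows directly from volume and alcohol content. The short sketch below is purely illustrative; the function name is ours, and the alcohol-by-volume values in the comments (5% beer, 12% wine, 40% for 80-proof liquor) are typical assumptions rather than fixed standards.

```python
def grams_of_ethanol(volume_oz: float, alcohol_by_volume: float) -> float:
    # grams = US fluid ounces x 29.57 mL/oz x fraction alcohol x 0.789 g/mL (density of ethanol)
    return volume_oz * 29.57 * alcohol_by_volume * 0.789

# Each "standard drink" listed above works out to roughly 14 g of ethanol:
# grams_of_ethanol(12, 0.05)   # 12-oz beer at 5% alcohol   -> about 14 g
# grams_of_ethanol(5, 0.12)    # 5-oz glass of wine at 12%  -> about 14 g
# grams_of_ethanol(1.5, 0.40)  # 1.5 oz of 80-proof liquor  -> about 14 g
```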

The common single-item screening test asks, "How many times in the past year have you had four or more drinks (for women) or five or more drinks (for men) in a day?" It is recommended by the National Institute on Alcohol Abuse and Alcoholism for brief screening in primary care. However, a positive answer (ie, one or more times in the past year) has a sensitivity of only 82% and a specificity of only 79% for detecting unhealthy alcohol use, and an even lower specificity (67%) for detecting current alcohol use disorder.7
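
In a practice that collects these numbers on an electronic intake form, the weekly and per-occasion limits and the single-item screen reduce to a simple decision rule. The sketch below is a minimal illustration under that assumption; the function and parameter names are ours, not part of any published instrument, and drink counts are assumed to be self-reported standard drinks.

```python
def risky_use(drinks_per_week: int, max_drinks_per_occasion: int, is_male: bool) -> bool:
    # "Risky use" limits for an adult without dependence or alcohol-related consequences:
    # women: >7 standard drinks/week or >3 per occasion; men: >14/week or >4 per occasion.
    weekly_limit, occasion_limit = (14, 4) if is_male else (7, 3)
    return drinks_per_week > weekly_limit or max_drinks_per_occasion > occasion_limit

def single_item_screen_positive(heavy_drinking_days_past_year: int) -> bool:
    # One or more days in the past year with 4+ drinks (women) or 5+ drinks (men)
    # counts as a positive single-item screen.
    return heavy_drinking_days_past_year >= 1
```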

The CAGE questionnaire

The four-item CAGE questionnaire8 focuses on the consequences of drinking:

  • C: Have you felt the need to cut down on your drinking?
  • A: Have you ever felt annoyed by someone criticizing your drinking?
  • G: Have you ever felt bad or guilty about your drinking?
  • E: Have you ever had an eye-opener—a drink the first thing in the morning to steady your nerves?

A yes to one or more of the questions denotes a need for further assessment.
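
Because the CAGE is scored by counting "yes" answers, it is easy to embed in an electronic intake form. The sketch below simply tallies the four items; the function name is ours, and the comment restates the article's rule (one or more "yes" answers prompts further assessment) alongside the two-or-more cutoff usually cited for suspected dependence.

```python
def cage_score(cut_down: bool, annoyed: bool, guilty: bool, eye_opener: bool) -> int:
    # Returns the number of 'yes' answers (0-4). A score of 1 or more warrants
    # further assessment; 2 or more is the cutoff usually cited for alcohol dependence.
    return sum((cut_down, annoyed, guilty, eye_opener))
```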

The CAGE questionnaire is simple, non-threatening, brief, and easy to remember. A yes answer to two or more items has a sensitivity of 75% to 95% and a specificity of 84% to 97% for alcohol dependence.9 However, the CAGE is less sensitive for identifying at-risk drinkers who are not alcohol-dependent. The patient's sex and ethnicity have also been found to affect its performance somewhat, with some studies showing a sensitivity as low as 50% in adult white women and as low as 40% in at-risk groups ages 60 and over.

The TWEAK questionnaire

The TWEAK is a modification of the CAGE and includes a question about tolerance; it has a sensitivity of 87% for harmful drinking and 84% for dependence, especially in trauma-related cases.9 It has also been found to be better than the CAGE for screening pregnant patients.

  • Tolerance: How many drinks can you hold without falling asleep or passing out? (2 points if six drinks or more)
  • Worried: Have friends or relatives worried about your drinking? (2 points if yes)
  • Eye-opener: Do you sometimes take a drink in the morning when you first get up? (1 point if yes)
  • Amnesia: Have friends or relatives told you about things you said or did while drinking that you could not remember? (1 point if yes)
  • Cut down: Do you sometimes feel the need to cut down on your drinking? (1 point if yes)

An answer of ≥ 6 to the first question or a total score of 3 or more denotes a problem with alcohol use and a need for further assessment.10
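
The TWEAK's weighted scoring can be automated the same way. The following is a minimal sketch of the point assignments listed above; the function and parameter names are ours.

```python
def tweak_score(tolerance_drinks: int, worried: bool, eye_opener: bool,
                amnesia: bool, cut_down: bool) -> int:
    # Tolerance of six or more drinks scores 2 points, a 'yes' to Worried scores 2,
    # and each remaining 'yes' scores 1 point, for a maximum of 7 points.
    score = (2 if tolerance_drinks >= 6 else 0) + (2 if worried else 0)
    score += sum((eye_opener, amnesia, cut_down))
    return score  # a total of 3 or more suggests a problem with alcohol use
```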

The AUDIT-C

The AUDIT-C, a shorter form of the 10-item AUDIT developed by the World Health Organization, uses only the first three questions of the full-length AUDIT. The three-item AUDIT-C has a sensitivity ranging from 85% in Hispanic women to 95% in white men.9,11 The questions center on the quantity and frequency of alcohol use:

  • How often do you have a drink containing alcohol? Answer choices: never; monthly or less often; 2 to 4 times a month; 2 to 3 times a week; 4 or more times a week.
  • How many standard drinks containing alcohol do you have on a typical day when you are drinking? Answer choices: one or two; three or four; five or six; seven to nine; 10 or more.
  • How often do you have six or more drinks on one occasion? Answer choices: never; less than monthly; monthly; weekly; daily or almost daily.

Each question is scored from 0 to 4: 0 for the first answer choice and 1, 2, 3, or 4 for the subsequent choices.

The cut-off score for the AUDIT-C is usually a total of 3 points for women and 4 points for men; ie, a score of 3 or higher in a woman or 4 or higher in a man suggests alcohol use disorder and indicates the need for further assessment.
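
As a worked example of this scoring, the sketch below takes the 0-to-4 position of each chosen answer and applies the sex-specific cutoff; the function and parameter names are ours and are not part of the published instrument.

```python
def audit_c_positive(frequency: int, typical_quantity: int, binge_frequency: int,
                     is_male: bool) -> bool:
    # Each argument is the 0-4 index of the answer chosen for the corresponding
    # AUDIT-C question (0 = first answer choice, 4 = last).
    total = frequency + typical_quantity + binge_frequency
    return total >= (4 if is_male else 3)  # cutoff: 3 or more (women), 4 or more (men)
```

For example, a woman who drinks two to four times a month (2 points), has three or four drinks on a typical day (1 point), and never has six or more drinks on one occasion (0 points) totals 3 points and screens positive.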

The AUDIT questionnaire has been found not only to have a high sensitivity (83%) and specificity (90%) for identifying alcohol dependence, but also to be more sensitive than the CAGE questionnaire (85% vs 75%) for identifying harmful drinking, hazardous drinking, and at-risk drinking. (Note: The full version of AUDIT performed similarly to the three-item AUDIT-C for detecting heavy drinking and active abuse or dependence.12) Furthermore, it has performed well as a screening test in many multinational trials of alcohol brief intervention. The questions about quantity of alcohol consumed may be even more suitable for adolescents and young adults, who tend to fall into the harmful-hazardous drinking category rather than the dependent category. In some studies, patients tended to reveal less with the CAGE questionnaire when it was preceded by direct and close-ended questions about the quantity and frequency of alcohol use, thus reducing its sensitivity.13

The AUDIT and TWEAK questionnaires showed greater sensitivity in both men and women than the CAGE questionnaire and were equally sensitive in African Americans.14

HOW TO FIT ALCOHOL SCREENING INTO AN OFFICE VISIT

A practical way to fit alcohol screening into an office visit is to include a questionnaire in the assessment papers the patient completes in the waiting room. In other settings, these questions may be asked by trained nursing staff as part of the initial assessment, ie, while obtaining the patient’s weight and vital signs. The physician can then briefly review the responses during the face-to-face history and physical examination.

A concerted effort is needed to proactively screen for alcohol use. A combination of questions about the effect, the quantity, and the frequency of alcohol use is the best way to screen for the many different aspects of alcohol use disorder—many of which can be managed in the primary care setting through brief interventions without referral to a specialist.

When screening for alcohol misuse, it is also important to consider factors such as age, sex, race or ethnicity, pregnancy, and history of recent trauma or surgery.

References
  1. Saitz R. Clinical practice. Unhealthy alcohol use. N Engl J Med 2005; 352:596-607.
  2. National Institute on Alcohol Abuse and Alcoholism (NIAAA). Helping patients who drink too much: A clinician’s guide and related professional support resources. http://pubs.niaaa.nih.gov/publications/practitioner/cliniciansguide2005/clinicians_guide.htm. Accessed July 29, 2011.
  3. Substance Abuse and Mental Health Services Administration (SAMHSA). Results from the 2009 National Survey on Drug Use and Health: Volume I. Summary of National Findings. http://www.oas.samhsa.gov/NSDUH/2k9NSDUH/2k9ResultsP.pdf. Accessed July 29, 2011.
  4. Fiellin DA, Reid MC, O’Connor PG. Screening for alcohol problems in primary care: a systematic review. Arch Intern Med 2000; 160:1977-1989.
  5. US Preventive Services Task Force (USPSTF). Screening and behavioral counseling interventions in primary care to reduce alcohol misuse. Release date: April 2004. http://www.uspreventiveservicestaskforce.org/uspstf/uspsdrin.htm. Accessed July 29, 2011.
  6. Williams EC, Kivlahan DR, Saitz R, et al. Readiness to change in primary care patients who screened positive for alcohol misuse. Ann Fam Med 2006; 4:213-220.
  7. Smith PC, Schmidt SM, Allensworth-Davies D, Saitz R. Primary care validation of a single-question alcohol screening test. J Gen Intern Med 2009; 24:783-788.
  8. Ewing JA. Detecting alcoholism. The CAGE questionnaire. JAMA 1984; 252:1905-1907.
  9. Cherpitel CJ. Screening for alcohol problems in the emergency department. Ann Emerg Med 1995; 26:158-166.
  10. Russell M, Martier SS, Sokol RJ, et al. Screening for pregnancy risk-drinking. Alcohol Clin Exp Res 1994; 18:1156-1161.
  11. Frank D, DeBenedetti AF, Volk RJ, Williams EC, Kivlahan DR, Bradley KA. Effectiveness of the AUDIT-C as a screening test for alcohol misuse in three race/ethnic groups. J Gen Intern Med 2008; 23:781-787.
  12. Bush K, Kivlahan DR, McDonell MB, Fihn SD, Bradley KA. The AUDIT alcohol consumption questions (AUDIT-C): an effective brief screening test for problem drinking. Ambulatory Care Quality Improvement Project (ACQUIP). Alcohol Use Disorders Identification Test. Arch Intern Med 1998; 158:1789-1795.
  13. Steinweg DL, Worth H. Alcoholism: the keys to the CAGE. Am J Med 1993; 94:520-523.
  14. Cherpitel CJ. Brief screening instruments for alcoholism. Alcohol Health Res World 1997; 21:348-351.
Issue
Cleveland Clinic Journal of Medicine - 78(10)
Page Number
649-651
Display Headline
What is the best questionnaire to screen for alcohol use disorder in an office practice?