Not another burnout article
Does this sound like your day?
You show up to work after a terrible night’s sleep. Your back is tense, and you do some kind of walking/stretching combo as you come through the doors. Your focus fades during the mind-numbing routine of the morning shift sign-out. As the day moves forward, you begin to feel resentful as you sign orders, see patients, and address your ICU team’s needs. You know that’s not right, that it’s not in line with who you want to be, but the irritation doesn’t go away.
Your lunchtime is filled with computer screens, notes, billing, and more billing. The previous feelings of irritation begin to boil into anger because more of your day is filled with bureaucratic demands and insurance reports rather than actually helping people. This isn’t what you signed up for. Years and years of training so you could be a paper pusher? The thought leads to rage ... or sometimes apathy on days you give in to the inevitable.
You finish your shift with admissions, procedures, code blues, and an overwhelming and exhausting night-shift sign-out. You feel like a hamster in a wheel. You’re going nowhere. What’s the point of all of this? You find yourself questioning why you went into medicine anyway ... yeah, that’s burnout.
I know what you’re thinking. You keep hearing about this, and it’s important to recognize, but then you hear the same old solutions: be more positive, find balance, do some yoga, take this resilience module, be mindful (what on earth does this mean anyway?), get some more sleep. Basically, it’s our problem. It’s our burden. If all of these were easy to understand and implement, don’t you think doctors and health-care providers would have done it already? I think you and I are a lot alike. These were my exact feelings. But stick with me on this one. I have a solution for you, albeit a little different. I’ll show you a more “positive” spin on the DIY.
I burned out early. After fellowship, I didn’t want to be a doctor anymore. I desperately sought to alter my career somehow. I looked into website development, something I had been good at in high school. I took a few refresher classes on my days off and started coding my own sites, but I had bills to pay. Big bills. Student loan bills. Luckily, my first job out of fellowship accepted many of my schedule demands, such as day shifts only, and after about a year, I recovered and remembered why I had loved medicine to begin with.
What is burnout?
Mind-body-soul exhaustion caused by excessive stress. Stress and burnout are related, but they’re more like distant cousins than siblings. Stress can be (and is) a normal part of our jobs. I bet you think you’re stressed, when you’re probably burned out. Critical care doctors have the highest rate of burnout among all physician subspecialties at >55%, and it is even higher in pediatric critical care (Sessler C. https://www.mdedge.com/chestphysician/article/160951/society-news/turning-heat-icu-burnout). The main difference between stress and burnout is hope. With stress, you still feel like things can get better and you can get it all under control. Burnout feels hopeless.
What are the three core symptoms of burnout?
• Irritability and impatience with patients (depersonalization)
• Cynicism and difficulty concentrating (emotional exhaustion)
• What’s the point of all of this? Nothing I do matters or is appreciated (decreased self-efficacy)
We can talk about the symptoms of burnout all day, but what does that really look like? It looks like the day we described at the beginning. You know, the day that resonated with you and caused you to keep reading.
Why should we all be discussing this important topic?
Being burned out not only affects us on a soul level (achingly described above) but, more importantly, trickles down to our personal lives, our family relationships, and how we care for our patients, with some studies showing that it affects our performance and, gulp, patient outcomes. That’s scary (Moss M, et al. Crit Care Med. 2016;44[7]:1414).
Causes of burnout
There are many causes of burnout, and several studies have identified risk factors. A lack of control, conflicts with colleagues and leadership, and performing menial tasks can add to the irritation of a workday. This doesn’t even include the nature of our actual job as critical care doctors. We care for the sickest and are frequently involved in end-of-life care. Over time, the stress morphs into burnout. Female gender is also an independent risk factor for doctors (Pastores SM, et al. Crit Care Med. 2019;47[4]:550).
We’ve identified it. We’ve quantified it. But we’re not fixing it. In fact, there are only a few studies that have incorporated a needs assessment of doctors, paired with appropriate environmental intervention. A study done with primary care doctors in New York City clinics found that surveying a doctor’s “wish list” of interventions can help identify gaps in workflow, such as pairing one medical assistant with each attending (Linzer M, et al. J Gen Intern Med. 2015;30[8]:1105).
Without more data like this, we’re hamsters in a wheel. Luckily, organizations like CHEST have joined with others to create the Critical Care Societies Collaborative, which holds an annual summit to discuss research strategies.
Solutions
Even millennials are sick of the mindful “chore” list. Yoga pants, yoga mats, crystals, chakras, meditation, and the list goes on and on. What millennials want is easy work-life integration: workspaces that invite mindful behavior and daily rituals that excite and relax them. Co-working spaces like WeWork have designated self-care spaces.
Self-care is now essential, not an indulgence. I wasn’t sure how to create this space in my ICU, so I started small, with things I could carry with me. The key is to find small rituals with big meanings. What could this look like for you? I began doing breathwork. Frankly, the idea came to me from my Apple Watch®. It just started giving me these reminders one day, and I decided to take it seriously. I found that my mind and muscles eased after only 1 minute of breathing in and out slowly. This elevated my mood and was the refresher I needed in the afternoons. My body ached less after procedures.
I also got a little woo-woo (stay with me now) and began carrying around crystal stones. You don’t have to carry around crystals. Prayer books, religious symbols, your child’s toy car, anything can work if it has meaning for you, so when you see it or touch it during your day, you remember your big why. Why you’re serving people. Why you’re a doctor. I prefer the crystals over jewelry because they’re something unusual that I don’t expect to find sitting in my pocket. They’re always a nice gentle reminder of the love I have for my patients, my job, and humanity. When I put my hands in my pocket as I’m talking to yet another frustrated family member, my responses are more patient and calm, which leads to a more productive conversation.
Lastly, I started what I call a new Pavlov home routine. When I’m done with work, I light a candle and write out three things I’m grateful for. Retrain your brain. Retrain your triggers. What’s your Pavlov’s bell going to be? Many of us come home hungry and stressed. Food then becomes linked to stress. This is not good. Link it with something else. Light a candle, count to 3, then blow it out. Use your kids to incorporate something fun. Use a toy with “super powers” to “beam” the bad feelings away. Taking a few extra minutes to shift gears has created a much happier home for me.
There are things that we can’t control. Those are called circumstances. We can’t control other people; we can’t control the hospital system; we can’t control our past. But we can control everything else: our thoughts, our feelings, and our daily self-care rituals.
It reminds me of something my dad always said when I was a little girl. When crossing the street, you always look twice, oftentimes three times. Why be so careful? It’s the pedestrian’s right of way, after all. “Well,” he would reply, “if a car hits you, nothing much happens to them, but your entire life will be destroyed, forever.”
Stop walking into traffic thinking everything will be OK. Take control of what you can.
Look, I get it. As health-care providers, we are an independent group. But just because you can do it alone doesn’t mean you have to.
Choose one thing, whether it’s something I mentioned or something that came to mind as you read this. Then, drop me a line at my personal email, [email protected]. I will send you a reply to let you know I hear you and I’m in your corner.
Burnout happens.
But so do joy, job satisfaction, and balance. Those things just take more effort.
Dr. Khan is Assistant Editor, Web and Multimedia, CHEST® journal.
Social media for physicians: Strong medicine or snake oil?
For most of us, social media is a daunting new reality that we are pressured to be part of but that we struggle to fit into our increasingly demanding schedules. My first social media foray as a physician was a Facebook fan page, created as a hobby rather than as a professional presence. Years later, I have learned the incredible benefit that being on other social media platforms has brought to my profession.
What’s social media going to bring to my medical practice?
The days when physicians retreated to the safety of our offices to deliver care, issue carefully structured opinions, and interact with patients on our own terms have made way for more direct interaction. Social media has, indeed, allowed us to share more personal glimpses of our daily struggle to save lives, behind-the-scenes snapshots of ethical struggles in decision making, our difficulties qualifying patients for therapies because of insurance complications, and real-time responses to medical news and misinformation. Moreover, when patients self-refer, or are referred to my practice, they look me up online before coming to my office. Online profiles are the new “first impression” of a physician’s bedside manner.
Other personal examples of social media benefits include being informed of new publications, since many journals now have an online presence; being able to interact in real time with authors; learning from physicians in other countries how they handled issues, such as shortages of critical medications; and earning CME, such as through the Twitter chats hosted by CHEST (eg, new biologic agents in difficult-to-treat asthma, or patient selection for triple therapy in COPD).
Why should I pay attention to my social media presence?
The pace at which social media changed the landscape took the medical community by surprise. Patients, third-party websites, and online review agencies (official or not) adopted it well before physicians became comfortable with it. As such, when I decided to google myself, I was shocked at the level of misinformation about me (as a pulmonologist, I didn’t know I had performed sigmoidoscopies, yet that’s what my patients learned before they met me). That was an important lesson: If I don’t control the narrative, someone else will. Consequently, I dedicated a few hours to establishing an online presence in order to introduce myself accurately and to be accessible to my patients and colleagues online.
Who decides what’s ethical and what’s not?
As the lines blurred, our community struggled to define what was appropriate and what was not. Finally, we welcomed with relief the issuance of a Code of Ethics regarding social media use by physicians from several societies, including the American Medical Association (https://www.ama-assn.org/delivering-care/ethics/professionalism-use-social-media). The principles guiding physicians’ use of social media include respect for human dignity and rights, honesty and upholding the standards of professionalism, and the duty to safeguard patient confidences and privacy.
Which platform should I use? There are so many.
While any content can be shared on any platform, social media sites have organically differentiated, becoming more amenable to one type of content than another. Some platforms tend to be used more professionally (eg, Twitter and LinkedIn) and others more personally (eg, Facebook, Instagram, Snapchat, and Pinterest). CHEST has selected Twitter to host its CME chats on preselected topics, post information about upcoming lectures during the CHEST annual meeting, and more. Newer social media sites, such as Sermo, Doximity, QuantiMD, and Doc2Doc, are “physician only.” Many of these sites require doctors to submit their credentials to a site gatekeeper, recreating the intimacy of a “physicians’ lounge” in an online environment (J Med Internet Res. 2014;16[2]:e13). Lastly, Figure 1 is a media-sharing app for physicians that allows discussion of de-identified images or cases, recreating the “curbside” consult concept online.
I heard about hashtags. What are they?
Hashtags are simply clickable topic titles (#COPD, #Sepsis, #Education, etc.) that can be added to a post in order to widen its reach. For instance, if I am interested in sepsis, I can click on the hashtag #Sepsis, and it brings up all the posts, from any Twitter account, that added that hashtag. It’s a filter that takes me to that topic of interest. I can then “Like” the post itself or follow the account that shared it. Following an account works like a bookmark: from then on, that account’s posts will appear in my own Twitter feed.
What are influencers or thought leaders?
Anyone who follows my account now sees my posts. The number of followers has become a measure of anyone’s popularity on social media. If it reaches a high level, the person with the account is dubbed an “influencer.” Social media “influencers” are individuals whose opinions are followed by hundreds of thousands. Influencers may even harness their reach to make money from advertising. One can easily see how powerful it is for a physician to become an influencer or a “thought leader,” not to make money but to expand their reach on social media and spread accurate information about diets, drugs, e-cigarettes, and vaccinations, to name a few topics.
Can social media get me in trouble?
In 2012, a survey of state medical boards published in JAMA (2012;307[11]:1141) revealed that approximately 30% of state medical boards had received complaints of “online violations of patient confidentiality.” More than 10% stated they had encountered a case of an “online depiction of intoxication.”
Another study a year earlier revealed that 13% of physicians reported they had discussed individual, though anonymized, cases with other physicians in public online forums (http://www.quantiamd.com/qqcp/DoctorsPatientSocialMedia.pdf).
Even if information is posted anonymously, or on a “personal” rather than professional social media site, various investigative methods may potentially be used to link it directly to a specific person or incident. The most current case law dictates that such information is “discoverable.” In fact, Facebook’s data use policy informs users that “we may access, preserve, and share your information in response to a legal request,” both within and outside of U.S. jurisdiction.
What kind of trouble could I be exposed to?
Poor-quality information, damage to our professional image, breaches of patient privacy, violations of the patient-physician boundary, license revocation by state boards, and erroneous medical advice given without examining the patient are all potential pitfalls of careless social media use by physicians.
How can I minimize my legal risk when interacting online?
It has been suggested that a legally sound approach in response to requests for online medical advice would be to send a standard response form that:
• informs the inquirer that the health-care provider does not answer online questions;
• supplies offline contact information so that an appointment can be made, if desired; and
• identifies a source for emergency services if the inquirer cannot wait for an appointment.
In circumstances where a patient–physician relationship already exists, informed consent should be obtained, including a careful explanation of the risks of online communication, expected response times, and the handling of emergencies, and then documented in the patient’s chart (PT. 2014 Jul;39[7]:491,520).
In summary
Social media, much like any area of medicine one is interested in, can be daunting and exciting but fraught with potential difficulties. I liken its adoption in our daily practice to any other professional decision or interest: being in a private or academic setting, adopting procedural medicine or sticking to diagnostic consultations, or participating in research. In the end, it’s an individual expression of how we wish to practice medicine. However, verifying the information that already exists online about us is of paramount importance. If I don’t tell my story, someone else will, and they may not be as truthful.
Dr. Bencheqroun is Assistant Professor, University of California Riverside School of Medicine; Pulmonary/Critical Care Faculty, Program Coordinator and Research Mentor, Internal Medicine Residency Program, Desert Regional Medical Center, Palm Springs, CA; and Immediate Past Chair of the CHEST Council of Networks.
Risks of removing the default: Lung protective ventilation IS for everyone
Since the landmark ARMA trial, use of low tidal volume ventilation (LTVV) at 6 mL/kg predicted body weight (PBW) has become our gold standard for ventilator management in acute respiratory distress syndrome (ARDS) (Brower RG, et al. N Engl J Med. 2000;342[18]:1301). While other studies have suggested that patients without ARDS may also benefit from lower volumes, the recently published Protective Ventilation in Patients Without ARDS (PReVENT) trial found no benefit to using LTVV in non-ARDS patients (Simonis FD, et al. JAMA. 2018;320[18]:1872). Does this mean we let physicians set volumes at will? Is tidal volume (VT) even clinically relevant anymore in the non-ARDS population?
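For readers who want the arithmetic behind these targets spelled out, here is a minimal sketch of my own (not part of any trial protocol; the function names are illustrative) using the widely cited ARDSNet predicted body weight formula.

```python
def predicted_body_weight_kg(height_cm: float, male: bool) -> float:
    """ARDSNet predicted body weight (kg):
    50.0 (men) or 45.5 (women) + 0.91 x (height in cm - 152.4)."""
    return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)


def tidal_volume_ml(height_cm: float, male: bool, ml_per_kg: float) -> float:
    """Tidal volume target (mL) for a given mL/kg PBW setting."""
    return ml_per_kg * predicted_body_weight_kg(height_cm, male)


# Example: a 175-cm man has a PBW of about 70.6 kg, so
# 6 mL/kg PBW (LTVV) is ~423 mL and 8 mL/kg PBW is ~565 mL.
print(round(tidal_volume_ml(175, True, 6)))   # 423
print(round(tidal_volume_ml(175, True, 8)))   # 565
```

The point of the sketch is simply that the target is anchored to height and sex, not to actual body weight.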
Prior to the PReVENT trial, our practice of LTVV for patients without ARDS was informed primarily by observational data. In 2012, a meta-analysis comparing LTVV with “conventional” VT (10-12 mL/kg IBW) in non-ARDS patients found that those given LTVV had a lower incidence of acute lung injury and lower overall mortality (Neto AS, et al. JAMA. 2012;308[16]:1651). While these were promising findings, there was limited follow-up after study onset, and the majority of included studies were based on a surgical population. Additionally, the use of VT >10 mL/kg PBW has become uncommon in routine clinical practice. How comparable are those previous studies to today’s clinical milieu? When comparing outcomes for ICU patients who were ventilated with low (≤7 mL/kg PBW), intermediate (>7 but <10 mL/kg PBW), and high (≥10 mL/kg PBW) VT, a second meta-analysis found a 28% risk reduction in the development of ARDS or pneumonia with low vs high VT, but a similar difference was not seen when comparing the low vs intermediate groups (Neto AS, et al. Crit Care Med. 2015;43[10]:2155). This research suggested that negative outcomes were driven by excessive VT.
Slated to be the definitive study on the matter, the PReVENT trial used a multicenter randomized controlled design comparing a target VT of 4 mL/kg PBW with 10 mL/kg PBW, with setting titration based primarily on plateau pressure targets. The headline out of this trial may have been that it was “negative,” in that there was no difference between the groups in the primary outcome of ventilator-free days and survival by day 28. However, there are some important limitations to consider before discounting LTVV for everyone. First, half of the trial patients were ventilated with pressure-control ventilation, and by day 3 the actual VT settings were 7.3 (5.9-9.1) mL/kg PBW in the low group vs 9.1 (7.7-10.5) mL/kg PBW in the intermediate group: statistically significant differences, but perhaps not as striking clinically. Moreover, a secondary analysis of ARDSNet data (Amato MB, et al. N Engl J Med. 2015;372[8]:747) suggests that driving pressure, more so than VT, may determine outcomes; for most patients in the PReVENT trial, driving pressure remained in the “safe” range of <15 cm H2O. Finally, almost two-thirds of patients eligible for PReVENT were not enrolled, and the included cohort had PaO2/FiO2 ratios greater than 200 for the 3 days of the study, limiting generalizability, especially for patients with acute hypoxemic respiratory failure.
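For those keeping score at the bedside, the driving pressure calculation itself is trivial; this is my own sketch, not part of the PReVENT protocol: driving pressure is plateau pressure minus PEEP, and the “safe” range cited above corresponds to values below roughly 15 cm H2O.

```python
def driving_pressure(plateau_cm_h2o: float, peep_cm_h2o: float) -> float:
    """Driving pressure (cm H2O) = plateau pressure - PEEP."""
    return plateau_cm_h2o - peep_cm_h2o


# Example: a plateau pressure of 24 cm H2O on 10 cm H2O of PEEP
dp = driving_pressure(24, 10)
print(dp, dp < 15)  # 14 True: below the ~15 cm H2O threshold discussed above
```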
When approaching the patient whom we have determined not to have ARDS (either by clinical diagnosis or by suspicion plus a low PaO2/FiO2 ratio, as defined by PReVENT’s protocol), it is important to also consider our accuracy in recognizing ARDS before settling for the use of unregulated VT. ARDS is often underrecognized, and this delay in diagnosis results in delayed LTVV initiation. Results from the LUNG SAFE study, an international multicenter prospective observational study of over 2,300 ICU patients with ARDS, showed that only 34% of patients were recognized by the clinician to have ARDS at the time they met the Berlin criteria (Bellani G, et al. JAMA. 2016;315[8]:788). As ARDS is defined by clinical criteria, it is biologically plausible to think that the pathologic process commences before these criteria are recognized by the clinician.
To investigate the importance of the timing of LTVV in ARDS, Needham and colleagues performed a prospective cohort study in patients with ARDS, examining the effect of VT received over time on the outcome of ICU mortality (Needham DM, et al. Am J Respir Crit Care Med. 2015;191[2]:177). They found that every 1 mL/kg increase in VT setting was associated with a 23% increase in mortality, and, indeed, increases in subsequent VT compared with the baseline setting were associated with increasing mortality. One may, therefore, be concerned that if we miss the ARDS diagnosis, defaulting to higher VT at the time of intubation may harm our patients. With or without clinician recognition of ARDS, LUNG SAFE revealed that the average VT for patients with confirmed ARDS was 7.6 (95% CI, 7.5-7.7) mL/kg PBW. While this mean value is well within the range of lung protective ventilation (less than 8 mL/kg PBW), over one-third of patients were exposed to larger VT. A recently published study by Sjoding and colleagues showed that VT of >8 mL/kg PBW was used in 40% of the cohort, and continued exposure to these high VT for 24 total hours was associated with an increased risk of mortality (OR, 1.82; 95% CI, 1.20-2.78) (Sjoding MW, et al. Crit Care Med. 2019;47[1]:56). All three studies support early administration of lung protective ventilation, considering the high mortality associated with ARDS.
Before consolidating what we know about empiric use of LTVV, we must also highlight the important concerns about LTVV that were investigated in the PReVENT trial. Over-sedation to maintain low VT, increased delirium, ventilator asynchrony, and the possibility of effort-induced lung injury are some of the potential risks associated with LTVV. While there were no differences in the use of sedatives or neuromuscular blocking agents between groups in the PReVENT trial, more delirium was seen in the LTVV group (P = .06), which may be a signal deserving further exploration.
Therefore, now that we understand both the upside and the downside of LTVV, what is our best approach? While we lack prospective clinical trial data showing benefit of LTVV in patients without ARDS, we do not have conclusive evidence of its harm. Remembering that even intensivists can fail to recognize ARDS at its onset, default use of LTVV, or at least lung protective ventilation of <8 mL/kg PBW, may be the safest approach for all patients. To be clear, this approach would still allow active physician decision-making to personalize the settings to the individual patient, including the use of higher VT when needed for patient comfort, respiratory effort, or sedation requirements. Changing default settings and implementing friendly reminders about ventilator management has already been shown to be helpful in the surgical population (O’Reilly-Shah VN, et al. BMJ Qual Saf. 2018;27[12]:1008).
We must also consider the process of health-care delivery and the implementation of best practices, including the facilitators of and barriers to their adoption. Many patients decompensate and require intubation prior to ICU arrival, and prolonged boarding in the ED or on medical wards is a common occurrence at many hospitals. As such, we need a ventilation strategy that allows best-practice implementation at a hospital-wide level, appealing to an interprofessional approach to ventilator management that includes physicians outside of critical care medicine, respiratory therapists, and nurses. The PReVENT trial had a nicely constructed protocol with clear instructions on ventilator adjustments, frequent plateau pressure measurements, and patient assessments. In the real-world setting, especially outside the ICU, ventilator management is not as straightforward. Considering that plateau pressures were checked in only approximately 40% of the patients in the LUNG SAFE cohort, active management and attention to driving pressure may be a stretch in many settings.
Until we achieve 100% sensitivity in the timely (instantaneous, really) recognition of ARDS pathology, aided by automated diagnostic tools embedded in the medical record and/or advanced technology in ventilator management that avoids human error, employing simple defaults to guarantee a protective setting in case of a later diagnosis of ARDS seems logical. We could go further and separate the defaults into LTVV for hypoxemic respiratory failure and lung protective ventilation for everything else, with future development of algorithms, protocols, and clinical decision support tools for ventilator management. For the time being, the simpler intervention of setting a safer default is a great universal start.
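As a thought experiment only, the tiered default described above can be written as a one-line rule. The triggering criterion and cut-offs below are placeholders mirroring the text, not a validated decision-support tool, and clinicians would still override the default at the bedside.

```python
# Hypothetical order-set default sketching the tiered approach described above.
# Thresholds (6 vs 8 mL/kg PBW) mirror the text; the triggering criterion is a placeholder.

def default_tidal_volume_ml_per_kg(hypoxemic_respiratory_failure: bool) -> float:
    """Return a default VT target in mL/kg PBW; clinicians may still override it."""
    if hypoxemic_respiratory_failure:
        return 6.0  # LTVV default when ARDS is possible or suspected
    return 8.0      # lung-protective ceiling for everyone else

print(default_tidal_volume_ml_per_kg(hypoxemic_respiratory_failure=True))   # 6.0
print(default_tidal_volume_ml_per_kg(hypoxemic_respiratory_failure=False))  # 8.0
```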
Dr. Mathews and Dr. Howell are with the Division of Pulmonary, Critical Care, and Sleep Medicine, Department of Medicine; Dr. Mathews is also with the Department of Emergency Medicine; Icahn School of Medicine at Mount Sinai, New York, NY.
Welcoming a new Section Editor for Sleep Strategies
Michelle Cao, DO, FCCP
Dr. Michelle Cao is a Clinical Associate Professor in the Division of Sleep Medicine and Division of Neuromuscular Medicine, at the Stanford University School of Medicine. Her clinical expertise is in complex sleep-related respiratory disorders and home mechanical ventilation for chronic respiratory failure syndromes. She oversees the Noninvasive Ventilation Program for the Stanford Neuromuscular Medicine Center. Dr. Cao also holds the position of Vice-Chair for the Home-Based Mechanical Ventilation and Neuromuscular Disease NetWork with CHEST and is a member of the Scientific Presentations and Awards Committee.
Sleep Strategies
Compared with obstructive sleep apnea (OSA), the prevalence of central sleep apnea (CSA) is low in the general population. In adults, however, CSA may be highly prevalent in certain conditions, most commonly left ventricular systolic dysfunction, left ventricular diastolic dysfunction, atrial fibrillation, and stroke, and among opioid users (Javaheri S, et al. J Am Coll Cardiol. 2017;69:841). CSA may also be found in patients with carotid artery stenosis, cervical neck injury, and renal dysfunction. CSA can also emerge when OSA is treated (treatment-emergent central sleep apnea, or TECA), most notably with continuous positive airway pressure (CPAP) devices, though in many individuals it resolves with continued use of the device.
In addition, unlike OSA, CSA has proved difficult to treat adequately. Specifically, the response to CPAP, oxygen, theophylline, acetazolamide, and adaptive servo-ventilation (ASV) is highly variable: some individuals respond well, while in others therapy fails to fully suppress the disorder.
Our interest in phrenic nerve stimulation increased after CPAP therapy was shown not to improve the morbidity and mortality of CSA in patients with heart failure and reduced ejection fraction (HFrEF) (CANPAP trial; Bradley TD, et al. N Engl J Med. 2005;353[19]:2025). In fact, in this trial, treatment with CPAP was associated with significantly increased mortality during the first few months of therapy. We reasoned that a potential mechanism was the adverse cardiovascular effects of positive airway pressure (Javaheri S. J Clin Sleep Med. 2006;2:399): positive airway pressure therapy decreases venous return to the right side of the heart and increases lung volume, which could increase pulmonary vascular resistance (right ventricular afterload), a quantity that is lung volume-dependent. Therefore, the subgroup of individuals with heart failure whose right ventricular function is preload-dependent and who have pulmonary hypertension is at risk for premature mortality with any PAP device.
Interestingly, investigators of the SERVE-HF trial (Cowie MR, et al. N Engl J Med. 2015;373:1095) also hypothesized that one reason for the excess mortality associated with ASV use might have been an ASV-associated excessive rise in intrathoracic pressure, similar to the hypothesis we proposed earlier for CPAP. We expanded on this hypothesis and reasoned that, based on the algorithm of the device, in some patients it could have generated excessive minute ventilation and pressure, contributing to excess mortality either at night or during the day (Javaheri S, et al. Chest. 2016;149:900). Other deficiencies in the algorithm of the ASV device could have contributed to excess mortality as well (Javaheri S, et al. Chest. 2014;146:514). These deficiencies of the ASV device used in the SERVE-HF trial have been substantially improved in the new generation of ASV devices.
Undoubtedly, therefore, mask therapy with positive airway pressure increases intrathoracic pressure and will adversely affect cardiovascular function in some patients with heart failure. Another issue for mask therapy is that adherence to the device remains poor, as demonstrated in both the CANPAP and SERVE-HF trials, confirming the need for new approaches using non-mask therapies for both CSA and OSA.
Given the limitations of mask-based therapies, over the last several years we have performed studies exploring the use of oxygen, acetazolamide, theophylline, and, most recently, phrenic nerve stimulation (PNS). In general, these therapies do not increase intrathoracic pressure and are expected to be less reliant on patient adherence than PAP therapy. Long-term randomized clinical trials are needed, and, most recently, the NIH approved a phase 3 randomized, placebo-controlled trial of low-flow oxygen therapy for treatment of CSA in HFrEF. This is a modified version of a trial proposed by one of us more than 20 years ago!
Regarding PNS, CSA is characterized by intermittent deactivation of the phrenic (and intercostal) nerves. It therefore makes sense to implant a stimulator for the phrenic nerve to prevent the development of central apneas during sleep. This is not a new idea. In 1948, Sarnoff and colleagues demonstrated for the first time that artificial respiration could be effectively administered to the cat, dog, monkey, and rabbit in the absence of spontaneous respiration by electrical stimulation of one (or both) phrenic nerves (Sarnoff SJ, et al. Science. 1948;108:482). In later experiments, these investigators showed that unilateral phrenic nerve stimulation is as effective in humans as it is in animal models.
The phrenic nerves come in contact with veins on both the right (brachiocephalic vein) and the left (pericardiophrenic vein) side of the mediastinum. As with a cardiac pacemaker, an electrophysiologist places the stimulation lead within the vein at the point where it encounters the phrenic nerve. Only unilateral stimulation is needed for the therapy. The device is typically placed on the right side of the chest, as many patients may already have a cardiac implantable electronic device, such as a pacemaker, on the left. As with hypoglossal nerve stimulation for OSA, the FDA has approved this device, in this case for the treatment of CSA. The system can be programmed in the office using an external programmer.
The phrenic nerve stimulation system is initially activated 1 month after the device is placed. It is programmed to activate automatically at night when the patient is at rest. First, a time window is set on the device for when the patient typically goes to bed and awakens; therapy can activate only within that window. The device contains a position sensor and an accelerometer, which determine position and activity level. Once the appropriate time, position, and activity are confirmed, the device activates automatically, and therapy can increase in level over several minutes. The device senses transthoracic impedance and can use this measurement to adjust therapy output and activity. If the patient gets up at night, the device automatically stops and restarts when the patient is back in a sleeping position. How quickly the therapy restarts, and at what energy, is programmable; the device may allow from 1 to 15 minutes for the patient to get back to sleep before resuming therapy. These programming options allow for patient acceptance of and comfort with the therapy, even in very sensitive patients. Importantly, no patient activation is needed, so therapy delivery is independent of patient adherence over time.
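The activation behavior just described amounts to a simple set of rules. The sketch below is a simplified model of that description, not the manufacturer’s firmware; the sensor fields, thresholds, and restart delay are illustrative placeholders.

```python
# Simplified model of the activation logic described above (not device firmware).
# Sensor readings, thresholds, and the restart delay are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class SensorState:
    within_sleep_window: bool    # current time falls inside the programmed bedtime window
    lying_down: bool             # position sensor
    low_activity: bool           # accelerometer
    restart_delay_min: int = 5   # programmable, roughly 1-15 minutes per the text

def therapy_should_run(state: SensorState, minutes_since_resumed_position: int) -> bool:
    """Therapy runs only when time, position, and activity criteria are all met,
    and only after the programmed delay once the patient lies back down."""
    criteria_met = state.within_sleep_window and state.lying_down and state.low_activity
    return criteria_met and minutes_since_resumed_position >= state.restart_delay_min

# Example: patient got up, then returned to bed 7 minutes ago with a 5-minute delay set.
print(therapy_should_run(SensorState(True, True, True), minutes_since_resumed_position=7))  # True
```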
In the prospective, randomized pivotal trial (Costanzo MR, et al. Lancet. 2016;388:974), 151 eligible patients with moderate-to-severe central sleep apnea were implanted and randomly assigned to the treatment (n=73) or control (n=78) group. Participants in the active arm received PNS for 6 months. All polysomnograms were centrally and blindly scored. There were significant decreases in AHI (50 to 26 events per hour of sleep), CAI (32 to 6), arousal index (46 to 25), and ODI (44 to 25). Two points should be emphasized: first, the changes in AHI with PNS are similar to those in the CANPAP trial, and a significant number of hypopneas remained (some of these hypopneas are at least in part related to the speed of titration when the subject sits up and the device automatically deactivates, only to resume therapy in the supine position); second, in contrast to the CANPAP trial, there was a significant reduction in arousals. Probably for this reason, subjective daytime sleepiness, as measured by the ESS, improved. In addition, PNS improved quality of life, in contrast to the lack of effect of CPAP or ASV in this domain. Regarding side effects, 138 (91%) of 151 patients had no serious related adverse events at 12 months. Seven (9%) serious related adverse events occurred in the control group and six (8%) in the treatment group; 3.4% of patients needed lead repositioning, a rate similar to that of cardiac implantable devices. Seven patients died (unrelated to the implant, system, or therapy): four deaths (two in the treatment group and two in the control group) during the 6-month randomization period, when neurostimulation was delivered only to the treatment group and was off in the control group, and three deaths between 6 months and 12 months of follow-up, when all patients received neurostimulation. Of 73 patients in the treatment group, 27 (37%) reported nonserious therapy-related discomfort that was resolved with simple system reprogramming in 26 (36%) patients but was unresolved in one (1%) patient.
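For readers less familiar with these indices, they are all simple per-hour rates: events divided by total sleep time. In the snippet below, the event counts are chosen purely so the results reproduce the baseline values quoted above, under an assumed 6 hours of total sleep time; actual scoring criteria (for example, the desaturation threshold used for ODI) vary by laboratory.

```python
# Conventional per-hour sleep indices computed from event counts and total sleep time.
# Event counts and the 6-hour total sleep time are assumptions chosen to reproduce the
# baseline values quoted above; scoring criteria are lab-dependent.

def per_hour_index(event_count: int, total_sleep_time_hr: float) -> float:
    return event_count / total_sleep_time_hr

tst_hr = 6.0
print("AHI:", per_hour_index(event_count=300, total_sleep_time_hr=tst_hr))  # apneas + hypopneas -> 50
print("CAI:", per_hour_index(event_count=192, total_sleep_time_hr=tst_hr))  # central apneas only -> 32
print("ODI:", per_hour_index(event_count=264, total_sleep_time_hr=tst_hr))  # oxygen desaturations -> 44
```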
Long-term studies have shown sustained effects of PNS on CSA with improvement in both sleep metrics and QOL, as measured by the Minnesota Living with Heart Failure Questionnaire (MLWHF) and patient global assessment (PGA). Furthermore, in the subgroup of patients with concomitant heart failure with LVEF ≤ 45%, PNS was associated with both improvements in LVEF and a trend toward lower hospitalization rates (Costanzo et al. Eur J Heart Fail. 2018; doi:10.1002/ejhf.1312).
Several issues must be emphasized. One advantage of PNS is complete adherence, resulting in a major reduction in apnea burden across the whole night. Second, the mechanism of action avoids any potential adverse consequences related to increased intrathoracic pressure. However, the cost of this therapy is high, similar to that of hypoglossal nerve stimulation. Large-scale, long-term studies of mortality are not yet available, and continued research should help identify those patients most likely to benefit from this therapeutic approach.
Renal replacement therapy in the ICU: Vexed questions and team dynamics
More than 5 million patients are admitted to ICUs each year in the United States, and approximately 2% to 10% of these patients develop acute kidney injury requiring renal replacement therapy (AKI-RRT). AKI-RRT carries high morbidity and mortality (Hoste EA, et al. Intensive Care Med. 2015;41:1411) and is associated with renal and systemic complications, such as cardiovascular disease. RRT, frequently provided by nephrologists and/or intensivists, is a supportive therapy that can be life-saving when provided to the right patient at the right time. However, several questions related to the provision of RRT remain, including the optimal timing of RRT initiation, the development of quality metrics for optimal RRT deliverables and monitoring, and the optimal strategy for RRT de-escalation and risk-stratification of renal recovery. Overall, there is a paucity of randomized trials and standardized risk-stratification tools to guide RRT in the ICU.
Current vexed questions of RRT deliverables in the ICU
There is ongoing research aiming to answer critical questions that can potentially improve current standards of RRT.
What is the optimal time of RRT initiation for critically ill patients with AKI?
Over the last 2 years, three randomized clinical trials involving heterogeneous ICU populations, with distinct research hypotheses and study designs, have attempted to address this important question. Two of these studies, AKIKI (Gaudry S, et al. N Engl J Med. 2016;375:122) and IDEAL-ICU (Barbar SD, et al. N Engl J Med. 2018;379:1431), yielded no significant difference in their primary outcomes of 60-day and 90-day all-cause mortality, respectively, between the early and delayed RRT initiation strategies (Table 1). Further, AKIKI showed no difference in RRT dependence at 60 days and higher rates of catheter-related infections and hypophosphatemia in the early initiation arm. It is important to note that IDEAL-ICU was stopped early for futility after the second planned interim analysis, with only 56% of patients enrolled.
How can RRT deliverables in the ICU be effectively and systematically monitored?
The provision of RRT to ICU patients with AKI requires iterative adjustment of the RRT prescription and goals of therapy to accommodate changes in clinical status, with emphasis on hemodynamics, multiorgan failure, and fluid overload (Neyra JA. Clin Nephrol. 2018;90:1). The use of static and functional tests or point-of-care ultrasonography to assess hemodynamic variables can be useful. Furthermore, the implementation of customized and automated flowsheets in the electronic health record can facilitate remote monitoring. It is, therefore, essential that the multidisciplinary ICU team develop a process to monitor and ensure RRT deliverables. In this context, the standardization and monitoring of quality metrics (dose, modality, anticoagulation, filter life, downtime, etc) and the development of effective quality management systems are critically important. However, large multicenter data sets are sorely needed to provide insight in this arena.
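As one concrete example of a metric an automated flowsheet could track, the sketch below computes delivered effluent dose in mL/kg/hr and the percentage of prescribed treatment time lost to downtime. The field names and example numbers are placeholders; dose targets (commonly on the order of 20–25 mL/kg/hr of delivered effluent) should follow local protocol.

```python
# Illustrative CRRT quality-metric calculations for an automated flowsheet.
# Field names and example numbers are placeholders; targets follow local protocol.

def delivered_effluent_dose(total_effluent_ml: float, weight_kg: float, hours_on_therapy: float) -> float:
    """Delivered dose in mL/kg/hr over the hours therapy actually ran."""
    return total_effluent_ml / weight_kg / hours_on_therapy

def downtime_percent(prescribed_hours: float, hours_on_therapy: float) -> float:
    """Percentage of the prescribed treatment window lost to interruptions."""
    return 100.0 * (prescribed_hours - hours_on_therapy) / prescribed_hours

print(f"{delivered_effluent_dose(total_effluent_ml=42000, weight_kg=80, hours_on_therapy=21):.1f} mL/kg/hr")
print(f"{downtime_percent(prescribed_hours=24, hours_on_therapy=21):.1f}% downtime")
```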
How can renal recovery be assessed and RRT effectively de-escalated?
The continuous examination of renal recovery in ICU patients with AKI-RRT is based mostly on the urine output trend and, if feasible, interdialytic solute control. Sometimes the transition from continuous RRT to intermittent modalities is necessary in the context of multiorgan recovery and de-escalation of care. However, clinical risk-prediction tools that identify patients who can potentially recover, or who already exhibit early signs of renal function recovery, are needed. Current advances in clinical informatics can help incorporate time-varying clinical parameters that may be informative for risk-prediction models. In addition, incorporating novel biomarkers of AKI repair and functional tests (eg, furosemide stress test, functional MRI) into these models may further inform these tools and aid the development of clinical decision support systems that enhance interventions to promote AKI recovery (Neyra JA, et al. Nephron. 2018;140:99).
Is post-AKI outpatient care beneficial for ICU survivors who suffered from AKI-RRT?
Specialized AKI survivor clinics have been implemented in some centers. In general, this outpatient follow-up model includes survivors who suffered from AKI stage 2 or 3, some of them requiring RRT, and tailors individualized interventions for post-AKI complications (preventing recurrent AKI, attenuating incident or progressive CKD). However, the value of this outpatient model needs to be further evaluated with emphasis on clinical outcomes (eg, recurrent AKI, CKD, readmissions, or death) and elements that impact quality of life. This is an area of evolving research and a great opportunity for the nephrology and critical care communities to integrate and enhance post-ICU outpatient care and research collaboration.
Interdisciplinary communication among acute care team members
Two essential elements to provide effective RRT to ICU patients with AKI are: (1) the dynamics of the ICU team (intensivists, nephrologists, pharmacists, nurses, nutritionists, physical therapists, etc) to enhance the delivery of personalized therapy (RRT candidacy, timing of initiation, goals for solute control and fluid removal/regulation, renal recovery evaluation, RRT de-escalation, etc.) and (2) the frequent assessment and adjustment of RRT goals according to the clinical status of the patient. Therefore, effective RRT provision in the ICU requires the development of optimal channels of communication among all members of the acute care team and the systematic monitoring of the clinical status of the patient and RRT-specific goals and deliverables.
Perspective from a nurse and quality improvement officer for the provision of RRT in the ICU
The provision of continuous RRT (CRRT) to critically ill patients requires close communication between the bedside nurse and the rest of the ICU team. The physician typically prescribes CRRT and determines the specific goals of therapy. The pharmacist works closely with the nephrologist/intensivist and bedside nurse, especially with regard to customized CRRT solutions (when indicated) and medication dosing. Because CRRT can alter drug pharmacokinetics, the pharmacist closely and continually monitors the patient's clinical status, the CRRT prescription, and all active medications. CRRT can also affect the nutritional and metabolic status of critically ill patients; therefore, the input of the nutritionist is necessary. The syndrome of ICU-acquired weakness is commonly encountered in ICU patients and is related to physical immobility. While ICU patients with AKI are already at risk for decreased mobility, the continuous connection to an immobile extracorporeal machine for the provision of CRRT may further contribute to immobilization and can also preclude the provision of optimal physical therapy. Therefore, the bedside nurse should assist the physical therapist in the timely and effective delivery of physical therapy according to the clinical status of the patient.
The clinical scenarios discussed above provide a small glimpse into the importance of developing an interdisciplinary ICU team to care for critically ill patients receiving CRRT. Given how integral each team member's specific role is, it becomes clear that the bedside nurse's role is not only to deliver hands-on patient care but also to orchestrate collaborative communication among all health-care providers for the effective provision of CRRT to critically ill patients in the ICU.
Dr. Neyra and Ms. Hauschild are with the Department of Internal Medicine; Division of Nephrology; Bone and Mineral Metabolism; University of Kentucky; Lexington, Kentucky.
An update on chronic thromboembolic pulmonary hypertension
The “fixable” form of PH that you don’t want to miss
Chronic thromboembolic pulmonary hypertension (CTEPH) is an elevation in pulmonary vascular resistance (PVR) resulting from chronic, “scarred-in” thromboembolic material partially occluding the pulmonary arteries. This vascular obstruction, over time, results in failure of the right ventricle and early mortality.
CTEPH was first characterized in an autopsy series from the Massachusetts General Hospital in 1931. On these postmortem examinations, it was noted that the affected patients had large pulmonary artery vascular obstruction but also normal pulmonary parenchyma distal to the obstruction and extensive bronchial collateral blood flow (Means J. Ann Intern Med. 1931;5:417). Although this observation set the groundwork for the theory that surgically removing the vascular obstruction to this preserved lung tissue could improve the condition of these patients, it was not until the mid-20th century that imaging and cardiac catheterization techniques allowed recognition of the disease in real time.
CTEPH is thought to begin with an acute pulmonary embolus, but in approximately 3.4% of patients, rather than resolving over time, the thrombus organizes and incorporates into the pulmonary artery intimal layer (Simonneau G, et al. Eur Respir Rev. 2017;26:160112). A history of venous thromboembolism in a patient with persistent dyspnea should spur a screening evaluation for CTEPH; 75% of patients with CTEPH have a history of prior known acute pulmonary embolus, and 56% report a prior diagnosis of deep venous thrombosis. An acute pulmonary embolus typically fibrinolyzes early, with the vast majority of the vascular obstruction resolving by the third month. Therefore, if the patient continues to report a significant exercise limitation after 3 months of therapeutic anticoagulation, or has concerning physical examination signs, a workup should be pursued. The initial evaluation for CTEPH begins with a transthoracic echocardiogram (TTE) and ventilation/perfusion (V/Q) scintigraphy. A retrospective study comparing V/Q scanning with multidetector CT pulmonary angiography (CTPA) found that V/Q scanning had a sensitivity and specificity of 97% and 95% for CTEPH, while CTPA had good specificity at 99% but only 51% sensitivity (Tunariu N, et al. J Nucl Med. 2007;48(5):680). If these studies are abnormal, then right-sided heart catheterization and invasive biplane digital subtraction pulmonary angiography are recommended. These studies confirm the diagnosis, grade its severity, and allow an evaluation for surgically accessible vs distal disease. Some CTEPH centers utilize additional imaging techniques, such as magnetic resonance angiography, optical resonance imaging, spectral CT scanning with iodine perfusion images, and intravascular ultrasound. These modalities and their place in the diagnostic algorithm are under investigation.
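To put the cited sensitivity and specificity figures in clinical context, the short sketch below applies Bayes' rule to estimate positive and negative predictive values at an assumed pretest probability; the 10% prevalence is a hypothetical value chosen only for illustration.

```python
# Illustrative only: predictive values derived from the cited sensitivity and
# specificity, at a hypothetical pretest probability of CTEPH.

def predictive_values(sens: float, spec: float, prevalence: float):
    tp = sens * prevalence              # true positives per unit population
    fn = (1 - sens) * prevalence        # false negatives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

prevalence = 0.10  # assumed pretest probability, for illustration only
for name, sens, spec in [("V/Q scan", 0.97, 0.95), ("CTPA", 0.51, 0.99)]:
    ppv, npv = predictive_values(sens, spec, prevalence)
    print(f"{name}: PPV {ppv:.0%}, NPV {npv:.1%}")
```

Under this assumed prevalence, a negative V/Q scan makes CTEPH very unlikely (negative predictive value near 100%), whereas a negative CTPA leaves roughly a 5% residual probability, which is why V/Q scintigraphy remains the preferred screening test.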
The goal of the initial evaluation is to determine whether the patient can undergo surgical pulmonary thromboendarterectomy (PTE), because, in experienced hands, this procedure offers the best long-term outcome for the patient. The first pulmonary thromboendarterectomy was performed at the University of California San Diego in 1970. Because the disease involves the intimal layer of the pulmonary artery, the surgery involves not just removal of the intravascular obstruction but also a pulmonary artery intimectomy. Surgical mortality rates were high in the initial experience. In 1984, a review of 85 worldwide cases reported an average mortality rate of 22%, and as high as 40% in some centers (Chitwood WR, Jr, et al. Clin Chest Med. 1984;5(3):507).
Over the ensuing years, refinements in surgical technique, the use of deep hypothermia and circulatory arrest during the procedure, the development of new surgical instruments, and standardization of surgical selection and postoperative care have improved surgical mortality to <5% in experienced centers. Long-term outcomes of successful PTE surgery remain good, with 90% 3-year survival vs 70% for those who do not undergo surgery and are medically treated. Importantly, 90% of postoperative patients report functional class I or II symptoms at 1 year (Condliffe R, et al. Am J Respir Crit Care Med. 2008;177(10):1122). Because of this difference in early mortality and symptoms, PTE surgery remains the treatment of choice for CTEPH.
Despite the advances in PTE surgery, some patients are not operative candidates, either because of surgically inaccessible disease or because of comorbidities. In 2001, Feinstein and colleagues described a series of 18 CTEPH cases treated with balloon pulmonary angioplasty (BPA). Promising hemodynamic effects were reported; however, the procedure had an unacceptably high complication rate: 11 patients developed reperfusion lung injury, 3 required mechanical ventilation, and 1 died. In the ensuing years, Japanese and Norwegian groups have independently developed and refined techniques for BPA. The procedure is done in a series of sessions (average four to six), 1 to 4 weeks apart, in which small (2-3 mm) balloons are directed toward distal, diseased pulmonary vessels. Common complications include reperfusion injury, vessel injury, hemoptysis, and, more rarely, respiratory failure. Still, early experience suggests the procedure decreases pulmonary vascular resistance over time, improves right ventricular function, and improves patients' symptoms (Andreassen A, et al. Heart. 2013;99(19):1415). The experience with this procedure is limited but growing in the United States, with only a handful of centers currently performing BPA and collecting data.
Lifelong anticoagulation, oxygen, and diuretics for right-sided heart failure are recommended for patients with CTEPH. The first successful large phase III medication trial for CTEPH was the CHEST-1 trial, published in 2013. This was a multicenter, randomized, placebo-controlled trial of the soluble guanylate cyclase stimulator riociguat. The study enrolled 261 patients with inoperable CTEPH or persistent pulmonary hypertension after surgery. The primary end point was 6-minute walk distance at 12 weeks, and the treatment group showed a placebo-corrected improvement of 46 m (P<.001). Secondary end points of pulmonary vascular resistance, NT-proBNP level, and functional class also improved. This pivotal trial led to the FDA approval of riociguat for inoperable or persistent postoperative CTEPH.
MERIT-1, a phase II, randomized, double-blind, placebo-controlled trial of macitentan (an oral endothelin receptor antagonist), was recently completed. It enrolled 80 patients with inoperable CTEPH. The primary end point was pulmonary vascular resistance at week 16, expressed as a percentage of baseline. At week 16, PVR was 73% of baseline in the macitentan arm vs 87.2% of baseline in the placebo arm. This medication is not yet FDA-approved for the treatment of inoperable CTEPH (Ghofrani H, et al. Lancet Respir Med. 2017;5(10):785-794).
Pulmonary hypertension medication has been postulated as a possible way to "pretreat" patients before pulmonary thromboendarterectomy surgery, perhaps lowering preoperative pulmonary vascular resistance and surgical risk. However, there are currently no convincing data to support this practice, and medical treatment has been associated with a possibly counterproductive delay in surgery. A phase II study of preoperative riociguat vs placebo in CTEPH patients with high PVR is currently enrolling to determine whether "induction" treatment with medication prior to surgery reduces risk or merely delays definitive surgery. Occasionally, patients are found who have persistent thrombus but not pulmonary hypertension. Chronic thromboembolic disease (CTED) is a recently coined term describing patients who have chronic thromboembolism on imaging but normal resting hemodynamics. Whether CTED represents simply unresolved clot that will never progress to CTEPH or an early point on the disease continuum is not well defined and remains controversial among experts. At many centers, symptomatic patients with CTED undergo exercise testing to look for exercise-induced pulmonary hypertension or an increase in dead space ventilation as a cause of their symptoms. A retrospective series of carefully chosen CTED patients who underwent PTE surgery reported improvements in symptoms and overall quality of life, without increased complications (Taboada D, et al. Eur Respir J. 2014;44(6):1635). The operation carries risk, however, and further work on the epidemiology and prognosis of CTED is required before operative intervention can be routinely recommended.
In conclusion, CTEPH is a disease that rarely occurs after an acute PE but when undiagnosed and untreated portends a poor prognosis. The definitive treatment for this disease is surgical PTE, but to achieve the best outcomes, this procedure needs to be performed at expert centers with multidisciplinary team experience. Patients who are poor operative candidates or with surgically inaccessible disease may be considered for balloon pulmonary angioplasty. For patients without more curative options, medication improves exercise tolerance. The field of CTEPH has been rapidly expanding over the last decade, leading to better patient outcomes and more treatment options.
Dr. Bartolome is Associate Professor, Pulmonary and Critical Care Medicine; Director, CTEPH Program; and Associate Director, PH Program; UT Southwestern Medical Center, Dallas, Texas.
Use of ECMO in the management of influenza-associated ARDS
Now that we are in the midst of flu season, many discussions regarding the management of patients with influenza virus infections are ensuing. While prevention is always preferable, and we encourage everyone to get vaccinated, influenza remains a widespread and rapidly spreading infection. In the United States during last year's flu season (2017-18), there were an estimated 49 million cases of influenza, 960,000 hospitalizations, and 79,000 deaths. Approximately 86% of all deaths were estimated to occur in those aged 65 and older (Centers for Disease Control and Prevention webpage on Burden of Influenza).
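For rough context, those estimates correspond to a crude hospitalization rate of about 2% and a crude case fatality rate of about 0.16%; a trivial sketch of that arithmetic, using only the cited figures, follows.

```python
# Crude rates derived from the cited CDC estimates for the 2017-18 season.
cases = 49_000_000
hospitalizations = 960_000
deaths = 79_000

print(f"Crude hospitalization rate: {hospitalizations / cases:.1%}")  # about 2.0%
print(f"Crude case fatality rate:   {deaths / cases:.2%}")            # about 0.16%
```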
Despite our best efforts, there are inevitably times when some patients become ill enough to require hospitalization. Patients aged 65 and older make up the overwhelming majority of patients with influenza who eventually require hospitalization (Fig 1) (The Centers for Disease Control and Prevention FluView Database). Comorbidities also confer a higher risk for more severe illness and potential hospitalization irrespective of age (Fig 2). In children with known medical conditions, asthma confers the highest risk of hospitalization: 27% of those with asthma were hospitalized after developing the flu. In adults, 52% of those with cardiovascular disease and 30% of those with chronic lung disease who were confirmed to have influenza required hospitalization for treatment (Fig 2, The Centers for Disease Control and Prevention FluView Database).
The most severe cases of influenza can require ICU care and advanced management of respiratory failure as a result of the acute respiratory distress syndrome (ARDS). The lungs suffer significant injury due to the viral infection and lose their ability to effectively oxygenate the blood. Secondary bacterial infections can also occur as a complication, compounding the injury. Given that so many patients have significant comorbidities and are of advanced age, it is reasonable to expect that a fair proportion of those with influenza will develop respiratory failure as a consequence. For some of these patients, the hypoxemia that develops as a result of the lung injury can be exceptionally challenging to manage. In extreme cases, conventional ventilator management is insufficient, and the need for additional, advanced therapies arises.
Studies of VV ECMO in severe influenza
ECMO (extracorporeal membrane oxygenation) is a treatment that has been employed to help support patients with severe hypoxemic respiratory failure while their lungs recover from acute injury. Venovenous (VV) ECMO requires peripheral insertion of large cannulae into the venous system to take deoxygenated blood, deliver it through the membrane oxygenator, and return the oxygenated blood to the venous system. In simplest terms, the membrane oxygenator of the ECMO circuit serves as a substitute for the gas exchange function of the lungs and provides the oxygenation that the injured alveoli are unable to provide. The overall intent is to have the external ECMO circuit do the gas exchange work while the lungs heal.
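As a rough numerical illustration of that gas-exchange role, the sketch below uses the standard arterial oxygen content formula (CaO2 = 1.34 × Hb × SaO2 + 0.003 × PaO2) to approximate how much oxygen a VV ECMO circuit can carry at a given blood flow; the hemoglobin, saturation, PaO2, and circuit flow values are hypothetical.

```python
# Rough illustration only: oxygen content of post-oxygenator blood and the
# approximate oxygen carried by a VV ECMO circuit. Values are hypothetical.

def o2_content(hb_g_dl: float, sao2: float, pao2_mmhg: float) -> float:
    """Oxygen content in mL O2 per dL of blood (standard formula)."""
    return 1.34 * hb_g_dl * sao2 + 0.003 * pao2_mmhg

def o2_delivery(content_ml_dl: float, flow_l_min: float) -> float:
    """Approximate oxygen carried per minute at a given blood flow (1 L = 10 dL)."""
    return content_ml_dl * flow_l_min * 10

if __name__ == "__main__":
    # Hypothetical post-oxygenator blood: Hb 10 g/dL, 100% saturation, PaO2 300 mm Hg.
    content = o2_content(hb_g_dl=10, sao2=1.0, pao2_mmhg=300)
    delivery = o2_delivery(content, flow_l_min=4.5)  # assumed circuit blood flow
    print(f"O2 content: {content:.1f} mL/dL; O2 carried at 4.5 L/min: {delivery:.0f} mL/min")
```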
Much research has been done on VV ECMO as an adjunct or salvage therapy in patients with refractory hypoxemic respiratory failure due to ARDS. Historical and recent studies have shown that approximately 60% of patients with ARDS have viral (approximately 20%) or bacterial (approximately 40%) pneumonia as the underlying cause (Zapol, et al. JAMA. 1979; 242[20]:2193; Combes A, et al. N Engl J Med. 2018;378:1965). Naturally, given the frequency of infection as a cause for ARDS, and the severity of illness that can develop with influenza infection in particular, an interest has arisen in the applicability of ECMO in cases of severe influenza-related ARDS.
In 2009, during the H1N1 influenza pandemic, the ANZ ECMO investigators in Australia and New Zealand described a 78% survival rate for patients with severe H1N1-associated ARDS treated with VV ECMO between June and August of that year (Davies A, et al. JAMA. 2009;302[17]:1888). The eagerly awaited results of the randomized, controlled CESAR trial (Peek G, et al. Lancet. 2009;374:1351), which studied patients aged 18 to 64 with severe, refractory respiratory failure transferred to a specialized center for ECMO care, further catalyzed interest in ECMO use. The trial showed improved survival with ECMO (63% in the ECMO arm vs 47% in the control arm; RR 0.69; 95% CI, 0.05-0.97; P=.03), with a gain of 0.03 QALY (quality-adjusted life years) at an additional cost of 40,000 pounds sterling. However, a major critique is that 24% of patients transferred to the specialized center were never treated with ECMO. There were also incomplete follow-up data on nearly half of the patients. Many conclude that the survival benefit seen in this study may be more reflective of the expertise in respiratory failure management (especially as it relates to lung-protective ventilation) at the referral center than of therapy with ECMO itself.
Additional cohort studies in the United Kingdom (Noah MA, et al. JAMA. 2011;306[15]:1659) and Italy (Pappalardo F, et al. Intensive Care Med. 2013;39[2]:275) showed approximately 70% in-hospital survival rates for patients with H1N1 influenza transferred to a specialized ECMO center and treated with ECMO.
Nonetheless, the information gained from the observational data from ANZ ECMO, along with data published in European cohort studies and the randomized controlled CESAR trial after the 2009 H1N1 influenza pandemic, greatly contributed to the rise in use of ECMO for refractory ARDS due to influenza. Subsequently, there has been a rapid establishment and expansion of ECMO centers over the past decade, primarily to meet the anticipated demands of treating severe influenza-related ARDS.
The recently published EOLIA trial (Combes A, et al. N Engl J Med. 2018;378:1965) was designed to study the benefit of VV ECMO vs conventional mechanical ventilation in ARDS and demonstrated an 11% absolute reduction in 60-day mortality that did not reach statistical significance. As with CESAR, the outcome has drawn critiques, particularly regarding the decision to stop the trial early when it became unlikely that a significant benefit of VV ECMO over mechanical ventilation could be shown.
All of the aforementioned studies evaluated adults under age 65. Interestingly, there are no specific age contraindications for the use of ECMO (ELSO Guidelines for Cardiopulmonary Extracorporeal Life Support, Extracorporeal Life Support Organization, Version 1.4, August 2017), but many consider older age a risk factor for poor outcome. Approximately 2,300 adult patients in the United States are treated with ECMO for respiratory failure each year, and only 10% of those are over age 65 (CMS Changes in ECMO Reimbursements – ELSO Report). The outcome benefit of ECMO for a relatively healthy patient over age 65 is not known, as those patients have not been evaluated in studies thus far. When comparing with data from decades ago, one must keep in mind that populations worldwide are living longer and that a continued increase in the number of adults over age 65 is expected.
While the overall interpretation of the outcomes of studies of ECMO may be fraught with controversy, there is little debate that providing care for patients with refractory respiratory failure in centers that provide high-level skill and expertise in management of respiratory failure has a clear benefit, irrespective of whether the patient eventually receives therapy with ECMO. What is also clear is that ECMO is costly, with per-patient costs demonstrated to be at least double that of those receiving mechanical ventilation alone (Peek G, et al. Lancet. 2009;374:1351). This substantial cost associated with ECMO cannot be ignored in today’s era of value-based care.
Notably, CMS recently released new DRG reimbursement scales for the use of ECMO, effective October 1, 2018. Reimbursement for VV ECMO could fall by as much as 70%, and many insurance companies are expected to follow suit (CMS Changes in ECMO Reimbursements – ELSO Report). Only time will tell what impact this, along with the current evidence, will have on the long-term provision of ECMO care for our sickest patients with influenza and associated respiratory illnesses.
Dr. Tatem is with the Division of Pulmonary and Critical Care Medicine, Department of Medicine, Henry Ford Hospital, Detroit, Michigan.
The 1-hour sepsis bundle is serious—serious like a heart attack
In 2002, the European Society of Intensive Care Medicine, the Society of Critical Care Medicine, and the International Sepsis Forum formed the Surviving Sepsis Campaign (SSC) aiming to reduce sepsis-related mortality by 25% within 5 years, mimicking the progress made in the management of STEMI (http://www.survivingsepsis.org/About-SSC/Pages/History.aspx).
SSC bundles: a historic perspective
The first guidelines were published in 2004. Recognizing that guidelines may not influence bedside practice for many years, the SSC partnered with the Institute for Healthcare Improvement to apply performance improvement methodology to sepsis management, developing the “sepsis change bundles.” In addition to hospital resources for education, screening, and data collection, the 6-hour resuscitation and 24-hour management bundles were created. Subsequent data, collected as part of the initiative, demonstrated an association between bundle compliance and survival.
In 2008, the SSC guidelines were revised, and the National Quality Forum (NQF) adopted sepsis bundle compliance as a quality measure. NQF endorsement is often the first step toward the creation of mandates by the Centers for Medicare and Medicaid Services (CMS), but that did not occur at the time.
In 2012, the SSC guidelines were updated and published with new 3- and 6-hour bundles. That year, Rory Staunton, an otherwise healthy 12-year-old boy, died of septic shock in New York. The public discussion of this case, among other factors, prompted New York state to develop a sepsis care mandate that became state law in 2014. An annual public report details each hospital’s compliance with process measures and risk-adjusted mortality. The correlation between measure compliance and survival also holds true in this data set.
In 2015, CMS developed the SEP-1 measure. While the symbolic importance of a federal sepsis mandate and its potential to improve patient outcomes are recognized, concerns remain about the measure itself. The detailed and specific way data must be collected may disconnect the clinical care provided from measured compliance. The time pressure and the “all-or-nothing” approach might incentivize interventions that are potentially harmful in some patients. No patient-centered outcomes are reported. This measure might be tied to reimbursement in the future.
The original version of SEP-1 was based on the 2012 SSC bundles, which reflected the best evidence available at the time (the 2001 Early Goal-Directed Therapy trial). By 2015, elements of that strategy had been challenged, and the ProCESS, ProMISe, and ARISE trials contested the notion that protocolized resuscitation decreased mortality. Moreover, new definitions of sepsis syndromes (Sepsis-3) were published in 2016 (Singer M, et al. JAMA. 2016;315[8]:801).
The 2016 SSC guidelines adopted the new definitions and recommended that patients with sepsis-induced hypoperfusion immediately receive a 30 mL/kg crystalloid bolus, followed by frequent reassessment. CMS did not adopt the Sepsis-3 definitions, but updates were made to give clinicians flexibility in demonstrating reassessment of the patient.
Comparing the 1-hour bundle to STEMI care
This year, the SSC published a 1-hour bundle to replace the 3- and 6-hour bundles (Levy MM, et al. Crit Care Med. 2018;46[6]:997). Whereas previous bundles set time frames for completion of the elements, the 1-hour bundle focuses on the initiation of these components. The authors revisited the parallel between early management of sepsis and STEMI. The 1-hour bundle includes measurement of serum lactate, blood cultures prior to antibiotics, broad-spectrum antibiotics, a 30 mL/kg crystalloid bolus for patients with hypotension or a lactate of 4 mmol/L or greater, and vasopressors for persistent hypotension.
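For readers who think in checklists, the bundle’s quantitative triggers can be reduced to a few lines of logic. The sketch below is purely illustrative, with hypothetical function and variable names of our own; it assumes nothing beyond what the bundle states (a 30 mL/kg bolus for hypotension or lactate ≥ 4 mmol/L, and vasopressors for hypotension that persists despite fluids) and is not an SSC algorithm.

```python
# Illustrative sketch of the 1-hour bundle's quantitative triggers.
# Names and structure are hypothetical; this is not an SSC tool.

def one_hour_bundle_triggers(weight_kg: float,
                             lactate_mmol_per_l: float,
                             hypotensive: bool,
                             hypotension_persists_after_fluids: bool) -> dict:
    """Return the unconditional elements plus the fluid and vasopressor triggers."""
    bolus_indicated = hypotensive or lactate_mmol_per_l >= 4.0
    return {
        "measure_lactate": True,               # unconditional bundle element
        "cultures_before_antibiotics": True,   # unconditional bundle element
        "broad_spectrum_antibiotics": True,    # unconditional bundle element
        "crystalloid_bolus_ml": 30 * weight_kg if bolus_indicated else 0,
        "start_vasopressors": hypotension_persists_after_fluids,
    }

# Example: a hypotensive 70-kg patient with a lactate of 4.2 mmol/L
print(one_hour_bundle_triggers(70, 4.2, True, False))
# crystalloid_bolus_ml: 2100.0, start_vasopressors: False
```

The point of the exercise is only to show how little branching the bundle itself requires; the hard part, as the controversies below emphasize, is deciding whether the patient in front of you is septic at all.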
Elements of controversy after the publication of this bundle include:
1. One hour seems insufficient for complex clinical decision making and interventions for a syndrome with no specific diagnostic test: sepsis often mimics, or is mimicked by, other conditions.
2. Some bundle elements are not supported by high-quality evidence. No controlled studies exist regarding the appropriate volume of initial fluids or the impact of timing of antibiotics on outcomes.
3. The 1-hour time frame will encourage empiric delivery of fluids and antibiotics to patients who are not septic, potentially leading to harm.
4. While the 1-hour bundle is presented as a quality improvement tool and is not intended for public reporting, earlier bundles have been adopted as federally regulated measures.
Has the SSC gone too far? Are these concerns enough to abandon the 1-hour bundle? Or are the concerns regarding the 1-hour bundle an example of “perfect is the enemy of better”? To understand the potential for imperfect guidelines to drive tremendous patient-level improvements, one must consider the evolution of STEMI management.
Since the 1970s, the in-hospital mortality for STEMI has decreased from 25% to around 5%. The most significant factor in this achievement was the recognition that early reperfusion improves outcomes and that doing it consistently requires complex coordination. In 2004, a Door-to-Balloon (D2B) time of less than 90 minutes was included as a guideline recommendation (Antman EM, et al. Circulation. 2004;110[5]:588). CMS started collecting performance data on this metric, made that data public, and later tied the performance to hospital reimbursement.
Initially, the 90-minute goal was achieved in only 44% of cases. In 2006, the D2B initiative was launched, providing recommendations for public education, coordination of care, and emergent management of STEMI. Compliance with these recommendations required significant education and changes to STEMI care at multiple levels. Data were collected and submitted to inform the process. Six years later, compliance with the D2B goal had increased from 44% to 91%, and the median D2B time had dropped from 96 to 64 minutes. Because compliance was high and the variation between high and low performers was minimal, CMS discontinued the use of this metric for reimbursement. Put simply, the entire country had gotten better at treating STEMI. The “time zero” for STEMI has since been pushed back further, and D2B has been replaced with first-medical-contact (FMC)-to-device time. The recommendation is to achieve this as quickly as possible, and in less than 90 minutes (O’Gara P, et al. JACC. 2013;61[4]:485).
Consider the complexity of getting a patient from home to a catheterization lab within 90 minutes, even in ideal circumstances. This short time frame encourages, by design, a low threshold to activate the system. We accept that some patients will receive an unnecessary catheterization or systemic fibrinolysis, even though the recommendation is based on level B evidence.
Compliance with the STEMI guidelines is more labor-intensive and complex than compliance with the 1-hour sepsis bundle. So, is STEMI a fair comparison to sepsis? Both syndromes are common, potentially deadly, and time-sensitive. Both require early recognition, but neither has a definitive diagnostic test. Instead, diagnosis requires an integration of multiple complex clinical factors. Both are backed by imperfect science that continues to evolve. Over-diagnosis of either will expose the patient to potentially harmful therapies.
The early management of STEMI is a valid comparison to the early management of sepsis. We must consider this comparison as we ponder the 1-hour sepsis bundle.
Is triage time the appropriate time-zero? In either condition, triage time is too early in some cases and too late in others. Unfortunately, there is no better alternative, and STEMI guidelines have evolved to start the clock before triage. Using a point such as “recognition of sepsis” would fail to capture delayed recognition.
Is it possible to diagnose and initiate treatment for sepsis in such a short time frame? Consider the treatment received by the usual-care group of the ProCESS trial (The ProCESS Investigators. N Engl J Med. 2014;370:1683). Prior to meeting entry criteria, which occurred in less than 1 hour, patients in this group received an initial fluid bolus and had a lactate assessment. Prior to randomization, which occurred at around 90 minutes, this group had completed 28 mL/kg of crystalloid, and 76% had received antibiotics. Thus, the usual-care group in this study nearly achieved the 1-hour bundle currently being contested.
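A quick back-of-the-envelope calculation makes the point concrete; the 70-kg weight below is our hypothetical example, not a figure reported by the trial.

```python
# Hypothetical 70-kg patient; illustrative arithmetic only, not a trial calculation
weight_kg = 70
usual_care_bolus_ml = 28 * weight_kg   # ProCESS usual-care volume before randomization
bundle_bolus_ml = 30 * weight_kg       # 1-hour bundle recommendation
print(usual_care_bolus_ml, bundle_bolus_ml)  # 1960 2100 -> a gap of roughly 140 mL
```

In other words, “usual care” in 2014 was already within a small bag of saline of the bundle’s fluid target.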
Is it appropriate for a guideline to strongly recommend interventions not backed by level A evidence? The recommendation for FMC to catheterization within 90 minutes has not been studied in a controlled way. The precise dosing and timing of fibrinolysis is also not based on controlled data. Reperfusion devices and antiplatelet agents continue to be rigorously studied, sometimes with conflicting results.
Finally, should the 1-hour bundle be abandoned out of concern that it will be used as a national performance metric? First, there is currently no indication that the 1-hour bundle will be adopted as a performance metric. For the sake of argument, let’s assume the 1-hour bundle will be regulated and used to compare hospitals. Is there reason to think this bundle favors some hospitals over others and will lead to an unfair comparison? Is there significant inequity in the ability to draw blood cultures, send a lactate, start IV fluids, and initiate antibiotics?
Certainly, national compliance with such a metric would be very low at first. Therein lies the actual problem: a person who suffers a STEMI anywhere in the country is very likely to receive high-quality care. Currently, the same cannot be said about a patient with sepsis. Perhaps that should be the focus of our concern.
Dr. Uppal is Assistant Professor, NYU School of Medicine, Bellevue Hospital Center, New York, New York.
The link between suicide and sleep
According to the Centers for Disease Control and Prevention, suicide is the 10th leading cause of death in the United States, with rates of suicide rising over the past 2 decades. In 2016, completed suicides accounted for approximately 45,000 deaths in the United States (Ivey-Stephenson AZ, et al. MMWR Surveill Summ. 2017;66[18]:1). While progress has been made in lowering mortality rates for other leading causes of death, very little progress has been made in reducing the rates of suicide. The term “suicide,” as used in this article, encompasses suicidal ideation, suicidal behavior, and suicide death.
Researchers have been investigating potential risk factors and prevention strategies for suicide. The relationship between suicide and sleep disturbances, specifically insomnia and nightmares, has been well documented in the literature. Because insomnia and nightmares are potentially modifiable risk factors, this relationship continues to be an area of active exploration for reducing suicide rates. While there are many different types of sleep disorders, including excessive daytime sleepiness, parasomnias, obstructive sleep apnea, and restless legs syndrome, this article will focus on the relationship of insomnia and nightmares with suicide.
Insomnia
Insomnia disorder, according to the American Psychiatric Association’s DSM-5, is dissatisfaction with sleep quantity or quality that occurs at least three nights per week for a minimum of 3 months despite adequate opportunity for sleep. It may present as difficulty falling asleep, difficulty staying asleep, or early morning awakenings. The sleep disturbance results in functional impairment or significant distress in at least one area of life (American Psychiatric Association. Arlington, Virginia: APA; 2013). While insomnia is often a symptom of many psychiatric disorders, research has shown that it is an independent risk factor for suicide, even when controlling for mental illness. Studies have shown up to a 2.4-fold relative risk of suicide death with insomnia after adjusting for depression severity (McCall W, et al. J Clin Sleep Med. 2013;9[2]:135).
Nightmares
Nightmares, as defined by the American Psychiatric Association’s DSM-5, are “typically lengthy, elaborate, story-like sequences of dream imagery that seem real and incite anxiety, fear, or other dysphoric emotions” (American Psychiatric Association. Arlington, Virginia: APA; 2013). They are common in posttraumatic stress disorder (PTSD), with up to 90% of individuals with PTSD experiencing nightmares following a traumatic event (Littlewood DL, et al. J Clin Sleep Med. 2016;12[3]:393). Nightmares have also been shown to be an independent risk factor for suicide when controlling for mental illness. Studies have shown that nightmares are associated with a 1.5- to 3-fold elevated risk of suicidal ideation and a 3- to 4-fold elevated risk of suicide attempts. The data suggest that nightmares may be a stronger risk factor for suicide than insomnia (McCall W, et al. Curr Psychiatry Rep. 2013;15[9]:389).
Proposed Mechanism
The mechanisms linking insomnia and nightmares with suicide have been theorized and studied by researchers. Two of the most noteworthy proposed psychological mechanisms involve dysfunctional beliefs and attitudes about sleep and deficits in problem-solving capability. Dysfunctional beliefs and attitudes about sleep (DBAS) are negative cognitions pertaining to sleep, and they have been shown to be related to the intensity of suicidal ideation. Many DBAS are pessimistic thoughts with a “hopelessness flavor,” which perpetuates insomnia; hopelessness itself has been found to be a strong risk factor for suicide. In addition to DBAS, insomnia has also been shown to impair complex problem solving. This deficit may leave patients with fewer, and poorer, solutions during stressful situations, so that suicide comes to be perceived as the best or only option.
The biological theories focus on serotonin and on hyperarousal mediated by the hypothalamic-pituitary-adrenal (HPA) axis. Serotonin is a neurotransmitter involved in the induction and maintenance of sleep. Of note, low levels of serotonin’s main metabolite, 5-hydroxyindoleacetic acid (5-HIAA), have been found in the cerebrospinal fluid of suicide victims. Evidence has also shown that sleep and the HPA axis are closely related. The HPA axis is activated by stress, leading to a cascade of hormones that can increase susceptibility to hyperarousal, REM alterations, and suicide. Hyperarousal, common to both PTSD and insomnia, can lead to hyperactivation of noradrenergic systems in the medial prefrontal cortex, which in turn can impair executive decision making (McCall W, et al. Curr Psychiatry Rep. 2013;15[9]:389).
Treatment Strategies
The benefit of treating insomnia and nightmares with regard to reducing suicidality continues to be an area of active research. Many previous studies have theorized that treating symptoms of insomnia and nightmares may indirectly reduce suicide. Pharmaceutical and nonpharmaceutical treatments are currently used to treat patients with insomnia and nightmares, but their benefit for reducing suicidality is still unknown.
One of the main treatment modalities for insomnia is hypnotic medication; however, these medications carry their own potential risk for suicide. Reports of suicide death in conjunction with hypnotic medication have led the FDA to add warnings about the increased risk of suicide with these medications, which include zolpidem, zaleplon, eszopiclone, doxepin, ramelteon, and suvorexant. A 2017 review of research studies and case reports found an odds ratio of 2 to 3 for hypnotic use in suicide deaths. However, most of the reviewed studies noted potential confounding by the individual’s current mental health state, and many of the suicide case reports that involved hypnotics also detected additional substances, such as alcohol. Hypnotic medication is an effective treatment for insomnia, but caution is needed when prescribing these agents. Strategies that may reduce the risk of an adverse outcome include using the lowest effective dose and educating the patient not to combine the medication with alcohol or other sedatives/hypnotics (McCall W, et al. Am J Psychiatry. 2017;174[1]:18).
For patients who have recurrent nightmares in the context of PTSD, the alpha-1 adrenergic receptor antagonist prazosin may provide some benefit; however, the literature is divided. Several randomized, placebo-controlled clinical trials of prazosin have shown a moderate to large effect in alleviating trauma-related nightmares and improving sleep quality. Limitations of these studies include small to moderate sample sizes and trial durations of 15 weeks or less. In 2018, Raskind and colleagues completed a follow-up randomized, placebo-controlled study of 26 weeks with 304 participants and did not find a significant difference between prazosin and placebo with regard to nightmares or sleep quality (Raskind MA, et al. N Engl J Med. 2018;378[6]:507).
Cognitive behavioral therapy for insomnia (CBT-I) and image rehearsal therapy (IRT) are two evidence-based, sleep-targeted therapy modalities. CBT-I targets dysfunctional beliefs and attitudes regarding sleep (McCall W, et al. J Clin Sleep Med. 2013;9[2]:135). IRT, on the other hand, specifically targets nightmares by having the patient write out a narrative of the nightmare and then re-script an alternative, less distressing ending; the patient rehearses the new dream narrative before going to sleep. There is still insufficient evidence to determine whether these therapies reduce suicide (Littlewood DL, et al. J Clin Sleep Med. 2016;12[3]:393).
While the jury is still out on how best to target and treat insomnia and nightmares as risk factors for suicide, there are steps that health-care providers can take to help keep their patients safe. During the patient interview, new or worsening insomnia or nightmares should prompt further investigation of suicidal thoughts and behaviors. After a thorough interview, treatment options, with a discussion of risks and benefits, can be tailored to the individual’s needs. Managing insomnia and nightmares may be one avenue of suicide prevention.
Drs. Locrotondo and McCall are with the Department of Psychiatry and Health Behavior at the Medical College of Georgia, Augusta University, Augusta, Georgia.
According to the Centers for Disease Control and Prevention, suicide is the 10th leading cause of mortality in the United States, with rates of suicide rising over the past 2 decades. In 2016, completed suicides accounted for approximately 45,000 deaths in the United States (Ivey-Stephenson AZ, et al. MMWR Surveill Summ. 2017;66[18]:1). While progress has been made to lower mortality rates of other leading causes of death, very little progress has been made on reducing the rates of suicide. The term “suicide,” as referred to in this article, encompasses suicidal ideation, suicidal behavior, and suicide death.
Researchers have been investigating potential risk factors and prevention strategies for suicide. The relationship between suicide and sleep disturbances, specifically insomnia and nightmares, has been well documented in the literature. Given that insomnia and nightmares are potentially modifiable risk factors, it continues to be an area of active exploration for suicide rate reduction. While there are many different types of sleep disorders, including excessive daytime sleepiness, parasomnias, obstructive sleep apnea, and restless legs syndrome, this article will focus on the relationship between insomnia and nightmares with suicide.
Insomnia
Insomnia disorder, according to the American Psychiatric Association’s DSM-5, is a dissatisfaction of sleep quantity or quality that occurs at least three nights per week for a minimum of 3 months despite adequate opportunity for sleep. This may present as difficulty with falling asleep, staying asleep, or early morning awakenings. The sleep disturbance results in functional impairment or significant distress in at least one area of life (American Psychiatric Association. Arlington, Virginia: APA; 2013). While insomnia is often a symptom of many psychiatric disorders, research has shown that insomnia is an independent risk factor for suicide, even when controlling for mental illness. Studies have shown that there is up to a 2.4 relative risk of suicide death with insomnia after adjusting for depression severity (McCall W, et al. J Clin Sleep Med. 2013;32[9]:135).
Nightmares
Nightmares, as defined by the American Psychiatric Association’s DSM-5, are “typically lengthy, elaborate, story-like sequences of dream imagery that seem real and incite anxiety, fear, or other dysphoric emotions” (American Psychiatric Association. Arlington, Virginia: APA; 2013). They are common symptoms in posttraumatic stress disorder (PTSD), with up to 90% of individuals with PTSD experiencing nightmares following a traumatic event (Littlewood DL, et al. J Clin Sleep Med. 2016;12[3]:393). Nightmares have also been shown to be an independent risk factor for suicide when controlling for mental illness. Studies have shown that nightmares are associated with an elevated risk factor of 1.5 to 3 times for suicidal ideation and 3 to 4 times for suicide attempts. The data suggest that nightmares may be a stronger risk factor for suicide than insomnia (McCall W, et al. Curr Psychiatr Rep. 2013;15[9]:389).
Proposed Mechanisms
The mechanisms linking insomnia and nightmares with suicide have been theorized and studied by researchers. Two of the most noteworthy proposed psychological mechanisms involve dysfunctional beliefs and attitudes about sleep and deficits in problem-solving ability. Dysfunctional beliefs and attitudes about sleep (DBAS) are negative cognitions pertaining to sleep, and they have been shown to be related to the intensity of suicidal ideation. Many DBAS are pessimistic thoughts with a “hopelessness flavor” to them, which perpetuates insomnia; hopelessness itself has been found to be a strong risk factor for suicide. In addition, insomnia has been shown to impair complex problem solving. Impaired problem solving may reduce both the number and quality of solutions these patients generate during stressful situations, leaving suicide as the perceived best or only option.
The biological theories focus on serotonin and on hyperarousal mediated by the hypothalamic-pituitary-adrenal (HPA) axis. Serotonin is a neurotransmitter involved in the induction and maintenance of sleep, and, notably, low levels of its main metabolite, 5-hydroxyindoleacetic acid (5-HIAA), have been found in the cerebrospinal fluid of suicide victims. Evidence has also shown that sleep and the HPA axis are closely related. The HPA axis is activated by stress, leading to a cascade of hormones that can increase susceptibility to hyperarousal, REM sleep alterations, and suicide. Hyperarousal, a feature shared by PTSD and insomnia, can lead to hyperactivation of noradrenergic systems in the medial prefrontal cortex, which can impair executive decision making (McCall W, et al. Curr Psychiatry Rep. 2013;15[9]:389).
Treatment Strategies
Whether treating insomnia and nightmares reduces suicidality continues to be an area of active research. Many previous studies have theorized that treating symptoms of insomnia and nightmares may indirectly reduce suicide. Pharmacologic and nonpharmacologic treatments are currently used for patients with insomnia and nightmares, but their benefit for reducing suicidality is still unknown.
One of the main treatment modalities for insomnia is hypnotic medication; however, these medications carry their own potential risk for suicide. Reports of suicide death in conjunction with hypnotic medication have led the FDA to add warnings about the increased risk of suicide with these medications, which include zolpidem, zaleplon, eszopiclone, doxepin, ramelteon, and suvorexant. A 2017 review of research studies and case reports found an odds ratio of 2 to 3 for hypnotic use in suicide deaths. However, most of the reviewed studies noted potential confounding by the individual’s current mental health state, and many of the suicide case reports involving hypnotics also detected additional substances, such as alcohol. Hypnotic medication is an effective treatment for insomnia, but caution is needed when prescribing these agents. Strategies that may reduce the risk of an adverse outcome include using the lowest effective dose and educating the patient not to combine the medication with alcohol or other sedative/hypnotics (McCall W, et al. Am J Psychiatry. 2017;174[1]:18).
For patients who have recurrent nightmares in the context of PTSD, the alpha-1 adrenergic receptor antagonist prazosin may provide some benefit; however, the literature is divided. Several randomized, placebo-controlled clinical trials of prazosin have shown a moderate to large effect for alleviating trauma-related nightmares and improving sleep quality. These trials were limited by small to moderate sample sizes and durations of 15 weeks or less. In 2018, Raskind and colleagues completed a follow-up 26-week randomized, placebo-controlled study with 304 participants and found no significant difference between prazosin and placebo with regard to nightmares or sleep quality (Raskind MA, et al. N Engl J Med. 2018;378[6]:507).
Cognitive behavioral therapy for insomnia (CBT-I) and image rehearsal therapy (IRT) are two evidence-based, sleep-targeted therapy modalities. CBT-I targets dysfunctional beliefs and attitudes regarding sleep (McCall W, et al. J Clin Sleep Med. 2013;9[2]:135). IRT specifically targets nightmares: the patient writes out a narrative of the nightmare, rescripts an alternative, less distressing ending, and rehearses the new dream narrative before going to sleep. There is still insufficient evidence to determine whether these therapies have benefit in reducing suicide (Littlewood DL, et al. J Clin Sleep Med. 2016;12[3]:393).
While the jury is still out on how best to target and treat insomnia and nightmares as risk factors for suicide, there are steps health-care providers can take to help keep their patients safe. During the patient interview, new or worsening insomnia and nightmares should prompt further investigation of suicidal thoughts and behaviors. After a thorough interview, treatment options can be tailored to the individual’s needs, with a discussion of risks and benefits. Managing insomnia and nightmares may be one avenue of suicide prevention.
Drs. Locrotondo and McCall are with the Department of Psychiatry and Health Behavior at the Medical College of Georgia, Augusta University, Augusta, Georgia.