Cutting: Putting the pieces together

Article Type
Changed
Thu, 12/06/2018 - 16:55
Display Headline
Cutting: Putting the pieces together

Cutting, otherwise known as nonsuicidal self-injury (NSSI), is a frightening and complex disorder that is prevalent among adolescents but poorly understood. Typically, pediatricians see distraught parents who, unaware that their children were even depressed, have discovered that their children engage in self-harming activities. Quick answers are needed, and with most psychology services overwhelmed, an immediate evaluation is unlikely. Therefore, it is important to have a clear understanding of the disorder and resources available to help defuse the situation.

For most people, it is hard to understand why young people would want to inflict bodily harm on themselves. The questions that always arise are: Was this a suicide attempt? Or was it a cry for help? Surprisingly, the answer to both is "no," at least in the majority of cases.

Cutting, or NSSI, is an unhealthy reaction to anxiety, pain, frustration, or stress. It is an impulsive behavior that is not necessarily associated with intent to die.

A 2007 study showed that 46% of 633 9th and 10th graders admitted to at least one episode of cutting, burning, scratching, or hitting themselves in response to emotional stress (Psychol. Med. 2007;37:1183-92).

The prevalence of NSSI among adolescents is reported to be 14%-15% and declines to 4% by adulthood (J. Youth Adolesc. 2002;31:67-77). There is no significant gender difference, but the method of self-harm for females tends to be cutting, whereas males are more likely to hit or burn themselves.

So why do people inflict pain on themselves? There is a physiologic basis for the most common reason, which is termed affect regulation. Although not completely understood, it is believed that eliciting pain releases endorphins, producing immediate relief of anxiety, pain, or stress. Most "cutters" report infrequent episodes, but some do become addicted to the sensation, and the episodes increase.

Another reason for cutting is self-punishment. Young people who suffer from low self-esteem, or self-degradation, may use self-harm to express anger toward themselves.

A surprising finding was that interpersonal influence was one of the least common reasons given for self-harm. It is not a common method for a "cry for help" or attention as is a suicide attempt. People who cut are looking for an immediate relief from the emotional stress they are feeling. In fact, many are very secretive about this behavior, and it usually goes unnoticed for several months to years.

Although NSSI can occur independently of any psychological dysfunction, it has been found to be comorbid with borderline personality disorder (BPD), anxiety, and depression. All of these disorders are associated with negative emotional stress. Sexual abuse and self-harm are associated because they share the same psychological risk factors, not because abuse has a cause-and-effect relationship with NSSI (J. Clin. Psychol. 2007;63:1045-56).

One of the biggest risk factors for suicide is the frequency of the cutting. Addiction to the behavior, resulting in daily or weekly episodes, does significantly increase the risk of a suicide attempt. Therefore, anyone who presents with a history of cutting should have a suicide risk assessment completed.

First-line treatment for nonsuicidal self-harm is psychotherapy, for example, cognitive-behavioral therapy. Pharmacotherapy of comorbid conditions such as depression and anxiety can be helpful in reducing symptoms, and therefore reducing episodes.

Understanding the psychology behind self-harm will be very helpful in educating and calming families through this difficult situation. Being able to direct the patient to the appropriate resources will expedite evaluation and treatment. Such resources include www.selfinjury.com, www.helpguide.org/mental/self_injury.htm, and www.selfinjury.bctr.cornell.edu.

Dr. Pearce is a pediatrician in Frankfort, Ill. E-mail her at [email protected]

Brief Action Planning to Facilitate Behavior Change and Support Patient Self-Management

Article Type
Changed
Tue, 05/03/2022 - 15:51
Display Headline
Brief Action Planning to Facilitate Behavior Change and Support Patient Self-Management

From the New York University School of Medicine, New York, NY (Drs. Gutnick and Jay), University of Colorado Health Sciences Center, Denver, CO (Dr. Reims), University of British Columbia, BC, Canada (Dr. Davis), University College London, London, UK (Dr. Gainforth), and Stonybrook University School of Medicine, Stonybrook, NY (Dr. Cole [Emeritus]).

 

Abstract

  • Objective: To describe Brief Action Planning (BAP), a structured, stepped-care self-management support technique for chronic illness care and disease prevention.
  • Methods: A review of the theory and research supporting BAP and the questions and skills that comprise the technique with provision of a clinical example.
  • Results: BAP facilitates goal setting and action planning to build self-efficacy for behavior change. It is grounded in the principles and practice of Motivational Interviewing and evidence-based constructs from the behavior change literature. Composed of a series of 3 questions and 5 skills, BAP can be implemented by medical teams to help meet the self-management support objectives of the Patient-Centered Medical Home.
  • Conclusion: BAP is a useful self-management support technique for busy medical practices to promote health behavior change and build patient self-efficacy for improved long-term clinical outcomes in chronic illness care and disease prevention.

 

Chronic disease is prevalent and time consuming, challenging, and expensive to manage [1]. Half of all adult primary care patients have more than 2 chronic diseases, and 75% of US health care dollars are spent on chronic illness care [2]. Given the health and financial impact of chronic disease, and recognizing that patients make daily decisions that affect disease control, efforts are needed to assist and empower patients to actively self-manage health behaviors that influence chronic illness outcomes. Patients who are supported to actively self-manage their own chronic illnesses have fewer symptoms, improved quality of life, and lower use of health care resources [3]. Historically, providers have tried to influence chronic illness self-management by advising behavior change (eg, smoking cessation, exercise) or telling patients to take medications; yet clinicians often become frustrated when patients do not “adhere” to their professional advice [4,5]. Many times, patients want to make changes that will improve their health but need support—commonly known as self-management support—to be successful.

Involving patients in decision making, emphasizing problem solving, setting goals, creating action plans (ie, when, where and how to enact a goal-directed behavior), and following up on goals are key features of successful self-management support methods [3,6–8]. Multiple approaches from the behavioral change literature, such as the 5 A’s (Assess, Advise, Agree, Assist, Arrange) [9], Motivational Interviewing (MI), and chronic disease self-management programs [10] have been used to provide more effective guidance for patients and their caregivers. However, the practicalities of these approaches in clinical settings have been questioned. The 5A’s, a counseling framework that is used to guide providers in health behavior change counseling, can feel overwhelming because it encompasses several different aspects of counseling [11,12]. Likewise, MI and adaptations of MI, which have been shown to outperform traditional “advice giving” in treatment of a broad range of behaviors and chronic conditions [13–16], have been critiqued since fidelity to this approach often involves multiple sessions of training, practice, and feedback to achieve proficiency [15,17,18]. Finally, while chronic disease self-management programs have been shown to be effective when used by peers in the community [10], similar results in primary care are not well established.

Given the challenges of providers practicing, learning, and using each of these approaches, efforts to develop an approach that supports patients to make behavioral changes that can be implemented in typical practice settings are needed. In addition, health delivery systems are transforming to team-based models with emphasis on leveraging each team member’s expertise and licensure [19]. In acknowledgement of these evolving practice realities, the National Committee for Quality Assurance (NCQA) included development and documentation of patient self-management plans and goals as a critical factor for achieving NCQA Patient-Centered Medical Home (PCMH) recognition [20]. Successful PCMH transformation therefore entails clinical practices developing effective and time efficient ways to incorporate self-management support strategies, a new service for many, into their care delivery systems often without additional staffing.

In this paper, we describe an evidence-informed, efficient self-management support technique called Brief Action Planning (BAP) [21–24]. BAP evolved into its current form through ongoing collaborative efforts of 4 of the authors (SC, DG, CD, KR) and is based on a foundation of original work by Steven Cole with contributions from Mary Cole in 2002 [25]. This technique addresses many of the barriers providers have cited to providing self-management support, as it can be used routinely by both individual providers and health care teams to facilitate patient-centered goal setting and action planning. BAP integrates principles and practice of MI with goal setting and action planning concepts from the self-management support, self-efficacy, and behavior change literature. In addition to reviewing the principles and theory that inform BAP, we introduce the steps of BAP and discuss practical considerations for incorporating BAP into clinical practice. In particular, we include suggestions about how BAP can be used in team-based clinical practice settings within the PCMH. Finally, we present a common clinical scenario to demonstrate BAP and provide resource links to online videos of BAP encounters. Throughout the paper, we use the word “clinician” to refer to professionals or other trained personnel using BAP, and “patient” to refer to those experiencing BAP, recognizing that other terms may be preferred in different settings.

What is BAP?

BAP is a highly structured, stepped-care, self-management support technique. Composed of a series of 3 questions and 5 skills (reviewed in detail below), BAP can be used to facilitate goal setting and action planning to build self-efficacy in chronic illness management and disease prevention [21–24]. The overall goal of BAP is to assist an individual to create an action plan for a self-management behavior that they feel confident that they can achieve. BAP is currently being used in diverse care settings including primary care, home health care, rehabilitation, mental health, and public health to assist and empower patients to self-manage chronic illnesses and disabilities including diabetes, depression, spinal cord injury, arthritis, and hypertension. BAP is also being used to assist patients to develop action plans for disease prevention. For example, the Bellevue Hospital Personalized Prevention clinic, a pilot clinic that uses a mathematical model [26] to help patients and providers collaboratively prioritize prevention focus and strategies, systematically utilizes BAP as its self-management support technique for patient-centered action planning. At this time, BAP has been incorporated into teaching curricula at multiple medical schools, presented at major national health care/academic conferences, and is being increasingly integrated into health delivery systems across the United States and Canada to support patient self-management for NCQA-PCMH transformation. We have also developed a series of standardized programs to support fidelity in BAP skills development, including a multidisciplinary introductory training curriculum, telephonic coaching, interactive web-based training tools, and a structured “Train the Trainer” curriculum [27]. In addition, a set of guidelines designed to ensure fidelity in BAP research has been developed [27].

Underlying Principles of BAP

BAP is grounded in the principles and practice of MI and the psychology of behavior change. Within behavior change, we draw primarily on self-efficacy and action planning theory and research. We discuss the key concepts in detail below.

The Spirit of MI

MI Spirit (Compassion, Acceptance, Partnership, and Evocation) is an important overarching tenet for BAP. Compassionately supporting self-management with MI spirit involves a partnership with the patient rather than a prescription for change (Partnership) and the assurance that the clinician always has the patient’s best interest in mind (Compassion) [17]. Exemplifying “spirit” means accepting that the ultimate choice to change is the patient’s alone (Acceptance) and acknowledging that individuals bring expertise about themselves and their lives to the conversation (Evocation). Adherence to “MI spirit” itself has been associated with positive behavior change outcomes in patients [5,28–32]. Demonstrating MI spirit throughout the change conversation is an essential foundational principle of BAP.

Action Planning and Self-Efficacy

In addition to the spirit of MI, BAP integrates 2 evidence-based constructs from the behavior change literature: action planning and self-efficacy [4,6,33–36]. Action planning requires that individuals specify when, where and how to enact a goal-directed behavior (eg, self-management behaviors). Action planning has been shown to mediate the intention-behavior relationship thereby increasing the likelihood that an individual’s intentions will lead to behavior change [37,38]. Given the demonstrated potential of action planning for ensuring individuals achieve their health goals, the BAP framework aspires to assist patients to create an action plan.

BAP also aims to build patients’ self-efficacy to enact the goals outlined in their action plans. Self-efficacy refers to a patient’s confidence in their ability to enact a behavior [33]. Several reviews of the literature have suggested a strong relationship between self-efficacy and adoption of healthy behaviors such as smoking cessation, weight control, contraception, alcohol abuse and physical activity [39–42]. Furthermore, Lorig et al demonstrated that the process of action planning itself contributes to enhanced self-efficacy [8]. BAP aims to build self-efficacy and ultimately change patients’ behaviors by helping patients to set an action plan that they feel confident in their ability to achieve.

Description of the BAP Steps

The flowchart in Figure 1 presents an overview of the key elements of BAP. An example dialogue illustrating the steps of BAP can be found in Figure 2.

Three questions and 3 of the BAP skills (ie, SMART plan, eliciting a commitment statement, and follow-up) are applied during every BAP interaction, while 2 skills (ie, behavioral menu and problem solving for low confidence) are used as needed. The distinct functions and the evidence supporting the 3 questions and 5 BAP skills are described below.

Question 1: Eliciting a Behavioral Focus or Goal

Once engagement has been established and the clinician determines the patient is ready for self-management planning to occur, the first question of BAP can be asked: “Is there anything you would like to do for your health in the next week or two?” 

This question elicits a person’s interest in self-management or behavior change and encourages the individual to view himself/herself as someone engaged in his or her health. The powerful link between consistency of word and action facilitates development and commitment to change the behavior of focus [43]. In some settings a broader question such as “Is there anything you would like to do about your current situation in the next week or two?” may be a better fit, or a more specific question may flow more naturally from the conversation, such as “We’ve been talking about diabetes; is there anything you would like to do for that, or anything else, in the next week or two?”

Although technically Question 1 is a closed-ended question (in that it can be answered “yes” or “no”), in actual practice it generates productive discussions about change. 

For example, whenever a patient answers “yes” or “no” or something in-between like, “I’m not sure,” the clinician can often smoothly transition to a dialogue about change based on that response. Responses to Question 1 generally take 3 forms (Figure 1):

1) Have an Idea. A group of patients immediately present an idea that they are ready to do or are ready to consider doing. For these patients, clinicians can proceed directly to Skill 2—SMART Behavioral Planning; that is, asking patients directly if they are ready to turn their idea into a concrete plan. Some evidence suggests that further discussion, assessment, or even additional "motivational" exploration in patients who are ready to make a plan and already have an idea may actually decrease motivation for change [17, 32].

2) Not Sure. Another group of patients may want or need suggestions before committing to something specific they want to work on. For these patients, clinicians should use the opportunity to offer a Behavioral Menu (Skill 1).

3) No or Not at This Time. A third group of patients may not be interested or ready to make a change at this time or at all. Some in this group may be healthy or already self-managing effectively and have no need to make a plan, in which case the clinician acknowledges their active self-management and moves to the next part of the visit. Others in this group may have considerable ambivalence about change or face complex situations where other priorities take precedence. Clinicians frequently label these individuals as "resistant." The Spirit of MI can be very useful when working with these patients to accept and respect their autonomy while encouraging ongoing partnership at a future time. For example, a clinician may say “It sounds like you are not interested in making a plan for your health right now. Would it be OK if I ask you about this again at our next visit?” Pushing forward to make a "plan for change" when a patient is not ready decreases both motivation for change as well as the likelihood for a successful outcome [32].

Other patients may benefit from additional motivational approaches to further explore change and ambivalence. If the clinician does not have these skills, patients may be seamlessly transitioned to another resource within or external to the care team.
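The three response forms above can be summarized, purely as a schematic sketch (not clinical software), in a small dispatch function; the response categories and step labels here are illustrative shorthand, not part of BAP itself:

```python
# Illustrative sketch of the branching that follows BAP Question 1,
# per the three response forms described in the text. All names are
# invented for this sketch.

def next_step_after_question_1(response: str) -> str:
    """Map a Question 1 response category to the clinician's next BAP step."""
    if response == "has_idea":
        # Patient is ready with an idea: go straight to SMART planning;
        # further "motivational" exploration may actually decrease motivation.
        return "proceed to SMART planning (Skill 2)"
    if response == "not_sure":
        # Patient wants or needs suggestions before committing.
        return "offer a Behavioral Menu (Skill 1)"
    if response == "not_at_this_time":
        # Respect autonomy; do not push a plan on a patient who is not ready.
        return "acknowledge autonomy and offer to revisit at a future visit"
    raise ValueError(f"unrecognized response category: {response!r}")
```

The point of the sketch is that only one of the three branches involves additional skills before planning; a ready patient moves directly to Skill 2.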

Skill 1: Offering a Behavioral Menu

If, in response to Question 1, an individual is unable to come up with an idea of their own or needs more information, then offering a Behavioral Menu may be helpful [44,45]. Consistent with the “Spirit of MI,” BAP attempts to elicit ideas from the individuals themselves; however, it is important to recognize that some people require assistance to identify possible actions. A behavioral menu comprises 2 or 3 suggestions or ideas that will ideally trigger individuals to discover an idea of their own. There are 3 distinct evidence-based steps to follow when presenting a Behavioral Menu.

1) Ask permission to offer a behavioral menu. Asking permission to share ideas respects patient autonomy and prevents the provider from inadvertently assuming an expert role. For example: “Would it be OK if I shared with you some examples of what some other patients I work with have done?”

2) Offer 2 to 3 general yet varied ideas all at once (Figure 2, entry 5). It helps to mention things that other patients have decided to do with some success. Offering the ideas all at once avoids the clinician assuming too much about the patient and keeps the patient from having to reject ideas one at a time. It is important to remember that the list is to prompt ideas, not to find a perfect solution [17]. For example: “One patient I work with decided to join a gym and start exercising, another decided to pick up an old hobby he used to enjoy doing and another patient decided to schedule some time with a friend she hadn’t seen in a while.”

3) Ask if any of the ideas appeal to the individual as something that might work for them or if the patient has an idea of his/her own (Figure 2, entry 5). Evocation from the Spirit of MI is built in with this prompt [17]. For example: “These are some ideas that have worked for other patients I work with, do they trigger any ideas that might work for you?”

Clinicians may find it helpful to use visual prompts to guide Behavioral Menu conversations [44]. Diagrams with equally weighted spaces assist clinicians to resist prioritizing as might happen in a list. Empty circles alongside circles containing varied options evoke patient ideas, consistent with the Spirit of MI (Figure 3, Visual Behavioral Menu Example) [44].

Skill 2: SMART Planning

Once an individual decides on an area of focus, the clinician partners with the patient to clarify the details and create an action plan to achieve their goal. Given that individuals are more likely to successfully achieve goals that are specific, proximal, and achievable as opposed to vague and distal [46,47], the clinician works with the patient to ensure that the patient’s goal is SMART (specific, measurable, achievable, relevant, and time-bound). The term SMART has its roots in the business management literature [48] as an adaptation of Locke’s pioneering research (1968) on goal setting and motivation [49]. In particular, Locke and Latham’s theory of goal setting and task performance states that “specific and achievable” goals are more likely to be successfully reached [47,50].

We suggest helping the patient to make SMART goals by eliciting answers to questions applicable to the plan, such as “what?” “where?” “when?” “how long?” “how often?” “how much?” and “when will you start?” [51]. A resulting plan might be “I will walk for 20 minutes, in my neighborhood, every Monday, Wednesday and Friday before dinner.”
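As an illustration only, the prompting questions above can be thought of as fields of a structured plan that assemble into the patient's plan statement; the field names are invented for this sketch and are not part of BAP:

```python
# Illustrative sketch: a SMART action plan captured as structured fields
# mirroring the prompting questions in the text ("what?", "where?",
# "when?", "how long?", ...). Field names are invented for the sketch.
smart_plan = {
    "what": "walk",
    "how_long": "20 minutes",
    "where": "in my neighborhood",
    "when": "Monday, Wednesday and Friday before dinner",
}

# Assemble the plan statement the patient would "tell back" to the clinician.
summary = "I will {what} for {how_long}, {where}, every {when}.".format(**smart_plan)
```

Recording the answers as discrete fields makes it easy to spot a missing element (for example, no start date) before the plan is committed to.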

Skill 3: Elicit a Commitment Statement

Once the individual has developed a specific plan, the next step of BAP is for the clinician to ask him or her to “tell back” the specifics of the plan. The provider might say something like, “Just to make sure we understand each other, would you repeat back what you’ve decided to do?” The act of “repeating back” organizes the details of the plan in the person’s mind and may lead to an unconscious self-reflection about the feasibility of the plan [43,52], which then sets the stage for Question 2 of BAP (Scaling for Confidence). Commitment predicts subsequent behavior change, and the strength of the commitment language is the strongest predictor of success on an action plan [43,52,53]. For example, saying “I will” is stronger than saying “I will try.”

Question 2: Scaling for Confidence

After a commitment statement has been elicited, the second question of BAP is asked. “How confident or sure do you feel about carrying out your plan on a scale from 0 to 10, where 0 is not confident at all and 10 is totally confident or sure?” Confidence scaling is a common tool used in behavioral interventions, MI, and chronic disease self-management programs [17,51]. Question 2 assesses an individual’s self-efficacy to complete the plan and facilitates discussion about potential barriers to implementation in order to increase the likelihood of success of a personal action plan.

For patients who have difficulty grasping the concept of a numerical scale, the word “sure” can be substituted for “confident” and a Likert scale including the terms “not at all sure,” “somewhat sure,” and “very sure” substituted for the numerical confidence ruler, ie, “How sure are you that you will be able to carry out your plan? Not at all sure, somewhat sure, or very sure?” Alternatively, people of different cultural backgrounds may find it easier to grasp the concept using familiar images or experiences. For example, Native Americans from the Southwest have adapted the scale to depict a series of images ranging from planting a corn seed to harvesting a crop or climbing a ladder, while in some Latino cultures the image of climbing a mountain (“How far up the mountain are you?”) is useful to demonstrate “level of confidence” concept [54].

Skill 4: Problem Solving for Low Confidence

When confidence is relatively low (ie, below 7), we suggest collaborative problem solving as the next step [8,51]. Low confidence or self-efficacy for plan completion is a concern since low self-efficacy predicts non-completion [8]. Successfully implementing the action plan, no matter how small, increases confidence and self-efficacy for engaging in the behavior [8].

There are several steps that a clinician follows when collaboratively problem-solving with a patient with low confidence (Figure 1).

• Recognize that a low confidence level is greater than no confidence at all. By affirming the strength of a patient’s confidence rather than negatively focusing on a low level of confidence, the provider emphasizes the patient’s strengths.

• Collaboratively explore ways that the plan could be modified in order to improve confidence. A Behavioral Menu can be offered if needed. For example, a clinician might say something like: “That’s great that your confidence level is a 5. A 5 is a lot higher than a 1. People are more likely to have success with their action plans when confidence levels are 7 or more. Do you have any ideas of how you might be able to increase your confidence level to a 7 or more?”

• If the patient has no ideas, ask permission to offer a Behavioral Menu: “Would it be ok to share some ideas about how other patients I’ve worked with have increased their confidence level?” If the patient agrees, then say: “Some people modify their plans to make them easier, some choose a less ambitious goal or adjust the frequency of their plan, and some people involve a friend or family member. Perhaps one of these ideas seems like a good one for you or maybe you have another idea?”
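The stepped logic above (finalize the plan when confidence is 7 or more; otherwise collaboratively modify it and re-scale) can be summarized as a small loop. This is our schematic of the decision rule only; the function names and sample ratings are invented for illustration and stand in for the clinical conversation.

```python
CONFIDENCE_THRESHOLD = 7  # below 7, problem-solve before finalizing the plan


def finalize_plan(plan, ask_confidence, modify_plan):
    """Question 2 plus Skill 4: scale confidence, problem-solving below 7.

    ask_confidence and modify_plan are stand-ins for the dialogue: the
    first returns the patient's 0-10 rating for a plan, the second
    returns a collaboratively adjusted, easier version of the plan.
    """
    confidence = ask_confidence(plan)
    while confidence < CONFIDENCE_THRESHOLD:
        plan = modify_plan(plan)          # e.g., fewer days, shorter walks
        confidence = ask_confidence(plan)
    return plan, confidence


# Illustrative run: walking 5 days/week rates a 5; 3 days/week rates an 8.
ratings = {"walk 5 days/week": 5, "walk 3 days/week": 8}
plan, conf = finalize_plan("walk 5 days/week",
                           ask_confidence=lambda p: ratings[p],
                           modify_plan=lambda p: "walk 3 days/week")
print(plan, conf)  # walk 3 days/week 8
```

The loop makes the stepped-care character of BAP explicit: problem solving is invoked only when the confidence rating calls for it.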

Question 3: Arranging Accountability

Once the details of the plan have been determined and confidence level for success is high, the next step is to ask Question 3: “Would you like to set a specific time to check in about your plan to see how things are going?” This question encourages a patient to be accountable for their plan, and reinforces the concept that the physician and care team consider the plan to be important. Research supports that people are more likely to follow through with a plan if they choose to report back their progress [43] and suggests that checking-in frequently earlier in the process is helpful [55]. Ideally the clinician and patient should agree on a time to check in on the plan within a week or two (Figure 2, entry 29).

Accountability in the form of a check-in may be arranged with the clinical provider, another member of the healthcare team or a support person of the patient’s choice (eg, spouse, friend). The patient may also choose to be accountable to themselves by using a calendar or a goal-setting application on their smartphone or computer.

Skill 5: Follow-up

Follow-up has been noted as one of the features of successful multifactorial self-management interventions and builds trust [55]. Follow-up with the care team includes a discussion of how the plan went, reassurance, and next steps (Figure 4). The next step is often a modification of the current BAP or a new BAP; however, if a patient decides not to make or work on a plan, in the spirit of MI (accepting/respecting the patient's autonomy) the clinician can say something like, "It sounds like you are not interested in making a plan today. Would it be OK if I ask you about this again at our next visit?"

The purpose of the check-in is for learning and adjustment of the plan as well as to provide support regardless of outcome. Checking-in encourages reflection on challenges and barriers as well as successes. Patients should be given guidance to think through what worked for them and what did not. Focusing just on “success” of the plan will be less helpful. If follow-up is not done with the care team in the near term, checking-in can be accomplished at the next scheduled visit. Patient portals provide another opportunity for patients to dialogue with the care team about their plan.

Experiential Insights from Clinical Experience Using BAP

The authors’ collective experience to date indicates that between 50% and 75% of individuals who are asked Question 1 go on to develop an action plan for change with relatively little need for additional skills. In other studies of action planning in primary care, 83% of patients made action plans during a visit, and at 3-week follow-up 53% had completed their action plan [56]. A recent study of action planning using an online self-management support program reported that action plans were successfully completed 49% of the time, partially completed 40% of the time, and incomplete 11% of the time [35].

Another caveat to consider is that the process of planning is more important than the actual plan itself. It is imperative to allow the patient, not the clinician, to determine the plan. For example, a patient with multiple poorly controlled chronic illnesses including depression may decide to focus his action plan around cleaning out his car rather than disease control such as dietary modification, medication adherence or exercise. The clinician may initially fail to view this as a good use of clinician time or healthcare resources since it seems unrelated to health. However, successful completion of an action plan is not the only objective of action planning. Building self-efficacy, which may lead to additional action planning around health, is more important [4,46]. The challenge is therefore for the clinician to take a step back, relinquish the “expert role,” and support the goal setting process regardless of the plan. In this example, successfully cleaning out his car may increase the patient’s self-efficacy to control other aspects of his life including diet, and the focus of future plans may shift [4].

When to Use BAP

Opportunities for patient engagement in action planning occur when addressing chronic illness concerns as well as during discussions about health maintenance and preventive care. BAP can be considered as part of any routine clinical agenda unless patient preferences or clinical acuity preclude it. As with most clinical encounters, the flow is often negotiated at the beginning of the visit. BAP can be accomplished at any time that works best for the flow and substance of the visit, but a few patterns have emerged based on our experience.

BAP fits naturally into the part of the visit when the care plan is being discussed. The term “care plan” is commonly used to describe all of the care that will be provided until the next visit. Care plans can include additional recommendations for testing or screening, therapeutic adjustments, and/or referrals for additional expertise. Ideally, the patient’s “agreed-upon” contribution to their care should also be captured and documented in their care plan. This is often described as the patient’s “self-management goal.” For patients who are ready to make a specific plan to change behavior, BAP is an efficient way to support patients to craft an action plan that can then be incorporated into the overall care plan.

Another variation of when to use BAP is the situation when the patient has had a prior action plan and is being seen for a recheck visit. Discussing the action plan early in the visit agenda focuses attention on the work patients have put into following their plan. Descriptions of success lead readily to action plans for the future. Time spent discussing failures or partial success is valuable to problem solve as well as to affirm continued efforts to self-manage.

BAP can also be used between scheduled visits. The check-in portion of BAP is particularly amenable to follow-up by phone or by another supporter. A pre-arranged follow-up 1 to 2 weeks after creation of a new action plan [8] provides encouragement to patients working on their plan and also helps identify those who need more support.

Finally, BAP can be completed over multiple visits. For patients who are thinking about change but are not yet committed to planning, a brief suggestion about the value of action planning with a behavioral menu may encourage additional self-reflection. Many times patients return to the next visit with clear ideas about changes that would be important for them to make.

Fitting BAP into a 20-Minute Visit

Using BAP is a time-efficient way to provide self-management support within the context of a 20-minute visit with engaged patients who are ready to set goals for health. With practice, clinicians can often conduct all the steps within 3 to 5 minutes. However, patients and clinicians often have competing demands and agendas and may not feel that they have time to conduct all the steps. Thus, utilizing other members of the health care team to deliver some or all of BAP can facilitate implementation.

Teams have been creative in their approach to BAP implementation, but 2 common models involve a multidisciplinary approach to BAP. In one model, the clinician assesses the patient’s readiness to make a specific action plan by asking Question 1, usually after the current status of key problems has been addressed and discussions begin about the interim plan of care. If the patient indicates interest, another staff member trained in BAP, such as a medical assistant, health coach, or nurse, guides the development of the specific plan, completes the remaining steps and inputs the patient’s BAP into the care plan.

In another commonly deployed model, the front desk clerk or medical assistant helps to get the patient thinking by asking Question 1 and perhaps by providing a behavioral menu. When the clinician sees the patient, he or she follows up on the behavior change the patient has chosen and affirms the choice. Clinicians often flex seamlessly with other team members to complete the action plan depending on the schedule and current patient flow.

Regardless of how the workflows are designed, BAP implementation requires staff who can provide BAP with fidelity, effective communication among the team members involved in the process, and a standardized approach to documentation of the specific action plan, the plan for check-in, and notes about follow-up. Care teams commonly test different variations of personnel and workflows to find what works best for their particular practice.

Implementing BAP to Support PCMH Transformation

To support PCMH transformation, substantial changes are needed to make care more proactive, more patient-centered and more accountable. One of the common elements for PCMH recognition, regardless of sponsor, is to enhance self-management support [20,57,58]. Practices pursuing PCMH designation are searching for effective evidence-based approaches to provide self-management support and guide action planning for patients. The authors suggest implementation of BAP as a potential strategy to enhance self-management support. In addition to helping practices meet the actual PCMH criteria, BAP is aligned with the transitions in care delivery that are an important part of the transformation, including reliance on team-based care and meaningful engagement of patients in their care [59,60].

In our experience, BAP is introduced incrementally into a practice initially focusing on one or two patient segments and then including more as resources allow. Successful BAP implementation begins with an organizational commitment to self-management support, decisions about which populations would benefit most from self-management support and BAP, training of key staff and clearly defined workflows that ensure reliable BAP provision.

BAP’s stepped-care design makes it easy to teach to all team members, and, as described above, team-based delivery of BAP functions well in situations where clinicians and trained ancillary staff can “hand off” the process at any time to optimize the value to the patient while respecting inherent time constraints.

Documentation of the actual goal and follow-up is an important component to fully leverage BAP. Goals captured in a template generate actionable lists for action plan follow-up. Since EHRs vary considerably in their capacity to capture goals, teams adding BAP to their workflow will benefit from discussion of standardized documentation practices and forms.

Summary

Brief Action Planning is a self-management support technique that can be used in busy clinical settings to support patient self-management through patient-centered goal setting. Each step of BAP is based on principles grounded in evidence. Health care teams can learn BAP and integrate it into clinical delivery systems to support self-management for PCMH transformation.

 

Corresponding author: Damara Gutnick, MD, New York University School of Medicine, New York, NY, [email protected].

Financial disclosures: None.

References

1. Hoffman C, Rice D, Sung HY. Persons with chronic conditions. Their prevalence and costs. JAMA 1996;276:1473–9.

2. Institute of Medicine. Living well with chronic illness: a call for public health action. Washington (DC): The National Academies Press; 2012.

3. De Silva D. Evidence: helping people help themselves. London: The Health Foundation Inspiring Improvement; 2011.

4. Bodenheimer T, Lorig K, Holman H, Grumbach K. Patient self-management of chronic disease in primary care. JAMA 2002;288:2469–75.

5. Miller W, Benefield R, Tonigan J. Enhancing motivation for change in problem drinking: a controlled comparison of two therapist styles. J Consult Clin Psychol 1993;61:455–61.

6. Lorig K, Holman H. Self-management education: history, definition, outcomes, and mechanisms. Ann Behav Med 2003;26:1–7.

7. Artinian NT, Fletcher GF, Mozaffarian D, et al. Interventions to promote physical activity and dietary lifestyle changes for cardiovascular risk factor reduction in adults: a scientific statement from the American Heart Association. Circulation 2010;122:406–41.

8. Lorig K, Laurent DD, Plant K, Krishnan E, Ritter PL. The components of action planning and their associations with behavior and health outcomes. Chronic Illn 2013. Available at www.ncbi.nlm.nih.gov/pubmed/23838837.

9. Schlair S, Moore S, Mcmacken M, Jay M. How to deliver high-quality obesity counseling in primary care using the 5As framework. J Clin Outcomes Manag 2012;19:221–9.

10. Lorig KR, Ritter P, Stewart AL, et al. Chronic disease self-management program: 2-year health status and health care utilization outcomes. Med Care 2001;39:1217–23.

11. Jay MR, Gillespie CC, Schlair SL, et al. The impact of primary care resident physician training on patient weight loss at 12 months. Obesity 2013;21:45–50.

12. Goldstein MG, Whitlock EP, DePue J. Multiple behavioral risk factor interventions in primary care. Summary of research evidence. Am J Prev Med 2004;27:61–79.

13. Lundahl B, Moleni T, Burke BL, et al. Motivational interviewing in medical care settings: a systematic review and meta-analysis of randomized controlled trials. Patient Educ Couns 2013;93:157–68.

14. Rubak S, Sandbæk A, Lauritzen T, Christensen B. Motivational Interviewing: a systematic review and meta-analysis. Br J Gen Pract 2005;55:305–12.

15. Dunn C, Deroo L, Rivara F. The use of brief interventions adapted from motivational interviewing across behavioral domains: a systematic review. Addiction 2001;96:1725–42.

16. Heckman CJ, Egleston BL, Hofmann MT. Efficacy of motivational interviewing for smoking cessation: a systematic review and meta-analysis. Tob Control 2010;19:410–6.

17. Miller WR, Rollnick S. Motivational interviewing: helping people change. 3rd ed. New York: Guilford Press; 2013.

18. Resnicow K, DiIorio C, Soet J, et al. Motivational interviewing in health promotion: it sounds like something is changing. Health Psychol 2002;21:444–451.

19. Doherty RB, Crowley RA. Principles supporting dynamic clinical care teams: an American College of Physicians position paper. Ann Intern Med 2013;159:620–6.

20. NCQA PCMH 2011 Standards, Elements and Factors. Documentation Guideline/Data Sources. 4A: Provide self-care support and community resources. Available at www.ncqa.org/portals/0/Programs/Recognition/PCMH_2011_Data_Sources_6.6.12.pdf.

21. Reims K, Gutnick D, Davis C, Cole S. Brief action planning white paper. 2012. Available at www.centrecmi.ca.

22. Cole S, Davis C, Cole M, Gutnick D. Motivational interviewing and the patient centered medical home: a strategic approach to self-management support in primary care. In: Patient-Centered Primary Care Collaborative. Health IT in the patient centered medical home. October 2010. Available at www.pcpcc.net/guide/health-it-pcmh.

23. Cole S, Cole M, Gutnick D, Davis C. Function three: collaborate for management. In: Cole S, Bird J, editors. The medical interview: the three function approach. 3rd ed. Philadelphia:Saunders; 2014.

24. Cole S, Gutnick D, Davis C, Cole M. Brief action planning (BAP): a self-management support tool. In: Bickley L. Bates’ guide to physical examination and history taking. 11th ed. Philadelphia: Lippincott Williams and Wilkins; 2013.

25. AMA Physician tip sheet for self-management support. Available at www.ama-assn.org/ama1/pub/upload/mm/433/phys_tip_sheet.pdf.

26. Taksler G, Keshner M, Fagerlin A. Personalized estimates of benefit from preventive care guidelines. Ann Intern Med 2013;159:161–9.

27. Centre for Comprehensive Motivational Interventions [website]. Available at www.centreecmi.com.

28. Del Canale S, Louis DZ, Maio V, et al. The relationship between physician empathy and disease complications: an empirical study of primary care physicians and their diabetic patients in Parma, Italy. Acad Med 2012;87:1243–9.

29. Moyers TB, Miller WR, Hendrickson SML. How does motivational interviewing work? Therapist interpersonal skill predicts client involvement within motivational interviewing sessions. J Consult Clin Psychol 2005;73:590–8.

30. Hojat M, Louis DZ, Markham FW, et al. Physicians’ empathy and clinical outcomes for diabetic patients. Acad Med 2011;86:359–64.

31. Heisler M, Bouknight RR, Hayward RA, et al. The relative importance of physician communication, participatory decision making, and patient understanding in diabetes self-management. J Gen Intern Med 2002;17:243–52.

32. Miller WR, Rollnick S. Ten things that motivational interviewing is not. Behav Cogn Psychother 2009;37:129–40.

33. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 1977;85:191–215.

34. Kiesler CA. The psychology of commitment: experiments linking behavior to belief. New York: Academic Press; 1971.

35. Lorig K, Laurent DD, Plant K, et al. The components of action planning and their associations with behavior and health outcomes. Chronic Illn 2013.

36. MacGregor K, Handley M, Wong S, et al. Behavior-change action plans in primary care: a feasibility study of clinicians. J Am Board Fam Med 2006;19:215–23.

37. Gollwitzer P. Implementation intentions. Am Psychol 1999;54:493–503.

38. Gollwitzer P, Sheeran P. Implementation intentions and goal achievement: a meta-analysis of effects and processes. Adv Exp Soc Psychol 2006;38:69–119.

39. Strecher V, DeVellis B, Becker M, Rosenstock I. The role of self-efficacy in achieving behavior change. Health Educ Q 1986;13:73–92.

40. Ajzen I. Constructing a theory of planned behavior questionnaire. Available at people.umass.edu/aizen/pdf/tpb.measurement.pdf.

41. Rogers RW. Protection motivation theory of fear appeals and attitude-change. J Psychol 1975;91:93–114.

42. Schwarzer R. Modeling health behavior change: how to predict and modify the adoption and maintenance of health behaviors. Appl Psychol An Int Rev 2008;57:1–29.

43. Cialdini R. Influence: science and practice. 5th ed. Boston:Allyn and Bacon; 2008.

44. Stott NC, Rollnick S, Rees MR, Pill RM. Innovation in clinical method: diabetes care and negotiating skills. Fam Pract 1995;12:413–8.

45. Miller WR, Rollnick S, Butler C. Motivational interviewing in health care. New York: Guilford Press; 2008.

46. Bodenheimer T, Handley M. Goal-setting for behavior change in primary care: an exploration and status report. Patient Educ Couns 2009;76:174–80.

47. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation. Am Psychol 2002;57:705–17.

48. Doran G. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manag Rev 1981;70:35–6.

49. Locke EA. Toward a theory of task motivation and incentives. Organ Behav Hum Perform 1968;3:157–89.

50. Locke EA, Latham GP, Erez M. The determinants of goal commitment. Acad Manag Rev 1988;13:23–39.

51. Lorig K, Homan H, Sobel D, et al. Living a healthy life with chronic conditions. 4th ed. Boulder: Bull Publishing; 2012.

52. Amrhein PC, Miller WR, Yahne CE, et al. Client commitment language during motivational interviewing predicts drug use outcomes. J Consult Clin Psychol 2003;71:862–78.

53. Aharonovich E, Amrhein PC, Bisaha A, et al. Cognition, commitment language and behavioral change among cocaine-dependent patients. Psychol Addict Behav 2008;22:557–62.

54. Gutnick D. Centre for Comprehensive Motivational Interventions community of practice webinar. Brief action planning and culture: developing culturally specific confidence rules. 2012. Available at www.centrecmi.ca.

55. Artinian NT, Fletcher GF, Mozaffarian D, et al. Interventions to promote physical activity and dietary lifestyle changes for cardiovascular risk factor reduction in adults. A scientific statement from the American Heart Association. Circulation 2010;122:406–41.

56. Handley M, MacGregor K, Schillinger D, et al. Using action plans to help primary care patients adopt healthy behaviors: a descriptive study. J Am Board Fam Med 2006;19:224–31.

57. Joint Commission. Primary care medical home option-additional requirements. Available at www.jointcommission.org/assets/1/18/PCMH_new_stds_by_5_characteristics.pdf.

58. Oregon Health Policy and Research. Standards for patient centered medical home recognition. Available at www.oregon.gov/oha/OHPR/pages/healthreform/pcpch/standards.aspx.

59. Nutting PA, Crabtree BF, Miller WL, et al. Journey to the patient-centered medical home: a qualitative analysis of the experiences of practices in the national demonstration project. Ann Fam Med 2010;8(Suppl 1):S45–S56.

60. Stewart EE, Nutting PA, Crabtree BF, et al. Implementing the patient-centered medical home: observation and description of the National Demonstration Project. Ann Fam Med 2010;8(Suppl 1):S21–S32.

Journal of Clinical Outcomes Management - January 2014, VOL. 21, NO. 1

From the New York University School of Medicine, New York, NY (Drs. Gutnick and Jay), University of Colorado Health Sciences Center, Denver, CO (Dr. Reims), University of British Columbia, BC, Canada (Dr. Davis), University College London, London, UK (Dr. Gainforth), and Stony Brook University School of Medicine, Stony Brook, NY (Dr. Cole [Emeritus]).

 

Abstract

  • Objective: To describe Brief Action Planning (BAP), a structured, stepped-care self-management support technique for chronic illness care and disease prevention.
  • Methods: A review of the theory and research supporting BAP and the questions and skills that comprise the technique with provision of a clinical example.
  • Results: BAP facilitates goal setting and action planning to build self-efficacy for behavior change. It is grounded in the principles and practice of Motivational Interviewing and evidence-based constructs from the behavior change literature. Composed of a series of 3 questions and 5 skills, BAP can be implemented by medical teams to help meet the self-management support objectives of the Patient-Centered Medical Home.
  • Conclusion: BAP is a useful self-management support technique for busy medical practices to promote health behavior change and build patient self-efficacy for improved long-term clinical outcomes in chronic illness care and disease prevention.

 

Chronic disease is prevalent and time consuming, challenging, and expensive to manage [1]. Half of all adult primary care patients have more than 2 chronic diseases, and 75% of US health care dollars are spent on chronic illness care [2]. Given the health and financial impact of chronic disease, and recognizing that patients make daily decisions that affect disease control, efforts are needed to assist and empower patients to actively self-manage health behaviors that influence chronic illness outcomes. Patients who are supported to actively self-manage their own chronic illnesses have fewer symptoms, improved quality of life, and lower use of health care resources [3]. Historically, providers have tried to influence chronic illness self-management by advising behavior change (eg, smoking cessation, exercise) or telling patients to take medications; yet clinicians often become frustrated when patients do not “adhere” to their professional advice [4,5]. Many times, patients want to make changes that will improve their health but need support—commonly known as self-management support—to be successful.

Involving patients in decision making, emphasizing problem solving, setting goals, creating action plans (ie, when, where and how to enact a goal-directed behavior), and following up on goals are key features of successful self-management support methods [3,6–8]. Multiple approaches from the behavioral change literature, such as the 5 A’s (Assess, Advise, Agree, Assist, Arrange) [9], Motivational Interviewing (MI), and chronic disease self-management programs [10], have been used to provide more effective guidance for patients and their caregivers. However, the practicality of these approaches in clinical settings has been questioned. The 5 A’s, a counseling framework used to guide providers in health behavior change counseling, can feel overwhelming because it encompasses several different aspects of counseling [11,12]. Likewise, MI and adaptations of MI, which have been shown to outperform traditional “advice giving” in the treatment of a broad range of behaviors and chronic conditions [13–16], have been critiqued because achieving fidelity typically requires multiple sessions of training, practice, and feedback [15,17,18]. Finally, while chronic disease self-management programs have been shown to be effective when used by peers in the community [10], similar results in primary care are not well established.

Given the challenges of providers practicing, learning, and using each of these approaches, efforts to develop an approach that supports patients to make behavioral changes that can be implemented in typical practice settings are needed. In addition, health delivery systems are transforming to team-based models with emphasis on leveraging each team member’s expertise and licensure [19]. In acknowledgement of these evolving practice realities, the National Committee for Quality Assurance (NCQA) included development and documentation of patient self-management plans and goals as a critical factor for achieving NCQA Patient-Centered Medical Home (PCMH) recognition [20]. Successful PCMH transformation therefore entails clinical practices developing effective and time efficient ways to incorporate self-management support strategies, a new service for many, into their care delivery systems often without additional staffing.

In this paper, we describe an evidence-informed, efficient self-management support technique called Brief Action Planning (BAP) [21–24]. BAP evolved into its current form through ongoing collaborative efforts of 4 of the authors (SC, DG, CD, KR) and is based on a foundation of original work by Steven Cole with contributions from Mary Cole in 2002 [25]. This technique addresses many of the barriers providers have cited to providing self-management support, as it can be used routinely by both individual providers and health care teams to facilitate patient-centered goal setting and action planning. BAP integrates principles and practice of MI with goal setting and action planning concepts from the self-management support, self-efficacy, and behavior change literature. In addition to reviewing the principles and theory that inform BAP, we introduce the steps of BAP and discuss practical considerations for incorporating BAP into clinical practice. In particular, we include suggestions about how BAP can be used in team-based clinical practice settings within the PCMH. Finally, we present a common clinical scenario to demonstrate BAP and provide resource links to online videos of BAP encounters. Throughout the paper, we use the word “clinician” to refer to professionals or other trained personnel using BAP, and “patient” to refer to those experiencing BAP, recognizing that other terms may be preferred in different settings.

What is BAP?

BAP is a highly structured, stepped-care, self-management support technique. Composed of a series of 3 questions and 5 skills (reviewed in detail below), BAP can be used to facilitate goal setting and action planning to build self-efficacy in chronic illness management and disease prevention [21–24]. The overall goal of BAP is to assist an individual to create an action plan for a self-management behavior that they feel confident that they can achieve. BAP is currently being used in diverse care settings including primary care, home health care, rehabilitation, mental health and public health to assist and empower patients to self-manage chronic illnesses and disabilities including diabetes, depression, spinal cord injury, arthritis, and hypertension. BAP is also being used to assist patients to develop action plans for disease prevention. For example, the Bellevue Hospital Personalized Prevention clinic, a pilot clinic that uses a mathematical model [26] to help patients and providers collaboratively prioritize prevention focus and strategies, systematically utilizes BAP as its self-management support technique for patient-centered action planning. At this time, BAP has been incorporated into teaching curricula at multiple medical schools, presented at major national health care/academic conferences and is being increasingly integrated into health delivery systems across the United States and Canada to support patient self-management for NCQA-PCMH transformation. We have also developed a series of standardized programming to support fidelity in BAP skills development including a multidisciplinary introductory training curriculum, telephonic coaching, interactive web-based training tools, and a structured “Train the Trainer” curriculum [27]. In addition, a set of guidelines designed to ensure fidelity in BAP research has been developed [27].

Underlying Principles of BAP

BAP is grounded in the principles and practice of MI and the psychology of behavior change. Within behavior change, we draw primarily on self-efficacy and action planning theory and research. We discuss the key concepts in detail below.

The Spirit of MI

MI Spirit (Compassion, Acceptance, Partnership, and Evocation) is an important overarching tenet for BAP. Compassionately supporting self-management with MI spirit involves a partnership with the patient rather than a prescription for change (Partnership) and the assurance that the clinician always has the patient’s best interests in mind (Compassion) [17]. Exemplifying “spirit” accepts that the ultimate choice to change is the patient’s alone (Acceptance) and acknowledges that individuals bring expertise about themselves and their lives to the conversation (Evocation). Adherence to “MI spirit” itself has been associated with positive behavior change outcomes in patients [5,28–32]. Demonstrating MI spirit throughout the change conversation is an essential foundational principle of BAP.

Action Planning and Self-Efficacy

In addition to the spirit of MI, BAP integrates 2 evidence-based constructs from the behavior change literature: action planning and self-efficacy [4,6,33–36]. Action planning requires that individuals specify when, where and how to enact a goal-directed behavior (eg, self-management behaviors). Action planning has been shown to mediate the intention-behavior relationship thereby increasing the likelihood that an individual’s intentions will lead to behavior change [37,38]. Given the demonstrated potential of action planning for ensuring individuals achieve their health goals, the BAP framework aspires to assist patients to create an action plan.

BAP also aims to build patients’ self-efficacy to enact the goals outlined in their action plans. Self-efficacy refers to a patient’s confidence in their ability to enact a behavior [33]. Several reviews of the literature have suggested a strong relationship between self-efficacy and adoption of healthy behaviors such as smoking cessation, weight control, contraception, alcohol abuse and physical activity [39–42]. Furthermore, Lorig et al demonstrated that the process of action planning itself contributes to enhanced self-efficacy [8]. BAP aims to build self-efficacy and ultimately change patients’ behaviors by helping patients to set an action plan that they feel confident in their ability to achieve.

Description of the BAP Steps

The flowchart in Figure 1 presents an overview of the key elements of BAP. An example dialogue illustrating the steps of BAP can be found in Figure 2.

The 3 questions and 3 of the 5 BAP skills (ie, SMART planning, eliciting a commitment statement, and follow-up) are applied during every BAP interaction, while the other 2 skills (ie, behavioral menu and problem solving for low confidence) are used as needed. The distinct functions and the evidence supporting the 3 questions and 5 BAP skills are described below.

Question 1: Eliciting a Behavioral Focus or Goal

Once engagement has been established and the clinician determines the patient is ready for self-management planning to occur, the first question of BAP can be asked: “Is there anything you would like to do for your health in the next week or two?” 

This question elicits a person’s interest in self-management or behavior change and encourages the individual to view himself/herself as someone engaged in his or her health. The powerful link between consistency of word and action facilitates development of, and commitment to, a plan to change the behavior of focus [43]. In some settings a broader question such as “Is there anything you would like to do about your current situation in the next week or two?” may be a better fit, or a more specific question may flow more naturally from the conversation, such as “We’ve been talking about diabetes; is there anything you would like to do for that, or anything else, in the next week or two?”

Although technically Question 1 is a closed-ended question (in that it can be answered “yes” or “no”), in actual practice it generates productive discussions about change. 

Whether a patient answers “yes,” “no,” or something in-between like “I’m not sure,” the clinician can often smoothly transition to a dialogue about change based on that response. Responses to Question 1 generally take 3 forms (Figure 1):

1) Have an Idea. A group of patients immediately present an idea that they are ready to do or are ready to consider doing. For these patients, clinicians can proceed directly to Skill 2—SMART Behavioral Planning; that is, asking patients directly if they are ready to turn their idea into a concrete plan. Some evidence suggests that further discussion, assessment, or even additional "motivational" exploration in patients who are ready to make a plan and already have an idea may actually decrease motivation for change [17,32].

2) Not Sure. Another group of patients may want or need suggestions before committing to something specific they want to work on. For these patients, clinicians should use the opportunity to offer a Behavioral Menu (Skill 1).

3) No or Not at This Time. A third group of patients may not be interested or ready to make a change at this time or at all. Some in this group may be healthy or already self-managing effectively and have no need to make a plan, in which case the clinician acknowledges their active self-management and moves to the next part of the visit. Others in this group may have considerable ambivalence about change or face complex situations where other priorities take precedence. Clinicians frequently label these individuals as "resistant." The Spirit of MI can be very useful when working with these patients to accept and respect their autonomy while encouraging ongoing partnership at a future time. For example, a clinician may say “It sounds like you are not interested in making a plan for your health right now. Would it be OK if I ask you about this again at our next visit?” Pushing forward to make a "plan for change" when a patient is not ready decreases both motivation for change as well as the likelihood for a successful outcome [32].

Other patients may benefit from additional motivational approaches to further explore change and ambivalence. If the clinician does not have these skills, patients may be seamlessly transitioned to another resource within or external to the care team.

Skill 1: Offering a Behavioral Menu

If, in response to Question 1, an individual is unable to come up with an idea of their own or needs more information, offering a Behavioral Menu may be helpful [44,45]. Consistent with the “Spirit of MI,” BAP attempts to elicit ideas from individuals themselves; however, it is important to recognize that some people require assistance to identify possible actions. A Behavioral Menu comprises 2 or 3 suggestions or ideas that will ideally trigger individuals to discover an idea of their own. There are 3 distinct evidence-based steps to follow when presenting a Behavioral Menu.

1) Ask permission to offer a behavioral menu. Asking permission to share ideas respects patient autonomy and prevents the provider from inadvertently assuming an expert role. For example: “Would it be OK if I shared with you some examples of what some other patients I work with have done?”

2) Offer 2 to 3 general yet varied ideas all at once (Figure 2, entry 5). It helps to mention things that other patients have decided to do with some success. Offering several varied ideas at once avoids the clinician assuming too much about the patient and makes it less likely that the patient will dismiss a single suggestion outright. It is important to remember that the list is meant to prompt ideas, not to find a perfect solution [17]. For example: “One patient I work with decided to join a gym and start exercising, another decided to pick up an old hobby he used to enjoy doing and another patient decided to schedule some time with a friend she hadn’t seen in a while.”

3) Ask if any of the ideas appeal to the individual as something that might work for them or if the patient has an idea of his/her own (Figure 2, entry 5). Evocation from the Spirit of MI is built in with this prompt [17]. For example: “These are some ideas that have worked for other patients I work with, do they trigger any ideas that might work for you?”

Clinicians may find it helpful to use visual prompts to guide Behavioral Menu conversations [44]. Diagrams with equally weighted spaces assist clinicians to resist prioritizing as might happen in a list. Empty circles alongside circles containing varied options evoke patient ideas, consistent with the Spirit of MI (Figure 3, Visual Behavioral Menu Example) [44].

Skill 2: SMART Planning

Once an individual decides on an area of focus, the clinician partners with the patient to clarify the details and create an action plan to achieve their goal. Given that individuals are more likely to successfully achieve goals that are specific, proximal, and achievable as opposed to vague and distal [46,47], the clinician works with the patient to ensure that the patient’s goal is SMART (specific, measurable, achievable, relevant and time-bound). The term SMART has its roots in the business management literature [48] as an adaptation of Locke’s pioneering research (1968) on goal setting and motivation [49]. In particular, Locke and Latham’s theory of Goal Setting and Task Performance states that “specific and achievable” goals are more likely to be successfully reached [47,50].

We suggest helping the patient to make SMART goals by eliciting answers to questions applicable to the plan, such as “what?” “where?” “when?” “how long?” “how often?” “how much?” and “when will you start?” [51]. A resulting plan might be “I will walk for 20 minutes, in my neighborhood, every Monday, Wednesday and Friday before dinner.”
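For documentation purposes, the elements elicited by these questions can be captured in a structured record that also generates the patient-facing summary sentence. The following Python sketch is purely illustrative; the field names are our own choices, not a standard BAP template.

```python
from dataclasses import dataclass

@dataclass
class SmartPlan:
    """Structured record of a SMART action plan (illustrative field names)."""
    what: str      # the specific behavior, eg "walk"
    duration: str  # how long each time, eg "20 minutes"
    where: str     # location, eg "in my neighborhood"
    when: str      # days/times, eg "every Monday, Wednesday and Friday"
    start: str     # when the plan begins

    def summary(self) -> str:
        # Assemble the plan into the kind of sentence a patient would state.
        return (f"I will {self.what} for {self.duration}, {self.where}, "
                f"{self.when}, starting {self.start}.")

plan = SmartPlan(what="walk", duration="20 minutes",
                 where="in my neighborhood",
                 when="every Monday, Wednesday and Friday before dinner",
                 start="this Monday")
print(plan.summary())
```

Capturing the plan in discrete fields rather than free text also makes it easier to template into an EHR care plan for later check-in.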

Skill 3: Elicit a Commitment Statement

Once the individual has developed a specific plan, the next step of BAP is for the clinician to ask him or her to “tell back” the specifics of the plan. The provider might say something like, “Just to make sure we understand each other, would you repeat back what you’ve decided to do?” The act of “repeating back” organizes the details of the plan in the person’s mind and may lead to an unconscious self-reflection about the feasibility of the plan [43,52], which then sets the stage for Question 2 of BAP (Scaling for Confidence). Commitment predicts subsequent behavior change, and the strength of the commitment language is the strongest predictor of success on an action plan [43,52,53]. For example, saying “I will” is stronger than saying “I will try.”

Question 2: Scaling for Confidence

After a commitment statement has been elicited, the second question of BAP is asked: “How confident or sure do you feel about carrying out your plan on a scale from 0 to 10, where 0 is not confident at all and 10 is totally confident or sure?” Confidence scaling is a common tool used in behavioral interventions, MI, and chronic disease self-management programs [17,51]. Question 2 assesses an individual’s self-efficacy to complete the plan and facilitates discussion about potential barriers to implementation in order to increase the likelihood of success of a personal action plan.

For patients who have difficulty grasping the concept of a numerical scale, the word “sure” can be substituted for “confident” and a Likert scale including the terms “not at all sure,” “somewhat sure,” and “very sure” substituted for the numerical confidence ruler, ie, “How sure are you that you will be able to carry out your plan? Not at all sure, somewhat sure, or very sure?” Alternatively, people of different cultural backgrounds may find it easier to grasp the concept using familiar images or experiences. For example, Native Americans from the Southwest have adapted the scale to depict a series of images ranging from planting a corn seed to harvesting a crop or climbing a ladder, while in some Latino cultures the image of climbing a mountain (“How far up the mountain are you?”) is useful to demonstrate “level of confidence” concept [54].
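Either the numeric ruler or a verbal substitute can feed the same downstream logic (BAP suggests collaborative problem solving when confidence is below 7). A minimal sketch; the numeric cut-points assigned to the verbal terms here are our own illustrative choices, not part of BAP.

```python
# Illustrative mapping of the verbal scale onto the 0-10 confidence ruler,
# for patients who find a numeric scale hard to grasp.  The cut-point
# values are assumptions chosen for this sketch only.
VERBAL_SCALE = {
    "not at all sure": 2,
    "somewhat sure": 5,
    "very sure": 9,
}

def needs_problem_solving(confidence) -> bool:
    """Return True when confidence falls below the threshold of 7,
    the level at which BAP suggests collaborative problem solving."""
    if isinstance(confidence, str):
        confidence = VERBAL_SCALE[confidence]
    return confidence < 7
```

The point of the mapping is simply that the check-in logic stays the same regardless of which scale the patient found easier to use.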

Skill 4: Problem Solving for Low Confidence

When confidence is relatively low (ie, below 7), we suggest collaborative problem solving as the next step [8,51]. Low confidence or self-efficacy for plan completion is a concern since low self-efficacy predicts non-completion [8]. Successfully implementing the action plan, no matter how small, increases confidence and self-efficacy for engaging in the behavior [8].

There are several steps that a clinician follows when collaboratively problem-solving with a patient with low confidence (Figure 1).

• Recognize that a low confidence level is greater than no confidence at all. By affirming the strength of a patient’s confidence rather than negatively focusing on a low level of confidence, the provider emphasizes the patient’s strengths.

• Collaboratively explore ways that the plan could be modified in order to improve confidence. A Behavioral Menu can be offered if needed. For example, a clinician might say something like: “That’s great that your confidence level is a 5. A 5 is a lot higher than a 1. People are more likely to have success with their action plans when confidence levels are 7 or more. Do you have any ideas of how you might be able to increase your level of confidence to a 7 or more?”

• If the patient has no ideas, ask permission to offer a Behavioral Menu: “Would it be OK to share some ideas about how other patients I’ve worked with have increased their confidence level?” If the patient agrees, then say: “Some people modify their plans to make them easier, some choose a less ambitious goal or adjust the frequency of their plan, and some people involve a friend or family member. Perhaps one of these ideas seems like a good one for you or maybe you have another idea?”

Question 3: Arranging Accountability

Once the details of the plan have been determined and confidence level for success is high, the next step is to ask Question 3: “Would you like to set a specific time to check in about your plan to see how things are going?” This question encourages a patient to be accountable for their plan, and reinforces the concept that the physician and care team consider the plan to be important. Research supports that people are more likely to follow through with a plan if they choose to report back their progress [43] and suggests that checking-in frequently earlier in the process is helpful [55]. Ideally the clinician and patient should agree on a time to check in on the plan within a week or two (Figure 2, entry 29).

Accountability in the form of a check-in may be arranged with the clinical provider, another member of the health care team, or a support person of the patient’s choice (eg, spouse, friend). The patient may also choose to be accountable to themselves by using a calendar or a goal-setting application on their smartphone or computer.

Skill 5: Follow-up

Follow-up has been noted as one of the features of successful multifactorial self-management interventions and builds trust [55]. Follow-up with the care team includes a discussion of how the plan went, reassurance, and next steps (Figure 4). The next step is often a modification of the current BAP or a new BAP; however, if a patient decides not to make or work on a plan, in the spirit of MI (accepting/respecting the patient's autonomy) the clinician can say something like, "It sounds like you are not interested in making a plan today. Would it be OK if I ask you about this again at our next visit?"

The purpose of the check-in is for learning and adjustment of the plan as well as to provide support regardless of outcome. Checking-in encourages reflection on challenges and barriers as well as successes. Patients should be given guidance to think through what worked for them and what did not. Focusing just on “success” of the plan will be less helpful. If follow-up is not done with the care team in the near term, checking-in can be accomplished at the next scheduled visit. Patient portals provide another opportunity for patients to dialogue with the care team about their plan.
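Taken together, the 3 questions and 5 skills form a simple stepped-care decision flow (Figure 1). As a rough summary, the flow can be sketched in Python. Everything below is our own illustration, not part of BAP itself: the dictionary keys simulate the patient's side of what is in practice a conversation, and the "+2" merely stands in for the effect of collaborative problem solving on confidence.

```python
def brief_action_planning(answers):
    """Illustrative sketch of the BAP stepped-care flow.

    `answers` is a dict that simulates the patient's responses; in real
    use, every step is a patient-centered conversation, not a lookup.
    """
    # Question 1: elicit a behavioral focus or goal.
    if answers["has_goal"] == "no":
        # Respect autonomy; offer to ask again at a future visit.
        return None
    if answers["has_goal"] == "not sure":
        # Skill 1: with permission, offer a Behavioral Menu of 2-3 ideas.
        pass

    # Skill 2: shape the idea into a SMART plan (specific, measurable,
    # achievable, relevant, time-bound).
    plan = answers["smart_plan"]

    # Skill 3: elicit a commitment statement ("tell back" the plan).
    # Question 2: scale for confidence on a 0-10 ruler.
    confidence = answers["confidence"]
    while confidence < 7:
        # Skill 4: collaboratively modify the plan to raise confidence;
        # the increment below only simulates the effect of problem solving.
        plan += " (modified)"
        confidence += 2

    # Question 3: arrange accountability; Skill 5: follow up in 1-2 weeks.
    return {"plan": plan, "confidence": confidence, "check_in_weeks": 1}
```

The while-loop captures the key design point: planning iterates until the patient reports confidence of 7 or more, rather than stopping at the first version of the plan.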

Experiential Insights from Clinical Experience Using BAP

The authors’ collective experience to date indicates that 50% to 75% of individuals who are asked Question 1 go on to develop an action plan for change with relatively little need for additional skills. In other studies of action planning in primary care, 83% of patients made action plans during a visit, and at 3-week follow-up 53% had completed their action plan [56]. A recent study of action planning using an online self-management support program reported that action plans were successfully completed 49%, partially completed 40%, or not completed 11% of the time [35].

An important caveat to consider is that the process of planning is more important than the actual plan itself. It is imperative to allow the patient, not the clinician, to determine the plan. For example, a patient with multiple poorly controlled chronic illnesses including depression may decide to focus his action plan around cleaning out his car rather than disease control such as dietary modification, medication adherence or exercise. The clinician may initially fail to view this as a good use of clinician time or health care resources since it seems unrelated to health. However, successful completion of an action plan is not the only objective of action planning. Building self-efficacy, which may lead to additional action planning around health, is more important [4,46]. The challenge is therefore for the clinician to take a step back, relinquish the “expert role,” and support the goal-setting process regardless of the plan. In this example, successfully cleaning out his car may increase the patient’s self-efficacy to control other aspects of his life, including diet, and the focus of future plans may shift [4].

When to Use BAP

Opportunities for patient engagement in action planning occur when addressing chronic illness concerns as well as during discussions about health maintenance and preventive care. BAP can be considered as part of any routine clinical agenda unless patient preferences or clinical acuity preclude it. As with most clinical encounters, the flow is often negotiated at the beginning of the visit. BAP can be accomplished at any time that works best for the flow and substance of the visit, but a few patterns have emerged based on our experience.

BAP fits naturally into the part of the visit when the care plan is being discussed. The term “care plan” is commonly used to describe all of the care that will be provided until the next visit. Care plans can include additional recommendations for testing or screening, therapeutic adjustments, and/or referrals for additional expertise. Ideally the patient’s “agreed upon” contribution to their care should also be captured and documented in the care plan. This is often described as the patient’s “self-management goal.” For patients who are ready to make a specific plan to change behavior, BAP is an efficient way to support patients to craft an action plan that can then be incorporated into the overall care plan.

Another variation of when to use BAP is the situation when the patient has had a prior action plan and is being seen for a recheck visit. Discussing the action plan early in the visit agenda focuses attention on the work patients have put into following their plan. Descriptions of success lead readily to action plans for the future. Time spent discussing failures or partial success is valuable to problem solve as well as to affirm continued efforts to self-manage.

BAP can also be used between scheduled visits. The check-in portion of BAP is particularly amenable to follow-up by phone or by another supporter. A pre-arranged follow-up 1 to 2 weeks after creation of a new action plan [8] provides encouragement to patients working on their plan and also helps identify those who need more support.

Finally, BAP can be completed over multiple visits. For patients who are thinking about change but are not yet committed to planning, a brief suggestion about the value of action planning with a behavioral menu may encourage additional self-reflection. Many times patients return to the next visit with clear ideas about changes that would be important for them to make.

Fitting BAP into a 20-Minute Visit

Using BAP is a time-efficient way to provide self-management support within the context of a 20-minute visit with engaged patients who are ready to set goals for health. With practice, clinicians can often conduct all the steps within 3 to 5 minutes. However, patients and clinicians often have competing demands and agendas and may not feel that they have time to conduct all the steps. Thus, utilizing other members of the health care team to deliver some or all of BAP can facilitate implementation.

Teams have been creative in their approach to BAP implementation, but 2 common models involve a multidisciplinary approach to BAP. In one model, the clinician assesses the patient’s readiness to make a specific action plan by asking Question 1, usually after the current status of key problems has been addressed and discussions begin about the interim plan of care. If the patient indicates interest, another staff member trained in BAP, such as a medical assistant, health coach or nurse, guides the development of the specific plan, completes the remaining steps and enters the patient’s BAP into the care plan.

In another commonly deployed model, the front desk clerk or medical assistant helps to get the patient thinking by asking Question 1 and perhaps by providing a behavioral menu. When the clinician sees the patient, he or she follows up on the behavior change the patient has chosen and affirms the choice. Clinicians often flex seamlessly with other team members to complete the action plan depending on the schedule and current patient flow.

Regardless of how the workflows are designed, BAP implementation requires staff who can provide BAP with fidelity, effective communication among the team members involved in the process, and a standardized approach to documentation of the specific action plan, the plan for check-in, and notes about follow-up. Care teams commonly test different variations of personnel and workflows to find what works best for their particular practice.

Implementing BAP to Support PCMH Transformation

To support PCMH transformation, substantial changes are needed to make care more proactive, more patient-centered and more accountable. One of the common elements for PCMH recognition, regardless of sponsor, is enhanced self-management support [20,57,58]. Practices pursuing PCMH designation are searching for effective evidence-based approaches to provide self-management support and guide action planning for patients. The authors suggest implementation of BAP as a potential strategy to enhance self-management support. In addition to helping practices meet the PCMH criteria, BAP is aligned with the transitions in care delivery that are an important part of the transformation, including reliance on team-based care and meaningful engagement of patients in their care [59,60].

In our experience, BAP is introduced incrementally into a practice, initially focusing on one or two patient segments and then including more as resources allow. Successful BAP implementation begins with an organizational commitment to self-management support, decisions about which populations would benefit most from self-management support and BAP, training of key staff, and clearly defined workflows that ensure reliable BAP provision.

BAP’s stepped-care design makes it easy to teach to all team members, and, as described above, team-based delivery of BAP functions well in situations where clinicians and trained ancillary staff can “hand off” the process at any time to optimize the value to the patient while respecting inherent time constraints.

Documentation of the actual goal and follow-up is an important component to fully leverage BAP. Goals captured in a template generate actionable lists for action plan follow-up. Since EHRs vary considerably in their capacity to capture goals, teams adding BAP to their workflow will benefit from discussion of standardized documentation practices and forms.

Summary

Brief Action Planning is a self-management support technique that can be used in busy clinical settings to support patient self-management through patient-centered goal setting. Each step of BAP is based on principles grounded in evidence. Health care teams can learn BAP and integrate it into clinical delivery systems to support self-management for PCMH transformation.

 

Corresponding author: Damara Gutnick, MD, New York University School of Medicine, New York, NY, [email protected].

Financial disclosures: None.

From the New York University School of Medicine, New York, NY (Drs. Gutnick and Jay), University of Colorado Health Sciences Center, Denver, CO (Dr. Reims), University of British Columbia, BC, Canada (Dr. Davis), University College London, London, UK (Dr. Gainforth), and Stony Brook University School of Medicine, Stony Brook, NY (Dr. Cole [Emeritus]).

 

Abstract

  • Objective: To describe Brief Action Planning (BAP), a structured, stepped-care self-management support technique for chronic illness care and disease prevention.
  • Methods: A review of the theory and research supporting BAP and the questions and skills that comprise the technique with provision of a clinical example.
  • Results: BAP facilitates goal setting and action planning to build self-efficacy for behavior change. It is grounded in the principles and practice of Motivational Interviewing and evidence-based constructs from the behavior change literature. Composed of a series of 3 questions and 5 skills, BAP can be implemented by medical teams to help meet the self-management support objectives of the Patient-Centered Medical Home.
  • Conclusion: BAP is a useful self-management support technique for busy medical practices to promote health behavior change and build patient self-efficacy for improved long-term clinical outcomes in chronic illness care and disease prevention.

 

Chronic disease is prevalent and time consuming, challenging, and expensive to manage [1]. Half of all adult primary care patients have more than 2 chronic diseases, and 75% of US health care dollars are spent on chronic illness care [2]. Given the health and financial impact of chronic disease, and recognizing that patients make daily decisions that affect disease control, efforts are needed to assist and empower patients to actively self-manage health behaviors that influence chronic illness outcomes. Patients who are supported to actively self-manage their own chronic illnesses have fewer symptoms, improved quality of life, and lower use of health care resources [3]. Historically, providers have tried to influence chronic illness self-management by advising behavior change (eg, smoking cessation, exercise) or telling patients to take medications; yet clinicians often become frustrated when patients do not “adhere” to their professional advice [4,5]. Many times, patients want to make changes that will improve their health but need support—commonly known as self-management support—to be successful.

Involving patients in decision making, emphasizing problem solving, setting goals, creating action plans (ie, when, where and how to enact a goal-directed behavior), and following up on goals are key features of successful self-management support methods [3,6–8]. Multiple approaches from the behavioral change literature, such as the 5 A’s (Assess, Advise, Agree, Assist, Arrange) [9], Motivational Interviewing (MI), and chronic disease self-management programs [10] have been used to provide more effective guidance for patients and their caregivers. However, the practicality of these approaches in clinical settings has been questioned. The 5 A’s, a counseling framework that is used to guide providers in health behavior change counseling, can feel overwhelming because it encompasses several different aspects of counseling [11,12]. Likewise, MI and adaptations of MI, which have been shown to outperform traditional “advice giving” in treatment of a broad range of behaviors and chronic conditions [13–16], have been critiqued because achieving proficiency and fidelity often requires multiple sessions of training, practice, and feedback [15,17,18]. Finally, while chronic disease self-management programs have been shown to be effective when used by peers in the community [10], similar results in primary care are not well established.

Given the challenges of providers practicing, learning, and using each of these approaches, efforts to develop an approach that supports patients to make behavioral changes that can be implemented in typical practice settings are needed. In addition, health delivery systems are transforming to team-based models with emphasis on leveraging each team member’s expertise and licensure [19]. In acknowledgement of these evolving practice realities, the National Committee for Quality Assurance (NCQA) included development and documentation of patient self-management plans and goals as a critical factor for achieving NCQA Patient-Centered Medical Home (PCMH) recognition [20]. Successful PCMH transformation therefore entails clinical practices developing effective and time efficient ways to incorporate self-management support strategies, a new service for many, into their care delivery systems often without additional staffing.

In this paper, we describe an evidence-informed, efficient self-management support technique called Brief Action Planning (BAP) [21–24]. BAP evolved into its current form through ongoing collaborative efforts of 4 of the authors (SC, DG, CD, KR) and is based on a foundation of original work by Steven Cole with contributions from Mary Cole in 2002 [25]. This technique addresses many of the barriers providers have cited to providing self-management support, as it can be used routinely by both individual providers and health care teams to facilitate patient-centered goal setting and action planning. BAP integrates principles and practice of MI with goal setting and action planning concepts from the self-management support, self-efficacy, and behavior change literature. In addition to reviewing the principles and theory that inform BAP, we introduce the steps of BAP and discuss practical considerations for incorporating BAP into clinical practice. In particular, we include suggestions about how BAP can be used in team-based clinical practice settings within the PCMH. Finally, we present a common clinical scenario to demonstrate BAP and provide resource links to online videos of BAP encounters. Throughout the paper, we use the word “clinician” to refer to professionals or other trained personnel using BAP, and “patient” to refer to those experiencing BAP, recognizing that other terms may be preferred in different settings.

What is BAP?

BAP is a highly structured, stepped-care, self-management support technique. Composed of a series of 3 questions and 5 skills (reviewed in detail below), BAP can be used to facilitate goal setting and action planning to build self-efficacy in chronic illness management and disease prevention [21–24]. The overall goal of BAP is to assist an individual to create an action plan for a self-management behavior that they feel confident that they can achieve. BAP is currently being used in diverse care settings including primary care, home health care, rehabilitation, mental health and public health to assist and empower patients to self-manage chronic illnesses and disabilities including diabetes, depression, spinal cord injury, arthritis, and hypertension. BAP is also being used to assist patients to develop action plans for disease prevention. For example, the Bellevue Hospital Personalized Prevention clinic, a pilot clinic that uses a mathematical model [26] to help patients and providers collaboratively prioritize prevention focus and strategies, systematically utilizes BAP as its self-management support technique for patient-centered action planning. At this time, BAP has been incorporated into teaching curriculums at multiple medical schools, presented at major national health care/academic conferences and is being increasingly integrated into health delivery systems across the United States and Canada to support patient self-management for NCQA-PCMH transformation. We have also developed a series of standardized programing to support fidelity in BAP skills development including a multidisciplinary introductory training curriculum, telephonic coaching, interactive web-based training tools, and a structured “Train the Trainer” curriculum [27]. In addition, a set of guidelines designed to ensure fidelity in BAP research has been developed [27].

Underlying Principles of BAP

BAP is grounded in the principles and practice of MI and the psychology of behavior change. Within behavior change, we draw primarily on self-efficacy and action planning theory and research. We discuss the key concepts in detail below.

The Spirit of MI

MI Spirit (Compassion, Acceptance, Partnership, and Evocation) is an important overarching tenet of BAP. Compassionately supporting self-management with MI spirit involves a partnership with the patient rather than a prescription for change (Partnership) and the assurance that the clinician always has the patient’s best interests in mind (Compassion) [17]. Exemplifying “spirit” also means accepting that the ultimate choice to change is the patient’s alone (Acceptance) and acknowledging that individuals bring expertise about themselves and their lives to the conversation (Evocation). Adherence to “MI spirit” itself has been associated with positive behavior change outcomes in patients [5,28–32]. Demonstrating MI spirit throughout the change conversation is an essential foundational principle of BAP.

Action Planning and Self-Efficacy

In addition to the spirit of MI, BAP integrates 2 evidence-based constructs from the behavior change literature: action planning and self-efficacy [4,6,33–36]. Action planning requires that individuals specify when, where, and how to enact a goal-directed behavior (eg, self-management behaviors). Action planning has been shown to mediate the intention-behavior relationship, thereby increasing the likelihood that an individual’s intentions will lead to behavior change [37,38]. Given the demonstrated potential of action planning for ensuring individuals achieve their health goals, the BAP framework aspires to assist patients to create an action plan.

BAP also aims to build patients’ self-efficacy to enact the goals outlined in their action plans. Self-efficacy refers to a patient’s confidence in their ability to enact a behavior [33]. Several reviews of the literature have suggested a strong relationship between self-efficacy and the adoption of health behaviors such as smoking cessation, weight control, contraception use, moderation of alcohol use, and physical activity [39–42]. Furthermore, Lorig et al demonstrated that the process of action planning itself contributes to enhanced self-efficacy [8]. BAP aims to build self-efficacy and ultimately change patients’ behaviors by helping patients to set an action plan that they feel confident in their ability to achieve.

Description of the BAP Steps

The flowchart in Figure 1 presents an overview of the key elements of BAP. An example dialogue illustrating the steps of BAP can be found in Figure 2.

The 3 questions and 3 of the 5 BAP skills (ie, SMART planning, eliciting a commitment statement, and follow-up) are applied during every BAP interaction, while the other 2 skills (ie, offering a behavioral menu and problem solving for low confidence) are used as needed. The distinct functions and the evidence supporting the 3 questions and 5 BAP skills are described below.
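To make the stepped structure concrete, the overall flow can be sketched in Python. This is purely an illustrative sketch by the editors, not part of the published BAP protocol; the function and field names (`ActionPlan`, `brief_action_planning`, etc.) are hypothetical, and the conversational skills are abstracted into callables.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ActionPlan:
    description: str                  # the SMART plan in the patient's own words
    confidence: int                   # 0-10 self-rated confidence (Question 2)
    check_in_arranged: bool = False   # result of Question 3

def brief_action_planning(
    ask: Callable[[str], str],
    elicit_plan: Callable[[str], ActionPlan],
    problem_solve: Callable[[ActionPlan], ActionPlan],
) -> Optional[ActionPlan]:
    """Sketch of the BAP stepped flow: 3 questions and 5 skills."""
    # Question 1: elicit a behavioral focus or goal.
    response = ask("Is there anything you would like to do for your "
                   "health in the next week or two?")
    if response == "no":
        # Respect autonomy; offer to revisit at a future encounter.
        return None

    # Skills 1-3: behavioral menu if needed, SMART planning,
    # and eliciting a commitment statement.
    plan = elicit_plan(response)

    # Question 2 + Skill 4: scale for confidence; when the rating is
    # below 7, collaboratively modify the plan until it feels achievable.
    while plan.confidence < 7:
        plan = problem_solve(plan)

    # Question 3 + Skill 5: arrange accountability and follow-up.
    plan.check_in_arranged = True
    return plan
```

The callables stand in for the patient-clinician dialogue; in practice each step is a conversation, not a function call.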

Question 1: Eliciting a Behavioral Focus or Goal

Once engagement has been established and the clinician determines the patient is ready for self-management planning to occur, the first question of BAP can be asked: “Is there anything you would like to do for your health in the next week or two?” 

This question elicits a person’s interest in self-management or behavior change and encourages the individual to view himself/herself as someone engaged in his or her health. The powerful link between consistency of word and action facilitates the development of commitment to change the behavior of focus [43]. In some settings a broader question such as “Is there anything you would like to do about your current situation in the next week or two?” may be a better fit, or a more specific question may flow more naturally from the conversation, such as “We’ve been talking about diabetes; is there anything you would like to do for that, or anything else, in the next week or two?”

Although Question 1 is technically a closed-ended question (in that it can be answered “yes” or “no”), in actual practice it generates productive discussions about change. Whenever a patient answers “yes,” “no,” or something in-between like “I’m not sure,” the clinician can often smoothly transition to a dialogue about change based on that response. Responses to Question 1 generally take 3 forms (Figure 1):

1) Have an Idea. A group of patients immediately present an idea that they are ready to do or are ready to consider doing. For these patients, clinicians can proceed directly to Skill 2—SMART Behavioral Planning; that is, asking patients directly if they are ready to turn their idea into a concrete plan. Some evidence suggests that further discussion, assessment, or even additional "motivational" exploration in patients who are ready to make a plan and already have an idea may actually decrease motivation for change [17, 32].

2) Not Sure. Another group of patients may want or need suggestions before committing to something specific they want to work on. For these patients, clinicians should use the opportunity to offer a Behavioral Menu (Skill 1).

3) No or Not at This Time. A third group of patients may not be interested or ready to make a change at this time or at all. Some in this group may be healthy or already self-managing effectively and have no need to make a plan, in which case the clinician acknowledges their active self-management and moves to the next part of the visit. Others in this group may have considerable ambivalence about change or face complex situations where other priorities take precedence. Clinicians frequently label these individuals as "resistant." The Spirit of MI can be very useful when working with these patients to accept and respect their autonomy while encouraging ongoing partnership at a future time. For example, a clinician may say “It sounds like you are not interested in making a plan for your health right now. Would it be OK if I ask you about this again at our next visit?” Pushing forward to make a "plan for change" when a patient is not ready decreases both motivation for change as well as the likelihood for a successful outcome [32].

Other patients may benefit from additional motivational approaches to further explore change and ambivalence. If the clinician does not have these skills, patients may be seamlessly transitioned to another resource within or external to the care team.

Skill 1: Offering a Behavioral Menu

If in response to Question 1 an individual is unable to come up with an idea of their own or needs more information, then offering a Behavioral Menu may be helpful [44,45]. Consistent with the “Spirit of MI,” BAP attempts to elicit ideas from the individuals themselves; however, it is important to recognize that some people require assistance to identify possible actions. A behavioral menu consists of 2 or 3 suggestions or ideas that will ideally trigger individuals to discover an idea of their own. There are 3 distinct evidence-based steps to follow when presenting a Behavioral Menu.

1) Ask permission to offer a behavioral menu. Asking permission to share ideas respects patient autonomy and prevents the provider from inadvertently assuming an expert role. For example: “Would it be OK if I shared with you some examples of what some other patients I work with have done?”

2) Offer 2 to 3 general yet varied ideas all at once (Figure 2, entry 5). It helps to mention things that other patients have decided to do with some success. Offering several ideas at once keeps the clinician from assuming too much about the patient and spares the patient from having to reject ideas one at a time. It is important to remember that the list is meant to prompt ideas, not to find a perfect solution [17]. For example: “One patient I work with decided to join a gym and start exercising, another decided to pick up an old hobby he used to enjoy doing and another patient decided to schedule some time with a friend she hadn’t seen in a while.”

3) Ask if any of the ideas appeal to the individual as something that might work for them or if the patient has an idea of his/her own (Figure 2, entry 5). Evocation from the Spirit of MI is built in with this prompt [17]. For example: “These are some ideas that have worked for other patients I work with, do they trigger any ideas that might work for you?”

Clinicians may find it helpful to use visual prompts to guide Behavioral Menu conversations [44]. Diagrams with equally weighted spaces assist clinicians to resist prioritizing as might happen in a list. Empty circles alongside circles containing varied options evoke patient ideas, consistent with the Spirit of MI (Figure 3, Visual Behavioral Menu Example) [44].

Skill 2: SMART Planning

Once an individual decides on an area of focus, the clinician partners with the patient to clarify the details and create an action plan to achieve their goal. Given that individuals are more likely to successfully achieve goals that are specific, proximal, and achievable as opposed to vague and distal [46,47], the clinician works with the patient to ensure that the patient’s goal is SMART (specific, measurable, achievable, relevant, and time-bound). The term SMART has its roots in the business management literature [48] as an adaptation of Locke’s pioneering research (1968) on goal setting and motivation [49]. In particular, Locke and Latham’s theory of goal setting and task performance states that “specific and achievable” goals are more likely to be successfully reached [47,50].

We suggest helping the patient to make SMART goals by eliciting answers to questions applicable to the plan, such as “what?” “where?” “when?” “how long?” “how often?” “how much?” and “when will you start?” [51]. A resulting plan might be “I will walk for 20 minutes, in my neighborhood, every Monday, Wednesday and Friday before dinner.”
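As an illustration only (the field names below are ours, not from the paper), the prompting questions map naturally onto a small structured record whose fields, once filled in, render as a first-person plan statement:

```python
from dataclasses import dataclass

@dataclass
class SmartPlan:
    what: str        # "what?"      e.g., "walk"
    where: str       # "where?"     e.g., "in my neighborhood"
    when: str        # "when?"      e.g., "before dinner"
    duration: str    # "how long?"  e.g., "20 minutes"
    frequency: str   # "how often?" e.g., "every Monday, Wednesday and Friday"

    def commitment_statement(self) -> str:
        # Render the plan in the first person, ready for the
        # "tell back" step described under Skill 3.
        return (f"I will {self.what} for {self.duration}, {self.where}, "
                f"{self.frequency} {self.when}.")
```

Capturing each element separately makes it easy to see which specifics ("where?", "how often?") a vague plan is still missing.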

Skill 3: Elicit a Commitment Statement

Once the individual has developed a specific plan, the next step of BAP is for the clinician to ask him or her to “tell back” the specifics of the plan. The provider might say something like, “Just to make sure we understand each other, would you repeat back what you’ve decided to do?” The act of “repeating back” organizes the details of the plan in the person’s mind and may lead to an unconscious self-reflection about the feasibility of the plan [43,52], which then sets the stage for Question 2 of BAP (Scaling for Confidence). Commitment predicts subsequent behavior change, and the strength of the commitment language is the strongest predictor of success on an action plan [43,52,53]. For example, saying “I will” is stronger than saying “I will try.”

Question 2: Scaling for Confidence

After a commitment statement has been elicited, the second question of BAP is asked. “How confident or sure do you feel about carrying out your plan on a scale from 0 to 10, where 0 is not confident at all and 10 is totally confident or sure?” Confidence scaling is a common tool used in behavioral interventions, MI, and chronic disease self-management programs [17,51]. Question 2 assesses an individual’s self-efficacy to complete the plan and facilitates discussion about potential barriers to implementation in order to increase the likelihood of success of a personal action plan.

For patients who have difficulty grasping the concept of a numerical scale, the word “sure” can be substituted for “confident” and a Likert scale including the terms “not at all sure,” “somewhat sure,” and “very sure” substituted for the numerical confidence ruler, ie, “How sure are you that you will be able to carry out your plan? Not at all sure, somewhat sure, or very sure?” Alternatively, people of different cultural backgrounds may find it easier to grasp the concept using familiar images or experiences. For example, Native Americans from the Southwest have adapted the scale to depict a series of images ranging from planting a corn seed to harvesting a crop or climbing a ladder, while in some Latino cultures the image of climbing a mountain (“How far up the mountain are you?”) is useful to illustrate the “level of confidence” concept [54].

Skill 4: Problem Solving for Low Confidence

When confidence is relatively low (ie, below 7), we suggest collaborative problem solving as the next step [8,51]. Low confidence or self-efficacy for plan completion is a concern since low self-efficacy predicts non-completion [8]. Successfully implementing the action plan, no matter how small, increases confidence and self-efficacy for engaging in the behavior [8].

There are several steps that a clinician follows when collaboratively problem-solving with a patient with low confidence (Figure 1).

• Recognize that a low confidence level is greater than no confidence at all. By affirming the strength of a patient’s confidence rather than negatively focusing on a low level of confidence, the provider emphasizes the patient’s strengths.

• Collaboratively explore ways that the plan could be modified in order to improve confidence. A Behavioral Menu can be offered if needed. For example, a clinician might say something like: “That’s great that your confidence level is a 5. A 5 is a lot higher than a 1. People are more likely to have success with their action plans when confidence levels are 7 or more. Do you have any ideas of how you might be able to increase your level of confidence to a 7 or more?”

• If the patient has no ideas, ask permission to offer a Behavioral Menu: “Would it be OK to share some ideas about how other patients I’ve worked with have increased their confidence level?” If the patient agrees, then say... “Some people modify their plans to make them easier, some choose a less ambitious goal or adjust the frequency of their plan, and some people involve a friend or family member. Perhaps one of these ideas seems like a good one for you or maybe you have another idea?”
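The confidence-scaling branch can be summarized in a few lines. This is a sketch under stated assumptions: the threshold of 7 follows the guidance quoted above, and the function name is ours, not the paper's.

```python
def next_bap_step(confidence: int, threshold: int = 7) -> str:
    """Choose the next BAP step from a 0-10 confidence rating (Question 2)."""
    if not 0 <= confidence <= 10:
        raise ValueError("confidence is rated on a 0-10 scale")
    if confidence >= threshold:
        return "arrange accountability"  # proceed to Question 3
    # Skill 4: affirm the confidence the patient does have, then
    # collaboratively modify the plan (offering a behavioral menu if needed).
    return "problem solve for low confidence"
```

The branch repeats after each plan modification, so problem solving continues until the patient's rating reaches the threshold.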

Question 3: Arranging Accountability

Once the details of the plan have been determined and confidence level for success is high, the next step is to ask Question 3: “Would you like to set a specific time to check in about your plan to see how things are going?” This question encourages a patient to be accountable for their plan, and reinforces the concept that the physician and care team consider the plan to be important. Research indicates that people are more likely to follow through with a plan if they choose to report back their progress [43], and suggests that checking in frequently early in the process is helpful [55]. Ideally the clinician and patient should agree on a time to check in on the plan within a week or two (Figure 2, entry 29).

Accountability in the form of a check-in may be arranged with the clinical provider, another member of the healthcare team, or a support person of the patient’s choice (eg, spouse, friend). The patient may also choose to be accountable to themselves by using a calendar or a goal-setting application on their smartphone or computer.

Skill 5: Follow-up

Follow-up has been noted as one of the features of successful multifactorial self-management interventions and builds trust [55]. Follow-up with the care team includes a discussion of how the plan went, reassurance, and next steps (Figure 4). The next step is often a modification of the current BAP or a new BAP; however, if a patient decides not to make or work on a plan, in the spirit of MI (accepting/respecting the patient's autonomy) the clinician can say something like, "It sounds like you are not interested in making a plan today. Would it be OK if I ask you about this again at our next visit?"

The purpose of the check-in is for learning and adjustment of the plan as well as to provide support regardless of outcome. Checking-in encourages reflection on challenges and barriers as well as successes. Patients should be given guidance to think through what worked for them and what did not. Focusing just on “success” of the plan will be less helpful. If follow-up is not done with the care team in the near term, checking-in can be accomplished at the next scheduled visit. Patient portals provide another opportunity for patients to dialogue with the care team about their plan.

Experiential Insights from Clinical Experience Using BAP

The authors’ collective experience to date indicates that between 50% and 75% of individuals who are asked Question 1 go on to develop an action plan for change with relatively little need for additional skills. In other studies of action planning in primary care, 83% of patients made action plans during a visit, and at 3-week follow-up 53% had completed their action plan [56]. A recent study of action planning using an online self-management support program reported that action plans were successfully completed 49% of the time, partially completed 40% of the time, and incomplete 11% of the time [35].

Another caveat to consider is that the process of planning is more important than the actual plan itself. It is imperative to allow the patient, not the clinician, to determine the plan. For example, a patient with multiple poorly controlled chronic illnesses including depression may decide to focus his action plan on cleaning out his car rather than on disease control such as dietary modification, medication adherence, or exercise. The clinician may initially not view this as a good use of clinician time or healthcare resources since it seems unrelated to health. However, successful completion of an action plan is not the only objective of action planning. Building self-efficacy, which may lead to additional action planning around health, is more important [4,46]. The challenge is therefore for the clinician to take a step back, relinquish the “expert role,” and support the goal-setting process regardless of the plan. In this example, successfully cleaning out his car may increase the patient’s self-efficacy to control other aspects of his life, including diet, and the focus of future plans may shift [4].

When to Use BAP

Opportunities for patient engagement in action planning occur when addressing chronic illness concerns as well as during discussions about health maintenance and preventive care. BAP can be considered as part of any routine clinical agenda unless patient preferences or clinical acuity preclude it. As with most clinical encounters, the flow is often negotiated at the beginning of the visit. BAP can be accomplished at any time that works best for the flow and substance of the visit, but a few patterns have emerged based on our experience.

BAP fits naturally into the part of the visit when the care plan is being discussed. The term “care plan” is commonly used to describe all of the care that will be provided until the next visit. Care plans can include additional recommendations for testing or screening, therapeutic adjustments, and/or referrals for additional expertise. Ideally the patient’s agreed-upon contribution to their care should also be captured and documented in the care plan; this is often described as the patient’s “self-management goal.” For patients who are ready to make a specific plan to change behavior, BAP is an efficient way to support patients to craft an action plan that can then be incorporated into the overall care plan.

Another variation of when to use BAP is the situation when the patient has had a prior action plan and is being seen for a recheck visit. Discussing the action plan early in the visit agenda focuses attention on the work patients have put into following their plan. Descriptions of success lead readily to action plans for the future. Time spent discussing failures or partial success is valuable to problem solve as well as to affirm continued efforts to self-manage.

BAP can also be used between scheduled visits. The check-in portion of BAP is particularly amenable to follow-up by phone or by another supporter. A pre-arranged follow-up 1 to 2 weeks after creation of a new action plan [8] provides encouragement to patients working on their plan and also helps identify those who need more support.

Finally, BAP can be completed over multiple visits. For patients who are thinking about change but are not yet committed to planning, a brief suggestion about the value of action planning with a behavioral menu may encourage additional self-reflection. Many times patients return to the next visit with clear ideas about changes that would be important for them to make.

Fitting BAP into a 20-Minute Visit

Using BAP is a time-efficient way to provide self-management support within the context of a 20-minute visit with engaged patients who are ready to set goals for health. With practice, clinicians can often conduct all the steps within 3 to 5 minutes. However, patients and clinicians often have competing demands and agendas and may not feel that they have time to conduct all the steps. Thus, utilizing other members of the health care team to deliver some or all of BAP can facilitate implementation.

Teams have been creative in their approach to BAP implementation, but 2 common models involve a multidisciplinary approach. In one model, the clinician assesses the patient’s readiness to make a specific action plan by asking Question 1, usually after the current status of key problems has been addressed and discussions begin about the interim plan of care. If the patient indicates interest, another staff member trained in BAP, such as a medical assistant, health coach, or nurse, guides the development of the specific plan, completes the remaining steps, and enters the patient’s BAP into the care plan.

In another commonly deployed model, the front desk clerk or medical assistant helps to get the patient thinking by asking Question 1 and perhaps by providing a behavioral menu. When the clinician sees the patient, he or she follows up on the behavior change the patient has chosen and affirms the choice. Clinicians often flex seamlessly with other team members to complete the action plan depending on the schedule and current patient flow.

Regardless of how the workflows are designed, BAP implementation requires staff who can provide BAP with fidelity, effective communication among team members involved in the process, and a standardized approach to documentation of the specific action plan, the plan for check-in, and notes about follow-up. Care teams commonly test different variations of personnel and workflows to find what works best for their particular practice.

Implementing BAP to Support PCMH Transformation

To support PCMH transformation, substantial changes are needed to make care more proactive, more patient-centered, and more accountable. One of the common elements for PCMH recognition, regardless of sponsor, is to enhance self-management support [20,57,58]. Practices pursuing PCMH designation are searching for effective evidence-based approaches to provide self-management support and guide action planning for patients. The authors suggest implementation of BAP as a potential strategy to enhance self-management support. In addition to helping practices meet the actual PCMH criteria, BAP is aligned with the transitions in care delivery that are an important part of the transformation, including reliance on team-based care and meaningful engagement of patients in their care [59,60].

In our experience, BAP is best introduced incrementally into a practice, initially focusing on one or two patient segments and then including more as resources allow. Successful BAP implementation begins with an organizational commitment to self-management support, decisions about which populations would benefit most from self-management support and BAP, training of key staff, and clearly defined workflows that ensure reliable BAP provision.

BAP’s stepped-care design makes it easy to teach to all team members, and, as described above, team-based delivery of BAP functions well in situations where clinicians and trained ancillary staff can “hand off” the process at any time to optimize the value to the patient while respecting inherent time constraints.

Documentation of the actual goal and follow-up is an important component to fully leverage BAP. Goals captured in a template generate actionable lists for action plan follow-up. Since EHRs vary considerably in their capacity to capture goals, teams adding BAP to their workflow will benefit from discussion of standardized documentation practices and forms.

Summary

Brief Action Planning is a self-management support technique that can be used in busy clinical settings to support patient self-management through patient-centered goal setting. Each step of BAP is based on principles grounded in evidence. Health care teams can learn BAP and integrate it into clinical delivery systems to support self-management for PCMH transformation.

 

Corresponding author: Damara Gutnick, MD, New York University School of Medicine, New York, NY, [email protected].

Financial disclosures: None.

References

1. Hoffman C, Rice D, Sung HY. Persons with chronic conditions. Their prevalence and costs. JAMA 1996;276:1473–9.

2. Institute of Medicine. Living well with chronic illness: a call for public health action. Washington (DC): The National Academies Press; 2012.

3. De Silva D. Evidence: helping people help themselves. London: The Health Foundation Inspiring Improvement; 2011.

4. Bodenheimer T, Lorig K, Holman H, Grumbach K. Patient self-management of chronic disease in primary care. JAMA 2002;288:2469–75.

5. Miller W, Benefield R, Tonigan J. Enhancing motivation for change in problem drinking: a controlled comparison of two therapist styles. J Consult Clin Psychol 1993;61:455–61.

6. Lorig K, Holman H. Self-management education: history, definition, outcomes, and mechanisms. Ann Behav Med 2003;26:1–7.

7. Artinian NT, Fletcher GF, Mozaffarian D, et al. Interventions to promote physical activity and dietary lifestyle changes for cardiovascular risk factor reduction in adults: a scientific statement from the American Heart Association. Circulation 2010;122:406–41.

8. Lorig K, Laurent DD, Plant K, Krishnan E, Ritter PL. The components of action planning and their associations with behavior and health outcomes. Chronic Illn 2013. Available at www.ncbi.nlm.nih.gov/pubmed/23838837.

9. Schlair S, Moore S, Mcmacken M, Jay M. How to deliver high-quality obesity counseling in primary care using the 5As framework. J Clin Outcomes Manag 2012;19:221–9.

10. Lorig KR, Ritter P, Stewart AL, et al. Chronic disease self-management program: 2-year health status and health care utilization outcomes. Med Care 2001;39:1217–23.

11. Jay MR, Gillespie CC, Schlair SL, et al. The impact of primary care resident physician training on patient weight loss at 12 months. Obesity 2013;21:45–50.

12. Goldstein MG, Whitlock EP, DePue J. Multiple behavioral risk factor interventions in primary care. Summary of research evidence. Am J Prev Med 2004;27:61–79.

13. Lundahl B, Moleni T, Burke BL, et al. Motivational interviewing in medical care settings: a systematic review and meta-analysis of randomized controlled trials. Patient Educ Couns 2013;93:157–68.

14. Rubak S, Sandbæk A, Lauritzen T, Christensen B. Motivational Interviewing: a systematic review and meta-analysis. Br J Gen Pract 2005;55:305–12.

15. Dunn C, Deroo L, Rivara F. The use of brief interventions adapted from motivational interviewing across behavioral domains: a systematic review. Addiction 2001;96:1725–42.

16. Heckman CJ, Egleston BL, Hofmann MT. Efficacy of motivational interviewing for smoking cessation: a systematic review and meta-analysis. Tob Control 2010;19:410–6.

17. Miller WR, Rollnick S. Motivational interviewing: helping people change. 3rd ed. New York: Guilford Press; 2013.

18. Resnicow K, DiIorio C, Soet J, et al. Motivational interviewing in health promotion: it sounds like something is changing. Health Psychol 2002;21:444–451.

19. Doherty RB, Crowley RA. Principles supporting dynamic clinical care teams: an American College of Physicians position paper. Ann Intern Med 2013;159:620–6.

20. NCQA PCMH 2011 Standards, Elements and Factors. Documentation Guideline/Data Sources. 4A: Provide self-care support and community resources. Available at www.ncqa.org/portals/0/Programs/Recognition/PCMH_2011_Data_Sources_6.6.12.pdf.

21. Reims K, Gutnick D, Davis C, Cole S. Brief action planning white paper. 2012. Available at www.centrecmi.ca.

22. Cole S, Davis C, Cole M, Gutnick D. Motivational interviewing and the patient centered medical home: a strategic approach to self-management support in primary care. In: Patient-Centered Primary Care Collaborative. Health IT in the patient centered medical home. October 2010. Available at www.pcpcc.net/guide/health-it-pcmh.

23. Cole S, Cole M, Gutnick D, Davis C. Function three: collaborate for management. In: Cole S, Bird J, editors. The medical interview: the three function approach. 3rd ed. Philadelphia:Saunders; 2014.

24. Cole S, Gutnick D, Davis C, Cole M. Brief action planning (BAP): a self-management support tool. In: Bickley L. Bates’ guide to physical examination and history taking. 11th ed. Philadelphia: Lippincott Williams and Wilkins; 2013.

25. AMA Physician tip sheet for self-management support. Available at www.ama-assn.org/ama1/pub/upload/mm/433/phys_tip_sheet.pdf.

26. Taksler G, Keshner M, Fagerlin A. Personalized estimates of benefit from preventive care guidelines. Ann Intern Med 2013;159:161–9.

27. Centre for Comprehensive Motivational Interventions [website]. Available at www.centreecmi.com.

28. Del Canale S, Louis DZ, Maio V, et al. The relationship between physician empathy and disease complications: an empirical study of primary care physicians and their diabetic patients in Parma, Italy. Acad Med 2012;87:1243–9.

29. Moyers TB, Miller WR, Hendrickson SML. How does motivational interviewing work? Therapist interpersonal skill predicts client involvement within motivational interviewing sessions. J Consult Clin Psychol 2005;73:590–8.

30. Hojat M, Louis DZ, Markham FW, et al. Physicians’ empathy and clinical outcomes for diabetic patients. Acad Med 2011;86:359–64.

31. Heisler M, Bouknight RR, Hayward RA, et al. The relative importance of physician communication, participatory decision making, and patient understanding in diabetes self-management. J Gen Intern Med 2002;17:243–52.

32. Miller WR, Rollnick S. Ten things that motivational interviewing is not. Behav Cogn Psychother 2009;37:129–40.

33. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 1977;85:191–215.

34. Kiesler CA. The psychology of commitment: experiments linking behavior to belief. New York: Academic Press; 1971.

35. Lorig K, Laurent DD, Plant K, et al. The components of action planning and their associations with behavior and health outcomes. Chronic Illn 2013.

36. MacGregor K, Handley M, Wong S, et al. Behavior-change action plans in primary care: a feasibility study of clinicians. J Am Board Fam Med 19:215–23.

37. Gollwitzer P. Implementation intentions. Am Psychol 1999;54:493–503.

38. Gollwitzer P, Sheeran P. Implementation intentions and goal achievement: a meta-analysis of effects and processes. Adv Exp Soc Psychol 2006;38:69–119.

39. Strecher V, De Vellis B, Becker M, Rosenstock I. The role of self-efficacy in achieving behavior change. Health Educ Q 1986;13:73–92.

40. Ajzen I. Constructing a theory of planned behavior questionnaire. Available at people.umass.edu/aizen/pdf/tpb.measurement.pdf.

41. Rogers RW. Protection motivation theory of fear appeals and attitude-change. J Psychol 1975;91:93–114.

42. Schwarzer R. Modeling health behavior change: how to predict and modify the adoption and maintenance of health behaviors. Appl Psychol An Int Rev 2008;57:1–29.

43. Cialdini R. Influence: science and practice. 5th ed. Boston:Allyn and Bacon; 2008.

44. Stott NC, Rollnick S, Rees MR, Pill RM. Innovation in clinical method: diabetes care and negotiating skills. Fam Pract 1995;12:413–8.

45. Miller WR, Rollnick S, Butler C. Motivational interviewing in health care. New York: Guilford Press; 2008.

46. Bodenheimer T, Handley M. Goal-setting for behavior change in primary care: an exploration and status report. Patient Educ Couns 2009;76:174–80.

47. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation. Am Psychol 2002;57:705–17.

48. Doran G. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manag Rev 1981;70:35–6.

49. Locke EA. Toward a theory of task motivation and incentives. Organ Behav Hum Perform 1968;3:157–89.

50. Locke EA, Latham GP, Erez M. The determinants of goal commitment. Acad Manag Rev 1988;13:23–39.

51. Lorig K, Homan H, Sobel D, et al. Living a healthy life with chronic conditions. 4th ed. Boulder: Bull Publishing; 2012.

52. Amrhein PC, Miller WR, Yahne CE, et al. Client commitment language during motivational interviewing predicts drug use outcomes. J Consult Clin Psychol 2003;71:862–78.

53. Ahaeonovich E, Amrhein PC, Bisaha A, et al. Cognition, commitment language and behavioral change among cocaine-dependent patients. Psychol Addict Behav 2008;22:557–62.

54. Gutnick D. Centre for Comprehensive Motivational Interventions community of practice webinar. Brief action planning and culture: developing culturally specific confidence rules. 2012. Available at www.centrecmi.ca.

55. Artinian NT, Fletcher GF, Mozaffarian D, et al. Interventions to promote physical activity and dietary lifestyle changes for cardiovascular risk factor reduction in adults. A scientific statement from the American Heart Association. Circulation 2010;122:406–41.

56. Handley M, MacGregor K, Schillinger D, et al. Using action plans to help primary care patients adopt healthy behaviors: a descriptive study. J Am Board Fam Med 2006;19:224–31.

57. Joint Commision. Primary care medical home option-additional requirements. Available at www.jointcommission.org/assets/1/18/PCMH_new_stds_by_5_characteristics.pdf.

58. Oregon Health Policy and Research. Standards for patient centered medical home recognition. Available at www.oregon.gov/oha/OHPR/pages/healthreform/pcpch/standards.aspx.

59. Nutting PA, Crabtree BF, Miller WL, et al. Journey to the patient-centered medical home: a qualitative analysis of the experiences of practices in the national demonstration project. Am Fam Med 2010;8(Suppl 1):S45–S56.

60. Stewart EE, Nutting PA, Crabtree BF, et al. Implementing the patient-centered medical home: observation and description of the National Demonstration Project. Am Fam Med 2010;8(Suppl 1):S21–S32.

References

1. Hoffman C, Rice D, Sung HY. Persons withnic conditions. Their prevalence and costs. JAMA 1996;276(18):1473–9.

2. Institute of Medicine. Living well with chro:ic illness: a call for public health action. Washington (DC); The National Academies Press; 2012.

3. De Silva D. Evidence: helping people help themselves. London: The Health Foundation Inspiring Improvement; 2011.

4. Bodenheimer T, Lorig K, Holman H, Grumbach K. Patient self-management of chronic disease in primary care. JAMA 2002;288:2469–75.

5. Miller W, Benefield R, Tonigan J. Enhancing motivation for change in problem drinking: A controlled comparison of two therapist styles. J Consul Clin Psychol 1993;61:455–461.

6. Lorig K, Holman H. Self-management education: history, definition, outcomes, and mechanisms. Ann Behav Med 2003;26:1–7.

7. Artinian NT, Fletcher GF, Mozaffarian D, et al. Interventions to promote physical activity and dietary lifestyle changes for cardiovascular risk factor reduction in adults: a scientific statement from the American Heart Association. Circulation 2010;122:406–41.

8. Lorig K, Laurent DD, Plant K, Krishnan E, Ritter PL. The components of action planning and their associations with behavior and health outcomes. Chronic Illn 2013. Available at www.ncbi.nlm.nih.gov/pubmed/23838837.

9. Schlair S, Moore S, Mcmacken M, Jay M. How to deliver high-quality obesity counseling in primary care using the 5As framework. J Clin Outcomes Manag 2012;19:221–9.

10. Lorig KR, Ritter P, Stewart a L, et al. Chronic disease self-management program: 2-year health status and health care utilization outcomes. Med Care 2001;39:1217–23.

11. Jay MR, Gillespie CC, Schlair SL, et al. The impact of primary care resident physician training on patient weight loss at 12 months. Obesity 2013;21:45–50.

12. Goldstein MG, Whitlock EP, DePue J. Multiple behavioral risk factor interventions in primary care. Summary of research evidence. Am J Prev Med 2004;27:61–79.

13. Lundahl B, Moleni T, Burke BL, et al. Motivational interviewing in medical care settings: a systematic review and meta-analysis of randomized controlled trials. Patient Educ Couns 2013;93:157–68.

14. Rubak S, Sandbæk A, Lauritzen T, Christensen B. Motivational Interviewing: a systematic review and meta-analysis. Br J Gen Pract 2005;55:305–12.

15. Dunn C, Deroo L, Rivara F. The use of brief interventions adapted from motivational interviewing across behavioral domains: a systematic review. Addiction 2001;96:1725–42.

16. Heckman CJ, Egleston BL, Hofmann MT. Efficacy of motivational interviewing for smoking cessation: a systematic review and meta-analysis. Tob Control 2010;19:410–6.

17. Miller WR, Rollnick S. Motivational interviewing: helping people change. 3rd ed. New York: Guilford Press; 2013.

18. Resnicow K, DiIorio C, Soet J, et al. Motivational interviewing in health promotion: it sounds like something is changing. Health Psychol 2002;21:444–451.

19. Doherty RB, Crowley RA. Principles supporting dynamic clinical care teams: an American College of Physicians position paper. Ann Intern Med 2013;159:620–6.

20. NCQA PCMH 2011 Standards, Elements and Factors. Documentation Guideline/Data Sources. 4A: Provide self-care support and community resources. Available at www.ncqa.org/portals/0/Programs/Recognition/PCMH_2011_Data_Sources_6.6.12.pdf.

21. Reims K, Gutnick D, Davis C, Cole S. Brief action planning white paper. 2012. Available at www.centrecmi.ca.

22. Cole S, Davis C, Cole M, Gutnick D. Motivational interviewing and the patient centered medical home: a strategic approach to self-management support in primary care. In: Patient-Centered Primary Care Collaborative. Health IT in the patient centered medical home. October 2010. Available at www.pcpcc.net/guide/health-it-pcmh.

23. Cole S, Cole M, Gutnick D, Davis C. Function three: collaborate for management. In: Cole S, Bird J, editors. The medical interview: the three function approach. 3rd ed. Philadelphia:Saunders; 2014.

24. Cole S, Gutnick D, Davis C, Cole M. Brief action planning (BAP): a self-management support tool. In: Bickley L. Bates’ guide to physical examination and history taking. 11th ed. Philadelphia: Lippincott Williams and Wilkins; 2013.

25. AMA Physician tip sheet for self-management support. Available at www.ama-assn.org/ama1/pub/upload/mm/433/phys_tip_sheet.pdf.

26. Taksler G, Keshner M, Fagerlin A. Personalized estimates of benefit from preventive care guidelines. Ann Intern Med 2013;159:161–9.

27. Centre for Comprehensive Motivational Interventions [website]. Available at www.centreecmi.com.

28. Del Canale S, Louis DZ, Maio V, et al. The relationship between physician empathy and disease complications: an empirical study of primary care physicians and their diabetic patients in Parma, Italy. Acad Med 2012;87:1243–9.

29. Moyers TB, Miller WR, Hendrickson SML. How does motivational interviewing work? Therapist interpersonal skill predicts client involvement within motivational interviewing sessions. J Consult Clin Psychol 2005;73:590–8.

30. Hojat M, Louis DZ, Markham FW, et al. Physicians’ empathy and clinical outcomes for diabetic patients. Acad Med 2011;86:359–64.

31. Heisler M, Bouknight RR, Hayward RA, et al. The relative importance of physician communication, participatory decision making, and patient understanding in diabetes self-management. J Gen Intern Med 2002;17:243–52.

32. Miller WR, Rollnick S. Ten things that motivational interviewing is not. Behav Cogn Psychother 2009;37:129–40.

33. Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 1977;85:191–215.

34. Kiesler, Charles A. The psychology of commitment: experiments linking behavior to belief. New York: Academic Press;1971.

35. Lorig K, Laurent DD, Plant K, et al. The components of action planning and their associations with behavior and health outcomes. Chronic Illn 2013.

36. MacGregor K, Handley M, Wong S, et al. Behavior-change action plans in primary care: a feasibility study of clinicians. J Am Board Fam Med 19:215–23.

37. Gollwitzer P. Implementation intentions. Am Psychol 1999;54:493–503.

38. Gollwitzer P, Sheeran P. Implementation intensions and goal achievement: A meta-analysis of effects and processes. Adv Exp Soc Psychology 2006;38:69–119.

39. Stretcher V, De Vellis B, Becker M, Rosenstock I. The role of self-efficacy in achieving behavior change. Health Educ Q 1986;13:73–92.

40. Ajzen I. Constructing a theory of planned behavior questionnaire. Available at people.umass.edu/aizen/pdf/tpb.measurement.pdf.

41. Rogers RW. Protection motivation theory of fear appeals and attitude-change. J Psychol 1975;91:93–114.

42. Schwarzer R. Modeling health behavior change: how to predict and modify the adoption and maintenance of health behaviors. Appl Psychol An Int Rev 2008;57:1–29.

43. Cialdini R. Influence: science and practice. 5th ed. Boston:Allyn and Bacon; 2008.

44. Stott NC, Rollnick S, Rees MR, Pill RM. Innovation in clinical method: diabetes care and negotiating skills. Fam Pract 1995;12:413–8.

45. Miller WR, Rollnick S, Butler C. Motivational interviewing in health care. New York: Guilford Press; 2008.

46. Bodenheimer T, Handley M. Goal-setting for behavior change in primary care: an exploration and status report. Patient Educ Couns 2009;76:174–80.

47. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation. Am Psychol 2002;57:705–17.

48. Doran G. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manag Rev 1981;70:35–6.

49. Locke EA. Toward a theory of task motivation and incentives. Organ Behav Hum Perform 1968;3:157–89.

50. Locke EA, Latham GP, Erez M. The determinants of goal commitment. Acad Manag Rev 1988;13:23–39.

51. Lorig K, Homan H, Sobel D, et al. Living a healthy life with chronic conditions. 4th ed. Boulder: Bull Publishing; 2012.

52. Amrhein PC, Miller WR, Yahne CE, et al. Client commitment language during motivational interviewing predicts drug use outcomes. J Consult Clin Psychol 2003;71:862–78.

53. Ahaeonovich E, Amrhein PC, Bisaha A, et al. Cognition, commitment language and behavioral change among cocaine-dependent patients. Psychol Addict Behav 2008;22:557–62.

54. Gutnick D. Centre for Comprehensive Motivational Interventions community of practice webinar. Brief action planning and culture: developing culturally specific confidence rules. 2012. Available at www.centrecmi.ca.

55. Artinian NT, Fletcher GF, Mozaffarian D, et al. Interventions to promote physical activity and dietary lifestyle changes for cardiovascular risk factor reduction in adults. A scientific statement from the American Heart Association. Circulation 2010;122:406–41.

56. Handley M, MacGregor K, Schillinger D, et al. Using action plans to help primary care patients adopt healthy behaviors: a descriptive study. J Am Board Fam Med 2006;19:224–31.

57. Joint Commision. Primary care medical home option-additional requirements. Available at www.jointcommission.org/assets/1/18/PCMH_new_stds_by_5_characteristics.pdf.

58. Oregon Health Policy and Research. Standards for patient centered medical home recognition. Available at www.oregon.gov/oha/OHPR/pages/healthreform/pcpch/standards.aspx.

59. Nutting PA, Crabtree BF, Miller WL, et al. Journey to the patient-centered medical home: a qualitative analysis of the experiences of practices in the national demonstration project. Am Fam Med 2010;8(Suppl 1):S45–S56.

60. Stewart EE, Nutting PA, Crabtree BF, et al. Implementing the patient-centered medical home: observation and description of the National Demonstration Project. Am Fam Med 2010;8(Suppl 1):S21–S32.

Issue
Journal of Clinical Outcomes Management - January 2014, VOL. 21, NO. 1
Display Headline
Brief Action Planning to Facilitate Behavior Change and Support Patient Self-Management

Interventions can ease insomnia in cancer patients

Article Type
Changed
Tue, 01/14/2014 - 06:00
Display Headline
Interventions can ease insomnia in cancer patients

Sleeping woman (Credit: RelaxingMusic)

A new study suggests cancer patients struggling with insomnia can choose between 2 behavioral interventions to obtain relief: cognitive behavioral therapy for insomnia (CBT-I) and mindfulness-based stress reduction (MBSR).

CBT-I is the gold standard of care, but the research showed that MBSR can also help improve sleep for cancer patients.

CBT-I involves stimulus control, sleep restriction, cognitive therapy, and relaxation training. When combined, these strategies target and reduce sleep-related physiologic and cognitive arousal to re-establish restorative sleep.

MBSR provides patients with psychoeducation on the relationship between stress and health. It also employs meditation techniques and gentle yoga to support mindful awareness and help patients respond better to stress.

Previous research has shown that MBSR can reduce distress and improve psychological well-being in patients with cancer. But this is the first study to directly compare MBSR to CBT-I in cancer patients.

The results are published in the Journal of Clinical Oncology.

“Insomnia and disturbed sleep are significant problems that can affect approximately half of all cancer patients,” said lead study author Sheila Garland, PhD, of the Abramson Cancer Center at the University of Pennsylvania in Philadelphia.

“If not properly addressed, sleep disturbances can negatively influence therapeutic and supportive care measures for these patients, so it’s critical that clinicians can offer patients reliable, effective, and tailored interventions.”

With this in mind, Dr Garland and her colleagues tested behavioral interventions for insomnia in 111 patients recruited from a cancer center in Calgary, Alberta, Canada. Patients were randomized to either a CBT-I program (n=47) or an MBSR program (n=64) for 8 weeks.

Thirty-two patients completed the CBT-I program, and 40 completed the MBSR program. The researchers assessed patients immediately after program completion (at 2 months) and at 5 months from baseline.

Immediately after the programs ended, MBSR did not demonstrate noninferiority to CBT-I for improving insomnia severity (P=0.35). By the 5-month follow-up, however, MBSR proved noninferior to CBT-I (P=0.02).
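Both P values come from noninferiority testing, which asks whether one treatment is at most a fixed margin worse than the other rather than whether the treatments differ. A minimal sketch of such a one-sided test, using hypothetical mean differences, standard error, and margin (none taken from the study), shows why a large P value means "noninferiority not shown" rather than "no difference":

```python
from statistics import NormalDist

def noninferiority_p(diff, se, margin):
    """One-sided z-test of H0: the new treatment is worse than the
    reference by at least `margin` points (higher score = worse, as
    with insomnia severity). `diff` is mean_new - mean_ref.
    A small P value rejects H0, supporting noninferiority."""
    z = (diff - margin) / se
    return NormalDist().cdf(z)

# Hypothetical numbers for illustration: a 4-point margin.
early = noninferiority_p(diff=3.5, se=1.0, margin=4.0)  # ~0.31: noninferiority not shown
late = noninferiority_p(diff=0.5, se=1.0, margin=4.0)   # ~0.0002: noninferior
print(round(early, 2), round(late, 5))
```

A shrinking gap between treatments over time, as in the study, moves the result from the first case to the second.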

Patients in the CBT-I group showed greater overall improvement in subjectively measured sleep onset latency, sleep efficiency, sleep quality, and dysfunctional sleep beliefs than patients in the MBSR group.

But both groups showed progressive improvement over time when it came to subjectively measured total sleep time, wake after sleep onset, stress, and mood disturbance.

“That MBSR can produce similar improvements to CBT-I and that both [interventions] can effectively reduce stress and mood disturbance expands the available treatment options for insomnia in cancer patients,” Dr Garland said.

“This study suggests that we should not apply a ‘one-size-fits-all model’ to the treatment of insomnia and emphasizes the need to individualize treatment based on patient characteristics and preferences.”



Controlling cells after transplant

Article Type
Changed
Tue, 01/14/2014 - 06:00
Display Headline
Controlling cells after transplant

Cell culture in a tiny petri dish (Credit: Umberto Salvagnin)

Scientists say they have devised a method for engineering cells that are more easily controlled after transplantation.

The team loaded cells with microparticles that release phenotype-altering agents for days to weeks after transplantation.

With this method, the researchers were able to control cells’ secretome, viability, proliferation, and differentiation. The approach was also successful in delivering drugs and other factors to the cell’s microenvironment.

The scientists described this method in Nature Protocols.

They provided step-by-step instructions for generating micrometer-sized agent-doped poly(lactic-co-glycolic) acid (PLGA) particles using a single-emulsion evaporation technique, engineering cultured cells, and confirming particle internalization.

“Once those particles are internalized into the cells, which can take on the order of 6 to 24 hours, we can deliver the transplant immediately or even cryopreserve the cells,” said study author Jeffrey Karp, PhD, of the Harvard Stem Cell Institute in Cambridge, Massachusetts.

“When the cells are thawed at the patient’s bedside, they can be administered, and the agents will start to be released inside the cells to control differentiation, immune modulation, or matrix production, for example.”

Of course, it could take more than a decade for this type of cell therapy to become common medical practice. But Dr Karp and his colleagues detailed this research in Nature Protocols to encourage others in the scientific community to use the technique and accelerate the pace of this research.

The team’s paper shows the range of different cell types that can be particle-engineered, including stem cells, immune cells, and pancreatic cells.

“With this versatile platform . . . , we’ve demonstrated the ability to track cells in the body, control stem cell differentiation, and even change the way cells interact with immune cells,” said study author James Ankrum, PhD, who was a graduate student in Dr Karp’s lab when this research was conducted but is now at the University of Minnesota in Minneapolis.

“We’re excited to see what applications other researchers will imagine using this platform.”



Genetic events drive ALL subtype

Article Type
Changed
Tue, 01/14/2014 - 06:00
Display Headline
Genetic events drive ALL subtype

Bone marrow smear from a patient with ALL

Investigators have identified the genetic events leading to leukemic transformation in ETV6-RUNX1 acute lymphoblastic leukemia (ALL), according to a paper published in Nature Genetics.

Previous studies have shown that, for 1 in 4 ALL patients, a key factor driving the disease is a chromosomal translocation that creates the ETV6-RUNX1 fusion gene.

However, the gene cannot cause overt leukemia on its own. Additional mutations are required for ALL to develop.

In this study, researchers found that RAG proteins—which rearrange the genome in normal immune cells to generate antibody diversity—can also rearrange the DNA of genes involved in cancer.

And this leads to ALL in individuals with the ETV6-RUNX1 fusion gene.

“For the first time, we see the combined events that are driving this treatable but highly devastating disease,” said lead study author Elli Papaemmanuil, PhD, of the Wellcome Trust Sanger Institute in Hinxton, UK.

“We now have a better understanding of the natural history of this disease and the critical events—from the initial acquisition of the fusion ETV6-RUNX1 to the sequential acquisition of RAG-mediated genome alterations—that ultimately result in this childhood leukemia.”

To make this discovery, the investigators sequenced the genomes of 57 ALL patients with the fusion gene. The team found that genomic rearrangements, and deletions in particular, were the predominant drivers of leukemia.

All samples showed evidence of events involving the RAG proteins. The proteins use a unique sequence of DNA letters as a signpost to direct them to antibody regions.

The researchers discovered that remnants of this sequence lay close to more than 50% of the cancer-driving genetic rearrangements. And this process often prompted the loss of the very genes required for normal immune cell development.

It is the deletion of these genes that, in combination with the fusion gene, leads to ALL, the investigators said. And the genetic signature linking the RAG proteins to genomic instability is not found in other types of leukemia or other common cancers.

“In this childhood leukemia, we see that the very process required to make normal antibodies is co-opted by the leukemia cells to knock out other genes with unprecedented specificity,” said Peter Campbell, PhD, also of the Wellcome Trust Sanger Institute.

To better understand the events that led to ALL development, the researchers used single-cell genomics to analyze samples from 2 patients. The team found that the cancer-causing process they identified occurs many times and results in continuous diversification of the leukemia.

“It may seem surprising that evolution should have provided a mechanism for diversifying antibodies that can collaterally damage genes that then contribute to cancer,” said Mel Greaves, PhD, of The Institute of Cancer Research in London, UK.

“But this only happens because the fusion gene that initiates the disease ‘traps’ cells in a normally very transient window of cell development where the RAG enzymes are active, teasing out their imperfect specificity.”

The researchers are now planning to investigate how the RAG-mediated genomic instability accrues in cells with the ETV6-RUNX1 fusion gene and what role this process plays in patients who relapse.


Display Headline
Genetic events drive ALL subtype

Antipsychotic drug is active against T-ALL

Article Type
Changed
Mon, 01/13/2014 - 07:00
Display Headline
Antipsychotic drug is active against T-ALL

Genetically modified zebrafish

Experiments in zebrafish have shown that a 50-year-old antipsychotic medication called perphenazine can actively combat T-cell acute lymphoblastic leukemia (T-ALL).

The drug works by turning on a cancer-suppressing enzyme called PP2A and causing malignant tumor cells to self-destruct.

The findings suggest that developing medications that activate PP2A, while avoiding perphenazine’s psychotropic effects, could help clinicians make much-needed headway against T-ALL and perhaps other tumors as well.

Alejandro Gutierrez, MD, of the Dana-Farber Cancer Institute in Boston, and his colleagues detailed this research in The Journal of Clinical Investigation.

The researchers screened a library of 4880 compounds—including FDA-approved drugs whose patents had expired, small molecules, and natural products—in a model of T-ALL engineered using zebrafish.

One of the strongest hits in the zebrafish screen was perphenazine. The drug is a member of the phenothiazine family of antipsychotic medications, which can block dopamine receptors.

The investigators verified perphenazine’s anti-leukemic potential in vitro in several mouse and human T-ALL cell lines. Biochemical studies indicated that perphenazine’s anti-tumor activity is independent of its psychotropic activity and that it attacks T-ALL cells by turning on PP2A.

The fact that perphenazine works by reactivating a protein shut down in cancer cells is novel in the drug development field.

“We rarely find potential drug molecules that activate an enzyme,” Dr Gutierrez explained. “Most new drugs deactivate some protein or signal that the cancer cell requires to survive. But, here, perphenazine is restoring the activity of PP2A in the T-ALL cell.”

The researchers are now working to better understand the interactions between PP2A and perphenazine. They also want to search for or develop molecules that bind to and activate the enzyme more tightly and specifically to avoid perphenazine’s psychiatric effects.

“The challenge is to use medicinal chemistry to develop new PP2A activators similar to perphenazine and the other phenothiazines, but to dial down dopamine interactions and accentuate those with PP2A,” said study author A. Thomas Look, MD, also of Dana-Farber.

He added that future PP2A activators could be important additions to the oncologist’s arsenal. When used in combination with other drugs, the activators might “make a real difference” for patients with T-ALL.

The investigators also believe the benefits of PP2A-activating drugs could extend beyond T-ALL.

“The proteins that PP2A suppresses, such as Myc and Akt, are involved in many tumors,” Dr Look noted. “We are optimistic that PP2A activators will have quite broad activity against different kinds of cancer, and we’re anxious to study the pathway in other malignancies as well.”



Deaths from leukemia, NHL declining in the UK

Article Type
Changed
Mon, 01/13/2014 - 06:00
Display Headline
Deaths from leukemia, NHL declining in the UK

Doctor talks with cancer patient

Credit: National Cancer Institute-Mathews Media Group

Deaths from leukemia and non-Hodgkin lymphoma (NHL) are on the decline in the UK, but these malignancies are still among the leading causes of cancer death, a new analysis suggests.

Leukemia and NHL are among the 10 most common causes of cancer death for men and women in the UK, according to data from 2011.

But deaths from these malignancies have decreased from the number of deaths seen in the early 2000s.

These findings, published on the Cancer Research UK website, are similar to the results of a recent report on cancer deaths in the US.

The Cancer Research UK analysis showed that the death rate from cancer has dropped by more than a fifth since the 1990s.

In 1990, 220 in every 100,000 people died of cancer. But by 2011, the death rate had fallen 22%—to 170 per 100,000 people. The cancer mortality rate fell by 20% for women and 26% for men.
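The quoted percentage follows directly from the two rates above; as a quick arithmetic sketch (using only the per-100,000 figures reported in this article):

```python
# Check the reported fall in the UK cancer death rate
# using the figures quoted above (deaths per 100,000 people).
rate_1990 = 220  # deaths per 100,000 in 1990
rate_2011 = 170  # deaths per 100,000 by 2011

decline = (rate_1990 - rate_2011) / rate_1990
print(f"Relative decline: {decline:.1%}")  # 22.7%, i.e. "more than a fifth"
```

Rounded to the nearest whole percent, this matches the 22% fall cited in the analysis.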

“Today, cancer is not the death sentence people once believed it to be,” said Harpal Kumar, Cancer Research UK chief executive.

“As these new figures show, mortality rates from this much-feared disease are dropping significantly . . . . But while we’re heading in the right direction, too many lives are still being lost to the disease, highlighting how much more work there is to do.”

NHL and leukemia stats

The analysis showed that, in men, the 3-year mortality rate for NHL decreased by 16% from 2000-2002 to 2009-2011. And the 3-year mortality rate for leukemia decreased by 6%.

In women, the 3-year mortality rate for NHL decreased by 18% from 2000-2002 to 2009-2011. And the 3-year mortality rate for leukemia decreased by 9%.

But the 2011 data showed that both types of cancer are among the 10 most common causes of cancer death in both men and women.

Among women, 2156 patients died of NHL (7th leading cause of cancer death), and 1994 patients died of leukemia (8th leading cause).

Among men, 2609 patients died of leukemia (8th leading cause of cancer death), and 2490 died of NHL (10th leading cause).

For more details on cancer mortality, including projections up to the year 2030, visit the Cancer Research UK website.



Benefit of Teamwork Training

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Making the potential benefit of teamwork training a reality

Teamwork is tightly linked to patient safety for hospitalized patients. Barriers to teamwork in hospital settings abound, including large team sizes and dynamic team membership because of the need to provide care 24 hours a day, 7 days a week. Team members are often dispersed across clinical service areas and care for multiple patients at the same time. Compounding the potential for these structural barriers to impede teamwork, professionals seldom receive any formal training to enhance teamwork skills, and students and trainees have relatively few interactions during their formative years with individuals outside of their own profession. In this issue of the Journal of Hospital Medicine, Tofil et al. describe the effect of a novel interprofessional training program to improve teamwork among medical and nursing students at the University of Alabama.[1] The curriculum included four 1‐hour simulation sessions and resulted in improved ratings of self‐efficacy with communication and teamwork attitudes. The authors report that the curriculum has continued and expanded to include other health professionals.

Beyond the short‐term results, the curriculum developed by Tofil and colleagues may have lasting effects on individual participants. Students, exposed to one another during a particularly impressionable period of their professional development, may develop better appreciation for the priorities, responsibilities, needs, and expertise of others. The experience may inoculate them from adopting unfavorable behaviors and attitudes that are common among practicing clinicians and comprise the hidden curriculum, which often undermines the goals of the formal curriculum.[2] An early, positive experience with other team members may be especially important for medical students, as physicians tend to be relatively unaware of deficiencies in interprofessional collaboration.[3]

Though undoubtedly valuable to the learners and contributing to our collective knowledge on the subject, the study by Tofil and colleagues includes limitations common to teamwork training curricula.[4] To make the potential of teamwork training a reality in improving patient outcomes, we must first revisit some key teamwork concepts and principles of curriculum development. Baker and colleagues define a team as consisting of 2 or more individuals, who have specific roles, perform interdependent tasks, are adaptable, and share a common goal.[5] For a team to be successful, individual team members must have specific knowledge, skills, and attitudes (ie, competencies).[6] For team training curricula to be successful, existing frameworks like TeamSTEPPS (Team Strategies and Tools to Enhance Performance and Patient Safety) should be used to define learning objectives.[7] Because teamwork is largely behavioral and affective, simulation is the most appropriate instruction method. Simulation involves deliberate practice and expert feedback so that learners can iteratively enhance teamwork skills. Other instructional methods (eg, didactics, video observation and debriefing, brief role play without feedback) are too weak to be effective.

Importantly, Tofil and colleagues used an accepted teamwork framework to develop learning objectives, simulation as the instructional method, and an interprofessional team (ie, a physician, nurse, and an adult learning professional with simulation expertise) to perform simulation debriefings. However, for team training to achieve its full potential, leaders of future efforts need to aim for higher level outcomes. Positive reactions are encouraging, but what we really want to know is that learners truly adopted new skills and attitudes, applied them in real‐world clinical settings, and that patients benefited from them. These are high but achievable goals and absolutely necessary to advance the credibility of team training. Relatively few studies have evaluated the impact of team training on patient outcomes, and the available evidence is equivocal.[8, 9] The intensity and duration of deliberate practice during simulation exercises must be sufficient to change ingrained behaviors and to ensure transfer of enhanced skills to the clinical setting if our goal is to improve patient outcomes.

Leaders of future efforts must also develop innovative simulation exercises that reflect the real‐life challenges and contexts for medical teamwork including dispersion of team members, challenges of communication in hierarchical teams, and competing demands under increasing time pressure. Simulated communication events could include a nurse deciding whether and how to contact a physician not immediately present (and vice versa). Sessions should include interruptions and require participants to multitask to replicate the clinical environment. Notably, simulation exercises provide an opportunity for assessment using a behaviorally anchored rating scale, which is often impractical in real clinical settings because team members are seldom in the same place at the same time. Booster simulation sessions should be provided to ensure skills do not decay over time. In situ simulation (ie, simulation events in the real clinical setting) offers the ability to reveal latent conditions impeding the efficiency or quality of communication among team members.

Most importantly, simulation‐based teamwork training must be combined with system redesign and improvement. Enhanced communication skills will only go so far if team members never have a chance to use them. Leaders should work with their hospitals to remove systemic barriers to teamwork. Opportunities for improvement include geographic localization of physicians, assigning patients to nurses to maximize homogeneity of team members, optimizing interprofessional rounds, and leveraging information and communication technologies. Simulation training should be seen as a complement to these interventions rather than a substitute.

Challenges to teamwork are multifactorial and therefore require multifaceted interventions. Simulation is essential to enhance teamwork skills and attitudes. For efforts to translate into improved patient outcomes, leaders must use innovative approaches and combine simulation training with system redesign and improvement.

References
  1. Tofil NM, Morris JL, Peterson DT, et al. Interprofessional simulation training improves knowledge and teamwork in nursing and medical students during internal medicine clerkship. J Hosp Med. 2014;9(3):189-192.
  2. Hafferty FW. Beyond curriculum reform: confronting medicine's hidden curriculum. Acad Med. 1998;73(4):403-407.
  3. O'Leary KJ, Ritter CD, Wheeler H, Szekendi MK, Brinton TS, Williams MV. Teamwork on inpatient medical units: assessing attitudes and barriers. Qual Saf Health Care. 2010;19(2):117-121.
  4. McGaghie WC, Eppich WJ, O'Leary KJ. Contributions of simulation-based training to teamwork. In: Baker DP, Battles JB, King HB, Wears RL, eds. Improving Patient Safety Through Teamwork and Team Training. New York, NY: Oxford University Press; 2013:218-227.
  5. Baker DP, Day R, Salas E. Teamwork as an essential component of high-reliability organizations. Health Serv Res. 2006;41(4 pt 2):1576-1598.
  6. Baker DP, Salas E, King H, Battles J, Barach P. The role of teamwork in the professional education of physicians: current status and assessment recommendations. Jt Comm J Qual Patient Saf. 2005;31(4):185-202.
  7. King HB, Battles J, Baker DP, et al. TeamSTEPPS: Team Strategies and Tools to Enhance Performance and Patient Safety. In: Advances in Patient Safety: New Directions and Alternative Approaches. Vol 3: Performance and Tools. Rockville, MD: Agency for Healthcare Research and Quality; 2008.
  8. Auerbach AD, Sehgal NL, Blegen MA, et al. Effects of a multicentre teamwork and communication programme on patient outcomes: results from the Triad for Optimal Patient Safety (TOPS) project. BMJ Qual Saf. 2012;21(2):118-126.
  9. Schmidt E, Goldhaber-Fiebert SN, Ho LA, McDonald KM. Simulation exercises as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158(5 pt 2):426-432.
Issue
Journal of Hospital Medicine - 9(3)
Page Number
201-202
Display Headline
Making the potential benefit of teamwork training a reality
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Kevin J. O'Leary, MD, Division of Hospital Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL 60611; Telephone: 312‐926‐5924; Fax: 312‐926‐4588; E‐mail: [email protected]

Interprofessional IM Simulation Course

Display Headline
Interprofessional simulation training improves knowledge and teamwork in nursing and medical students during internal medicine clerkship

Medical simulation is an effective tool in teaching health professions students.[1] It allows a wide range of experiences to be practiced including rare but crucial cases, skills training, counseling cases, and integrative medical cases.[2, 3, 4, 5, 6] Simulation also allows healthcare professionals to work and learn side by side as they do in actual patient‐care situations.

Previous studies have confirmed the effectiveness of high‐fidelity simulation in improving nursing students' and medical students' knowledge and communication skills.[7, 8, 9, 10, 11] However, few curricula are designed so that different professions learn together. Robertson et al. found that a simulation curriculum combined with a modified Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) program improved nursing students' and medical students' communication skills, including identification of effective team skills and attitudes toward working together as a team.[12] Stewart et al. similarly found that communication, teamwork skills, and knowledge were improved when nursing students and medical students trained together using pediatric simulation.[13] We hypothesized that simulation training would improve both nursing students' and medical students' medical knowledge, communication skills, and understanding of each profession's role in patient care.

METHODS

Aligning with the University of Alabama at Birmingham School of Medicine calendar, starting in May 2011, weekly simulations were introduced to the current curriculum of the 8‐week internal medicine clerkship for third‐year medical students. Due to differences in academic calendars, the senior nursing students did not start on a recurring basis until July 2011. The first two months served as a pilot phase to assess the validity of the pre‐ and post‐tests as well as the simulation scenarios. Data from this period were used for quality purposes and not in the final data analysis. Data were collected for this study from July 2011 through April 2012. The institutional review board of the University of Alabama at Birmingham approved this study.

Third‐year School of Medicine (SOM) students and senior baccalaureate nursing students participated in four every‐other‐week, 1‐hour simulation sessions during the medical students' 8‐week internal medicine clerkship. Each scenario's participants consisted of three nursing students and five or six medical students, with five or six additional medical students observing in the control room. All students participated in the debriefing. Each cohort worked together for the four scenarios in an attempt to build camaraderie over time. Scenarios occurred over approximately 20 minutes, with the remaining 40 minutes used for debriefing. Our debriefing model was debriefing with good judgment, using advocacy‐inquiry questioning,[14] and each scenario's debriefers included at least one physician, one nurse, and one adult learning professional with simulation expertise. All debriefing sessions started with reactions, followed by an exploration phase and finally a summary phase. Debriefings were guided by a debriefing script highlighting key teaching points. TeamSTEPPS was used as the structure of team‐based learning.

Scenarios included acute myocardial infarction, pancreatitis with hyperkalemia, upper gastrointestinal bleed, and chronic obstructive pulmonary disease exacerbation with an allow‐natural‐death order. Learning objectives for each case focused on teamwork and communication as well as exploring the differential diagnosis. For each scenario, physical exam findings, laboratory results, radiographs, and electrocardiogram results were developed and reviewed by experts for clarity and accuracy. All cases were programmed utilizing Laerdal programming software and the SimMan Essential mannequin (Laerdal Medical Corp., Wappingers Falls, NY). All scenarios occurred in a simulated emergency department room for patients being admitted to the inpatient internal medicine service.

Identical pre‐ and post‐tests were given to medical and nursing students. Case‐specific knowledge was assessed with multiple choice items. Self‐efficacy related to professional roles and attitudes toward team communication were each assessed with a 6‐item evaluation using anchored 5‐point Likert response scales (see Supporting Information, Table 1, in the online version of this article). Self‐efficacy items formed a scale, whereas attitude items assessed individual dimensions. These measures were pilot tested with 34 matched pre‐ and post‐tests from medical and nursing students. Pilot data were only for quality purposes and are not in the final data analysis.

Pre‐ and Post‐test Results for School of Medicine and School of Nursing Students Completing 4‐Session Simulation Block

| Measure | Medicine (n=72), Pretest | Medicine, Post‐test | P Value | Nursing (n=28), Pretest | Nursing, Post‐test | P Value |
|---|---|---|---|---|---|---|
| Knowledge, mean ± SD | 53% ± 17% | 70% ± 15% | <0.0001 | 32% ± 15% | 43% ± 16% | 0.003 |
| Communication self‐efficacy, mean (SD), range 0–30 | 18.9 (3.3) | 23.7 (3.7) | <0.0001 | 19.6 (2.7) | 24.5 (2.5) | <0.0001 |
| Attitudes | | | | | | |
| Working well in a medical team is a crucial part of my job. | 100%, n=72 | 97%, n=69 | NC | 100%, n=28 | 100%, n=28 | NC |
| In an emergency situation, patient care is more important than patient safety. | 25%, n=18 | 25%, n=18 | 0.025 | 21%, n=6 | 29%, n=8 | 0.032 |
| In an emergency situation, providing immediate care is more important than assigning medical team roles. | 35%, n=25 | 29%, n=21 | 0.067 | 39%, n=11 | 36%, n=10 | 0.340 |
| Closing the loop in communication is important even when it slows down patient care. | 67%, n=48 | 80%, n=58 | 0.005 | 54%, n=15 | 79%, n=22 | 0.212 |
| The highest ranking physician has the most important role on the medical team. | 33%, n=24 | 26%, n=19 | <0.0001 | 0%, n=0 | 4%, n=1 | 0.836 |
| Multidisciplinary care, where each team member is responsible for their area of expertise, is more productive than cross‐integrated care where roles are less defined. | 63%, n=45 | 71%, n=51 | 0.037 | 68%, n=19 | 71%, n=20 | 0.827 |

NOTE: For attitude items, each cell presents the proportion of learners that responded Agree or Strongly Agree. Abbreviations: Medicine = School of Medicine; NC = not computed due to limited variance; Nursing = School of Nursing; SD = standard deviation.

The self‐efficacy scale was examined for clarity and discrimination with Cronbach's α. Individual attitudes were examined for response variation. Knowledge questions were examined for evidence of change. Two questions were dropped from the pilot measure (1 for inappropriate material given the case and 1 for ceiling scores at pretest), and one question was reworded to include ethics, resulting in the final version of the pretest. This pretest was completed at the medical student clerkship orientation and the nursing student introduction prior to any simulation scenario. After each debriefing, all students completed an anonymous evaluation survey about the simulation and debriefing consisting of nine questions with a 5‐point Likert response scale. The survey also included open‐ended questions related to the simulation's effectiveness and areas for improvement. At the end of the 8‐week clerkship, after the final scenario, the post‐test and postcourse surveys were completed. All data were anonymous but coded with unique ID numbers to allow for comparing individual change in scores.
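The pilot item screening described above (dropping questions for ceiling scores or poor discrimination) can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' actual procedure: the 90% ceiling threshold and the alpha‐if‐item‐deleted heuristic are assumptions, and the example score matrix is fabricated.

```python
import numpy as np

def alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def screen_items(pretest, max_score=5, ceiling_frac=0.9):
    """Flag candidate items to drop: items near the response ceiling at
    pretest, or items whose removal raises alpha (poor discrimination).
    Thresholds are illustrative, not the study's actual criteria."""
    flags = {}
    for j in range(pretest.shape[1]):
        at_ceiling = (pretest[:, j] >= max_score).mean() >= ceiling_frac
        rest = np.delete(pretest, j, axis=1)
        improves_alpha = alpha(rest) > alpha(pretest)
        if at_ceiling or improves_alpha:
            flags[j] = "ceiling" if at_ceiling else "low discrimination"
    return flags

# Fabricated example: three parallel items plus one item stuck at the ceiling.
base = np.array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5], dtype=float)
pretest = np.column_stack([base, base, base, np.full(10, 5.0)])
flags = screen_items(pretest)  # only item 3 is flagged (ceiling)
```

Dropping flagged items and recomputing α mirrors the scale refinement done with the 34 matched pilot tests.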

Statistics

Quantitative statistical analysis was performed using SPSS version 21.0 (SPSS Inc., Chicago, IL). All tests were 2‐tailed, with significance set at P=0.05. Paired t tests were used to determine differences between pre‐ and post‐test self‐efficacy for participants. A series of attitudinal statements were examined with χ2 tests; response categories were collapsed due to sparse n in some cells (strongly agree and somewhat agree = agree; strongly disagree and somewhat disagree = disagree). The self‐efficacy scale was examined for internal consistency with Cronbach's α. Reported knowledge scores are based on percentage correct; self‐efficacy results are reported as a total score for all items.
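As a concrete sketch of these analyses, the snippet below computes a paired t test on total scale scores, Cronbach's α, and a χ2 test on a collapsed 2×2 agree/disagree table. The score data are fabricated for illustration; only the agree/disagree counts are taken from the "closing the loop" row of Table 1 (medicine). The study used SPSS, and `chi2_contingency` treats pretest and post‐test responses as independent samples, so this approximation will not necessarily reproduce the reported P values.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Fabricated pre/post scores for a 6-item, 0-5 Likert scale (total range 0-30),
# drawn from a shared latent trait so the items are internally consistent.
rng = np.random.default_rng(42)
latent = rng.normal(3.0, 0.8, size=(30, 1))
pre = np.clip(np.round(latent + rng.normal(0, 0.5, size=(30, 6))), 0, 5)
post = np.clip(pre + rng.integers(0, 2, size=pre.shape), 0, 5)

alpha = cronbach_alpha(pre)                                # internal consistency
t, p = stats.ttest_rel(post.sum(axis=1), pre.sum(axis=1))  # paired t test

# Chi-square on one attitude item after collapsing 5-point responses into
# agree vs. disagree; counts from Table 1 ("closing the loop", medicine).
table = np.array([[48, 24],    # pretest:   agree, disagree
                  [58, 14]])   # post-test: agree, disagree
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
```

The same pattern extends to each attitude item in Table 1 by substituting its agree/disagree counts.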

RESULTS

A total of 108 students, 78 medical students and 30 nursing students, participated in this study. Paired pre‐ and post‐tests available for 72 medical students and 28 nursing students were included in the analyses (Table 1). Knowledge scores improved significantly and similarly for medical students by 9.4% and School of Nursing (SON) students by 10.4%. The self‐efficacy scale (range, 0–30) had moderate to good internal consistency (Cronbach's α ranged from 0.68 [pretest] to 0.82 [post‐test]). Both medical students and nursing students demonstrated significant improvements in the self‐efficacy scale mean scores, with increases of 4.8 points (P<0.0001) and 4.9 points (P<0.0001), respectively. Both medical student and nursing student groups showed the greatest change in confidence to correct another healthcare provider at bedside in a collaborative manner (Δ=0.97 and Δ=1.2, respectively). SOM students showed a large change in confidence to always close the loop in patient care (Δ=0.93), whereas SON students showed a large change in confidence to always figure out their role on a medical team without explicit directions (Δ=1.1).

Results of the postsimulation evaluations indicate that students felt the activity was applicable to their field (mean=4.93/5 medicine, 4.99/5 nursing) and a beneficial educational experience (mean=4.90/5 medicine, 4.95/5 nursing). Among the open‐ended responses, the most frequent positive response for both groups was increased medical knowledge (37% of all medical students' comments, 30% nursing students). An improved sense of teamwork and team communication were the second and third most common positive comments for both groups (17% medicine, 19% nursing and 16% medicine, 15% nursing, respectively). The most commonly recognized area for improvement among medical students was medical knowledge (24%). The most commonly cited area for improvement among nursing students was communication within the team (19%).

DISCUSSION

Immersive interprofessional simulations can be successfully implemented with third‐year medical students and senior nursing students. The participants, regardless of profession, had a significant improvement in clinical knowledge. These participants also improved their attitudes toward interprofessional teamwork and role clarity.

Our results also showed that both groups of students had the greatest improvement in confidence to correct another healthcare provider at bedside in a collaborative manner. The debriefing team consisted of professionals from both nursing and medicine, which allowed for time to be spent on both the knowledge objectives of the case as well as the communication aspects of the team.

Combining learners with equivalent levels of knowledge and hands‐on experience from different professions is challenging and requires early planning. The nursing student participants were in their final of five semesters before completing baccalaureate requirements, and the medical students were in their third of four years of school. This grouping of medical and nursing students worked well. Medical students had more book knowledge, whereas nursing students had more hands‐on experience, such as administering medications and oxygen, but less specific clinical knowledge. Therefore, each group complemented the other.

Although this study was initially funded by an internal grant, the simulation course described in this report is now required for medical students during their internal medicine clerkship and nursing students during their final semester. The course has expanded from one hour each week to two hours each week and now includes eight cases instead of four. Other disciplines such as respiratory therapy and social work are now involved, and the interprofessional debriefing continues to be a part of every case with faculty from each discipline serving as content experts, and a PhD educator serving as the lead debriefer. The expansion of this course was due to faculty from each discipline observing students in action and attending the debriefing to witness the rich discussion that occurs after every case. Faculty who observed the course had the opportunity to talk to learners after the debriefing and get their feedback on the learning experience and on working with other disciplines. These faculty have become champions for simulation education within their own schools and now serve as content experts for the simulations. Aside from developing champions within each discipline and debriefers from each field, another key factor of success was giving nursing students credit for clinical time. This required nursing course directors to rethink their course structure.

The study has several limitations. Knowledge gained during the 2‐month period between the pre‐ and post‐tests was not solely attributable to the simulations. The rise in post‐test results could also indicate that some questions had substantial ceiling effects. This study assessed self‐reported confidence and not qualitative improvements in medical care. Our self‐efficacy and communication surveys were created for this study and have not been previously validated. Our study was also conducted at 1 institution with strong institutional support for both simulation and interprofessional education, and its reproducibility at other institutions is unknown.

CONCLUSIONS

Interprofessional simulation training for nursing and medical students can potentially increase communication self‐efficacy as well as improve team role attitudes. By instituting a high‐fidelity simulation curriculum similar to the one used in this study, students could be exposed to other disciplines and professions in a safe and realistic environment. Further research is needed to demonstrate the effectiveness of interprofessional training in additional areas and to evaluate effects of early interprofessional training on healthcare outcomes.

Disclosures

This study was funded by the Health Services Foundation General Endowment Fund, University of Alabama at Birmingham, Birmingham, Alabama. The abstract only was presented at the 13th Annual International Meeting on Simulation in Healthcare, January 26-30, 2013, Orlando, Florida. No author has any conflict of interest or financial disclosures except Dr. Tofil, who was reimbursed by Laerdal for travel expenses for a Laerdal‐sponsored meeting in the fall of 2011 and 2013 while giving an independently produced lecture on pediatric simulation. No fees were paid.

Files
References
  1. Cook DA, Hatala R, Brydges R, et al. Technology‐enhanced simulation for health professions education: a systematic review and meta‐analysis. JAMA. 2011;306(9):978-988.
  2. Tofil NM, Manzella B, McGill D, Zinkan JL, White ML. Initiation of a mock code program at a children's hospital. Med Teach. 2009;31(6):e241-e247.
  3. Andreatta P, Saxton E, Thompson M, et al. Simulation‐based mock codes significantly correlate with improved patient cardiopulmonary arrest survival rates. Pediatr Crit Care Med. 2011;12(1):33-38.
  4. Brim NM, Venkatan SK, Gordon JA, Alexander EK. Long‐term educational impact of a simulator curriculum on medical student education in an internal medicine clerkship. Simul Healthc. 2010;5:75-81.
  5. Halm BM, Lee MT, Franke AA. Improving medical student toxicology knowledge and self‐confidence using mannequin simulation. Hawaii Med J. 2010;69:4-7.
  6. Morgan PJ, Cleave‐Hogg D, McIlroy J, Devitt JH. Simulation technology: a comparison of experiential and visual learning for undergraduate medical students. Anesthesiology. 2002;96:10-16.
  7. Alinier G, Hunt B, Gordon R, Harwood C. Effectiveness of intermediate‐fidelity simulation training technology in undergraduate nursing education. J Adv Nurs. 2006;54(3):359-369.
  8. Chakravarthy B, Ter Haar E, Bhat SS, McCoy CE, Denmark TK, Lotfipour S. Simulation in medical school education: review for emergency medicine. West J Emerg Med. 2011;12(4):461-466.
  9. Sanko J, Shekhter I, Rosen L, Arheart K, Birnbach D. Man versus machine: the preferred modality. Clin Teach. 2012;9(6):387-391.
  10. Littlewood KE, Shilling AM, Stemland CJ, Wright EB, Kirk MA. High‐fidelity simulation is superior to case‐based discussion in teaching the management of shock. Med Teach. 2013;35(3):e1003-e1010.
  11. McGregor CA, Paton C, Thomson C, Chandratilake M, Scott H. Preparing medical students for clinical decision making: a pilot study exploring how students make decisions and the perceived impact of a clinical decision making teaching intervention. Med Teach. 2012;34(7):e508-e517.
  12. Robertson B, Kaplan B, Atallah H, Higgins M, Lewitt MJ, Ander DS. The use of simulation and a modified TeamSTEPPS curriculum for medical and nursing student team training. Simul Healthc. 2010;5(6):332-337.
  13. Stewart M, Kennedy N, Cuene‐Grandidier H. Undergraduate interprofessional education using high‐fidelity paediatric simulation. Clin Teach. 2010;7(2):90-96.
  14. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25(2):361-376.
Journal of Hospital Medicine - 9(3), 189-192

Medical simulation is an effective tool in teaching health professions students.[1] It allows a wide range of experiences to be practiced including rare but crucial cases, skills training, counseling cases, and integrative medical cases.[2, 3, 4, 5, 6] Simulation also allows healthcare professionals to work and learn side by side as they do in actual patient‐care situations.

Previous studies have confirmed the effectiveness of high‐fidelity simulation in improving nursing students' and medical students' knowledge and communication skills.[7, 8, 9, 10, 11] However, only a few curricula are designed so that different professions learn together. Robertson et al. found that a simulation and modified Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) curriculum was successful in improving nursing students' and medical students' communication skills, including an improvement in identification of effective team skills and attitudes toward working together as a team.[12] Stewart et al. also found that communication, teamwork skills, and knowledge improved when nursing students and medical students trained together using pediatric simulation.[13] We hypothesized that simulation training would improve both nursing students' and medical students' medical knowledge, communication skills, and understanding of each profession's role in patient care.

METHODS

Aligning with the University of Alabama at Birmingham School of Medicine calendar, starting in May 2011, weekly simulations were introduced to the current curriculum of the 8‐week internal medicine clerkship for third‐year medical students. Due to differences in academic calendars, the senior nursing students did not start on a recurring basis until July 2011. The first two months served as a pilot phase to assess the validity of the pre‐ and post‐tests as well as the simulation scenarios. Data from this period were used for quality purposes and not in the final data analysis. Data were collected for this study from July 2011 through April 2012. The institutional review board of the University of Alabama at Birmingham approved this study.

Third‐year School of Medicine (SOM) students and senior baccalaureate nursing students participated in four every‐other‐week, 1‐hour simulation sessions during the medical students' 8‐week internal medicine clerkship. Each scenario's participants consisted of three nursing students and five or six medical students, with five or six additional medical students observing in the control room. All students participated in the debriefing. Each cohort worked together for the four scenarios in an attempt to build camaraderie over time. Scenarios ran approximately 20 minutes, with the remaining 40 minutes used for debriefing. Our debriefing model was debriefing with good judgment, using advocacy‐inquiry questioning,[14] and each scenario's debriefers included at least one physician, one nurse, and one adult‐learning professional with simulation expertise. All debriefing sessions started with a reactions phase, followed by an exploration phase, and finally a summary phase. Debriefings were guided by a script highlighting key teaching points. TeamSTEPPS was used as the structure for team‐based learning.

Scenarios included acute myocardial infarction, pancreatitis with hyperkalemia, upper gastrointestinal bleed, and chronic obstructive pulmonary disease exacerbation with an allow‐natural‐death order. Learning objectives for each case focused on teamwork and communication as well as exploring the differential diagnosis. For each scenario, physical exam findings, laboratory results, radiographs, and electrocardiogram results were developed and reviewed by experts for clarity and accuracy. All cases were programmed using Laerdal programming software and the SimMan Essential mannequin (Laerdal Medical Corp., Wappingers Falls, NY). All scenarios occurred in a simulated emergency department room for patients being admitted to the inpatient internal medicine service.

Identical pre‐ and post‐tests were given to medical and nursing students. Case‐specific knowledge was assessed with multiple choice items. Self‐efficacy related to professional roles and attitudes toward team communication were each assessed with a 6‐item evaluation using anchored 5‐point Likert response scales (see Supporting Information, Table 1, in the online version of this article). Self‐efficacy items formed a scale, whereas attitude items assessed individual dimensions. These measures were pilot tested with 34 matched pre‐ and post‐tests from medical and nursing students. Pilot data were only for quality purposes and are not in the final data analysis.

Pre‐ and Post‐test Results for School of Medicine and School of Nursing Students Completing 4‐Session Simulation Block

Each row lists, for Medicine (n=72) and then Nursing (n=28): Pretest, Post‐test, P Value.

Knowledge, mean ± SD: 53% ± 17%, 70% ± 15%, P<0.0001; 32% ± 15%, 43% ± 16%, P=0.003
Communication self‐efficacy, mean (SD), range 0-30: 18.9 (3.3), 23.7 (3.7), P<0.0001; 19.6 (2.7), 24.5 (2.5), P<0.0001
Attitudes:
  Working well in a medical team is a crucial part of my job. 100% (n=72), 97% (n=69), NC; 100% (n=28), 100% (n=28), NC
  In an emergency situation, patient care is more important than patient safety. 25% (n=18), 25% (n=18), P=0.025; 21% (n=6), 29% (n=8), P=0.032
  In an emergency situation, providing immediate care is more important than assigning medical team roles. 35% (n=25), 29% (n=21), P=0.067; 39% (n=11), 36% (n=10), P=0.340
  Closing the loop in communication is important even when it slows down patient care. 67% (n=48), 80% (n=58), P=0.005; 54% (n=15), 79% (n=22), P=0.212
  The highest ranking physician has the most important role on the medical team. 33% (n=24), 26% (n=19), P<0.0001; 0% (n=0), 4% (n=1), P=0.836
  Multidisciplinary care, where each team member is responsible for their area of expertise, is more productive than cross‐integrated care where roles are less defined. 63% (n=45), 71% (n=51), P=0.037; 68% (n=19), 71% (n=20), P=0.827

NOTE: For attitude items, each cell presents the proportion of learners who responded Agree or Strongly Agree. Abbreviations: Medicine = School of Medicine; NC = not computed due to limited variance; Nursing = School of Nursing; SD = standard deviation.

The self‐efficacy scale was examined for clarity and discrimination with Cronbach's α. Individual attitudes were examined for response variation. Knowledge questions were examined for evidence of change. Two questions were dropped from the pilot measure (one for inappropriate material given the case and one for ceiling scores at pretest), and one question was reworded to include ethics, resulting in the final version of the pretest. This pretest was completed at the medical student clerkship orientation and the nursing student introduction, prior to any simulation scenario. After each debriefing, all students completed an anonymous evaluation survey about the simulation and debriefing, consisting of nine questions with a 5‐point Likert response scale. The survey also included open‐ended questions related to the simulation's effectiveness and areas for improvement. At the end of the 8‐week clerkship, after the final scenario, the post‐test and postcourse surveys were completed. All data were anonymous but coded with unique ID numbers to allow for comparing individual change in scores.

Statistics

Quantitative statistical analysis was performed using SPSS version 21.0 (SPSS Inc., Chicago, IL). All tests were 2‐tailed, with significance set at P=0.05. Paired t tests were used to determine differences between pre‐ and post‐test self‐efficacy for participants. A series of attitudinal statements were examined with χ2 tests; response categories were collapsed due to the sparse n in some cells (strongly agree and somewhat agree = agree; strongly disagree and somewhat disagree = disagree). The self‐efficacy scale was examined for internal consistency with Cronbach's α. Reported knowledge scores are based on percentage correct; self‐efficacy results are reported as a total score for all items.
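The analytic steps above (paired t tests on pre/post self‐efficacy totals, χ2 tests on collapsed agree/disagree counts, and Cronbach's α for internal consistency) can be sketched in a few lines of code. The snippet below is illustrative only, not the authors' analysis: it uses synthetic data with made‐up counts, and scipy stands in for SPSS.

```python
# Illustrative sketch of the study's statistical methods, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired pre/post self-efficacy totals (range 0-30) for 20 learners,
# with a mean gain similar in size to the one reported in the study.
pre = rng.integers(15, 25, size=20).astype(float)
post = pre + rng.normal(4.8, 2.0, size=20)

# Paired (dependent-samples) t test, 2-tailed.
t_stat, p_value = stats.ttest_rel(pre, post)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Six correlated 5-point Likert items for 20 respondents.
base = rng.integers(1, 6, size=(20, 1)).astype(float)
items = np.clip(base + rng.normal(0, 0.8, size=(20, 6)), 1, 5)
alpha = cronbach_alpha(items)

# Chi-square on a 2x2 table of collapsed agree/disagree counts
# (hypothetical pre vs. post counts, not the study's).
table = np.array([[48, 24],
                  [58, 14]])
chi2, p, dof, _ = stats.chi2_contingency(table)
```

Collapsing "strongly agree"/"somewhat agree" into a single agree category, as the authors did, is what keeps the contingency table 2x2 and avoids expected cell counts that are too sparse for a valid χ2 test.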

RESULTS

A total of 108 students, 78 medical students and 30 nursing students, participated in this study. Paired pre‐ and post‐tests available for 72 medical students and 28 nursing students were included in the analyses (Table 1). Knowledge scores improved significantly and similarly for medical students (by 9.4%) and School of Nursing (SON) students (by 10.4%). The self‐efficacy scale (range, 0-30) had moderate to good internal consistency (Cronbach's α ranged from 0.68 [pretest] to 0.82 [post‐test]). Both medical students and nursing students demonstrated significant improvements in mean self‐efficacy scale scores, with increases of 4.8 points (P<0.0001) and 4.9 points (P<0.0001), respectively. Both groups showed the greatest change in confidence to correct another healthcare provider at the bedside in a collaborative manner (Δ=0.97 and Δ=1.2, respectively). SOM students showed a large change in confidence to always close the loop in patient care (Δ=0.93), whereas SON students showed a large change in confidence to always figure out their role on a medical team without explicit directions (Δ=1.1).

Results of the postsimulation evaluations indicate that students felt the activity was applicable to their field (mean=4.93/5 medicine, 4.99/5 nursing) and a beneficial educational experience (mean=4.90/5 medicine, 4.95/5 nursing). Among the open‐ended responses, the most frequent positive response for both groups was increased medical knowledge (37% of all medical students' comments, 30% nursing students). An improved sense of teamwork and team communication were the second and third most common positive comments for both groups (17% medicine, 19% nursing and 16% medicine, 15% nursing, respectively). The most commonly recognized area for improvement among medical students was medical knowledge (24%). The most commonly cited area for improvement among nursing students was communication within the team (19%).

DISCUSSION

Immersive interprofessional simulations can be successfully implemented with third‐year medical students and senior nursing students. The participants, regardless of profession, had a significant improvement in clinical knowledge. These participants also improved their attitudes toward interprofessional teamwork and role clarity.

Our results also showed that both groups of students had the greatest improvement in confidence to correct another healthcare provider at bedside in a collaborative manner. The debriefing team consisted of professionals from both nursing and medicine, which allowed for time to be spent on both the knowledge objectives of the case as well as the communication aspects of the team.

Combining learners with equivalent levels of knowledge and hands‐on experience from different professions is challenging and requires early planning. The nursing student participants were in their final of five semesters before completing baccalaureate requirements, and the medical students were in their third of four years of school. This grouping of medical and nursing students worked well. Medical students had more book knowledge, whereas nursing students had more hands‐on experience, such as administering medications and oxygen, but less specific clinical knowledge. Therefore, each group complemented the other.

Although this study was initially funded by an internal grant, the simulation course described in this report is now required for medical students during their internal medicine clerkship and for nursing students during their final semester. The course has expanded from one hour each week to two and now includes eight cases instead of four. Other disciplines, such as respiratory therapy and social work, are now involved, and interprofessional debriefing continues to be a part of every case, with faculty from each discipline serving as content experts and a PhD educator serving as the lead debriefer. The course expanded because faculty from each discipline observed students in action and attended the debriefings, witnessing the rich discussion that occurs after every case. Faculty who observed the course had the opportunity to talk with learners after the debriefing and get their feedback on the learning experience and on working with other disciplines. These faculty have become champions for simulation education within their own schools and now serve as content experts for the simulations. Aside from developing champions and debriefers within each discipline, another key factor of success was giving nursing students credit for clinical time, which required nursing course directors to rethink their course structure.

The study has several limitations. Knowledge gained during the 2‐month period between the pre‐ and post‐test was not attributable solely to the simulations. The rise in post‐test scores could also indicate that the questions had substantial ceiling effects. This study assessed self‐reported confidence rather than measured improvements in medical care. Our self‐efficacy and communication surveys were created for this study and have not been previously validated. Finally, our study was conducted at a single institution with strong institutional support for both simulation and interprofessional education, and its reproducibility at other institutions is unknown.

CONCLUSIONS

Interprofessional simulation training for nursing and medical students can potentially increase communication self‐efficacy as well as improve team role attitudes. By instituting a high‐fidelity simulation curriculum similar to the one used in this study, students could be exposed to other disciplines and professions in a safe and realistic environment. Further research is needed to demonstrate the effectiveness of interprofessional training in additional areas and to evaluate effects of early interprofessional training on healthcare outcomes.

Disclosures

This study was funded by the Health Services Foundation General Endowment Fund, University of Alabama at Birmingham, Birmingham, Alabama. The abstract only was presented at the 13th Annual International Meeting on Simulation in Healthcare, January 26-30, 2013, Orlando, Florida. No author has any conflict of interest or financial disclosures except Dr. Tofil, who was reimbursed by Laerdal for travel expenses for a Laerdal‐sponsored meeting in the fall of 2011 and 2013 while giving an independently produced lecture on pediatric simulation. No fees were paid.


References
  1. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306(9):978-988.
  2. Tofil NM, Manzella B, McGill D, Zinkan JL, White ML. Initiation of a mock code program at a children's hospital. Med Teach. 2009;31(6):e241-e247.
  3. Andreatta P, Saxton E, Thompson M, et al. Simulation-based mock codes significantly correlate with improved patient cardiopulmonary arrest survival rates. Pediatr Crit Care Med. 2011;12(1):33-38.
  4. Brim NM, Venkatan SK, Gordon JA, Alexander EK. Long-term educational impact of a simulator curriculum on medical student education in an internal medicine clerkship. Simul Healthc. 2010;5:75-81.
  5. Halm BM, Lee MT, Franke AA. Improving medical student toxicology knowledge and self-confidence using mannequin simulation. Hawaii Med J. 2010;69:4-7.
  6. Morgan PJ, Cleave-Hogg D, McIlroy J, Devitt JH. Simulation technology: a comparison of experiential and visual learning for undergraduate medical students. Anesthesiology. 2002;96:10-16.
  7. Alinier G, Hunt B, Gordon R, Harwood C. Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. J Adv Nurs. 2006;54(3):359-369.
  8. Chakravarthy B, Ter Haar E, Bhat SS, McCoy CE, Denmark TK, Lotfipour S. Simulation in medical school education: review for emergency medicine. West J Emerg Med. 2011;12(4):461-466.
  9. Sanko J, Shekhter I, Rosen L, Arheart K, Birnbach D. Man versus machine: the preferred modality. Clin Teach. 2012;9(6):387-391.
  10. Littlewood KE, Shilling AM, Stemland CJ, Wright EB, Kirk MA. High-fidelity simulation is superior to case-based discussion in teaching the management of shock. Med Teach. 2013;35(3):e1003-e1010.
  11. McGregor CA, Paton C, Thomson C, Chandratilake M, Scott H. Preparing medical students for clinical decision making: a pilot study exploring how students make decisions and the perceived impact of a clinical decision making teaching intervention. Med Teach. 2012;34(7):e508-e517.
  12. Robertson B, Kaplan B, Atallah H, Higgins M, Lewitt MJ, Ander DS. The use of simulation and a modified TeamSTEPPS curriculum for medical and nursing student team training. Simul Healthc. 2010;5(6):332-337.
  13. Stewart M, Kennedy N, Cuene-Grandidier H. Undergraduate interprofessional education using high-fidelity paediatric simulation. Clin Teach. 2010;7(2):90-96.
  14. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25(2):361-376.
Issue
Journal of Hospital Medicine - 9(3)
Page Number
189-192
Display Headline
Interprofessional simulation training improves knowledge and teamwork in nursing and medical students during internal medicine clerkship
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Dawn Taylor Peterson, PhD, Department of Pediatrics, University of Alabama at Birmingham, 1600 7th Avenue South, CPP1 Suite 102, Birmingham, AL 35223; Telephone: 205–638‐7535; Fax: 205–638‐2444; E‐mail: [email protected]

Hospital Safety Grade

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Hospital patient safety grades may misrepresent hospital performance

The Institute of Medicine (IOM) reported over a decade ago that between 44,000 and 98,000 deaths occurred every year due to preventable medical errors.[1] The report sparked an intense interest in identifying, measuring, and reporting hospital performance in patient safety.[2] The report also sparked the implementation of many initiatives aiming to improve patient safety.[3] Despite these efforts, there is still much room for improvement in the area of patient safety.[4] As the public has become more aware of patient safety issues, there has been an increased demand for information on hospital safety. The Leapfrog Group, a leading organization that examines and reports on hospital performance in patient safety, cites the IOM report as providing the focus that their newly formed organization required.[5]

Using 26 national measures of safety, The Leapfrog Group calculates a numeric Hospital Safety Score for over 2,600 acute care hospitals in the United States.[6] The primary data used to calculate this score are collected through the Leapfrog Hospital Survey, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services (CMS). The American Hospital Association's (AHA) Annual Survey is used as a secondary data source as necessary. The Leapfrog Group conducts the survey annually, and substantial efforts are put forth to invite hospital administrators to participate in the survey. Participation in the Leapfrog survey is optional and free of charge.

Leapfrog recently moved a step further in its evaluation of hospital safety by releasing the Hidden Surcharge Calculator, which enables employers to estimate the hidden surcharge they pay for their employees and dependents because of hospital errors.[7] The calculation depends largely on the letter grade (A-F) that the hospital received from Leapfrog's Hospital Safety Score. For example, Leapfrog estimated that a commercially insured patient admitted to a hospital with a grade of C or lower would incur $1845 more per admission than if the same patient were admitted to a hospital with a grade of A.[7] The Leapfrog Group encourages employers and payers to use this information to adjust benefits structures so that employees are discouraged from using hospitals that receive lower hospital safety scores. Leapfrog also encourages payers to negotiate lower reimbursement rates for hospitals with lower hospital safety scores.

The accuracy of Leapfrog's hospital safety grades warrants attention because of the methodology used to score hospitals that do not participate in the Leapfrog Survey. One common barrier that prevents hospitals from participating is the amount of effort required to complete the annual survey, including extensive inputs from hospital executives and staff. According to Leapfrog, 4 to 6 days are required for a hospital to compile the necessary survey data.[8] Leapfrog estimates a 90‐minute commitment for the hospital chief executive officer or designated administrator to enter the information into the online questionnaire. This is a significant commitment for many hospitals. As a result, among the approximately 2,600 acute care hospitals covered by Leapfrog's 2012 to 2013 safety grading, only 1,100 (42.3%) actually participated in the Leapfrog hospital survey. This limits Leapfrog's ability to provide accurate scores and assign fair safety grades to many hospitals.

METHODS

Leapfrog Hospital Safety Score

Leapfrog's Hospital Safety Score is determined by 26 measures. The set of safety measures and their relative weights are determined by a 9‐member panel of patient safety experts convened by Leapfrog.[9] The score is divided equally into 2 domains of safety measures: process/structural and outcomes.[6] Process measures represent how often a hospital gives patients the recommended treatment for a given medical condition or procedure, whereas structural measures represent the environment in which patients receive care.[10] The process/structural measures include computerized physician order entry (CPOE), intensive care unit (ICU) physician staffing (IPS), 8 Leapfrog safety practices, and 5 Surgical Care Improvement Project measures. The outcome measures represent what happens to a patient while receiving care; this domain includes 5 hospital‐acquired conditions and 6 patient safety indicators. A score is assigned and weighted for each measure, and all scores are then summed to produce a single number denoting each hospital's safety performance. Every hospital is assigned 1 of 5 letter grades depending on where its numeric score stands relative to all other hospitals. The letter grade A denotes the best hospital safety performance, followed in order by letter grades B through F. The cutoffs for the A and B grades represent the first and second quartiles of hospital safety scores. The C grade covers hospitals between the mean and 1.5 standard deviations below the mean, and the D grade covers hospitals between 1.5 and 3.0 standard deviations below the mean. F grades indicate safety scores more than 3.0 standard deviations below the mean.[11]
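The grade-assignment rule just described can be sketched in a few lines. This is a sketch of the published cutoffs as paraphrased above, not Leapfrog's actual implementation:

```python
import statistics

def assign_grades(scores):
    """Assign letter grades from numeric safety scores.

    Follows the cutoff logic described in the text: A and B cover the
    first and second quartiles, C runs from the mean down to 1.5 SD
    below it, D from 1.5 to 3.0 SD below, and F falls more than 3.0 SD
    below the mean.
    """
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    ranked = sorted(scores, reverse=True)
    n = len(ranked)
    a_cutoff = ranked[n // 4 - 1]   # lowest score in the top quartile
    b_cutoff = ranked[n // 2 - 1]   # lowest score in the second quartile

    def grade(s):
        if s >= a_cutoff:
            return "A"
        if s >= b_cutoff:
            return "B"
        if s >= mean - 1.5 * sd:
            return "C"
        if s >= mean - 3.0 * sd:
            return "D"
        return "F"

    return [grade(s) for s in scores]
```

Because the C, D, and F bands are defined in standard deviations below the mean, a hospital's grade depends on the whole distribution of scores, not on any absolute safety threshold.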

Nonparticipating Hospitals

The Leapfrog Survey contributes values for 11 of the 26 measures utilized to calculate the Hospital Safety Score. The score of a nonparticipating hospital will not reflect 8 of these 11 measures. For the 3 remaining measures, CPOE, IPS, and central line‐associated bloodstream infection, secondary data from the AHA Survey, AHA Information Technology Supplement Survey, and CMS Hospital Compare were used as proxies, respectively (Table 1). The use of a proxy effectively limits the maximum score attainable by nonparticipating hospitals. For instance, 2 of these 3 measures, CPOE and IPS, are calculated on different scales depending on hospital survey participation status. For CPOE, nonparticipating hospitals are limited to a maximum of 65 out of 100 points; for IPS, they are limited to 85 out of 100 points.[6] Because the actual weight for each of these proxy measures is increased for nonparticipating hospitals in the calculation of the final score, their effective impact is exacerbated. The weights of the CPOE and IPS measures in the overall weighted score are increased from 6.1% and 7.0% to 11.0% and 12.6%, respectively.

Data Sources for the Patient Safety Score: Survey Participants Versus Nonparticipants
Participants Nonparticipants
  • NOTE: Abbreviations: AHA, American Hospital Association; CMS, Centers for Medicare and Medicaid Services; DVT, deep vein thrombosis; HACs, hospital‐acquired conditions; HAIs, healthcare‐associated infections; ICU, intensive care unit; INF, infection; IPS, ICU physician staffing; IT, Information Technology; PE, pulmonary embolism; PSI, patient safety indicators; SCIP, Surgical Care Improvement Project; VTE, venous thromboembolism; *Based on publicly available Leapfrog methodology, accessed September 2013.

Process/structural measures (50% of score)
Computerized Physician Order Entry 2012 Leapfrog Hospital Survey 2010 IT Supplement (AHA)
ICU Physician Staffing (IPS) 2012 Leapfrog Hospital Survey 2011 AHA Annual Survey
Safe Practice 1: Leadership Structures and Systems 2012 Leapfrog Hospital Survey Excluded
Safe Practice 2: Culture Measurement, Feedback, and Intervention 2012 Leapfrog Hospital Survey Excluded
Safe Practice 3: Teamwork Training and Skill Building 2012 Leapfrog Hospital Survey Excluded
Safe Practice 4: Identification and Mitigation of Risks and Hazards 2012 Leapfrog Hospital Survey Excluded
Safe Practice 9: Nursing Workforce 2012 Leapfrog Hospital Survey Excluded
Safe Practice 17: Medication Reconciliation 2012 Leapfrog Hospital Survey Excluded
Safe Practice 19: Hand Hygiene 2012 Leapfrog Hospital Survey Excluded
Safe Practice 23: Care of the Ventilated Patient 2012 Leapfrog Hospital Survey Excluded
SCIP‐INF‐1: Antibiotic Within 1 Hour CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐2: Antibiotic Selection CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐3: Antibiotic Discontinued After 24 Hours CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐9: Catheter Removal CMS Hospital Compare CMS Hospital Compare
SCIP‐VTE‐2: VTE Prophylaxis CMS Hospital Compare CMS Hospital Compare
Outcome measures (50% of score)
HAC: Foreign Object Retained CMS HACs CMS HACs
HAC: Air Embolism CMS HACs CMS HACs
HAC: Pressure Ulcers CMS HACs CMS HACs
HAC: Falls and Trauma CMS HACs CMS HACs
Central Line‐Associated Bloodstream Infection 2012 Leapfrog Hospital Survey CMS HAIs
PSI 4: Death Among Surgical Inpatients With Serious Treatable Complications CMS Hospital Compare CMS Hospital Compare
PSI 6: Collapsed Lung Due to Medical Treatment CMS Hospital Compare CMS Hospital Compare
PSI 12: Postoperative PE/DVT CMS Hospital Compare CMS Hospital Compare
PSI 14: Wounds Split Open After Surgery CMS Hospital Compare CMS Hospital Compare
PSI 15: Accidental Cuts or Tears From Medical Treatment CMS Hospital Compare CMS Hospital Compare

Study Sample

We examined the Leapfrog safety grades for "top hospitals," as ranked by U.S. News & World Report. Included in this sample were the top 15 ranked hospitals in each of the specialties, excluding those specialties whose ranks are based solely on reputation. Hospitals ranked in more than 1 specialty were only included once in the sample. This resulted in a final study sample of 35 top hospitals. Eighteen of these top hospitals participated in the Leapfrog Survey, whereas 17 did not.

Utilizing Leapfrog's spring 2013 methodology,[6] the Hospital Safety Scores for the 35 top hospitals were calculated. The mean safety score for the 18 participating hospitals was then compared with the mean score for the 17 nonparticipating hospitals. Finally, the safety scores for each of the 17 nonparticipating hospitals, listed in Table 2, were estimated as if they had participated in the Leapfrog Survey. To do this, we assumed that the 17 nonparticipating hospitals could each earn average scores for the CPOE, IPS, and 8 process/structural Leapfrog measures as received by their 18 participating counterparts.
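The rescoring step above amounts to a simple mean imputation: each survey-based measure a nonparticipant lacks is filled with the participants' average on that measure. A minimal sketch, with hypothetical measure names (`cpoe`, `ips`) rather than Leapfrog's actual schema:

```python
from statistics import mean

def impute_survey_measures(hospital, participants, survey_measures):
    """Fill in missing survey-based measures for a nonparticipating
    hospital with the participating hospitals' average on each measure.

    A crude mean-imputation sketch of the rescoring step described in
    the text; dictionary keys are illustrative.
    """
    filled = dict(hospital)
    for measure in survey_measures:
        if measure in filled:
            continue  # keep any score the hospital already has
        peer_scores = [p[measure] for p in participants if measure in p]
        if peer_scores:
            filled[measure] = mean(peer_scores)
    return filled
```

This encodes the stated assumption that nonparticipants would have earned average scores on the imputed measures, which is exactly the assumption revisited in the Discussion.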

Participants Leapfrog Grade Nonparticipants Leapfrog Grade
  • NOTE: Abbreviations: NYU, New York University; UCLA, University of California Los Angeles; UCSF, University of California San Francisco; UPMC, University of Pittsburgh Medical Center.

Brigham and Women's Hospital, Boston, MA A Abbott Northwestern Hospital, Minneapolis, MN A
Duke University Medical Center, Durham, NC A Barnes‐Jewish Hospital/Washington University, St. Louis, MO C
Massachusetts General Hospital, Boston, MA B Baylor University Medical Center, Dallas, TX C
Mayo Clinic, Rochester, MN A Cedars‐Sinai Medical Center, Los Angeles, CA C
Methodist Hospital, Houston, TX A Cleveland Clinic, Cleveland, OH C
Northwestern Memorial Hospital, Chicago, IL A Florida Hospital, Orlando, FL B
Ronald Reagan UCLA Medical Center, Los Angeles, CA D Hospital of the University of Pennsylvania, Philadelphia, PA A
Rush University Medical Center, Chicago, IL A Indiana University Health, Indianapolis, IN A
St. Francis Hospital, Roslyn, NY A Mount Sinai Medical Center, New York, NY B
St. Joseph's Hospital and Medical Center, Phoenix, AZ B New York‐Presbyterian Hospital, New York, NY C
Stanford Hospital and Clinics, Stanford, CA A NYU Langone Medical Center, New York, NY A
Thomas Jefferson University Hospital, Philadelphia, PA C Ochsner Medical Center, New Orleans, LA A
UCSF Medical Center, San Francisco, CA B Tampa General Hospital, Tampa, FL C
University Hospitals Case Medical Center, Cleveland, OH A University of Iowa Hospitals and Clinics, Iowa City, IA C
University of Michigan Hospitals and Health Centers, Ann Arbor, MI A University of Kansas Hospital, Kansas City, KS A
University of Washington Medical Center, Seattle, WA C UPMC, Pittsburgh, PA B
Vanderbilt University Medical Center, Nashville, TN A Yale‐New Haven Hospital, New Haven, CT B
Wake Forest Baptist Medical Center, Winston‐Salem, NC A

RESULTS

Out of these 35 top hospitals, those that participated in the Leapfrog Survey generally received higher scores than the nonparticipants (Table 2). The group of participating hospitals received an average grade of A (mean safety score, 3.165; standard error of the mean [SE], 0.081), whereas the nonparticipating hospitals received an average grade of B (mean safety score, 3.012; SE, 0.047). These grades were consistent whether mean or median scores were used.
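The summary statistics quoted here (a mean safety score with its standard error) follow the usual formula SE = SD / sqrt(n). A minimal helper shows the computation:

```python
from math import sqrt
from statistics import mean, stdev

def mean_and_se(scores):
    """Mean of the scores and the standard error of that mean,
    computed as the sample standard deviation divided by sqrt(n)."""
    n = len(scores)
    return mean(scores), stdev(scores) / sqrt(n)
```

With groups of 17 or 18 hospitals, the SE shrinks only as the square root of the group size, which is why the Discussion flags the limited statistical power of this sample.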

To further examine the potential bias against nonparticipating hospitals, the safety scores for each of the 17 nonparticipating hospitals were estimated as if they had participated in the Leapfrog Survey. The group's average letter grade increased from B (mean safety score, 3.012; SE, 0.047) to A (mean safety score, 3.216; SE, 0.046). Among the 17 nonparticipating hospitals, 15 showed an increase in safety score, and for 8 of them the change was large enough to raise the letter grade by 1 or 2 levels (Table 3). Only 2 hospitals had slight decreases in safety score, without any impact on letter grade.

Estimated Safety Scores and Letter Grades for the 17 Nonparticipants Rescored as Participants
Hospital Original Score (Grade) Estimated Scorea (Grade)
  • NOTE: Abbreviations: ICU, intensive care unit; NYU, New York University; UPMC, University of Pittsburgh Medical Center.

  • Average scores for the following measures were substituted for missing or incomplete data: computerized physician order entry; ICU physician staffing; Safe Practice 1: Leadership Structures and Systems; Safe Practice 2: Culture Measurement, Feedback, and Intervention; Safe Practice 3: Teamwork Training and Skill Building; Safe Practice 4: Identification and Mitigation of Risks and Hazards; Safe Practice 9: Nursing Workforce; Safe Practice 17: Medication Reconciliation; Safe Practice 19: Hand Hygiene; Safe Practice 23: Care of the Ventilated Patient.

Abbott Northwestern Hospital, Minneapolis, MN 3.17 (A) 3.44 (A)
Barnes‐Jewish Hospital/Washington University, St. Louis, MO 2.83 (C) 3.11 (B)
Baylor University Medical Center, Dallas, TX 2.90 (C) 3.25 (A)
Cedars‐Sinai Medical Center, Los Angeles, CA 2.92 (C) 3.30 (A)
Cleveland Clinic, Cleveland, OH 2.76 (C) 2.78 (C)
Florida Hospital, Orlando, FL 2.98 (B) 3.38 (A)
Hospital of the University of Pennsylvania, Philadelphia, PA 3.29 (A) 3.26 (A)
Indiana University Health, Indianapolis, IN 3.14 (A) 3.37 (A)
Mount Sinai Medical Center, New York, NY 3.01 (B) 3.02 (B)
New York‐Presbyterian Hospital, New York, NY 2.76 (C) 3.15 (A)
NYU Langone Medical Center, New York, NY 3.26 (A) 3.30 (A)
Ochsner Medical Center, New Orleans, LA 3.19 (A) 3.59 (A)
Tampa General Hospital, Tampa, FL 2.86 (C) 3.05 (B)
University of Iowa Hospitals and Clinics, Iowa City, IA 2.70 (C) 3.00 (B)
University of Kansas Hospital, Kansas City, KS 3.29 (A) 3.35 (A)
UPMC, Pittsburgh, PA 3.04 (B) 3.24 (A)
Yale‐New Haven Hospital, New Haven, CT 3.10 (B) 3.08 (B)

We applied the same methods to the 17 Honor Roll hospitals designated by U.S. News & World Report. One of these, Johns Hopkins Hospital, was not scored by Leapfrog because no relevant Medicare data were available to calculate its safety score, and it was therefore excluded from our comparison. Of the remaining 16 hospitals, 8 participated in the Leapfrog Survey and 8 did not. The results persisted even in this smaller sample of top hospitals: the 8 participating hospitals had an average grade of A (mean safety score, 3.145; SE, 0.146), whereas the 8 nonparticipating hospitals received an average grade of B (mean safety score, 3.011; SE, 0.075).

DISCUSSION

The Leapfrog Group's intent to provide patient safety information to patients, physicians, healthcare purchasers, and hospital executives should be commended. However, the current methodology may disadvantage nonparticipating hospitals. The combination of lower maximum scores and the increased weight of the CPOE and IPS measures may result in a lower hospital safety score than is justified. Nonparticipating hospitals may also face more intense pressure from employers and payers to lower their reimbursement rates because of the newly released Leapfrog Hidden Surcharge Calculator.

Leapfrog acknowledges that the more data points a hospital has to be scored on, the better its opportunity to achieve a higher score.[8] This scoring approach may bias results against nonparticipating hospitals. On the other hand, it is possible that hospitals with good safety records are more likely to participate in the Leapfrog Survey than those with poorer safety records. Without a detailed nonresponse analysis from Leapfrog, it is impossible to know whether there is a selection bias. Regardless, Leapfrog's results can misguide payment rate negotiations between insurers and hospitals.

With these considerations in mind, Leapfrog should explicitly acknowledge the limitations of its methodology and consider revising it in future studies. For example, Leapfrog could report only those measures for which data are available for both participating and nonparticipating hospitals. Until such a revision is made, every effort must be made to distinguish between participating and nonparticipating hospitals. Leapfrog's hospital safety grades are made available online to consumers without distinguishing between the two groups; the only way to differentiate them is to examine the data sources in detail amid a large volume of data. It is unlikely that consumers comparing hospital safety grades will take note of this caveat. Thus, Leapfrog's grading system can drastically misrepresent many nonparticipating hospitals' patient safety performance.

This study of The Leapfrog Group's Hospital Safety Score is not without limitations. The small sample utilized in this study limited the power of statistical testing, and the difference in mean scores between participating and nonparticipating hospitals is not statistically significant. However, The Leapfrog Group uses specific numerical cutoff points for each letter grade, and statistical significance is not considered when assigning hospitals different letter grades. It was clear that nonparticipating hospitals were more likely than participating hospitals to receive lower letter grades.

The small sample also posed challenges when attempting to account for missing data in the comparison of participating versus nonparticipating hospitals. Although a multiple imputation approach may have been ideal for this, the small sample size coupled with the large amount of missing data (58% of hospitals did not participate in the Leapfrog Survey) led us to question the accuracy of that approach in this situation.[12] Instead, a crude mean imputation approach was utilized, relying on the assumption that nonresponding hospitals had the same mean performance as responding hospitals on the domains where data were missing. In this study, we purposely selected a sample of hospitals from U.S. News & World Report's top hospitals. We believe the mean imputation approach, although not perfect, is appropriate for this sample of hospitals. Future study, however, should examine whether hospitals that anticipate lower performance scores are less likely to participate in the Leapfrog Survey. This would help strengthen Leapfrog's methodology in dealing with nonresponding hospitals.

ACKNOWLEDGMENTS

Disclosures: Harold Paz is the CEO of Penn State Hershey Medical Center, which did not participate in the Leapfrog Survey. The authors have no financial conflicts of interest to report.

References
  1. Kohn LT, Corrigan J, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
  2. Stelfox HT, Palmisani S, Scurlock C, Orav EJ, Bates DW. The “To Err is Human” report and the patient safety literature. Qual Saf Health Care. 2006;15(3):174-178.
  3. Clancy CM, Scully T. A call to excellence. Health Aff (Millwood). 2003;22(2):113-115.
  4. US Department of Health and Human Services. Adverse events in hospitals: national incidence among Medicare beneficiaries. Available at: http://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf. Published November 2010. Accessed August 2, 2013.
  5. The Leapfrog Group. The Leapfrog Group—fact sheet 2013. Available at: https://leapfroghospitalsurvey.org/web/wp-content/uploads/Fsleapfrog.pdf. Accessed October 9, 2013.
  6. The Leapfrog Group. Hospital Safety Score scoring methodology. Available at: http://www.hospitalsafetyscore.org/media/file/HospitalSafetyScore_ScoringMethodology_May2013.pdf. Published May 2013. Accessed June 17, 2013.
  7. The Leapfrog Group. The hidden surcharge Americans pay for hospital errors 2013. Available at: http://www.leapfroggroup.org/employers_purchasers/HiddenSurchargeCalculator. Accessed August 2, 2013.
  8. The Leapfrog Group. 2013 Leapfrog Hospital Survey reference book 2013. Available at: https://leapfroghospitalsurvey.org/web/wp-content/uploads/reference.pdf. Published April 1, 2013. Accessed June 17, 2013.
  9. Austin JM, D'Andrea G, Birkmeyer JD, et al. Safety in numbers: the development of Leapfrog's composite patient safety score for U.S. hospitals [published online ahead of print September 27, 2013]. J Patient Saf. doi: 10.1097/PTS.0b013e3182952644.
  10. The Leapfrog Group. Measures in detail. Available at: http://www.hospitalsafetyscore.org/about-the-score/measures-in-detail. Accessed June 17, 2013.
  11. The Leapfrog Group. Explanation of safety score grades. Available at: http://www.hospitalsafetyscore.org/media/file/ExplanationofSafetyScoreGrades_May2013.pdf. Published May 2013. Accessed June 17, 2013.
  12. Sterne JA, White IR, Carlin JB, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.
Issue
Journal of Hospital Medicine - 9(2)
Page Number
111-115
Safe Practice 1: Leadership Structures and Systems 2012 Leapfrog Hospital Survey Excluded
Safe Practice 2: Culture Measurement, Feedback, and Intervention 2012 Leapfrog Hospital Survey Excluded
Safe Practice 3: Teamwork Training and Skill Building 2012 Leapfrog Hospital Survey Excluded
Safe Practice 4: Identification and Mitigation of Risks and Hazards 2012 Leapfrog Hospital Survey Excluded
Safe Practice 9: Nursing Workforce 2012 Leapfrog Hospital Survey Excluded
Safe Practice 17: Medication Reconciliation 2012 Leapfrog Hospital Survey Excluded
Safe Practice 19: Hand Hygiene 2012 Leapfrog Hospital Survey Excluded
Safe Practice 23: Care of the Ventilated Patient 2012 Leapfrog Hospital Survey Excluded
SCIP‐INF‐1: Antibiotic Within 1 Hour CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐2: Antibiotic Selection CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐3: Antibiotic Discontinued After 24 Hours CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐9: Catheter Removal CMS Hospital Compare CMS Hospital Compare
SCIP‐VTE‐2: VTE Prophylaxis CMS Hospital Compare CMS Hospital Compare
Outcome measures (50% of score)
HAC: Foreign Object Retained CMS HACs CMS HACs
HAC: Air Embolism CMS HACs CMS HACs
HAC: Pressure Ulcers CMS HACs CMS HACs
HAC: Falls and Trauma CMS HACs CMS HACs
Central Line‐Associated Bloodstream Infection 2012 Leapfrog Hospital Survey CMS HAIs
PSI 4: Death Among Surgical Inpatients With Serious Treatable Complications CMS Hospital Compare CMS Hospital Compare
PSI 6: Collapsed Lung Due to Medical Treatment CMS Hospital Compare CMS Hospital Compare
PSI 12: Postoperative PE/DVT CMS Hospital Compare CMS Hospital Compare
PSI 14: Wounds Split Open After Surgery CMS Hospital Compare CMS Hospital Compare
PSI 15: Accidental Cuts or Tears From Medical Treatment CMS Hospital Compare CMS Hospital Compare

Study Sample

We examined the Leapfrog safety grades for top hospitals," as ranked by U.S. News & World Report. Included in this sample were the top 15 ranked hospitals in each of the specialties, excluding those specialties whose ranks are based solely on reputation. Hospitals ranked in more than 1 specialty were only included once in the sample. This resulted in a final study sample of 35 top hospitals. Eighteen of these top hospitals participated in the Leapfrog Survey, whereas 17 did not.

Utilizing Leapfrog's spring 2013 methodology,[6] the Hospital Safety Scores for the 35 top hospitals were calculated. The mean safety score for the 18 participating hospitals was then compared with the mean score for the 17 nonparticipating hospitals. Finally, the safety scores for each of the 17 nonparticipating hospitals, listed in Table 2, were estimated as if they had participated in the Leapfrog Survey. To do this, we assumed that the 17 nonparticipating hospitals could each earn average scores for the CPOE, IPS, and 8 process/structural Leapfrog measures as received by their 18 participating counterparts.

Participants Leapfrog Grade Nonparticipants Leapfrog Grade
  • NOTE: Abbreviations: NYU, New York University; UCLA, University of California Los Angeles; UCSF, University of California San Francisco; UPMC, University of Pittsburgh Medical Center.

Brigham and Women's Hospital, Boston, MA A Abbott Northwestern Hospital, Minneapolis, MN A
Duke University Medical Center, Durham, NC A Barnes‐Jewish Hospital/Washington University, St. Louis, MO C
Massachusetts General Hospital, Boston, MA B Baylor University Medical Center, Dallas, TX C
Mayo Clinic, Rochester, MN A Cedars‐Sinai Medical Center, Los Angeles, CA C
Methodist Hospital, Houston, TX A Cleveland Clinic, Cleveland, OH C
Northwestern Memorial Hospital, Chicago, IL A Florida Hospital, Orlando, FL B
Ronald Reagan UCLA Medical Center, Los Angeles, CA D Hospital of the University of Pennsylvania, Philadelphia, PA A
Rush University Medical Center, Chicago, IL A Indiana University Health, Indianapolis, IN A
St. Francis Hospital, Roslyn, NY A Mount Sinai Medical Center, New York, NY B
St. Joseph's Hospital and Medical Center, Phoenix, AZ B New York‐Presbyterian Hospital, New York, NY C
Stanford Hospital and Clinics, Stanford, CA A NYU Langone Medical Center, New York, NY A
Thomas Jefferson University Hospital, Philadelphia, PA C Ochsner Medical Center, New Orleans, LA A
UCSF Medical Center, San Francisco, CA B Tampa General Hospital, Tampa, FL C
University Hospitals Case Medical Center, Cleveland, OH A University of Iowa Hospitals and Clinics, Iowa City, IA C
University of Michigan Hospitals and Health Centers, Ann Arbor, MI A University of Kansas Hospital, Kansas City, KS A
University of Washington Medical Center, Seattle, WA C UPMC, Pittsburgh, PA B
Vanderbilt University Medical Center, Nashville, TN A Yale‐New Haven Hospital, New Haven, CT B
Wake Forest Baptist Medical Center, Winston‐Salem, NC A

RESULTS

Out of these 35 top hospitals, those that participated in the Leapfrog Survey generally received higher scores than the nonparticipants (Table 2). The group of participating hospitals received an average grade of A (mean safety score, 3.165; standard error of the mean [SE], 0.081), whereas the nonparticipating hospitals received an average grade of B (mean safety score, 3.012; SE, 0.047). These grades were consistent whether mean or median scores were used.

To further examine the potential bias against nonparticipating hospitals, the safety scores for each of the 17 nonparticipating hospitals were estimated as if they had participated in the Leapfrog Survey. The letter grade of this group increased from an average of B (mean safety score, 3.012; SE, 0.047) to an average of A (mean safety score, 3.216; SE, 0.046). Among the 17 nonparticipating hospitals, 15 showed an increase in safety score, of which 8 hospitals rescored a change in score significant enough to receive 1 or 2 letter grades higher (Table 3). Only 2 hospitals had slight decreases in safety score, without any impact on letter grade.

Estimated Safety Scores and Letter Grades for the 17 Nonparticipants Rescored as Participants
Hospital Original Score (Grade) Estimated Scorea (Grade)
  • NOTE: Abbreviations: ICU, intensive care unit; NYU, New York University; UPMC, University of Pittsburgh Medical Center.

  • Average scores for the following measures were substituted for missing or incomplete data: computerized physician order entry; ICU physician staffing; Safe Practice 1: Leadership Structures and Systems; Safe Practice 2: Culture Measurement, Feedback, and Intervention; Safe Practice 3: Teamwork Training and Skill Building; Safe Practice 4: Identification and Mitigation of Risks and Hazards; Safe Practice 9: Nursing Workforce; Safe Practice 17: Medication Reconciliation; Safe Practice 19: Hand Hygiene; Safe Practice 23: Care of the Ventilated Patient.

Abbott Northwestern Hospital, Minneapolis, MN 3.17 (A) 3.44 (A)
Barnes‐Jewish Hospital/Washington University, St. Louis, MO 2.83 (C) 3.11 (B)
Baylor University Medical Center, Dallas, TX 2.90 (C) 3.25 (A)
Cedars‐Sinai Medical Center, Los Angeles, CA 2.92 (C) 3.30 (A)
Cleveland Clinic, Cleveland, OH 2.76 (C) 2.78 (C)
Florida Hospital, Orlando, FL 2.98 (B) 3.38 (A)
Hospital of the University of Pennsylvania, Philadelphia, PA 3.29 (A) 3.26 (A)
Indiana University Health, Indianapolis, IN 3.14 (A) 3.37 (A)
Mount Sinai Medical Center, New York, NY 3.01 (B) 3.02 (B)
New York‐Presbyterian Hospital, New York, NY 2.76 (C) 3.15 (A)
NYU Langone Medical Center, New York, NY 3.26 (A) 3.30 (A)
Ochsner Medical Center, New Orleans, LA 3.19 (A) 3.59 (A)
Tampa General Hospital, Tampa, FL 2.86 (C) 3.05 (B)
University of Iowa Hospitals and Clinics, Iowa City, IA 2.70 (C) 3.00 (B)
University of Kansas Hospital, Kansas City, KS 3.29 (A) 3.35 (A)
UPMC, Pittsburgh, PA 3.04 (B) 3.24 (A)
Yale‐New Haven Hospital, New Haven, CT 3.10 (B) 3.08 (B)

We applied the same methods to test the top 17 Honor Roll Hospitals as designated by US News & World Report; among them, half are participating hospitals and another half nonparticipating hospitals. One hospital, Johns Hopkins Hospital was not scored by Leapfrog because no relevant Medicare data are available for Leapfrog to calculate its safety score. For this reason, Johns Hopkins was excluded from our comparison. The results persist even with this smaller sample of top hospitals. The group of 8 participating hospitals had an average grade of A (mean safety score, 3.145; SE, 0.146), whereas another 8 nonparticipating hospitals received an average grade of B (mean safety score, 3.011; SE, 0.075).

DISCUSSION

The Leapfrog Group's intent to provide patient safety information to patients, physicians, healthcare purchasers, and hospital executives should be commended. However, the current methodology may disadvantage nonparticipating hospitals. The combination of lower maximum scores and increased weight of the CPOE and IPS scores may result in a lower hospital safety score than is justified. Nonparticipating hospitals may also face more intensive pressure from employers and payors to lower their reimbursement rates due to the newly released Leapfrog Hidden Surcharge Calculator.

Leapfrog acknowledges that the more data points a hospital has to be scored on, the better its opportunity to achieve a higher score.[8] This justification may lead to bias against nonparticipating hospitals. On the other hand, it is possible that hospitals with good safety records are more likely to participate in the Leapfrog Survey than those with poorer safety records. Without detailed nonresponse analysis from Leapfrog, it is impossible to know if there is a selection bias. Regardless, the Leapfrog result can subsequently misguide the payment rate negotiation between insurers and hospitals.

With this consideration in mind, Leapfrog should explicitly acknowledge the limitations of its methodology and consider revising it in future studies. For example, Leapfrog could only report on those measures for which there are data available for both participating and nonparticipating hospitals. Pending this revision, every effort must be made to distinguish between participating and nonparticipating hospitals. The outcomes of Leapfrog's hospital safety grades are made available online to consumers without distinguishing between participating and nonparticipating hospitals. The only method to differentiate the categories is to examine the data sources in detail amid a large volume of data. It is unlikely that consumers comparing hospital safety grades will take note of this caveat. Thus, Leapfrog's grading system can drastically misrepresent many nonparticipating hospitals' patient safety performances.

This study of The Leapfrog Group's Hospital Safety Score is not without limitations. The small sample utilized in this study limited the power of statistical testing. The difference in mean scores between participating and nonparticipating hospitals is not statistically significant. However, The Leapfrog Group uses specific numerical cutoff points for each letter grade classification. In this classification system statistical significance is not considered when assigning hospitals with different letter grades. It was clear that nonparticipating hospitals were more likely to receive lower letter grades than participating hospitals.

The small sample also posed challenges when attempting to account for missing data when comparing participating hospitals versus nonparticipating hospitals. Although a multiple imputation approach may have been ideal to address this, the small sample size coupled with the large amount of missing data (58% of hospitals did not participate in the Leapfrog Survey) led us to question the accuracy of this approach in this situation.[12] Instead, a crude, mean imputation approach was utilized, relying on the assumption that nonresponding hospitals had the same mean performance as responding hospitals on those domains where data were missing. In this study, we purposely selected a sample of hospitals from U.S. News & World Report's top hospitals. We believe the mean imputation approach, although not perfect, is appropriate for this sample of hospitals. Future study, however, should examine if hospitals that anticipated lower performance scores would be less likely to participate in the Leapfrog Survey. This would help strengthen Leapfrog's methodology in dealing with nonresponsive hospitals.

ACKNOWLEDGMENTS

Disclosures: Harold Paz is the CEO of Penn State Hershey Medical Center, which did not participate in the Leapfrog Survey. The authors have no financial conflicts of interest to report.

The Institute of Medicine (IOM) reported over a decade ago that between 44,000 and 98,000 deaths occur every year due to preventable medical errors.[1] The report sparked intense interest in identifying, measuring, and reporting hospital performance in patient safety[2] and prompted the implementation of many initiatives aimed at improving it.[3] Despite these efforts, much room for improvement remains.[4] As the public has become more aware of patient safety issues, demand for information on hospital safety has grown. The Leapfrog Group, a leading organization that examines and reports on hospital performance in patient safety, cites the IOM report as providing the focus its newly formed organization required.[5]

Using 26 national measures of safety, The Leapfrog Group calculates a numeric Hospital Safety Score for over 2,600 acute care hospitals in the United States.[6] The primary data used to calculate this score are collected through the Leapfrog Hospital Survey, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services (CMS). The American Hospital Association's (AHA) Annual Survey is used as a secondary data source as necessary. The Leapfrog Group conducts the survey annually, and substantial efforts are put forth to invite hospital administrators to participate in the survey. Participation in the Leapfrog survey is optional and free of charge.

Leapfrog recently moved a step further in its evaluation of hospital safety by releasing the Hidden Surcharge Calculator, which enables employers to estimate the hidden surcharge they pay for their employees and dependents because of hospital errors.[7] The calculation depends largely on the letter grade (A through F) that the hospital received from Leapfrog's Hospital Safety Score. For example, Leapfrog estimated that a commercially insured patient admitted to a hospital with a grade of C or lower would incur $1,845 more in costs per admission than if the same patient were admitted to a hospital with a grade of A.[7] The Leapfrog Group encourages employers and payers to use this information to adjust benefit structures so that employees are discouraged from using hospitals that receive lower safety scores, and encourages payers to negotiate lower reimbursement rates for such hospitals.

The accuracy of Leapfrog's hospital safety grades warrants attention because of the methodology used to score hospitals that do not participate in the Leapfrog Survey. One common barrier that prevents hospitals from participating is the amount of effort required to complete the annual survey, including extensive inputs from hospital executives and staff. According to Leapfrog, 4 to 6 days are required for a hospital to compile the necessary survey data.[8] Leapfrog estimates a 90‐minute commitment for the hospital chief executive officer or designated administrator to enter the information into the online questionnaire. This is a significant commitment for many hospitals. As a result, among the approximately 2600 acute care hospitals covered by Leapfrog's 2012 to 2013 safety grading, only 1100 (or 42.3%) actually participated in the Leapfrog hospital survey. This limits Leapfrog's ability to provide accurate scores and assign fair safety grades to many hospitals.

METHODS

Leapfrog Hospital Safety Score

Leapfrog's Hospital Safety Score is determined by 26 measures. The set of safety measures and their relative weights are determined by a 9-member panel of patient safety experts convened by Leapfrog.[9] The score is divided equally into 2 domains of safety measures: process/structural and outcomes.[6] Process measures represent how often a hospital gives patients the recommended treatment for a given medical condition or procedure, whereas structural measures represent the environment in which patients receive care.[10] The process/structural measures include computerized physician order entry (CPOE), intensive care unit (ICU) physician staffing (IPS), 8 Leapfrog safe practices, and 5 Surgical Care Improvement Project measures. The outcome measures represent what happens to a patient while receiving care; this domain includes 5 hospital-acquired conditions and 6 patient safety indicators. A score is assigned and weighted for each measure, and all scores are summed to produce a single number denoting each hospital's safety performance. Every hospital is then assigned 1 of 5 letter grades based on how its numeric score ranks relative to all other hospitals. The letter grade A denotes the best safety performance, followed in order by B through F. The cutoffs for the A and B grades represent the first and second quartiles of hospital safety scores; the C grade covers hospitals between the mean and 1.5 standard deviations below the mean; the D grade covers hospitals between 1.5 and 3.0 standard deviations below the mean; and F indicates safety scores more than 3.0 standard deviations below the mean.[11]
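
The quartile and standard deviation cutoffs described above can be sketched as a small grading function. This is an illustrative reconstruction from the published cutoff descriptions, not Leapfrog's actual implementation; the sample scores are hypothetical.

```python
import statistics

def assign_grades(scores):
    """Assign letter grades using the cutoffs described in the text:
    A and B cover the first and second quartiles, C runs from the mean
    down to 1.5 SD below it, D from 1.5 to 3.0 SD below, and F falls
    more than 3.0 SD below the mean."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    q25, q50, q75 = statistics.quantiles(scores, n=4)

    def grade(s):
        if s >= q75:
            return "A"   # first (top) quartile
        if s >= q50:
            return "B"   # second quartile
        if s >= mean - 1.5 * sd:
            return "C"
        if s >= mean - 3.0 * sd:
            return "D"
        return "F"

    return [grade(s) for s in scores]
```

For example, `assign_grades([3.3, 3.2, 3.1, 3.0, 2.9, 2.8, 2.0])` yields two A grades, two Bs, two Cs, and a D under these cutoffs.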

Nonparticipating Hospitals

The Leapfrog Survey contributes values for 11 of the 26 measures used to calculate the Hospital Safety Score. The score of a nonparticipating hospital will not reflect 8 of these 11 measures. For the 3 remaining measures, CPOE, IPS, and central line-associated bloodstream infection, secondary data from the AHA Information Technology Supplement Survey, the AHA Annual Survey, and CMS healthcare-associated infections data, respectively, are used as proxies (Table 1). The use of a proxy effectively limits the maximum score attainable by nonparticipating hospitals: 2 of these 3 measures, CPOE and IPS, are calculated on different scales depending on survey participation status. For CPOE, nonparticipating hospitals are limited to a maximum of 65 out of 100 points; for IPS, to 85 out of 100 points.[6] Moreover, because the weight of each of these proxy measures is increased when calculating a nonparticipant's final score, their effect is amplified: the weights of the CPOE and IPS measures in the overall weighted score rise from 6.1% and 7.0% to 11.0% and 12.6%, respectively.
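
The weight increase quoted above is consistent with simple renormalization: 11.0/6.1 and 12.6/7.0 are both roughly a 1.8x scale factor, which is what results when the dropped survey measures' weight is redistributed over the measures that remain. The sketch below uses hypothetical toy weights, not Leapfrog's actual weighting scheme.

```python
def renormalize_weights(weights, available):
    """Rescale the weights of the available measures so they sum to the
    same total as the full measure set. This mirrors, in simplified form,
    how dropping the survey-only measures inflates the remaining proxy
    measures' weights for nonparticipating hospitals."""
    total = sum(weights.values())
    kept = {m: w for m, w in weights.items() if m in available}
    scale = total / sum(kept.values())
    return {m: round(w * scale, 4) for m, w in kept.items()}
```

With toy weights `{"CPOE": 10.0, "IPS": 10.0, "survey_only": 20.0}` and only CPOE and IPS available, both remaining weights double to 20.0 so the total is preserved.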

Table 1. Data Sources for the Patient Safety Score: Survey Participants Versus Nonparticipants
Measure Participants Nonparticipants
  • NOTE: Abbreviations: AHA, American Hospital Association; CMS, Centers for Medicare and Medicaid Services; DVT, deep vein thrombosis; HACs, hospital‐acquired conditions; HAIs, healthcare‐associated infections; ICU, intensive care unit; INF, infection; IPS, ICU physician staffing; IT, Information Technology; PE, pulmonary embolism; PSI, patient safety indicators; SCIP, Surgical Care Improvement Project; VTE, venous thromboembolism; *Based on publicly available Leapfrog methodology, accessed September 2013.

Process/structural measures (50% of score)
Computerized Physician Order Entry 2012 Leapfrog Hospital Survey 2010 IT Supplement (AHA)
ICU Physician Staffing (IPS) 2012 Leapfrog Hospital Survey 2011 AHA Annual Survey
Safe Practice 1: Leadership Structures and Systems 2012 Leapfrog Hospital Survey Excluded
Safe Practice 2: Culture Measurement, Feedback, and Intervention 2012 Leapfrog Hospital Survey Excluded
Safe Practice 3: Teamwork Training and Skill Building 2012 Leapfrog Hospital Survey Excluded
Safe Practice 4: Identification and Mitigation of Risks and Hazards 2012 Leapfrog Hospital Survey Excluded
Safe Practice 9: Nursing Workforce 2012 Leapfrog Hospital Survey Excluded
Safe Practice 17: Medication Reconciliation 2012 Leapfrog Hospital Survey Excluded
Safe Practice 19: Hand Hygiene 2012 Leapfrog Hospital Survey Excluded
Safe Practice 23: Care of the Ventilated Patient 2012 Leapfrog Hospital Survey Excluded
SCIP‐INF‐1: Antibiotic Within 1 Hour CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐2: Antibiotic Selection CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐3: Antibiotic Discontinued After 24 Hours CMS Hospital Compare CMS Hospital Compare
SCIP‐INF‐9: Catheter Removal CMS Hospital Compare CMS Hospital Compare
SCIP‐VTE‐2: VTE Prophylaxis CMS Hospital Compare CMS Hospital Compare
Outcome measures (50% of score)
HAC: Foreign Object Retained CMS HACs CMS HACs
HAC: Air Embolism CMS HACs CMS HACs
HAC: Pressure Ulcers CMS HACs CMS HACs
HAC: Falls and Trauma CMS HACs CMS HACs
Central Line‐Associated Bloodstream Infection 2012 Leapfrog Hospital Survey CMS HAIs
PSI 4: Death Among Surgical Inpatients With Serious Treatable Complications CMS Hospital Compare CMS Hospital Compare
PSI 6: Collapsed Lung Due to Medical Treatment CMS Hospital Compare CMS Hospital Compare
PSI 12: Postoperative PE/DVT CMS Hospital Compare CMS Hospital Compare
PSI 14: Wounds Split Open After Surgery CMS Hospital Compare CMS Hospital Compare
PSI 15: Accidental Cuts or Tears From Medical Treatment CMS Hospital Compare CMS Hospital Compare

Study Sample

We examined the Leapfrog safety grades for "top hospitals" as ranked by U.S. News & World Report. Included in this sample were the top 15 ranked hospitals in each specialty, excluding those specialties whose rankings are based solely on reputation. Hospitals ranked in more than 1 specialty were included only once. This resulted in a final study sample of 35 top hospitals, of which 18 participated in the Leapfrog Survey and 17 did not.

Using Leapfrog's spring 2013 methodology,[6] we calculated the Hospital Safety Scores for the 35 top hospitals and compared the mean safety score of the 18 participating hospitals with that of the 17 nonparticipating hospitals. Finally, we estimated the safety scores for each of the 17 nonparticipating hospitals, listed in Table 2, as if they had participated in the Leapfrog Survey, assuming each could earn the average scores received by their 18 participating counterparts for the CPOE, IPS, and 8 process/structural Leapfrog measures.
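
This rescoring step is crude mean imputation over the survey-based measures. A minimal sketch, with hypothetical measure names and point values:

```python
from statistics import mean

def rescore_as_participant(hospital, participants, survey_measures):
    """Replace a nonparticipant's survey-based measure scores with the
    participant-group mean for each such measure (crude mean imputation);
    all other measures keep their reported values."""
    rescored = dict(hospital)
    for m in survey_measures:
        rescored[m] = mean(p[m] for p in participants)
    return rescored
```

For instance, a nonparticipant capped at 65 CPOE points would instead be credited with the participants' average CPOE score, while its non-survey measures (e.g., a SCIP score) are left untouched.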

Table 2. Leapfrog Safety Grades for the 35 Top Hospitals
Participants Leapfrog Grade Nonparticipants Leapfrog Grade
  • NOTE: Abbreviations: NYU, New York University; UCLA, University of California Los Angeles; UCSF, University of California San Francisco; UPMC, University of Pittsburgh Medical Center.

Brigham and Women's Hospital, Boston, MA A Abbott Northwestern Hospital, Minneapolis, MN A
Duke University Medical Center, Durham, NC A Barnes‐Jewish Hospital/Washington University, St. Louis, MO C
Massachusetts General Hospital, Boston, MA B Baylor University Medical Center, Dallas, TX C
Mayo Clinic, Rochester, MN A Cedars‐Sinai Medical Center, Los Angeles, CA C
Methodist Hospital, Houston, TX A Cleveland Clinic, Cleveland, OH C
Northwestern Memorial Hospital, Chicago, IL A Florida Hospital, Orlando, FL B
Ronald Reagan UCLA Medical Center, Los Angeles, CA D Hospital of the University of Pennsylvania, Philadelphia, PA A
Rush University Medical Center, Chicago, IL A Indiana University Health, Indianapolis, IN A
St. Francis Hospital, Roslyn, NY A Mount Sinai Medical Center, New York, NY B
St. Joseph's Hospital and Medical Center, Phoenix, AZ B New York‐Presbyterian Hospital, New York, NY C
Stanford Hospital and Clinics, Stanford, CA A NYU Langone Medical Center, New York, NY A
Thomas Jefferson University Hospital, Philadelphia, PA C Ochsner Medical Center, New Orleans, LA A
UCSF Medical Center, San Francisco, CA B Tampa General Hospital, Tampa, FL C
University Hospitals Case Medical Center, Cleveland, OH A University of Iowa Hospitals and Clinics, Iowa City, IA C
University of Michigan Hospitals and Health Centers, Ann Arbor, MI A University of Kansas Hospital, Kansas City, KS A
University of Washington Medical Center, Seattle, WA C UPMC, Pittsburgh, PA B
Vanderbilt University Medical Center, Nashville, TN A Yale‐New Haven Hospital, New Haven, CT B
Wake Forest Baptist Medical Center, Winston‐Salem, NC A

RESULTS

Of these 35 top hospitals, those that participated in the Leapfrog Survey generally received higher scores than the nonparticipants (Table 2). The participating hospitals received an average grade of A (mean safety score, 3.165; standard error of the mean [SE], 0.081), whereas the nonparticipating hospitals received an average grade of B (mean safety score, 3.012; SE, 0.047). These grades were consistent whether mean or median scores were used.
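
The group statistics reported here are the standard mean and standard error of the mean (sample SD divided by the square root of the group size); the scores below are illustrative, not the study data.

```python
from math import sqrt
from statistics import mean, stdev

def mean_and_se(scores):
    """Group mean and standard error of the mean: SE = sample SD / sqrt(n)."""
    return mean(scores), stdev(scores) / sqrt(len(scores))
```

Applied to a toy group like `[1.0, 2.0, 3.0]`, this returns a mean of 2.0 with an SE of about 0.577.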

To further examine the potential bias against nonparticipating hospitals, we estimated the safety scores for each of the 17 nonparticipating hospitals as if they had participated in the Leapfrog Survey. The average letter grade of this group increased from B (mean safety score, 3.012; SE, 0.047) to A (mean safety score, 3.216; SE, 0.046). Among the 17 nonparticipating hospitals, 15 showed an increase in safety score, and for 8 of these the increase was large enough to raise the letter grade by 1 or 2 levels (Table 3). Only 2 hospitals had slight decreases in safety score, with no impact on their letter grades.

Table 3. Estimated Safety Scores and Letter Grades for the 17 Nonparticipants Rescored as Participants
Hospital Original Score (Grade) Estimated Score* (Grade)
  • NOTE: Abbreviations: ICU, intensive care unit; NYU, New York University; UPMC, University of Pittsburgh Medical Center.

  • *Average scores for the following measures were substituted for missing or incomplete data: computerized physician order entry; ICU physician staffing; Safe Practice 1: Leadership Structures and Systems; Safe Practice 2: Culture Measurement, Feedback, and Intervention; Safe Practice 3: Teamwork Training and Skill Building; Safe Practice 4: Identification and Mitigation of Risks and Hazards; Safe Practice 9: Nursing Workforce; Safe Practice 17: Medication Reconciliation; Safe Practice 19: Hand Hygiene; Safe Practice 23: Care of the Ventilated Patient.

Abbott Northwestern Hospital, Minneapolis, MN 3.17 (A) 3.44 (A)
Barnes‐Jewish Hospital/Washington University, St. Louis, MO 2.83 (C) 3.11 (B)
Baylor University Medical Center, Dallas, TX 2.90 (C) 3.25 (A)
Cedars‐Sinai Medical Center, Los Angeles, CA 2.92 (C) 3.30 (A)
Cleveland Clinic, Cleveland, OH 2.76 (C) 2.78 (C)
Florida Hospital, Orlando, FL 2.98 (B) 3.38 (A)
Hospital of the University of Pennsylvania, Philadelphia, PA 3.29 (A) 3.26 (A)
Indiana University Health, Indianapolis, IN 3.14 (A) 3.37 (A)
Mount Sinai Medical Center, New York, NY 3.01 (B) 3.02 (B)
New York‐Presbyterian Hospital, New York, NY 2.76 (C) 3.15 (A)
NYU Langone Medical Center, New York, NY 3.26 (A) 3.30 (A)
Ochsner Medical Center, New Orleans, LA 3.19 (A) 3.59 (A)
Tampa General Hospital, Tampa, FL 2.86 (C) 3.05 (B)
University of Iowa Hospitals and Clinics, Iowa City, IA 2.70 (C) 3.00 (B)
University of Kansas Hospital, Kansas City, KS 3.29 (A) 3.35 (A)
UPMC, Pittsburgh, PA 3.04 (B) 3.24 (A)
Yale‐New Haven Hospital, New Haven, CT 3.10 (B) 3.08 (B)

We applied the same methods to the top 17 Honor Roll hospitals designated by U.S. News & World Report. One hospital, Johns Hopkins Hospital, was not scored by Leapfrog because the Medicare data needed to calculate its safety score were unavailable; it was therefore excluded from our comparison. Of the remaining 16 hospitals, 8 participated in the Leapfrog Survey and 8 did not. The results persisted even in this smaller sample of top hospitals: the 8 participating hospitals had an average grade of A (mean safety score, 3.145; SE, 0.146), whereas the 8 nonparticipating hospitals received an average grade of B (mean safety score, 3.011; SE, 0.075).

DISCUSSION

The Leapfrog Group's intent to provide patient safety information to patients, physicians, healthcare purchasers, and hospital executives is commendable. However, the current methodology may disadvantage nonparticipating hospitals: the combination of lower maximum scores and the increased weight of the CPOE and IPS measures may result in a lower hospital safety score than is justified. Nonparticipating hospitals may also face increased pressure from employers and payers to lower their reimbursement rates because of the newly released Leapfrog Hidden Surcharge Calculator.

Leapfrog acknowledges that the more data points a hospital can be scored on, the better its opportunity to achieve a higher score.[8] Scoring hospitals on fewer data points may therefore bias results against nonparticipants. On the other hand, it is possible that hospitals with good safety records are more likely to participate in the Leapfrog Survey than those with poorer records. Without a detailed nonresponse analysis from Leapfrog, it is impossible to know whether selection bias is present. Either way, Leapfrog's results can misguide payment rate negotiations between insurers and hospitals.

With this consideration in mind, Leapfrog should explicitly acknowledge the limitations of its methodology and consider revising it. For example, Leapfrog could report only on measures for which data are available for both participating and nonparticipating hospitals. Until such a revision is made, every effort should be made to distinguish between the two groups. Leapfrog's hospital safety grades are published online for consumers without distinguishing participating from nonparticipating hospitals; the only way to differentiate them is to examine the data sources in detail amid a large volume of data. Consumers comparing hospital safety grades are unlikely to take note of this caveat. Thus, Leapfrog's grading system can drastically misrepresent the patient safety performance of many nonparticipating hospitals.

This study of The Leapfrog Group's Hospital Safety Score is not without limitations. The small sample limited the power of statistical testing, and the difference in mean scores between participating and nonparticipating hospitals was not statistically significant. However, The Leapfrog Group uses specific numerical cutoff points for each letter-grade classification, and in this classification system statistical significance is not considered when assigning hospitals different letter grades. It was clear that nonparticipating hospitals were more likely than participating hospitals to receive lower letter grades.
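The grade-banding effect described above can be illustrated with a minimal sketch. The cutoff values below are invented for illustration only (Leapfrog's actual cutoffs are published in its scoring methodology[6]); the point is that two scores separated by less than any plausible margin of error can still fall into different grade bands:

```python
# Hypothetical illustration of a cutoff-based letter-grade classifier.
# The thresholds are invented; they are NOT Leapfrog's actual cutoffs.

def letter_grade(score, cutoffs=((3.0, "A"), (2.5, "B"), (2.0, "C"), (1.5, "D"))):
    """Map a numerical safety score to a letter grade via fixed cutoffs.

    Statistical significance plays no role: scores just above and just
    below a cutoff receive different grades regardless of whether their
    difference is meaningful.
    """
    for threshold, grade in cutoffs:
        if score >= threshold:
            return grade
    return "F"

# Two hospitals whose scores differ by only 0.02 land in different bands:
print(letter_grade(2.51))  # B
print(letter_grade(2.49))  # C
```

This is why a statistically nonsignificant difference in mean scores can still translate into systematically lower letter grades for one group of hospitals.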

The small sample also posed challenges when accounting for missing data in the comparison of participating and nonparticipating hospitals. Although a multiple imputation approach might have been ideal, the small sample size coupled with the large amount of missing data (58% of hospitals did not participate in the Leapfrog Survey) led us to question the accuracy of that approach in this situation.[12] Instead, a crude mean imputation approach was used, relying on the assumption that nonresponding hospitals had the same mean performance as responding hospitals on the domains where data were missing. Because we purposely selected a sample of hospitals from U.S. News & World Report's top hospitals, we believe the mean imputation approach, although not perfect, is appropriate for this sample. Future studies, however, should examine whether hospitals that anticipate lower performance scores are less likely to participate in the Leapfrog Survey; this would help strengthen Leapfrog's methodology for handling nonresponding hospitals.
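The crude mean-imputation approach described above can be sketched as follows. All scores here are hypothetical; the mechanism is simply that each missing (nonparticipating) hospital is assigned the mean performance of responding hospitals on that domain:

```python
# Minimal sketch of mean imputation for a single survey domain.
# None marks a hospital that did not take the survey; all values are
# hypothetical.

def mean_impute(scores):
    """Replace None entries with the mean of the observed values."""
    observed = [s for s in scores if s is not None]
    mean = sum(observed) / len(observed)
    return [mean if s is None else s for s in scores]

# CPOE domain scores for five hospitals, two nonparticipating.
cpoe = [80.0, 90.0, None, 70.0, None]
print(mean_impute(cpoe))  # [80.0, 90.0, 80.0, 70.0, 80.0]
```

The key assumption is visible in the code: nonresponding hospitals are treated as average performers, which understates any systematic difference between responders and nonresponders.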

ACKNOWLEDGMENTS

Disclosures: Harold Paz is the CEO of Penn State Hershey Medical Center, which did not participate in the Leapfrog Survey. The authors have no financial conflicts of interest to report.

References
  1. Kohn LT, Corrigan J, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
  2. Stelfox HT, Palmisani S, Scurlock C, Orav EJ, Bates DW. The "To Err is Human" report and the patient safety literature. Qual Saf Health Care. 2006;15(3):174-178.
  3. Clancy CM, Scully T. A call to excellence. Health Aff (Millwood). 2003;22(2):113-115.
  4. US Department of Health and Human Services. Adverse events in hospitals: national incidence among Medicare beneficiaries. Available at: http://oig.hhs.gov/oei/reports/oei-06-09-00090.pdf. Published November 2010. Accessed August 2, 2013.
  5. The Leapfrog Group. The Leapfrog Group—fact sheet 2013. Available at: https://leapfroghospitalsurvey.org/web/wp-content/uploads/Fsleapfrog.pdf. Accessed October 9, 2013.
  6. The Leapfrog Group. Hospital Safety Score scoring methodology. Available at: http://www.hospitalsafetyscore.org/media/file/HospitalSafetyScore_ScoringMethodology_May2013.pdf. Published May 2013. Accessed June 17, 2013.
  7. The Leapfrog Group. The Hidden Surcharge Americans Pay for Hospital Errors 2013. Available at: http://www.leapfroggroup.org/employers_purchasers/HiddenSurchargeCalculator. Accessed August 2, 2013.
  8. The Leapfrog Group. 2013 Leapfrog Hospital Survey Reference Book 2013. Available at: https://leapfroghospitalsurvey.org/web/wp-content/uploads/reference.pdf. Published April 1, 2013. Accessed June 17, 2013.
  9. Austin JM, D'Andrea G, Birkmeyer JD, et al. Safety in numbers: the development of Leapfrog's composite patient safety score for U.S. hospitals [published online ahead of print September 27, 2013]. J Patient Saf. doi: 10.1097/PTS.0b013e3182952644.
  10. The Leapfrog Group. Measures in detail. Available at: http://www.hospitalsafetyscore.org/about-the-score/measures-in-detail. Accessed June 17, 2013.
  11. The Leapfrog Group. Explanation of safety score grades. Available at: http://www.hospitalsafetyscore.org/media/file/ExplanationofSafetyScoreGrades_May2013.pdf. Published May 2013. Accessed June 17, 2013.
  12. Sterne JA, White IR, Carlin JB, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.
Issue
Journal of Hospital Medicine - 9(2)
Page Number
111-115
Display Headline
Hospital patient safety grades may misrepresent hospital performance
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Wenke Hwang, PhD, Department of Public Health Sciences, Division of Health Services Research, Penn State University College of Medicine, 600 Centerview Drive, Suite 2200, Hershey, PA 17033; Telephone: 717‐531‐7070; Fax: 717‐531‐4359; E‐mail:[email protected]