The microbiome in celiac disease: Beyond diet-genetic interactions
Inheriting the wrong genes and eating the wrong food (ie, gluten) are necessary for celiac disease to develop, but are not enough by themselves. Something else must be contributing, and evidence is pointing to the mix of bacteria that make our guts their home, collectively called the microbiome.
Celiac disease is a highly prevalent, chronic, immune-mediated form of enteropathy.1 It affects 0.5% to 1% of the population, and although it is mostly seen in people of northern European descent, those in other populations can develop the disease as well. Historically, celiac disease was classified as an infant condition. However, it now commonly presents later in life (between ages 10 and 40) and often with extraintestinal manifestations.2
In this issue of Cleveland Clinic Journal of Medicine, Kochhar et al provide a comprehensive updated review of celiac disease.3
GENES AND GLUTEN ARE NECESSARY BUT NOT SUFFICIENT
Although genetic factors and exposure to gluten in the diet are proven to be necessary for celiac disease to develop, they are not sufficient. Evidence of this is in the numbers: although one-third of the general population carries the HLA susceptibility genes (specifically HLA-DQ2 and DQ8),4 only 2% to 5% of people with these genes develop clinically evident celiac disease.
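The arithmetic behind this mismatch is easy to verify. The sketch below is a back-of-the-envelope check in Python, using only the carrier frequency and penetrance figures quoted above; it shows that the implied population prevalence brackets the observed 0.5% to 1%:

```python
# Back-of-the-envelope check of the figures quoted above.
carrier_frequency = 1 / 3                      # ~one-third carry HLA-DQ2/DQ8
penetrance_low, penetrance_high = 0.02, 0.05   # 2%-5% of carriers get disease

prevalence_low = carrier_frequency * penetrance_low     # ~0.67%
prevalence_high = carrier_frequency * penetrance_high   # ~1.67%

print(f"Implied population prevalence: "
      f"{prevalence_low:.2%} to {prevalence_high:.2%}")
# -> Implied population prevalence: 0.67% to 1.67%
# Consistent with the observed 0.5%-1%, and a reminder that at least
# 95% of genetically susceptible, gluten-exposed people stay healthy.
```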
Additional environmental factors must be contributing to disease development, but these other factors are poorly understood. Some of the possible culprits that might influence the risk of disease occurrence and the timing of its onset include5:
- The amount and quality of gluten ingested—the higher the concentration of gluten, the higher the risk, and different grains have gluten varieties with more or less immunogenic capabilities, ie, T-cell activation properties
- The pattern of infant feeding—the risk may be lower with breastfeeding than with formula
- The age at which gluten is introduced into the diet—the risk may be higher if gluten is introduced earlier.6
More recently, studies of the pathogenesis of celiac disease and gene-environment interactions have expanded beyond host predisposition and dietary factors.
OUR BODIES, OUR MICROBIOMES: A SYMBIOTIC RELATIONSHIP
The role of the human microbiome in autoimmune disease is now being elucidated.7 Remarkably, the microorganisms living in our bodies are estimated to outnumber our own cells by a factor of 10, and their collective genomes exceed our protein-coding gene repertoire by a factor of 100.
The gut microbiome is now considered a true bioreactor with enzymatic and immunologic capabilities beyond (and complementary to) those of its host. The commensal microbiome of the host intestine provides benefits that can be broken down into three broad categories:
- Nutritional—producing essential amino acids and vitamins
- Metabolic—degrading complex polysaccharides from dietary fibers
- Immunologic—shaping the host immune system while cooperating with it against pathogenic microorganisms.
The immunologic function is highly relevant. We have coevolved with our bacteria in a mutually beneficial, symbiotic relationship in which we maintain an active state of low inflammation so that a constant bacterial and dietary antigenic load can be tolerated.
Is there a core human microbiome shared by all individuals? And what is the impact of altering the relative microbial composition (dysbiosis) in physiologic and disease states? To find out, the National Institutes of Health launched the Human Microbiome Project8 in 2008. Important tools in this work include novel culture-independent approaches (high-throughput DNA sequencing and whole-microbiome “shotgun” sequencing with metagenomic analysis) and computational analytical tools.9
An accumulating body of evidence is now available from animal models and human studies correlating states of intestinal dysbiosis (disruption in homeostatic community composition) with various disease processes. These have ranged from inflammatory bowel disease to systemic autoimmune disorders such as psoriasis, inflammatory arthropathies, and demyelinating central nervous system diseases.10–14
RESEARCH INTO THE MICROBIOME IN CELIAC DISEASE
Celiac disease has also served as a unique model for studying this biologic relationship, and the microbiome has been postulated to have a role in its pathogenesis.15 Multiple clinical studies demonstrate that a state of intestinal dysbiosis is indeed associated with celiac disease.
Specifically, decreases in the abundance of Firmicutes spp and increases in Proteobacteria spp have been detected in both children and adults with active celiac disease.16,17 Intriguingly, overrepresentation of Proteobacteria was also correlated with disease activity. Other studies have reported decreases in the proportion of reportedly protective, anti-inflammatory bacteria such as Bifidobacterium and increases in the proportion of Bacteroides and Escherichia coli in patients with active disease.18,19 Altered diversity and altered metabolic function of the microbiota, ie, decreased concentrations of protective short-chain fatty acids, have also been reported in patients with celiac disease.19,20
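To make terms like "relative abundance" and "altered diversity" concrete: studies of this kind typically normalize raw sequencing counts per taxon and summarize the community with a diversity index. A minimal sketch follows, with wholly hypothetical counts (the taxa and numbers are illustrative only, not data from the cited studies):

```python
import math

# Hypothetical read counts per taxon for one intestinal sample.
# Illustrative numbers only -- not data from the studies cited above.
counts = {"Firmicutes": 4200, "Bacteroidetes": 3100,
          "Proteobacteria": 900, "Bifidobacterium": 300}

total = sum(counts.values())
relative_abundance = {taxon: n / total for taxon, n in counts.items()}

# Shannon index H = -sum(p * ln p); a lower H means a less diverse
# community, the kind of shift reported in active celiac disease.
shannon = -sum(p * math.log(p) for p in relative_abundance.values())

for taxon, p in relative_abundance.items():
    print(f"{taxon:15s} {p:6.1%}")
print(f"Shannon diversity H = {shannon:.2f}")
```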
To move beyond correlative studies and mechanistically address the possibility of causation, multiple groups have used a gnotobiotic approach, ie, maintaining animals under germ-free conditions and incorporating microbes of interest. This approach is highly relevant in studying whether the bacterial community composition is capable of modulating loss of tolerance to gluten in genetically susceptible hosts. A few notable examples have been published.
In germ-free rats, long-term feeding of gliadin, but not albumin, from birth until 2 months of age induced moderate small-intestinal damage.21 Similarly, germ-free nonobese diabetic-DQ8 mice developed more severe gluten-induced disease than mice with normal intestinal bacteria.22
These findings suggest that the normal gut microbiome may have intrinsic beneficial properties capable of reducing the inflammatory effects associated with gluten ingestion. Notably, the specific composition of the intestinal microbiome can define the fate of gluten-induced pathology. Mice colonized with commensal microbiota are indeed protected from gluten-induced pathology, while mice colonized with Proteobacteria spp develop a moderate degree of gluten-induced disease. When Escherichia coli derived from patients with celiac disease is added to commensal colonization, the celiac disease-like phenotype develops.23
Taken together, these studies support the hypothesis that the intestinal microbiome may be another environmental factor involved in the development of celiac disease.
QUESTIONS AND CHALLENGES REMAIN
First, the results of clinical studies are not necessarily consistent at the taxonomic level. The fields of metagenomics (which investigates all the genes, and hence the enzymatic functions, present in a given community) and metabolomics (which identifies bacterial end-products, characterizing the community's functional output) are still in their infancy and will be required to further investigate the functionality of the altered microbiome in celiac disease.
Second, the directionality—the causality or consequences of this dysbiosis—and timing—the moment at which changes occur, ie, after introducing gluten or at the time when symptoms appear—remain elusive, and prospective studies in humans will be essential.
Finally, more mechanistic studies in animal models are needed to dissect the host immune response to dietary gluten and perturbation of intestinal community composition. This may lead to the possibility of future interventions in the form of prebiotics, probiotics, or specific metabolites, complementary to gluten avoidance.
In the meantime, increasing disease awareness and rapid diagnosis and treatment continue to be of utmost importance to address the clinical consequences of celiac disease in both children and adults.
1. Guandalini S, Assiri A. Celiac disease: a review. JAMA Pediatr 2014; 168:272–278.
2. Green PH, Cellier C. Celiac disease. N Engl J Med 2007; 357:1731–1743.
3. Kochhar GS, Singh T, Gill A, Kirby DF. Celiac disease: an internist’s perspective. Cleve Clin J Med 2016; 83:217–227.
4. Gutierrez-Achury J, Zhernakova A, Pulit SL, et al. Fine mapping in the MHC region accounts for 18% additional genetic risk for celiac disease. Nat Genet 2015; 47:577–578.
5. Catassi C, Kryszak D, Bhatti B, et al. Natural history of celiac disease autoimmunity in a USA cohort followed since 1974. Ann Med 2010; 42:530–538.
6. Norris JM, Barriga K, Hoffenberg EJ, et al. Risk of celiac disease autoimmunity and timing of gluten introduction in the diet of infants at increased risk of disease. JAMA 2005; 293:2343–2351.
7. Turnbaugh PJ, Ley RE, Hamady M, Fraser-Liggett CM, Knight R, Gordon JI. The human microbiome project. Nature 2007; 449:804–810.
8. NIH HMP Working Group; Peterson J, Garges S, Giovanni M, et al. The NIH Human Microbiome Project. Genome Res 2009; 19:2317–2323.
9. Qin J, Li R, Raes J, et al. A human gut microbial gene catalogue established by metagenomic sequencing. Nature 2010; 464:59–65.
10. Scher JU, Sczesnak A, Longman RS, et al. Expansion of intestinal Prevotella copri correlates with enhanced susceptibility to arthritis. Elife 2013; 2:e01202.
11. Scher JU, Ubeda C, Artacho A, et al. Decreased bacterial diversity characterizes the altered gut microbiota in patients with psoriatic arthritis, resembling dysbiosis in inflammatory bowel disease. Arthritis Rheumatol 2015; 67:128–139.
12. Gao Z, Tseng CH, Strober BE, Pei Z, Blaser MJ. Substantial alterations of the cutaneous bacterial biota in psoriatic lesions. PLoS One 2008; 3:e2719.
13. Hsiao EY, McBride SW, Hsien S, et al. Microbiota modulate behavioral and physiological abnormalities associated with neurodevelopmental disorders. Cell 2013; 155:1451–1463.
14. Gevers D, Kugathasan S, Denson LA, et al. The treatment-naive microbiome in new-onset Crohn’s disease. Cell Host Microbe 2014; 15:382–392.
15. Verdu EF, Galipeau HJ, Jabri B. Novel players in coeliac disease pathogenesis: role of the gut microbiota. Nat Rev Gastroenterol Hepatol 2015; 12:497–506.
16. Sanchez E, Donat E, Ribes-Koninckx C, Fernandez-Murga ML, Sanz Y. Duodenal-mucosal bacteria associated with celiac disease in children. Appl Environ Microbiol 2013; 79:5472–5479.
17. Wacklin P, Kaukinen K, Tuovinen E, et al. The duodenal microbiota composition of adult celiac disease patients is associated with the clinical manifestation of the disease. Inflamm Bowel Dis 2013; 19:934–941.
18. Collado MC, Donat E, Ribes-Koninckx C, Calabuig M, Sanz Y. Specific duodenal and faecal bacterial groups associated with paediatric coeliac disease. J Clin Pathol 2009; 62:264–269.
19. Di Cagno R, De Angelis M, De Pasquale I, et al. Duodenal and faecal microbiota of celiac children: molecular, phenotype and metabolome characterization. BMC Microbiol 2011; 11:219.
20. Schippa S, Iebba V, Barbato M, et al. A distinctive ‘microbial signature’ in celiac pediatric patients. BMC Microbiol 2010; 10:175.
21. Stepankova R, Tlaskalova-Hogenova H, Sinkora J, Jodl J, Fric P. Changes in jejunal mucosa after long-term feeding of germfree rats with gluten. Scand J Gastroenterol 1996; 31:551–557.
22. Galipeau HJ, Rulli NE, Jury J, et al. Sensitization to gliadin induces moderate enteropathy and insulitis in nonobese diabetic-DQ8 mice. J Immunol 2011; 187:4338–4346.
23. Galipeau HJ, Verdu EF. Gut microbes and adverse food reactions: focus on gluten related disorders. Gut Microbes 2014; 5:594–605.
Blood pressure management in the wake of SPRINT
High blood pressure is a major cause of morbidity and death worldwide.1 Observational data from the general population show a log-linear relationship between both systolic and diastolic blood pressure and the rate of cardiovascular death.2 Placebo-controlled trials have shown a clear-cut benefit in treating moderate to severe hypertension, defined by diastolic pressure in the initial trials and by systolic pressure subsequently.3 What remains uncertain is the optimal target for a particular patient, and whether other factors such as the number of medications, the starting blood pressure, and comorbidities should influence this target.
Publication of the Systolic Blood Pressure Intervention Trial (SPRINT) furthered the debate regarding the optimal blood pressure target in hypertension treatment.4 SPRINT randomized 9,361 nondiabetic persons with systolic pressure higher than 130 mm Hg and increased cardiovascular risk but without prior stroke to intensive therapy (goal systolic pressure < 120 mm Hg) or standard therapy as control (goal systolic pressure < 140 mm Hg) and showed a significant reduction in the composite end point and all-cause mortality—at the expense of an increase in serious adverse events.
EARLIER TRIALS WERE GENERALLY NEGATIVE
Before SPRINT, approximately 20 randomized controlled trials attempted to define whether a more intensive target was better than standard control. These included the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial, restricted to patients with diabetes,5 and the Secondary Prevention of Small Subcortical Strokes (SPS3) trial, restricted to patients with lacunar infarcts.6 These two groups of patients were specifically excluded from SPRINT.4 Many of the other trials had primary renal end points, although several had primary cardiovascular end points.
As we reviewed previously in this Journal, individually these trials were generally inconclusive.7 When analyzed by meta-analysis, a significant benefit was found for cardiovascular events, stroke, and end-stage renal disease, with a marginal benefit for myocardial infarction.8 The validity of such analysis may be questioned due to heterogeneous populations, lack of individual patient data, different blood pressure targets and medication regimens, and different primary end points.
Together, ACCORD in patients with diabetes, SPS3 in patients with stroke, and SPRINT in patients at increased cardiovascular risk but without diabetes or stroke cover most hypertensive patients with more than low cardiovascular risk. All three trials were government-funded, and ACCORD and SPRINT used the same blood pressure targets and treatment algorithm. It remains speculative why ACCORD was essentially negative and SPRINT was positive.
CAUTION IN GENERALIZING THE RESULTS
In this issue of the Journal, Thomas and colleagues9 review the SPRINT results in detail and attempt to reconcile the disparity with ACCORD.
We agree with their interpretation that risks and benefits of a more intensive blood pressure target (ie, < 120 mm Hg systolic) need to be addressed in the individual patient and do not apply across the board to all hypertensive patients. This more intensive target would be appropriate for patients fulfilling criteria for entry into SPRINT, ie, no diabetes or prior stroke. They must be able to tolerate more intensive therapy and should not be frail or at risk for falls. Furthermore, the increased hypertension medication burden required for stricter control will increase side effects and complexity of overall medication regimens, and will possibly foster noncompliance.
In our opinion, one must be careful in generalizing the results of SPRINT beyond the type of patient enrolled. At best, one can say that a lower target is acceptable in a patient over age 50 at increased cardiovascular risk but without diabetes or stroke.
SPRINT may not even be representative of all such patients, however. Patients requiring more than four medications were excluded from the trial, as were patients with systolic pressure higher than 180 mm Hg, or with pressure higher than 170 mm Hg requiring two medications, or with pressure higher than 160 mm Hg requiring three medications, or with pressure higher than 150 mm Hg requiring four medications. Hence, SPRINT has not determined the appropriate approach to the patient with a systolic pressure between 150 and 180 mm Hg already on multiple medications above these cutoffs. It is not hard to envision the potential for adverse events and drug interactions using four or more antihypertensive medications to achieve a lower target, in addition to other classes of medications that many patients need.
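The tiered entry cutoffs are easier to appreciate written out as explicit logic. The following is a simplified sketch of the blood pressure and medication criteria exactly as summarized above (the actual protocol applied many additional criteria):

```python
def sprint_bp_eligible(systolic: int, n_medications: int) -> bool:
    """Simplified SPRINT entry check based solely on the cutoffs
    summarized in the text; the real protocol had further criteria."""
    if n_medications > 4:
        return False                 # more than four agents: excluded
    if systolic > 180:
        return False                 # excluded regardless of medications
    if systolic > 170 and n_medications >= 2:
        return False
    if systolic > 160 and n_medications >= 3:
        return False
    if systolic > 150 and n_medications >= 4:
        return False
    return systolic > 130            # entry required systolic > 130 mm Hg

# The patient the trial leaves unaddressed: 165 mm Hg on three medications.
print(sprint_bp_eligible(165, 3))    # False
# The typical enrollee: 139 mm Hg on about two medications.
print(sprint_bp_eligible(139, 2))    # True
```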
The average systolic pressure on entry into SPRINT was 139 mm Hg, and patients were taking an average of 1.8 medications. In fact, one-third of patients had systolic pressures between 130 and 132 mm Hg, a range where most physicians would probably not want to intensify therapy. By protocol, such patients in the standard treatment group in SPRINT would actually have had their baseline antihypertensive therapy reduced if the systolic pressure fell below 130 mm Hg on one occasion or below 135 mm Hg on two consecutive visits. Reduction of therapy would seem to bias the trial against the standard treatment. An identical algorithm was used in ACCORD.
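The down-titration rule for the standard arm can likewise be stated compactly. A sketch of the trigger as described above, ignoring the protocol's other provisions:

```python
def reduce_standard_arm_therapy(recent_sbp_readings: list) -> bool:
    """Down-titration trigger for the standard group as described above:
    reduce therapy if systolic fell below 130 mm Hg at a single visit, or
    below 135 mm Hg at two consecutive visits. A sketch of the rule only."""
    if any(sbp < 130 for sbp in recent_sbp_readings):
        return True
    return any(a < 135 and b < 135
               for a, b in zip(recent_sbp_readings, recent_sbp_readings[1:]))

print(reduce_standard_arm_therapy([138, 134, 133]))   # True
print(reduce_standard_arm_therapy([138, 134, 137]))   # False
```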
We are unable to reconcile the differences in outcome between ACCORD and SPRINT, although they were congruent in one important aspect: significantly higher rates of serious adverse events with more intensive therapy. ACCORD had fewer patients, but they were at higher risk since all had diabetes, and more had previous cardiovascular events (34% vs 17% in SPRINT). This is reflected in higher event rates:
- Myocardial infarction occurred in 1.13% per year in the intensive therapy group, and 1.28% per year with standard therapy in ACCORD, compared with 0.65% and 0.78% per year, respectively, in SPRINT.
- Cardiovascular death occurred in 0.52% per year with intensive therapy and 0.49% per year with standard therapy in ACCORD, compared with 0.25% and 0.43% per year, respectively, in SPRINT. Event rates for stroke were similar.
Overall, 445 primary end points occurred in ACCORD compared with 562 in SPRINT. After subtracting heart failure from the SPRINT data (not included in the primary end point of ACCORD), 400 events occurred, actually fewer than in ACCORD. The early termination of SPRINT may be partly to blame. In our opinion, ACCORD and SPRINT were equally powered. While cardiovascular event risk reductions in ACCORD trended in the same direction as those in SPRINT, the total mortality rate trended in the opposite direction. Perhaps the play of chance is the best explanation.
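Framing the annualized rates above as absolute risk reductions and numbers needed to treat puts the small absolute differences in perspective. This is our illustration computed from the per-year figures quoted above, not values reported by the trials:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT) per year,
# computed from the annualized rates quoted above (standard vs intensive).
rates_per_year = {
    "MI, ACCORD":       (0.0128, 0.0113),
    "MI, SPRINT":       (0.0078, 0.0065),
    "CV death, ACCORD": (0.0049, 0.0052),   # intensive arm slightly worse
    "CV death, SPRINT": (0.0043, 0.0025),
}

for label, (standard, intensive) in rates_per_year.items():
    arr = standard - intensive           # positive favors intensive therapy
    if arr > 0:
        print(f"{label:17s} ARR {arr:+.2%}/yr, "
              f"NNT ~{1 / arr:,.0f} patient-years")
    else:
        print(f"{label:17s} ARR {arr:+.2%}/yr (no benefit)")
```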
ONE TARGET DOES NOT FIT ALL
SPRINT clearly added much needed data, but results should be interpreted in the context of previous trials as well as of the specific inclusion and exclusion criteria. One target does not fit all, and systolic pressure of less than 120 mm Hg should not automatically be the target for all hypertensive patients.
Should patients with diabetes be targeted to systolic pressure of less than 140 mm Hg based on the ACCORD results, and patients with stroke to systolic pressure of less than 130 mm Hg based on the SPS3 results? We are unsure. More data are clearly required, especially in patients already on multiple antihypertensive medications with unacceptable blood pressure.
As pointed out by Thomas and colleagues, lower systolic pressure may be better in select patients, but only as long as adverse events can be avoided or managed.
1. Lim SS, Vos T, Flaxman AD, et al. A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012; 380:2224–2260.
2. Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet 2002; 360:1903–1913.
3. Psaty BM, Smith NL, Siscovick DS, et al. Health outcomes associated with antihypertensive therapies used as first-line agents. A systematic review and meta-analysis. JAMA 1997; 277:739–745.
4. SPRINT Research Group; Wright JT Jr, Williamson JD, Whelton PK, et al. A randomized trial of intensive versus standard blood-pressure control. N Engl J Med 2015; 373:2103–2116.
5. ACCORD Study Group; Cushman WC, Evans GW, Byington RP, et al. Effects of intensive blood-pressure control in type 2 diabetes mellitus. N Engl J Med 2010; 362:1575–1585.
6. SPS3 Study Group; Benavente OR, Coffey CS, Conwit R, et al. Blood-pressure targets in patients with recent lacunar stroke: the SPS3 randomised trial. Lancet 2013; 382:507–515.
7. Filippone EJ, Foy A, Newman E. Goal-directed antihypertensive therapy: lower may not always be better. Cleve Clin J Med 2011; 78:123–133.
8. Lv J, Neal B, Ehteshami P, et al. Effects of intensive blood pressure lowering on cardiovascular and renal outcomes: a systematic review and meta-analysis. PLoS Med 2012; 9:e1001293.
9. Thomas G, Nally JV, Pohl MA. Interpreting SPRINT: how low should you go? Cleve Clin J Med 2016; 83:187–195.
High blood pressure is a major cause of morbidity and death worldwide.1 Observational data from the general population show a log-linear relationship between both systolic and diastolic blood pressure and the rate of cardiovascular death.2 Placebo-controlled trials have shown a clear-cut benefit in treating moderate to severe hypertension based on diastolic pressure in initial trials, and systolic pressure subsequently.3 What remains uncertain is the optimal target for a particular patient, and whether other factors such as number of medications, starting blood pressure, and other comorbidities should influence this target.
Publication of the Systolic Blood Pressure Intervention Trial (SPRINT) furthered the debate regarding the optimal blood pressure target in hypertension treatment.4 SPRINT randomized 9,361 nondiabetic persons with systolic pressure higher than 130 mm Hg and increased cardiovascular risk but without prior stroke to intensive therapy (goal systolic pressure < 120 mm Hg) or standard therapy as control (goal systolic pressure < 140 mm Hg) and showed a significant reduction in the composite end point and all-cause mortality—at the expense of an increase in serious adverse events.
EARLIER TRIALS WERE GENERALLY NEGATIVE
Before SPRINT, approximately 20 randomized controlled trials attempted to define whether a more intensive target was better than standard control. These included the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial restricted to patients with diabetes5 and the Secondary Prevention of Small Subcortical Strokes (SPS3) trial restricted to patients with lacunar infarcts.6 These two groups of patients were specifically excluded from SPRINT.6 Many of the other trials had primary renal end points, although several had primary cardiovascular end points.
As we reviewed previously in this Journal, individually these trials were generally inconclusive.7 When analyzed by meta-analysis, a significant benefit was found for cardiovascular events, stroke, and end-stage renal disease, with a marginal benefit for myocardial infarction.8 The validity of such analysis may be questioned due to heterogeneous populations, lack of individual patient data, different blood pressure targets and medication regimens, and different primary end points.
Together, ACCORD in patients with diabetes, SPS3 in patients with stroke, and SPRINT in patients at increased cardiovascular risk but without diabetes or stroke cover most hypertensive patients with more than low cardiovascular risk. All three trials were government-funded, and ACCORD and SPRINT used the same blood pressure targets and treatment algorithm. It remains speculative why ACCORD was essentially negative and SPRINT was positive.
CAUTION IN GENERALIZING THE RESULTS
In this issue of the Journal, Thomas and colleagues9 review the SPRINT results in detail and attempt to reconcile the disparity with ACCORD.
We agree with their interpretation that risks and benefits of a more intensive blood pressure target (ie, < 120 mm Hg systolic) need to be addressed in the individual patient and do not apply across the board to all hypertensive patients. This more intensive target would be appropriate for patients fulfilling criteria for entry into SPRINT, ie, no diabetes or prior stroke. They must be able to tolerate more intensive therapy and should not be frail or at risk for falls. Furthermore, the increased hypertension medication burden required for stricter control will increase side effects and complexity of overall medication regimens, and will possibly foster noncompliance.
In our opinion, one must be careful in generalizing the results of SPRINT to more than the type of patient enrolled. At best, one can say that a lower target is acceptable in a patient over age 50 at increased cardiovascular risk but without diabetes or stroke.
SPRINT may not even be representative of all such patients, however. Patients requiring more than four medications were excluded from the trial, as were patients with systolic pressure higher than 180 mm Hg, or with pressure higher than 170 mm Hg requiring two medications, or with pressure higher than 160 mm Hg requiring three medications, or with pressure higher than 150 mm Hg requiring four medications. Hence, SPRINT has not determined the appropriate approach to the patient with a systolic pressure between 150 and 180 mm Hg already on multiple medications above these cutoffs. It is not hard to envision the potential for adverse events and drug interactions using four or more antihypertensive medications to achieve a lower target, in addition to other classes of medications that many patients need.
The average systolic pressure on entry into SPRINT was 139 mm Hg, and patients were taking an average of 1.8 medications. In fact, one-third of patients had systolic pressures between 130 and 132 mm Hg, a range where most physicians would probably not want to intensify therapy. By protocol, such patients in the standard treatment group in SPRINT would actually have had their baseline antihypertensive therapy reduced if the systolic pressure fell below 130 mm Hg on one occasion or below 135 mm Hg on two consecutive visits. Reduction of therapy would seem to bias the trial against the standard treatment. An identical algorithm was used in ACCORD.
We are unable to reconcile the differences in outcome between ACCORD and SPRINT, although they were congruent in one important aspect: significantly higher rates of serious adverse events with more intensive therapy. ACCORD had fewer patients, but they were at higher risk since all had diabetes, and more had previous cardiovascular events (34% vs 17% in SPRINT). This is reflected in higher event rates:
- Myocardial infarction occurred in 1.13% per year in the intensive therapy group, and 1.28% per year with standard therapy in ACCORD, compared with 0.65% and 0.78% per year, respectively, in SPRINT.
- Cardiovascular death occurred in 0.52% per year with intensive therapy and 0.49% per year with standard therapy in ACCORD, compared with 0.25% and 0.43% per year, respectively, in SPRINT. Event rates for stroke were similar.
Overall, 445 primary end points occurred in ACCORD compared with 562 with SPRINT. After subtracting heart failure from the SPRINT data (not included in the primary end point of ACCORD), 400 events occurred, actually less than in ACCORD. The early termination of SPRINT may be partly to blame. In our opinion ACCORD and SPRINT were equally powered. While cardiovascular event risk reductions in ACCORD trended in the same direction as those in SPRINT, the total mortality rate trended in the opposite direction. Perhaps the play of chance is the best explanation.
ONE TARGET DOES NOT FIT ALL
SPRINT clearly added much needed data, but results should be interpreted in the context of previous trials as well as of the specific inclusion and exclusion criteria. One target does not fit all, and systolic pressure of less than 120 mm Hg should not automatically be the target for all hypertensive patients.
Should patients with diabetes be targeted to systolic pressure of less than 140 mm Hg based on the ACCORD results, and patients with stroke to systolic pressure of less than 130 mm Hg based on the SPS3 results? We are unsure. More data are clearly required, especially in patients already on multiple antihypertensive medications with unacceptable blood pressure.
As pointed out by Thomas and colleagues, lower systolic pressure may be better in select patients, but only as long as adverse events can be avoided or managed.
High blood pressure is a major cause of morbidity and death worldwide.1 Observational data from the general population show a log-linear relationship between both systolic and diastolic blood pressure and the rate of cardiovascular death.2 Placebo-controlled trials have shown a clear-cut benefit in treating moderate to severe hypertension based on diastolic pressure in initial trials, and systolic pressure subsequently.3 What remains uncertain is the optimal target for a particular patient, and whether other factors such as number of medications, starting blood pressure, and other comorbidities should influence this target.
Publication of the Systolic Blood Pressure Intervention Trial (SPRINT) furthered the debate regarding the optimal blood pressure target in hypertension treatment.4 SPRINT randomized 9,361 nondiabetic persons with systolic pressure higher than 130 mm Hg and increased cardiovascular risk but without prior stroke to intensive therapy (goal systolic pressure < 120 mm Hg) or standard therapy as control (goal systolic pressure < 140 mm Hg) and showed a significant reduction in the composite end point and all-cause mortality—at the expense of an increase in serious adverse events.
EARLIER TRIALS WERE GENERALLY NEGATIVE
Before SPRINT, approximately 20 randomized controlled trials attempted to define whether a more intensive target was better than standard control. These included the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial restricted to patients with diabetes5 and the Secondary Prevention of Small Subcortical Strokes (SPS3) trial restricted to patients with lacunar infarcts.6 These two groups of patients were specifically excluded from SPRINT.6 Many of the other trials had primary renal end points, although several had primary cardiovascular end points.
As we reviewed previously in this Journal, individually these trials were generally inconclusive.7 When analyzed by meta-analysis, a significant benefit was found for cardiovascular events, stroke, and end-stage renal disease, with a marginal benefit for myocardial infarction.8 The validity of such analysis may be questioned due to heterogeneous populations, lack of individual patient data, different blood pressure targets and medication regimens, and different primary end points.
Together, ACCORD in patients with diabetes, SPS3 in patients with stroke, and SPRINT in patients at increased cardiovascular risk but without diabetes or stroke cover most hypertensive patients with more than low cardiovascular risk. All three trials were government-funded, and ACCORD and SPRINT used the same blood pressure targets and treatment algorithm. It remains speculative why ACCORD was essentially negative and SPRINT was positive.
CAUTION IN GENERALIZING THE RESULTS
In this issue of the Journal, Thomas and colleagues9 review the SPRINT results in detail and attempt to reconcile the disparity with ACCORD.
We agree with their interpretation that risks and benefits of a more intensive blood pressure target (ie, < 120 mm Hg systolic) need to be addressed in the individual patient and do not apply across the board to all hypertensive patients. This more intensive target would be appropriate for patients fulfilling criteria for entry into SPRINT, ie, no diabetes or prior stroke. They must be able to tolerate more intensive therapy and should not be frail or at risk for falls. Furthermore, the increased hypertension medication burden required for stricter control will increase side effects and complexity of overall medication regimens, and will possibly foster noncompliance.
In our opinion, one must be careful in generalizing the results of SPRINT to more than the type of patient enrolled. At best, one can say that a lower target is acceptable in a patient over age 50 at increased cardiovascular risk but without diabetes or stroke.
SPRINT may not even be representative of all such patients, however. Patients requiring more than four medications were excluded from the trial, as were patients with systolic pressure higher than 180 mm Hg, or with pressure higher than 170 mm Hg requiring two medications, or with pressure higher than 160 mm Hg requiring three medications, or with pressure higher than 150 mm Hg requiring four medications. Hence, SPRINT has not determined the appropriate approach to the patient with a systolic pressure between 150 and 180 mm Hg already on multiple medications above these cutoffs. It is not hard to envision the potential for adverse events and drug interactions using four or more antihypertensive medications to achieve a lower target, in addition to other classes of medications that many patients need.
The average systolic pressure on entry into SPRINT was 139 mm Hg, and patients were taking an average of 1.8 medications. In fact, one-third of patients had systolic pressures between 130 and 132 mm Hg, a range where most physicians would probably not want to intensify therapy. By protocol, such patients in the standard treatment group in SPRINT would actually have had their baseline antihypertensive therapy reduced if the systolic pressure fell below 130 mm Hg on one occasion or below 135 mm Hg on two consecutive visits. Reduction of therapy would seem to bias the trial against the standard treatment. An identical algorithm was used in ACCORD.
We are unable to reconcile the differences in outcome between ACCORD and SPRINT, although they were congruent in one important aspect: significantly higher rates of serious adverse events with more intensive therapy. ACCORD had fewer patients, but they were at higher risk since all had diabetes, and more had previous cardiovascular events (34% vs 17% in SPRINT). This is reflected in higher event rates:
- Myocardial infarction occurred in 1.13% per year in the intensive therapy group, and 1.28% per year with standard therapy in ACCORD, compared with 0.65% and 0.78% per year, respectively, in SPRINT.
- Cardiovascular death occurred in 0.52% per year with intensive therapy and 0.49% per year with standard therapy in ACCORD, compared with 0.25% and 0.43% per year, respectively, in SPRINT. Event rates for stroke were similar.
Overall, 445 primary end points occurred in ACCORD compared with 562 in SPRINT. After heart failure events (not included in the primary end point of ACCORD) are subtracted from the SPRINT data, 400 events remain, actually fewer than in ACCORD. The early termination of SPRINT may be partly to blame. In our opinion, ACCORD and SPRINT were equally powered. While the cardiovascular event risk reductions in ACCORD trended in the same direction as those in SPRINT, the total mortality rate trended in the opposite direction. Perhaps the play of chance is the best explanation.
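To make that comparison concrete, here is the arithmetic implied by these figures; the heart failure count of 162 is inferred from the totals stated above rather than reported directly in this editorial:

$$562_{\text{SPRINT}} - 162_{\text{heart failure}} = 400 < 445_{\text{ACCORD}}$$

That is, once heart failure is excluded, SPRINT accrued fewer comparable primary end points than ACCORD despite enrolling more patients, consistent with the suggestion that early termination limited its event accrual.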
ONE TARGET DOES NOT FIT ALL
SPRINT clearly added much needed data, but its results should be interpreted in the context of previous trials as well as the trial's specific inclusion and exclusion criteria. One target does not fit all, and a systolic pressure of less than 120 mm Hg should not automatically be the target for all hypertensive patients.
Should patients with diabetes be targeted to a systolic pressure of less than 140 mm Hg based on the ACCORD results, and patients with stroke to a systolic pressure of less than 130 mm Hg based on the SPS3 results? We are unsure. More data are clearly required, especially in patients whose blood pressure remains unacceptable despite multiple antihypertensive medications.
As pointed out by Thomas and colleagues, lower systolic pressure may be better in select patients, but only as long as adverse events can be avoided or managed.
- Lim SS, Vos T, Flaxman AD, et al. A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012; 380:2224–2260.
- Prospective Studies Collaboration. Age-specific relevance of usual blood pressure to vascular mortality: a meta-analysis of individual data for one million adults in 61 prospective studies. Lancet 2002; 360:1903–1913.
- Psaty BM, Smith NL, Siscovick DS, et al. Health outcomes associated with antihypertensive therapies used as first-line agents. A systematic review and meta-analysis. JAMA 1997; 277:739–745.
- SPRINT Research Group; Wright JT Jr, Williamson JD, Whelton PK, et al. A randomized trial of intensive versus standard blood-pressure control. N Engl J Med 2015; 373:2103–2116.
- ACCORD Study Group; Cushman WC, Evans GW, Byington RP, et al. Effects of intensive blood-pressure control in type 2 diabetes mellitus. N Engl J Med 2010; 362:1575–1585.
- SPS3 Study Group; Benavente OR, Coffey CS, Conwit R, et al. Blood-pressure targets in patients with recent lacunar stroke: the SPS3 randomised trial. Lancet 2013; 382:507–515.
- Filippone EJ, Foy A, Newman E. Goal-directed antihypertensive therapy: lower may not always be better. Cleve Clin J Med 2011; 78:123–133.
- Lv J, Neal B, Ehteshami P, et al. Effects of intensive blood pressure lowering on cardiovascular and renal outcomes: a systematic review and meta-analysis. PLoS Med 2012; 9:e1001293.
- Thomas G, Nally JV, Pohl MA. Interpreting SPRINT: how low should you go? Cleve Clin J Med 2016; 83:187–195.
The emotional impact of a malpractice suit on physicians: Maintaining resilience
Physicians who have been involved in malpractice actions are all too familiar with the range of emotions they experience during the process. Anxiety, fear, frustration, remorse, self-doubt, shame, betrayal, anger…no pleasant feelings here. Add malpractice stress to the high level of pressure experienced at home and at work, and crisis looms.
In his commentary in this issue, Kevin Giordano states, “it is not easy to stay connected in a healthcare system in which the system’s structure is driving physicians and other members of the healthcare team toward disconnection.”1
Because of the nature of our work as physicians, we are isolated, and malpractice isolates us further. Because of embarrassment, we avoid talking with our colleagues and managers. Legal counsel reminds us to correspond with no one about the details of the case. Spouses and friends may offer support, but it is difficult, perhaps impossible, to be reassured.
Isolation fuels our self-doubt and erodes our confidence, leading us to focus on what may go wrong, rather than on healing. Every decision is fraught with anxiety, and efficiency evaporates. Paralysis may set in, leading to disengagement from patient care and increasing the chance of further problems.
IT TAKES RESILIENCE TO THRIVE
It takes resilience to thrive in today’s pressure-cooker healthcare environment, let alone in the setting of malpractice stress. Resilient people are able to face reality and see a better future, put things into perspective, and bounce back from adversity.2 Though this definition is not specific to caregiver or malpractice stress, it is applicable. Resilience, a trait that protects against stress and burnout, is relevant at the personal, managerial, and system levels. It is an essential component of wellness, and maintaining it requires perpetual attention to self-care.
Studies of physicians who have avoided burnout reveal remarkably consistent qualities, including finding gratification related to work, maintaining useful habits and practices, and developing attitudes that keep them resilient.3 Rather than adding activities to their full schedules, these physicians stayed resilient through mindfulness of various aspects of their daily lives. Interactions with colleagues—discussing cases, treatments, and outcomes (including errors)—proved vital. Professional development, encompassing activities such as continuing education, coaching, mentoring, and counseling, was recognized as an important self-directed resilience measure. Maintenance of relationships with family and friends, cultivation of leisure-time activities, and appreciation of the need for personal reflection time were traits often found in resilient physicians.
FOSTERING RESILIENCE
As part of the Mayo Clinic’s biannual survey of its physicians, Shanafelt et al4 studied the relationships between the qualities of physician leaders and burnout and satisfaction among the physicians they supervised. Many of the desirable leadership traits were related to building relationships through respectful communication, along with providing opportunities for personal and professional development. The acknowledgment that resilient, healthy physicians are satisfied, productive, and able to provide safer, higher-quality patient care should lead to the establishment of physician wellness as a “dashboard metric.” Such a metric would make priorities clear by rewarding managers who foster self-care and resilience among their staff.
Likewise, at the healthcare system level, Beckman5 recognized that organizations can provide opportunities to promote resilience among caregivers. Organizational initiatives that set the stage for resilience include:
- Curricula to enhance communication with patients, coworkers, and family
- “Best practices” for efficient and effective patient care
- Self-care through health insurance incentives and educational sessions
- Accessible, affordable, and confidential behavioral health support
- Time for self-care activities during the workday
- Coaching and mentoring programs
- Narrative-and-reflection groups and mindfulness training.5
Through an atmosphere of support for resilience, organizations provide a place for physicians to maintain a sense of meaning and purpose in their work. For individuals facing malpractice action, this infrastructure can be used to weather the storm. As Mr. Giordano writes, staying engaged “may allow you to draw meaning and reconciliation from the fact that throughout the patient’s illness, undeterred by the complexities of today’s healthcare system, you remained the attentive and compassionate healer you hoped to be when you first became a healthcare professional.”1 We must pay attention to developing individual physicians, educating managers, and building systems so that caregivers can remain engaged and resilient. It may help those affected by malpractice stress, and perhaps as importantly, it may reduce the chance of future “disconnection” leading to recourse in the legal system.
- Giordano KC. It is not the critic’s voice that should count. Cleve Clin J Med 2016; 83:174–176.
- Coutu DL. How resilience works. Harv Bus Rev 2002; 80(5):46–55.
- Zwack J, Schweitzer J. If every fifth physician is affected by burnout, what about the other four? Acad Med 2013; 88:382–389.
- Shanafelt TD, Gorringe G, Menaker R, et al. Impact of organizational leadership on physician burnout and satisfaction. Mayo Clin Proc 2015; 90:432–440.
- Beckman H. The role of medical culture in the journey to resilience. Acad Med 2015; 90:710–712.
Timely Discharge Communication
In July 2003, as a fresh intern, I was introduced to care transitions and our tool for information transfer at hospital discharge: the fax machine. After writing our discharge order and discharge prescriptions, the team would compose the discharge summary and transmit the document via fax. I asked my resident where these faxes were going, because they were all sent to the same number in the hospital. Amusingly, he did not know. Summaries were completed within days, or sometimes weeks, of discharge and faxed to a mysterious destination for filing and, presumably, for dissemination to outside providers. The message was clear to me: discharge summaries were not very useful or important, and they were definitely not seen as a critical part of the care-transition process.
This attitude toward the discharge summary is not surprising. Historically, when physicians cared for their patients prior to, during, and after hospitalization, the goal of the discharge summary was to document patients' care for hospital records. It was not critical as a communication tool unless a patient was being transferred to another healthcare facility and a new care team. However, that all changed with decreasing hospital length of stay, the contemporaneous rise in postacute care discharges, the rise of the hospitalist care model, and the resulting transition of care from hospitalist to outpatient physician. Clear, rapid completion and communication of discharge summaries became essential for safe transitions in care.
The lack of focus on the discharge summary as a communication tool is reflected in the regulations and standards of accreditation bodies. In 1986, the Medicare Conditions of Participation required that inpatient records be completed within 30 days of discharge. Despite all of the changes in healthcare, the 30-day requirement for discharge summary completion has persisted, often as a medical staff requirement. Similarly, The Joint Commission requires that discharge summaries include 6 components (reason for hospitalization, findings, treatment provided, discharge condition, instructions, and physician signature) but does not specify a time frame. As a result of this lack of emphasis on timely completion, studies have shown that although summaries usually include the core elements, they are not completed in a timely fashion. Consequently, most postdischarge visits occur without the benefit of a discharge summary.[1] The most complex patients, who ideally are seen within a few days of discharge, are the least likely to have a discharge summary available at the first postdischarge visit.
Although it seems intuitively obvious that more timely communication of discharge summaries should lead to better outcomes, especially lower readmission rates, few studies have examined this issue, and the findings have not been consistent.[2, 3, 4, 5] Is it possible that physicians and other members of the healthcare team already communicate with each other through telephone calls and text messages, especially about the sickest patients? If so, timely discharge summaries would have only a small marginal effect on outcomes. The study in this issue of the Journal of Hospital Medicine by Hoyer and colleagues is therefore a welcome addition to the literature.[6] They found that completion of the discharge summary 3 or more days after discharge was associated with readmission (adjusted odds ratio 1.09), and the odds ratio increased with every additional 3-day delay in completion.
It is possible that the analysis by Hoyer et al. underestimated the benefit of timely discharge summaries. To achieve its full benefit, the discharge summary must be completed, accurately delivered, read by the receiving provider, and used at the first follow-up visit. Their claims-based analysis did not capture these latter elements, which would bias their results toward the null hypothesis. Future studies should examine how receipt of a summary, as opposed to its transmission, is associated with postdischarge outcomes.
In subgroup analyses, no associations between discharge summary timeliness and readmissions were found for patients cared for on the gynecology-obstetrics and surgical sciences services. Although caution is always needed when interpreting subgroup analyses, the lack of association may be attributable to the relatively acute conditions of many patients on these services, to the relative provider continuity that persists in the surgical disciplines, or to more frequent use of other means of communication (eg, postdischarge phone calls among providers), any of which would mitigate the impact of the written discharge summary. Additional studies are needed to examine these issues. Studies should also examine how community or social factors might attenuate the benefit of timely communication, and should explore the effect of discharge summaries on outcomes for patients admitted to an observation level of care, which is increasingly common and for which discharge summaries are less likely to be required.
The findings of the study by Hoyer et al. support proposed federal legislation, the Improving Medicare Post-Acute Care Transformation Act of 2014. The proposed rule for discharge planning would change the Medicare Conditions of Participation to require transmission of discharge information, including the discharge summary, within 48 hours of discharge.[7]
Fortunately, the work of preparing and transmitting the discharge summary is already part of the physician workflow, albeit often delayed. This traditional means of communication could even remain unchanged in form if the workflow were simply reordered so that the summary is completed and sent at the time of discharge; no additional work would be created. With the hospitalization fresher in memory, the work might even be reduced. This efficiency presents a reasonable and immediately actionable appeal to providers.
The challenge to providers and systems remains to refine the quality and efficiency of communication and to move health communication into the 21st century. Tremendous potential exists for interactive communication among providers at discharge, which would build not only the quality of the information delivered but possibly also the qualitative experience of communication, strengthening relationships in our increasingly complex and fragmented delivery networks. This may be a disappointment to the manufacturers of fax machines, but it will be a welcome improvement for caregivers and patients.
Disclosure
Nothing to report.
- Deficits in communication and information transfer between hospital-based and primary care physicians: implications for patient safety and continuity of care. JAMA. 2007;297(8):831–841.
- Redefining and redesigning hospital discharge to enhance patient care: a randomized controlled study. J Gen Intern Med. 2008;23(8):1228–1233.
- A reengineered hospital discharge program to decrease rehospitalization: a randomized trial. Ann Intern Med. 2009;150(3):178–187.
- Reduction of 30-day postdischarge hospital readmission or emergency department (ED) visit rates in high-risk elderly medical patients through delivery of a targeted care bundle. J Hosp Med. 2009;4(4):211–218.
- Association of discharge summary quality with readmission risk for patients hospitalized with heart failure exacerbation. Circ Cardiovasc Qual Outcomes. 2015;8(1):109–111.
- Association between days-to-complete inpatient discharge summaries with all-payer hospital readmissions in Maryland. J Hosp Med. 2016;11(00):000–000.
- Revisions to requirements for discharge planning for hospitals, critical access hospitals, and home health agencies. Fed Regist. 2015;80(212):68126–68155.
What Gets Lost
This issue of the Journal of Hospital Medicine highlights an important contribution to the evolving state of graduate medical education (GME). The study assesses the relationship between attending physician workload and teaching effectiveness and patient safety.[1]
From the outset, it is important to note that although the focus of this study is teaching on the wards, teaching is not necessarily synonymous with learning on the wards. Even if a busy service compromises a faculty member's teaching, more patients on a service might augment a resident's learning, from patients, from peers, from active clinical decision making, and from overall exposure to a diversity of disease.
The independent variable in this study is intensity, with the presumption that the number of patients is proportional to intensity, as codified by the Accreditation Council for Graduate Medical Education (ACGME) regulations regarding caps on admissions and service size. However, are 10 patients with single-organ chest pain the same intensity as 5 septic patients? The authors address this issue as well as possible by integrating expected mortality as a surrogate measure of intensity. Yet, given the heterogeneity of severity of illness even within a diagnosis, this too is likely to be an inaccurate measure of the true intensity of a service. And such measures do not touch upon the social intensity that varies widely from patient to patient and that might be more time-consuming and mentally exhausting than managing the diagnosis itself.
These limitations aside, this study's biggest contribution is that it raises the question that will define GME in the years to come: How does learning fluctuate with service intensity? The Yerkes-Dodson curve, published in 1908, defines the relationship between stress and performance (Figure 1).[2] Many have interpreted the ACGME rules on admission caps and duty hours as being designed to make a kinder, gentler learning environment. However, as the curve suggests, optimizing service intensity (stress) is about much more than being nice; it is about optimizing performance, both in patient care and in learning. The question of how learning fluctuates with service intensity might be better framed as: What gets lost as you move to the right of the optimal stress zone on the Yerkes-Dodson curve?
Quality is first. This study correlates intensity with adverse events, and though the association is modest, it likely underestimates the true magnitude of the problem. The measures in this study are documented adverse events and thus are unlikely to capture the near misses that increase with heightened stress and intensity. Mistakes increase as mental bandwidth becomes insufficient to think through the consequences of each decision. Slips (things you know you need to do but forget to do) increase as the mind becomes distracted.
Good work is next. All hospitalists know that it is possible to get a patient in and out of the hospital, but it is also possible to do so with such poor quality that the patient comes right back. Csikszentmihalyi described the concept of flow: the ability to become fully immersed in a task, concentrating on nothing except the task at hand.[3] What comes from flow is good work. Achieving flow requires the time to engage in a task, but it also requires that the mind not be distracted by the worry of what else needs to be done. As service intensity increases, so do fragmentation and distraction, both of which are enemies of flow. Achieving flow also might have implications for teaching and learning: Does it matter how good the teacher is, or how often she teaches, if the residents are so distracted that they are not mentally there and ready to receive that teaching?
The presumption underlying all of GME is that practice makes perfect. However, practice does not make perfect; perfect practice makes perfect. Furthermore, just because you were physically present for an experience does not mean you actually experienced it. It is possible to be engaged in a patient encounter and mentally drive right past it, missing the full implications of the experience that would presumably have allowed for improvement. The difference between practice and perfect practice is contingent upon mentally being there and upon reflecting on the experience so that improvement is possible. But experiencing the experience and reflecting on it require time and mental bandwidth, and both diminish as you move to the right of the optimal zone. One of the central roles of the attending is to help learners fully experience the experience and reflect upon how things could have been done better. Though not specifically addressed by this study, one wonders whether an attending on an intense teaching service has the time to provide that counsel and, even if so, whether the residents are in a mental position to receive it.
This study assesses the implications of a highly intense service for patient outcomes; what is not assessed are the implications for the future patients who will receive care from these residents. In Strangers to Ourselves, Wilson describes the adaptive unconscious: the mind's ability to take routinely performed tasks and move them to an unconscious hard drive so that they can later be completed without any conscious thought.[4] This is adaptive, because it allows multitasking while doing rote activities. But it is dangerous too, because once a rote task has been relegated to the adaptive unconscious, it is beyond the ability of the conscious mind to inspect and change it. The exponential consequence of imperfect practice is that the wrong thing, done again and again, settles into the adaptive unconscious, and there it will remain for the rest of that resident's career. What is not specifically explored by this study, though nonetheless reasonable to assume, is that as a teaching service's intensity increases, the quality and frequency of attending feedback and resident self-reflection decline. The risk of a dysfunctional adaptive unconscious is inversely proportional to feedback and self-reflection.
So how do we redesign the inpatient GME experience to optimize performance? The architect tasked with designing an optimal learning environment for an inpatient service must address both ends of the Yerkes-Dodson curve. If service intensity is too low, residents lose exposure to diverse medical disease and the engagement in complex decision making requisite for developing their confidence and autonomy. If it is too high, residents lose the teaching and feedback of their attendings and the ability to truly experience and reflect upon the patients for whom they provide care. To do this effectively, however, the GME architect will need an accurate measure of inpatient intensity, something better than our current measures of duty hours and patient caps. Without it, it will be difficult to construct a learning environment that benefits not only the patients of today but also the patients of tomorrow. One thing is certain: the intensity of inpatient services will only increase in the years to come, and the answer to the question of balancing intensity with learning, more than any other, will determine the effectiveness of GME. Achieving that balance will be a road of a thousand miles, but in raising this central question, this study gives us the first step.
- Associations between attending physician workload, teaching effectiveness, and patient safety. J Hosp Med. 2016;11:169–173.
- The relation of strength of stimulus to rapidity of habit formation. J Comp Neurol Psychol. 1908;18:459–482.
- Flow: The Psychology of Optimal Experience. New York, NY: Harper and Row; 1990.
- Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Harvard University Press; 2002.
This issue of the Journal of Hospital Medicine highlights an important contribution to the evolving state of graduate medical education (GME). The study assesses the relationship between attending physician workload and teaching effectiveness and patient safety.[1]
From the outset, it is important to note that although the focus of this study is on teaching on the wards, this is not necessarily synonymous with learning on the wards. Even if a busy service compromises a faculty's teaching on the wards, more patients on a service might augment a resident's learning on the wards, from patients, peers, active clinical decision making, and overall exposure to diversity of disease.
The independent variable in this study is intensity, with the presumption that the number of patients is proportional to intensity, as codified by the Accreditation Council for Graduate Medical Education (ACGME) regulations regarding caps for admissions and service size. However, are 10 single‐organ chest pain patients the same intensity of 5 septic patients? The authors address this issue as much as possible by integrating expected mortality as a surrogate measure of intensity. Yet, given the heterogeneity of severity of illness even within a diagnosis, this too is likely to be an inaccurate measure of the true intensity of a service. Of course, such measures do not touch upon the social intensity that varies widely from patient to patient, which might be more time consuming and mentally exhausting than managing the diagnosis itself.
However, these limitations aside, this study's biggest contribution is that it raises the question that will define GME in the years to come, How does learning fluctuate with service intensity? The Yerkes‐Dodson curve was published in 1908, defining the relationship between stress and performance (Figure 1).[2] Many have interpreted the ACGME rules on admission caps and duty hours as being designed to make a kinder, gentler learning environment. However, as the curve suggests, optimizing service intensity (stress) is much more than just being nice; it is about optimizing performance, both in the way of patient care and learning. The question of how learning fluctuates with service intensity might be better framed as, What gets lost in the space as you move to the right of the optimal stress zone on the Yerkes‐Dodson curve?
Quality is first. This study correlates intensity with adverse events, and though there is a modest association, this likely underestimates the true magnitude of the problem. The measures in this study are documented adverse events, and are thus unlikely to capture the near misses that increase with heightened stress and intensity. Mistakes increase as mental bandwidth is insufficient to think through the consequences of each decision. Slipsthings you know you need to do but forget to doincrease as the mind becomes distracted.
Good work is next. All hospitalists know that it is possible to get a patient in and out of the hospital, but it is also possible to do so with such poor quality that the patient comes right back. Csikszentmihalyi described the concept of flow: the ability to become fully immersed in a task, concentrating on nothing except that task at hand.[3] What comes from flow is good work. Achieving flow requires the time to engage in a task, but it also requires that the mind is not distracted by the worry of what else needs to be done. As service intensity increases, so does fragmentation and distractions, both of which are enemies to flow. Achieving flow also might have implications for teaching and learning: Does it matter how good the teacher is, or how often she teaches, if the residents are so distracted that they are not mentally there and ready to receive that teaching?
The presumption underlying all GME is that practice makes perfect. However, practice does not make perfect; perfect practice makes perfect. Furthermore, being physically present for an experience does not mean you actually experienced it. It is possible to be engaged in a patient encounter and mentally drive right past it, missing the full implications of the experience that would presumably have allowed for improvement. The difference between practice and perfect practice is contingent upon mentally being there, and upon the ability to reflect upon the experience such that improvement is possible. Both experiencing the experience and reflecting on it require time and mental bandwidth, and both are diminished as you move to the right of the optimal zone. One of the central roles of the attending is to help learners fully experience the experience and reflect upon how things could have been done better. Though not specifically addressed by this study, one wonders whether an attending on an intense teaching service has the time to provide that counsel, and even if so, whether the residents are in a mental position to receive it.
This study assesses the implications of a highly intense service for patient outcomes; what is not assessed are the implications for the future patients who will receive care from these residents. In Strangers to Ourselves, Wilson describes the adaptive unconscious: the mind's ability to take routinely performed tasks and commit them to an unconscious hard drive such that they can be completed later without any conscious thought.[4] It is adaptive because it allows multitasking while doing rote activities. But it is dangerous too, because once a rote task has been relegated to the adaptive unconscious, it is beyond the ability of the conscious mind to inspect and change it. The compounding consequence of imperfect practice is that the wrong thing, done again and again, settles into the adaptive unconscious, and there it will remain for the rest of that resident's career. What is not specifically explored by this study, though nonetheless reasonable to assume, is that as a teaching service's intensity increases, the quality and frequency of attending feedback and resident self‐reflection decline. The risk of a dysfunctional adaptive unconscious is inversely proportional to feedback and self‐reflection.
So how do we redesign the inpatient GME experience to optimize performance? The architect of an optimal learning environment for an inpatient service must address both ends of the Yerkes‐Dodson curve. If service intensity is too low, residents lose exposure to diverse medical disease and the engagement in complex decision making requisite for developing their confidence and autonomy. If it is too high, residents lose the teaching and feedback of their attendings and the ability to truly experience and reflect upon the patients for whom they provide care. To do this effectively, however, the GME architect will need an accurate measure of inpatient intensity, something better than our current measures of duty hours and patient caps. Without that, it will be difficult to construct a learning environment that benefits not only the patients of today but also the patients of tomorrow. One thing is certain: the intensity of inpatient services will only increase in the years to come, and the answer to the question of balancing intensity with learning, more than any other, will determine GME effectiveness. Achieving that balance will be a road of a thousand miles, but in raising this central question, this study gives us the first step.
1. Associations between attending physician workload, teaching effectiveness, and patient safety. J Hosp Med. 2016;11:169–173.
2. The relation of strength of stimulus to rapidity of habit formation. J Comp Neurol Psychol. 1908;18:459–482.
3. Flow: The Psychology of Optimal Experience. New York, NY: Harper and Row; 1990.
4. Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Harvard University Press; 2002.
Alarm Fatigue
Alarm fatigue is not a new issue for hospitals. In a commentary written over 3 decades ago, Kerr and Hayes described what they saw as an alarming issue developing in intensive care units.[1] Recently, multiple organizations, including The Joint Commission and the Emergency Care Research Institute, have called out alarm fatigue as a patient safety problem,[2, 3, 4] and organizations such as the American Academy of Pediatrics and the American Heart Association are backing away from recommendations for continuous monitoring.[5, 6] Hospitals are scrambling to set up alarm committees and address alarms locally, as recommended by The Joint Commission.[2] In this issue of the Journal of Hospital Medicine, Paine and colleagues set out to review the small but growing body of literature addressing physiologic monitor alarms and interventions that have tried to address alarm fatigue.[7]
After searching through 4629 titles, the authors found 32 articles addressing their key questions: What proportion of alarms are actionable? What is the relationship between clinicians' alarm exposure and response time? Which interventions are effective for reducing alarm rates? The majority of studies identified were observational, with only 8 studies addressing interventions to reduce alarms. Many of the identified studies occurred in units caring for adults, though 10 descriptive studies and 1 intervention study occurred in pediatric settings. Perhaps the most concerning finding of all, though not surprising to those who work in the hospital setting, was that somewhere between <1% and 26% of alarms across all studies were considered actionable. Although only 2 studies specifically addressed alarm fatigue (i.e., more alarms leading to slower and sometimes absent clinician response), both supported it, with nurses responding more slowly when exposed to higher numbers of alarms.[8, 9]
The authors note several limitations of their work, one of which is the modest body of literature on the topic. Although several interventions, including widening alarm parameters, increasing alarm delays, and using disposable leads or daily lead changes, show early evidence of success in safely reducing unnecessary alarms, the heterogeneity of this literature precluded a meta‐analysis. Further, the lack of standard definitions and the variety of methods for determining alarm validity make comparison across studies challenging. For this reason, the authors note that they did not distinguish nuisance alarms (i.e., alarms that accurately reflect the patient's condition but do not require any intervention) from invalid alarms (i.e., alarms that do not correctly reflect the patient's condition). This is relevant because interventions to reduce invalid alarms (e.g., frequent lead changes) are likely distinct from those that will successfully address nuisance alarms (e.g., widening alarm limits). It is also important to note that although patient safety is of paramount importance, there were other negative consequences of alarms that the authors did not address in this systematic review. Moreover, although avoiding unrecognized deterioration should be a primary goal of any program to reduce alarm fatigue, death remains uncommon compared with the number of patients, families, and healthcare workers exposed to high numbers of alarms during hospitalization. The high number of nonactionable alarms suggests that part of the burden of this problem may lie in harder-to-quantify outcomes such as sleep quality,[10, 11, 12] patient and parent quality of life during hospitalization,[13, 14] and the interrupted tasks and cognitive work of healthcare providers.[15]
Paine and colleagues' review has some certain and some less certain implications for the future of alarm research. First, there is an imminent need for researchers and improvers to develop consensus around terminology and metrics. We need to agree on what is and is not an actionable alarm, and we need valid and sensitive metrics to better understand the consequences of not monitoring a patient who should be on monitors. Second, hospitals addressing alarm fatigue need benchmarks. As hospitals rush to comply with The Joint Commission National Patient Safety Goals for alarm management,[2] it is safe to say that our goal should not be zero alarms; but how low do you go? What can we consider a safe number of alarms in our hospitals? Smart alarms hold tremendous potential to improve the sensitivity and positive predictive value of alarms. However, their ultimate success depends on engineers in industry developing the technology and on researchers in the hospital setting validating the technology's performance in clinical care. Additionally, hospitals need to know which interventions are most effective to implement and how to implement them reliably in daily practice. What seems less certain is what type of research is best suited to address this need. The authors recommend randomized trials as an immediate next step, and certainly trials are the gold standard in determining efficacy. However, trials may overstate effectiveness as complex bundled interventions play out in complex and dynamic hospital systems. Quasiexperimental study designs, including time series and stepped‐wedge designs, would allow for further scientific discovery, such as which interventions are most effective in certain patient populations, while describing reliable implementation of effective methods that lead to lower alarm rates. In both classical randomized controlled trials and quasiexperiments, factorial designs[16, 17] could give us a better understanding of both the comparative effects of interventions and any interactions between them.
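In alarm terms, positive predictive value is simply the actionable fraction of all alarms. A minimal sketch, with invented counts chosen to span the range the review reports:

```python
# Invented counts; the review found <1% to 26% of alarms actionable.
def alarm_ppv(actionable, total):
    """Positive predictive value of an alarm system: the probability
    that a given alarm actually requires clinical action."""
    return actionable / total

print(alarm_ppv(40, 5000))    # 0.008 -> near the review's low end (<1%)
print(alarm_ppv(1300, 5000))  # 0.26  -> the review's high end
```

Smart alarms aim to raise this fraction without lowering sensitivity, the fraction of true deteriorations that trigger any alarm at all.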
Alarm fatigue is a widespread problem that has negative effects for patients, families, nurses, and physicians. This review demonstrates that the great majority of alarms do not help clinicians and likely contribute to alarm fatigue. The opportunity to improve care is unquestionably vast, and attention from The Joint Commission and the lay press ensures change will occur. What is critical now is for hospitalists, intensivists, nurses, researchers, and hospital administrators to find the right combination of scientific discovery, thoughtful collaboration with industry, and quality improvement that will inform the literature on which interventions worked, how, and in what setting, and ultimately lead to safer (and quieter) hospitals.
Disclosures
Dr. Brady is supported by the Agency for Healthcare Research and Quality under award number K08HS023827. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an executive council member of the Pediatric Research in Inpatient Settings network. Dr. Landrigan serves as a consultant to Virgin Pulse regarding sleep, safety, and health. In addition, Dr. Landrigan has received monetary awards, honoraria, and travel reimbursement from multiple academic and professional organizations for delivering lectures on sleep deprivation, physician performance, handoffs, and patient safety, and has served as an expert witness in cases regarding patient safety. The authors report no other funding, financial relationships, or conflicts of interest.
1. An “alarming” situation in the intensive therapy unit. Intensive Care Med. 1983;9(3):103–104.
2. The Joint Commission. National Patient Safety Goal on Alarm Management. Available at: http://www.jointcommission.org/assets/1/18/JCP0713_Announce_New_NSPG.pdf. Accessed October 23, 2015.
3. The Joint Commission. Medical device alarm safety in hospitals. Sentinel Event Alert. 2013;(50):1–3.
4. Top 10 health technology hazards for 2014. Health Devices. 2013;42(11):354–380.
5. Clinical practice guideline: the diagnosis, management, and prevention of bronchiolitis. Pediatrics. 2014;134(5):e1474–e1502.
6. Practice standards for electrocardiographic monitoring in hospital settings: an American Heart Association scientific statement from the Councils on Cardiovascular Nursing, Clinical Cardiology, and Cardiovascular Disease in the Young: endorsed by the International Society of Computerized Electrocardiology and the American Association of Critical‐Care Nurses. Circulation. 2004;110(17):2721–2746.
7. Systematic review of physiologic monitor alarm characteristics and pragmatic interventions to reduce alarm frequency. J Hosp Med. 2016;11(2):136–144.
8. Pulse oximetry desaturation alarms on a general postoperative adult unit: a prospective observational study of nurse response time. Int J Nurs Stud. 2013;50(10):1351–1358.
9. Association between exposure to nonactionable physiologic monitor alarms and response time in a children's hospital. J Hosp Med. 2015;10(6):345–351.
10. Sleep deprivation is an additional stress for parents staying in hospital. J Spec Pediatr Nurs. 2008;13(2):111–122.
11. The sound intensity and characteristics of variable‐pitch pulse oximeters. J Clin Monit Comput. 2008;22(3):199–207.
12. Factors influencing sleep for parents of critically ill hospitalised children: a qualitative analysis. Intensive Crit Care Nurs. 2011;27(1):37–45.
13. Perceptions of stress, worry, and support in Black and White mothers of hospitalized, medically fragile infants. J Pediatr Nurs. 2002;17(2):82–88.
14. Parents' responses to stress in the neonatal intensive care unit. Crit Care Nurs. 2013;33(4):52–59; quiz 60.
15. Alarm fatigue and its influence on staff performance. IIE Trans Healthc Syst Eng. 2015;5(3):183–196.
16. Quality Improvement Through Planned Experimentation. 3rd ed. New York, NY: McGraw‐Hill; 1991.
17. The Health Care Data Guide: Learning From Data for Improvement. San Francisco, CA: Jossey‐Bass; 2011.
ED Observation
Over the past 3 decades, emergency department observation units (EDOUs) have been increasingly implemented in the United States to supplement emergency department (ED) care in a time of increasing patient volume and hospital crowding. Given the limited availability of hospital resources, EDOUs provide emergency clinicians an extended period of time to evaluate and risk‐stratify patients without necessitating difficult‐to‐obtain outpatient follow‐up or a short‐stay hospitalization. Changes in Medicare and insurer reimbursement policies have incentivized the adoption of EDOUs, and now, over one‐third of EDs nationally offer an observation unit.[1]
Much of the observation‐science literature has been condition and institution specific, showing benefits with respect to cost, quality of care, safety, and patient satisfaction.[2, 3, 4, 5] Until now, there had been no national study of the impact of EDOUs on an important outcome: hospital admission rates. Capp and colleagues, using the National Hospital Ambulatory Medical Care Survey (NHAMCS), attempt to answer a very important question: Do EDs with observation units have lower hospital admission rates?[6] To do so, they first standardize admission rates to the sociodemographic and clinical features of the patients while adjusting for hospital‐level factors. Then they compare the risk‐standardized hospital admission rates between EDs with and without an observation unit as reported in the NHAMCS. The authors make creative and elegant use of this publicly available, national dataset to suggest that EDOUs do not decrease hospital admissions.
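Risk-standardized rates of this kind are typically built as a predicted-to-expected ratio scaled by the overall rate. The sketch below follows that generic recipe with invented numbers; it should not be read as Capp and colleagues' exact model.

```python
# Generic recipe for a risk-standardized rate (invented numbers, not the
# authors' model): scale the overall national rate by the ratio of
# admissions "predicted" for this ED (its own effect included) to those
# "expected" had an average ED treated the same patient mix.
def risk_standardized_rate(predicted, expected, national_rate):
    return (predicted / expected) * national_rate

national_rate = 0.155  # e.g., 15.5% of ED visits end in admission (invented)
print(risk_standardized_rate(120, 100, national_rate))  # 0.186: admits more than case mix predicts
print(risk_standardized_rate(85, 100, national_rate))   # 0.132: admits less than case mix predicts
```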
The authors appropriately identify some limitations of using such data to answer questions where nuanced, countervailing forces drive the outcome of interest. It is important to note the basic statistical premise that failing to disprove the null hypothesis is not the same as proving the null hypothesis true. In other words, although this study was not able to detect a difference in admission rates between hospitals with EDOUs and those without, that cannot be taken to mean there is no relationship. The authors clearly state that the study was underpowered given that the difference in ED risk‐standardized hospital admission rates was small, and it is therefore at risk of type II error. In addition, unmeasured confounding may hide a true association between EDOUs and admission rates. Both static and dynamic measures of ED volume, crowding, and boarding, as well as changes in case mix or acuity, may drive adoption of EDOUs[7] while simultaneously being associated with the risk of hospitalization. Without balance between the EDs with and without observation units, or longitudinal measures of EDs over time as units are implemented, we are left with potentially biased estimates.
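To see why low power makes a null result hard to interpret, consider a rough two-proportion power calculation. The rates and sample sizes below are invented, not the study's, and the formula is a standard normal approximation.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_power(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation; invented inputs for illustration)."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_crit = norm.ppf(1 - alpha / 2)
    return 1 - norm.cdf(z_crit - abs(p1 - p2) / se)

# A 1-percentage-point difference in admission rates, ~150 units per group:
print(round(two_proportion_power(0.15, 0.16, 150, 150), 2))  # ~0.04
# At roughly 4% power, failing to find a difference says almost nothing.
```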
It is also important to highlight that not all EDOUs are created equal.[8] EDs may admit patients to the observation unit based on prespecified conditions or include all comers at physician discretion. Once placed in observation status, patients may or may not be managed by specific protocols to provide guidance on timing, order, and scope of testing and decision making.
Finally, care in EDOUs may be provided by emergency physicians, hospitalists, or other clinicians such as advanced practice providers (eg, physician assistants, nurse practitioners), a distinction that likely affects the ultimate patient disposition. In fact, the NHAMCS asks, “What type of physicians make decisions for patients in this observation or clinical decision unit?” Capp et al., however, did not include this variable to further stratify the data. Although we do not know whether including this factor would have changed the results, it has implications for how differences in who manages an EDOU could affect admission rates.
Still, the negative findings of this study seem to raise a number of questions, which should spark a broader discussion on EDOUs. The current analysis provides an important first step toward a national understanding of EDOUs and their role in acute care. Future inquiries should account for variation in observation units and the hospitals in which they are housed as well as inclusion of meaningful outcomes beyond admission rates. A number of methodological approaches can be considered to achieve this; propensity score matching within observational data may provide better balance between facilities with and without EDOUs, whereas multicenter impact analyses using controlled before‐and‐after or cluster‐randomized trials should be considered the gold standard for studying observation unit implementation. Outcomes in these studies should include long‐term changes in health, aggregate healthcare utilization, overuse of resources that do not provide high‐value care, and impacts on how care and costs may be redistributed when patients receive more care in observation units.
Although cost containment is often touted as a cornerstone of EDOUs, it is critical to know how the costs are measured and who is paying. For example, when an option to place a patient in observation exists, might clinicians use it for some patients who do not require further evaluation and testing and could have been safely discharged?[9] This “observation creep” may arise because clinicians can use EDOUs, not because they should. Motivations may include delaying difficult disposition decisions, avoiding uncertainty or liability when discharging patients, limited access to outpatient follow‐up, or a desire to use observation status to justify the existence of EDOUs within the institution. In this way, EDOUs may, in fact, provide low‐value care at a time of soaring healthcare costs.
Perhaps even more perplexing is the question of how costs are shifted through use of EDOUs.[10, 11] Much of the literature advertising cost savings takes only the insurer's or hospital's perspective,[12] with 1 study estimating a potential annual cost savings of $4.6 million per hospital, or $3.1 billion nationally, associated with the implementation of observation care.[5] But are medical centers just passing costs on to patients to avoid the penalties and disincentives associated with short‐stay hospitalizations? Both private insurers and the Centers for Medicare and Medicaid Services may deny payments for admissions deemed unnecessary. Further, under the Affordable Care Act, avoiding hospitalizations may mean fewer penalties when Medicare patients later require admission for certain conditions. As such, hospitals may find large incentives and cost savings associated with observation units. However, using EDOUs to avoid the Medicare readmission penalty may backfire: when less‐sick patients requiring care beyond the ED are treated and discharged from observation, the more medically complex and ill patients left for hospitalization are a group potentially more likely to be rehospitalized within 30 days, making readmission rates appear higher.
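A quick back-of-the-envelope check on the scale of that estimate, using only the figures quoted above, shows roughly how many adopting hospitals it implies:

```python
# Scale check only, using the per-hospital and national figures quoted
# in the text; no new data.
per_hospital = 4.6e6  # $4.6 million in estimated annual savings per hospital
national = 3.1e9      # $3.1 billion in estimated annual savings nationally

print(round(national / per_hospital))  # ~674 hospitals implied to add observation care
```

Whether savings of that scale actually materialize, and for whom, depends entirely on whose perspective is costed, which is the point of this paragraph and the next.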
Nonetheless, because services provided during observation status are billed as an outpatient visit, patients may be liable for a proportion of the overall visit. In contrast to inpatient stays, where patients generally owe a single copay for most or all services rendered, outpatient visits typically involve à la carte billing. When accounting for costs related to professional and facility fees, medications, laboratory tests, and advanced diagnostics and procedures, patients' bills may be markedly higher when they are placed in observation status. This is especially true for patients covered by Medicare, because observation stays are not covered under Part A.
Research will need to simultaneously identify best practices for how EDOUs are implemented and administered while appraising their impact on patient‐centered outcomes and true costs from multiple perspectives, including those of the patient, the hospital, and the healthcare system. There is reason to be optimistic about EDOUs as potentially high‐value components of the acute care delivery system. However, widespread implementation of observation units on the assumption that they save hospitals and insurers money, without high‐quality population studies to inform their broader impact, may undermine acceptance by patients and health‐policy experts.
Disclosure
Nothing to report.
1. National study of emergency department observation services. Acad Emerg Med. 2011;18(9):959–965.
2. Emergency department observation units: a clinical and financial benefit for hospitals. Health Care Manag Rev. 2011;36(1):28–37.
3. Randomised controlled trial and economic evaluation of a chest pain observation unit compared with routine care. BMJ. 2004;328(7434):254.
4. Patient satisfaction with an emergency department asthma observation unit. Acad Emerg Med. 1999;6(3):178–183.
5. Making greater use of dedicated hospital observation units for many short‐stay patients could save $3.1 billion a year. Health Aff (Millwood). 2012;31(10):2314–2323.
6. The impact of emergency department observation units on U.S. emergency department admission rates. J Hosp Med. 2015;10(11):738–742.
7. Systematic review of emergency department crowding: causes, effects, and solutions. Ann Emerg Med. 2008;52(2):126–136.
8. A national survey of observation units in the United States. Am J Emerg Med. 2003;21(7):529–533.
9. An evaluation of emergency physician selection of observation unit patients. Am J Emerg Med. 2006;24(3):271–279.
10. Reducing patient financial liability for hospitalizations: the physician role. J Hosp Med. 2010;5(3):160–162.
11. Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251–1259.
12. Revisiting the economic efficiencies of observation units. Manag Care. 2015;24(3):46–52.
Much of the observation‐science literature has been condition and institution specific, showing benefits with respect to cost, quality of care, safety, and patient satisfaction.[2, 3, 4, 5] Until now, there has been no national study of the impact of EDOUs on an important outcome: hospital admission rates. Capp and colleagues, using the National Hospital Ambulatory Medical Care Survey (NHAMCS), attempt to answer a very important question: Do EDs with observation units have lower hospital admission rates?[6] To do so, they first standardize admission rates for the sociodemographic and clinical features of the patients while adjusting for hospital‐level factors. They then compare the risk‐standardized hospital admission rates between EDs with and without an observation unit, as reported in the NHAMCS. The authors make creative and elegant use of this publicly available national dataset to suggest that EDOUs do not decrease hospital admissions.
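For readers unfamiliar with risk standardization, the core idea can be sketched in a few lines of code. The example below uses synthetic data and an ordinary logistic regression; it illustrates the concept of comparing observed with expected admissions given patient mix, and is not a reproduction of the authors' model.

```python
# Conceptual sketch of a risk-standardized admission rate on synthetic data;
# the covariates, coefficients, and number of EDs are all assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))                    # patient-level covariates
ed_id = rng.integers(0, 20, size=n)            # 20 hypothetical EDs
p_true = 1 / (1 + np.exp(-(X @ np.array([0.6, 0.4, 0.2, -0.5]))))
admitted = rng.binomial(1, p_true)             # observed admissions

# Model the expected admission probability from patient mix alone.
model = LogisticRegression().fit(X, admitted)
expected = model.predict_proba(X)[:, 1]

overall_rate = admitted.mean()
for ed in range(3):                            # first few EDs, for brevity
    mask = ed_id == ed
    # observed/expected ratio, rescaled to the overall admission rate
    rsar = admitted[mask].mean() / expected[mask].mean() * overall_rate
    print(f"ED {ed}: risk-standardized admission rate = {rsar:.1%}")
```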
The authors appropriately identify some limitations of using such data to answer questions in which nuanced, countervailing forces drive the outcome of interest. It is important to note the basic statistical premise that failing to reject the null hypothesis is not the same as proving the null hypothesis true. In other words, although this study did not detect a difference in admission rates between hospitals with EDOUs and those without, it cannot be taken to mean that no relationship exists. The authors clearly state that the study was underpowered given that the observed difference in ED risk‐standardized hospital admission rates was small, and it is therefore at risk of type II error. In addition, unmeasured confounding may hide a true association between EDOUs and admission rates. Both static and dynamic measures of ED volume, crowding, and boarding, as well as changes in case mix or acuity, may drive adoption of EDOUs[7] while simultaneously being associated with risk of hospitalization. Without balance between the EDs with and without observation units, or longitudinal measures of EDs over time as units are implemented, we are left with potentially biased estimates.
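To make the power concern concrete, the sketch below estimates how many EDs per group would be needed to detect a small absolute difference in admission rates. The 15.5% versus 16.5% rates and the 150-site sample are hypothetical, chosen only for illustration.

```python
# Hypothetical power calculation for comparing two admission rates;
# the rates and sample sizes are assumptions, not study values.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_edou, p_no_edou = 0.155, 0.165
h = proportion_effectsize(p_edou, p_no_edou)   # Cohen's h effect size

analysis = NormalIndPower()
n_per_group = analysis.solve_power(effect_size=h, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"EDs per group for 80% power: {n_per_group:.0f}")

# Power achieved with a smaller, fixed sample (type II error = 1 - power)
power = analysis.power(effect_size=h, nobs1=150, alpha=0.05, ratio=1.0)
print(f"Power with 150 EDs per group: {power:.2f}")
```

Under these assumed rates, a 1-percentage-point difference requires roughly 10,000 units per group at conventional thresholds, which is why small observed differences in modestly sized samples are so vulnerable to type II error.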
It is also important to highlight that not all EDOUs are created equal.[8] EDs may admit patients to the observation unit based on prespecified conditions or include all comers at physician discretion. Once placed in observation status, patients may or may not be managed by specific protocols to provide guidance on timing, order, and scope of testing and decision making.
Finally, care in EDOUs may be provided by emergency physicians, hospitalists, or other clinicians such as advanced practice providers (eg, physician assistants, nurse practitioners), a distinction that likely affects the ultimate patient disposition. In fact, the NHAMCS asks, "What type of physicians make decisions for patients in this observation or clinical decision unit?" Capp et al., however, did not include this variable to further stratify the data. Although we do not know whether including this factor would have changed the results, it could have clarified how differences in who manages EDOUs affect admission rates.
Still, the negative findings of this study raise a number of questions that should spark a broader discussion of EDOUs. The current analysis provides an important first step toward a national understanding of EDOUs and their role in acute care. Future inquiries should account for variation in observation units and the hospitals in which they are housed, and should include meaningful outcomes beyond admission rates. A number of methodological approaches can be considered to achieve this: propensity score matching within observational data may provide better balance between facilities with and without EDOUs, whereas multicenter impact analyses using controlled before‐and‐after or cluster‐randomized designs should be considered the gold standard for studying observation unit implementation. Outcomes in these studies should include long‐term changes in health, aggregate healthcare utilization, overuse of resources that do not provide high‐value care, and impacts on how care and costs may be redistributed when patients receive more care in observation units.
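As a sketch of the first approach mentioned above, the snippet below performs one‐to‐one nearest‐neighbor matching on estimated propensity scores using synthetic ED‐level data. The covariates and their effects are invented for illustration and are not NHAMCS variables.

```python
# Illustrative propensity score matching on synthetic ED-level data;
# covariate meanings and effect sizes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))  # e.g., ED volume, crowding, case-mix acuity
p_treat = 1 / (1 + np.exp(-(X @ np.array([0.8, 0.5, -0.3]))))
has_edou = rng.binomial(1, p_treat)            # 1 = ED operates an EDOU

# Step 1: estimate each ED's propensity to operate an observation unit.
ps = LogisticRegression().fit(X, has_edou).predict_proba(X)[:, 1]

# Step 2: pair each EDOU site with the non-EDOU site closest in score.
treated = np.flatnonzero(has_edou == 1)
control = np.flatnonzero(has_edou == 0)
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Admission rates would then be compared within these matched pairs,
# giving better covariate balance than a crude between-group comparison.
```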
Although cost containment is often touted as a cornerstone of EDOUs, it is critical to know how the costs are measured and who is paying. For example, when an option to place a patient in observation exists, might clinicians use it for some patients who do not require further evaluation and testing and could have been safely discharged?[9] This "observation creep" may arise because clinicians can use EDOUs, not because they should. Motivations may include delaying difficult disposition decisions, avoiding uncertainty or liability when discharging patients, limited access to outpatient follow‐up, or a desire to use observation status to justify the existence of EDOUs within the institution. In this way, EDOUs may, in fact, provide low‐value care at a time of soaring healthcare costs.
Perhaps even more perplexing is the question of how costs are shifted through use of EDOUs.[10, 11] Much of the literature advertising cost savings takes only the perspective of insurers or hospitals,[12] with 1 study estimating a potential annual savings of $4.6 million per hospital, or $3.1 billion nationally, associated with the implementation of observation care.[5] But are medical centers simply passing costs on to patients to avoid the penalties and disincentives associated with short‐stay hospitalizations? Both private insurers and the Centers for Medicare and Medicaid Services may deny payment for admissions deemed unnecessary. Further, under the Affordable Care Act, avoiding hospitalizations may mean fewer penalties when Medicare patients later require admission for certain conditions. As such, hospitals may find strong incentives and cost savings in observation units. However, using EDOUs to avoid the Medicare readmission penalty may backfire: when less‐sick patients who require care beyond the ED are treated and discharged from observation, the more medically complex and ill patients left for hospitalization form a group potentially more likely to be rehospitalized within 30 days, making readmission rates appear higher.
Nonetheless, because services provided during observation status are billed as an outpatient visit, patients may be liable for a proportion of the overall bill. In contrast to inpatient stays, where patients generally owe a single copay for most or all services rendered, outpatient visits typically involve a la carte billing. When accounting for costs related to professional and facility fees, medications, laboratory tests, and advanced diagnostics and procedures, patient bills may be markedly higher when they are placed in observation status. This is especially true for patients covered by Medicare, because observation stays are not covered under Part A.
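The arithmetic behind this concern is simple; the comparison below uses purely hypothetical dollar amounts (none are drawn from the article or from any fee schedule) to show how itemized outpatient billing can exceed a single inpatient copay.

```python
# Purely hypothetical charges illustrating a la carte observation billing
# versus a single inpatient copay; every figure here is an assumption.
inpatient_copay = 300.00

observation_items = {
    "facility fee": 250.00,
    "professional fee": 180.00,
    "medications": 90.00,
    "laboratory tests": 140.00,
    "advanced imaging": 400.00,
}
observation_bill = sum(observation_items.values())

print(f"Inpatient stay, single copay:   ${inpatient_copay:,.2f}")
print(f"Observation stay, itemized sum: ${observation_bill:,.2f}")
```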
Research will need to simultaneously identify best practices for how EDOUs are implemented and administered while appraising their impact on patient‐centered outcomes and true costs from multiple perspectives, including those of the patient, the hospital, and the healthcare system. There is reason to be optimistic about EDOUs as potentially high‐value components of the acute care delivery system. However, widespread implementation of observation units on the assumption that they save hospitals and insurers money, without high‐quality population studies of their broader impact, may undermine acceptance by patients and health‐policy experts.
Disclosure
Nothing to report.
- National study of emergency department observation services. Acad Emerg Med. 2011;18(9):959–965.
- Emergency department observation units: a clinical and financial benefit for hospitals. Health Care Manag Rev. 2011;36(1):28–37.
- Randomised controlled trial and economic evaluation of a chest pain observation unit compared with routine care. BMJ. 2004;328(7434):254.
- Patient satisfaction with an emergency department asthma observation unit. Acad Emerg Med. 1999;6(3):178–183.
- Making greater use of dedicated hospital observation units for many short‐stay patients could save $3.1 billion a year. Health Aff (Millwood). 2012;31(10):2314–2323.
- The impact of emergency department observation units on U.S. emergency department admission rates. J Hosp Med. 2015;10(11):738–742.
- Systematic review of emergency department crowding: causes, effects, and solutions. Ann Emerg Med. 2008;52(2):126–136.
- A national survey of observation units in the United States. Am J Emerg Med. 2003;21(7):529–533.
- An evaluation of emergency physician selection of observation unit patients. Am J Emerg Med. 2006;24(3):271–279.
- Reducing patient financial liability for hospitalizations: the physician role. J Hosp Med. 2010;5(3):160–162.
- Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251–1259.
- Revisiting the economic efficiencies of observation units. Manag Care. 2015;24(3):46–52.
Assessing Discharge Readiness
Widespread evidence suggests that the period around hospitalization remains a vulnerable time for patients. Nearly 20% of patients experience adverse events, including medication errors and hospital readmissions, within 3 weeks of discharge.[1] Multiple factors contribute to adverse events, including the overwhelming volume of information patients receive on their last day in the hospital and fragmented interdisciplinary communication, both among hospital‐based providers and with community providers.[2, 3, 4] A growing body of literature suggests that to ensure patient understanding and a safe transition, discharge planning should start at the time of admission. Yet, in the context of high patient volumes and competing priorities, clinicians often postpone discharge planning until they perceive that a patient's discharge is imminent. Discharge bundles intended to improve the safety of hospital discharge, such as those developed by Project BOOST (Better Outcomes by Optimizing Safe Transitions) or Project RED (Re‐Engineered Discharge), are not designed to help providers determine when a patient might be approaching discharge.[5, 6] Early identification of a patient's probable discharge date can provide vital information to inpatient and outpatient teams as they establish comprehensive discharge plans. Accurate discharge‐date predictions allow for effective discharge planning, serving to reduce length of stay (LOS) and consequently improve patient satisfaction and patient safety.[7] However, in the complex world of internal medicine, can clinicians accurately predict the timing of discharge?
A study by Sullivan and colleagues[8] in this issue of the Journal of Hospital Medicine explores physicians' ability to predict hospital discharge. Trainees and attending physicians on general internal medicine wards were asked to predict whether each patient under their care would be discharged on the next day, on the same day, or neither. Discharge predictions were recorded at 3 time points: mornings (7–9 am), midday (12–2 pm), or afternoons (5–7 pm). For predictions of next‐day discharges, the sensitivity (SN) and positive predictive value (PPV) were highest in the afternoon (SN 67%, PPV 69%), whereas for same‐day discharges, accuracy was highest at midday (SN 88%, PPV 79%). The authors note that physicians' ability to correctly predict discharges continually improved as the time to actual discharge shortened.
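For readers who want these metrics made concrete, the short example below computes sensitivity and PPV from a confusion matrix. The counts are hypothetical, chosen only to be consistent with the afternoon next‐day figures quoted above.

```python
# Hypothetical confusion-matrix counts consistent with SN 67% / PPV 69%.
tp = 67   # predicted next-day discharge, and discharged next day
fn = 33   # not predicted, but discharged next day
fp = 30   # predicted, but not discharged next day

sensitivity = tp / (tp + fn)  # share of actual discharges that were predicted
ppv = tp / (tp + fp)          # share of predictions that proved correct
print(f"Sensitivity = {sensitivity:.0%}, PPV = {ppv:.0%}")
```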
This study is novel; to our knowledge, no other studies have evaluated the accuracy with which physicians can predict the actual day of discharge. Although the study is specific to a trainee setting at a single academic medical center, the results are thought provoking. Why are attendings and trainees unable to predict next‐day discharges more accurately? Can we do better? The majority of medical patients are not electively admitted and therefore may have more complex and unpredictable courses than elective or surgical admissions. Subspecialty consultants may be guiding clinical care and potentially even determining readiness for discharge. Furthermore, the additional responsibilities of teaching and supervising trainees in academic medical centers may further delay discussions and decisions about patient discharges. Another plausible hypothesis, however, is that determining barriers to discharge and discharge readiness is a clinical skill that is underappreciated and not taught or modeled sufficiently.
If we are to do better at predicting and planning for discharge, we need to build prompts for discharge readiness assessment into our daily work and the education of trainees. Although interdisciplinary rounds are typically held in the morning, Wertheimer and colleagues show that additional afternoon interdisciplinary rounds can help identify patients who might be discharged before noon the next day.[9] In their study, identifying such patients in advance improved the overall early discharge rate, moved the average discharge time to earlier in the day, and decreased the observed‐to‐expected LOS, all without any adverse effect on readmissions. We also need more communication between members of the physician care team, especially with subspecialists helping manage care. The authors describe moderate agreement for next‐day and substantial agreement for same‐day discharges between trainees and attendings. Although the authors do not reveal whether trainees or attendings were more accurate, the discrepancy for next‐day discharges is notable. The disagreement suggests a lack of communication between team members about discharge barriers, which can hinder planning efforts. Assessing a patient's readiness for and needs upon discharge, and anticipating a patient's disease trajectory, are important clinical skills. Trainees may lack the clinical judgment and experience needed to accurately predict a patient's clinical evolution. As hospitalists, we can model how to continuously assess patients' discharge needs throughout hospitalization by discussing discharge barriers during daily rounds. As part of transitions‐of‐care curricula, in addition to learning about best practices in discharge planning (eg, medication reconciliation, teach‐back, follow‐up appointments, effective discharge summaries), trainees should be encouraged to conduct a structured, daily assessment of discharge readiness and the anticipated day of discharge.
Starting the discharge planning process earlier in an admission has the potential to create more thoughtful, efficient, and ultimately safer discharges for our patients. By building discharge readiness assessments into the daily workflow and education curricula, we can prompt trainees and attendings to communicate with interdisciplinary team members and address potential challenges that patients may face in managing their health after discharge. Adequately preparing patients for safe discharges also has implications for readmissions. With the Centers for Medicare and Medicaid Services reducing payments to facilities with high rates of readmissions, reducing avoidable readmissions is a priority for all institutions.[10]
We can accomplish safe and early discharges, but the first step is getting better at accurately assessing our patients' readiness for discharge.
Disclosure
Nothing to report.
Sacubitril-valsartan and the evolution of heart failure care
Three decades ago, the only drugs we had for treating chronic heart failure were digitalis and loop diuretics. The mortality rate was very high, and heart transplantation was a newly developing treatment that could help only a very few patients.
The early 1980s heralded new hope for patients with heart failure, with the introduction of angiotensin-converting enzyme (ACE) inhibitors1–5 and, later, beta-blockers. Beta-blockers were considered contraindicated in heart failure until new trials provided evidence of dramatic benefits such as better quality of life and longer survival.6–8 ACE inhibitors, along with beta-blockers, quickly became the standard of care for all patients with systolic heart failure.
The implantable cardioverter-defibrillator (ICD) required numerous clinical trials in ischemic and nonischemic cardiomyopathy to define its role.9,10 Cardiac resynchronization therapy did not arrive until 15 years ago and is now indicated in a specific niche of patients with left bundle branch block.11,12 Mineralocorticoid antagonists required three pivotal clinical trials before their important role in the treatment of systolic heart failure was defined.13–16
And in the current decade, the roles of ACE inhibitors, angiotensin II receptor blockers (ARBs), beta-blockers, mineralocorticoid antagonists, ICDs, and cardiac resynchronization therapy have been further defined, as reflected in the latest guidelines for the treatment of systolic heart failure.17
Guideline-directed medical therapy for systolic heart failure with the agents and devices mentioned above improves quality of life and extends survival. It was therefore hard to imagine that any new additive therapy could offer significant incremental improvement. However, more than 5 years ago, in an ambitious effort, the largest global clinical trial ever performed in chronic heart failure was launched with a novel agent.18
THE PARADIGM-HF TRIAL
In this issue of the Journal, Sabe et al19 describe the results of the Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure (PARADIGM-HF) trial of the novel combination drug sacubitril-valsartan, designated LCZ696 during its development and now available as Entresto.20
The mean age of the 8,442 patients in PARADIGM-HF was 64, and 78% were men. Despite guideline-directed medical therapy (93% of the patients were receiving a beta-blocker, and 60% were receiving a mineralocorticoid receptor antagonist), patients had persistent symptoms and signs of heart failure, diminished health-related quality of life, reduced ejection fraction (mean 29%), and elevated N-terminal pro-B-type natriuretic peptide levels (median 1,608 pg/mL, interquartile range 886–3,221).
The investigators reported a remarkable 20% reduction in the primary outcome of death from cardiovascular causes or hospitalization for heart failure in the patients who received sacubitril-valsartan compared with enalapril.20
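To put the relative reduction in absolute terms, the short calculation below uses the approximate primary-outcome event rates from the trial's published report (about 21.8% with sacubitril-valsartan vs 26.5% with enalapril over a median follow-up of 27 months); treat these figures as assumptions for illustration.

```python
# Approximate PARADIGM-HF primary-outcome event rates (assumed here for
# illustration); the trial's reported 20% reduction is a hazard ratio,
# so the crude relative risk reduction below differs slightly.
rate_sacubitril, rate_enalapril = 0.218, 0.265

rrr = 1 - rate_sacubitril / rate_enalapril  # crude relative risk reduction
arr = rate_enalapril - rate_sacubitril      # absolute risk reduction
nnt = 1 / arr                               # number needed to treat
print(f"RRR ~ {rrr:.0%}, ARR ~ {arr:.1%}, NNT ~ {nnt:.0f}")
```

On these figures, roughly 21 patients would need to receive sacubitril-valsartan rather than enalapril over the trial period to prevent one primary-outcome event.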
Sacubitril-valsartan was reviewed under a US Food and Drug Administration (FDA) program that provides expedited review of drugs that are intended to treat a serious disease or condition and that may provide a significant improvement over available therapy. It was also granted a fast-track designation, which supports FDA efforts to facilitate the development and expedite the review of drugs to treat serious and life-threatening conditions and fill an unmet medical need. The FDA approved sacubitril-valsartan on July 7, 2015, for use in place of an ACE inhibitor or ARB in patients with New York Heart Association class II, III, or IV heart failure with reduced ejection fraction.21
WHAT WE STILL NEED TO KNOW
The results of PARADIGM-HF are generalizable, and sacubitril-valsartan was well tolerated in patients whose blood pressure was acceptable and who were able to tolerate ACE inhibitors at target doses. More than 90% of patients were receiving a beta-blocker. The enalapril target of 10 mg twice a day is the guideline-directed dose, and ACE inhibition is considered the gold standard for heart failure with reduced ejection fraction. Sacubitril-valsartan vs enalapril was therefore a very appropriate comparison.
Far fewer PARADIGM-HF patients outside the United States had an ICD than those in the United States, which is a common finding in global clinical trials. However, Desai et al reported that sacubitril-valsartan reduced rates of cardiovascular mortality both from worsening heart failure and from sudden cardiac death, independent of whether the patient had an ICD.22
Sacubitril-valsartan is taken twice a day, but most heart failure patients already take medications several times a day, so this should not pose a problem.
More information is needed on the use of this new drug in patients with New York Heart Association class IV symptoms, as only 60 patients with class IV symptoms were included in the PARADIGM-HF trial. Also, the efficacy of the drug in patients unable to tolerate a full dose will need to be analyzed.
PARADIGM-HF was conducted in stable, nonhospitalized patients with chronic heart failure; the use of the drug in new-onset heart failure and its initiation in hospitalized patients will require further study. In addition, the PARAGON-HF trial23 will examine the efficacy of sacubitril-valsartan in patients with heart failure and an ejection fraction of 45% or higher.
Sacubitril-valsartan ushers in a new era in heart failure treatment for patients with reduced ejection fraction and will certainly prompt quick revision of heart failure guidelines.
- Captopril Multicenter Research Group. A placebo-controlled trial of captopril in refractory chronic congestive heart failure. J Am Coll Cardiol 1983; 2:755–763.
- Effects of enalapril on mortality in severe congestive heart failure. Results of the Cooperative North Scandinavian Enalapril Survival Study (CONSENSUS). The CONSENSUS Trial Study Group. N Engl J Med 1987; 316:1429–1435.
- The SOLVD Investigators. Effect of enalapril on survival in patients with reduced left ventricular ejection fraction and congestive heart failure. N Engl J Med 1991; 325:293–302.
- Cohn JN, Johnson G, Ziesche S, et al. A comparison of enalapril with hydralazine-isosorbide dinitrate in the treatment of chronic congestive heart failure. N Engl J Med 1991; 325:303–310.
- Pfeffer MA, Braunwald E, Moyé LA, et al. Effect of captopril on mortality and morbidity in patients with left ventricular dysfunction after myocardial infarction. Results of the survival and ventricular enlargement trial. The SAVE Investigators. N Engl J Med 1992; 327:669–677.
- Packer M, Coats AJ, Fowler MB, et al. Effect of carvedilol on survival in severe chronic heart failure. N Engl J Med 2001; 344:1651–1658.
- Effect of metoprolol CR/XL in chronic heart failure: Metoprolol CR/XL Randomised Intervention Trial in Congestive Heart Failure (MERIT-HF). Lancet 1999; 353:2001–2007.
- Brophy JM, Joseph L, Rouleau JL. Beta-blockers in congestive heart failure. A Bayesian meta-analysis. Ann Intern Med 2001; 134:550–560.
- Buxton AE, Lee KL, Fisher JD, et al. A randomized study of the prevention of sudden death in patients with coronary artery disease. Multicenter Unsustained Tachycardia Trial Investigators. N Engl J Med 1999; 341:1882–1890.
- Moss AJ, Zareba W, Hall WJ, et al. Prophylactic implantation of a defibrillator in patients with myocardial infarction and reduced ejection fraction. N Engl J Med 2002; 346:877–883.
- Abraham WT, Fisher WG, Smith AL, et al; MIRACLE Study Group. Multicenter InSync Randomized Clinical Evaluation. Cardiac resynchronization in chronic heart failure. N Engl J Med 2002; 346:1845–1853.
- McAlister FA, Ezekowitz J, Hooton N, et al. Cardiac resynchronization therapy for patients with left ventricular systolic dysfunction: a systematic review. JAMA 2007; 297:2502–2514.
- Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. Randomized Aldactone Evaluation Study Investigators. N Engl J Med 1999; 341:709–717.
- Pitt B, Remme W, Zannad F, et al; Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival Study Investigators. Eplerenone, a selective aldosterone blocker, in patients with left ventricular dysfunction after myocardial infarction. N Engl J Med 2003; 348:1309–1321.
- Pitt B, White H, Nicolau J, et al; EPHESUS Investigators. Eplerenone reduces mortality 30 days after randomization following acute myocardial infarction in patients with left ventricular systolic dysfunction and heart failure. J Am Coll Cardiol 2005; 46:425–431.
- Zannad F, McMurray JJ, Krum H, et al; EMPHASIS-HF Study Group. Eplerenone in patients with systolic heart failure and mild symptoms. N Engl J Med 2011; 364:11–21.
- Yancy CW, Jessup M, Bozkurt B, et al; American College of Cardiology Foundation; American Heart Association Task Force on Practice Guidelines. 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2013; 62:e147–e239.
- McMurray JJ, Packer M, Desai AS, et al; PARADIGM-HF Committees and Investigators. Dual angiotensin receptor and neprilysin inhibition as an alternative to angiotensin-converting enzyme inhibition in patients with chronic systolic heart failure: rationale for and design of the Prospective comparison of ARNI with ACEI to Determine Impact on Global Mortality and morbidity in Heart Failure trial (PARADIGM-HF). Eur J Heart Fail 2013; 15:1062–1073.
- Sabe IA, Jacob MS, Taylor DO. A new class of drugs for systolic heart failure: The PARADIGM-HF study. Cleve Clin J Med 2015; 82:693–701.
- McMurray JJ, Packer M, Desai AS, Gong J, et al; PARADIGM-HF Investigators and Committees. Angiotensin-neprilysin inhibition versus enalapril in heart failure. N Engl J Med 2014; 371:993–1004.
- US Food and Drug Administration. FDA approves new drug to treat heart failure. www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm453845.htm. Accessed September 2, 2015.
- Desai AS, McMurray JJ, Packer M, et al. Effect of the angiotensin-receptor-neprilysin inhibitor LCZ696 compared with enalapril on mode of death in heart failure patients. Eur Heart J 2015; 36:1990–1997.
- ClinicalTrials.gov. Efficacy and Safety of LCZ696 Compared to Valsartan, on Morbidity and Mortality in Heart Failure Patients With Preserved Ejection Fraction (PARAGON-HF). https://clinicaltrials.gov/ct2/show/NCT01920711. Accessed September 2, 2015.
Three decades ago, the only drugs we had for treating chronic heart failure were digitalis and loop diuretics. The mortality rate was very high, and heart transplantation was a newly developing treatment that could help only a very few patients.
The early 80s heralded new hope for patients with heart failure, with the introduction of angiotensin-converting enzyme (ACE) inhibitors1–5 and, later, beta-blockers. Beta-blockers were considered contraindicated in heart failure until new trials provided evidence of dramatic benefit such as better quality of life and longer survival.6–8 ACE inhibitors, along with beta-blockers, quickly became the standard of care for all patients with systolic heart failure.
The implantable cardioverter-defibrillator (ICD) required numerous clinical trials in ischemic and nonischemic cardiomyopathy to define its role.9,10 Cardiac resynchronization therapy did not arrive until 15 years ago and is now indicated in a specific niche of patients with left bundle branch block.11,12 Mineralocorticoid antagonists required three pivotal clinical trials before their important role in the treatment of systolic heart failure was defined.13–16
And in the current decade, the roles of ACE inhibitors, angiotensin II receptor blockers (ARBs), beta-blockers, mineralocorticoid antagonists, ICDs, and cardiac resynchronization therapy have been further defined, as reflected in the latest guidelines for the treatment of systolic heart failure.17
Guideline-directed medical therapy for systolic heart failure with the agents and devices mentioned above improves quality of life and extends survival. It was therefore hard to imagine that any new additive therapy could offer significant incremental improvement. However, more than 5 years ago, in an ambitious effort, the largest global clinical trial ever performed in chronic heart failure was launched with a novel agent.18
THE PARADIGM-HF TRIAL
In this issue of the Journal, Sabe et al19 describe the results of the Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure (PARADIGM-HF) trial of the novel combination drug sacubitril-valsartan, designated LCZ696 during its development and now available as Entresto.20
The mean age of the 8,442 patients in PARADIGM-HF was 64, and 78% were men. Despite guideline-directed medical therapy (93% of the patients were receiving a beta-blocker, and 60% were receiving a mineralocorticoid receptor antagonist), patients had persistent symptoms and signs of heart failure, diminished health-related quality of life, reduced ejection fraction (mean 29%), and elevated n-terminal pro-B-type natriuretic peptide levels (median 1,608 pg/mL, interquartile range 886–3,221).
The investigators reported a remarkable 20% reduction in the primary outcome of death from cardiovascular causes or hospitalization for heart failure in the patients who received sacubitril-valsartan compared with enalapril.20
Sacubitril-valsartan was reviewed under a US Food and Drug Administration (FDA) program that provides expedited review of drugs that are intended to treat a serious disease or condition and that may provide a significant improvement over available therapy. It was also granted a fast-track designation, which supports FDA efforts to facilitate the development and expedite the review of drugs to treat serious and life-threatening conditions and fill an unmet medical need. The FDA approved sacubitril-valsartan on July 7, 2015, for use in place of an ACE inhibitor or ARB in patients with New York Heart Association class II, III, or IV heart failure with reduced ejection fraction.21
WHAT WE STILL NEED TO KNOW
The results of PARADIGM-HF are generalizable, and sacubitril-valsartan was well tolerated in patients whose blood pressure was acceptable and who were able to tolerate ACE inhibitors in target doses. More than 90% of patients were receiving a beta-blocker. The dosing of enalapril (target 10 mg twice a day) is the guideline-directed target dose, and ACE inhibition is considered the gold standard for heart failure with reduced ejection fraction. Sacubitril-valsartan vs enalapril was a very appropriate comparison.
Far fewer PARADIGM-HF patients outside the United States had an ICD than those in the United States, which is a common finding in global clinical trials. However, Desai et al reported that sacubitril-valsartan reduced rates of cardiovascular mortality both from worsening heart failure and from sudden cardiac death, independent of whether the patient had an ICD.22
Sacubitril-valsartan is taken twice a day, but most heart failure patients already take medications at several times during the day, so this should not pose a problem.
More information is needed on the use of this new drug in patients with New York Heart Association class IV symptoms, as only 60 patients with class IV symptoms were included in the PARADIGM-HF trial. Also, the efficacy of the drug in patients unable to tolerate a full dose will need to be analyzed.
PARADIGM-HF was conducted in stable, nonhospitalized patients with chronic heart failure; the use of the drug in new-onset heart failure and its initiation in hospitalized patients will require further study. In addition, the PARAGON-HF trial23 will examine the efficacy of sacubitril-valsartan in patients with heart failure and an ejection fraction of 45% or higher.
Sacubitril-valsartan ushers in a new era in heart failure treatment for patients with reduced ejection fraction and will certainly prompt quick revision of heart failure guidelines.
Three decades ago, the only drugs we had for treating chronic heart failure were digitalis and loop diuretics. The mortality rate was very high, and heart transplantation was a newly developing treatment that could help only a very few patients.
The early 80s heralded new hope for patients with heart failure, with the introduction of angiotensin-converting enzyme (ACE) inhibitors1–5 and, later, beta-blockers. Beta-blockers were considered contraindicated in heart failure until new trials provided evidence of dramatic benefit such as better quality of life and longer survival.6–8 ACE inhibitors, along with beta-blockers, quickly became the standard of care for all patients with systolic heart failure.
The implantable cardioverter-defibrillator (ICD) required numerous clinical trials in ischemic and nonischemic cardiomyopathy to define its role.9,10 Cardiac resynchronization therapy did not arrive until 15 years ago and is now indicated in a specific niche of patients with left bundle branch block.11,12 Mineralocorticoid antagonists required three pivotal clinical trials before their important role in the treatment of systolic heart failure was defined.13–16
And in the current decade, the roles of ACE inhibitors, angiotensin II receptor blockers (ARBs), beta-blockers, mineralocorticoid antagonists, ICDs, and cardiac resynchronization therapy have been further defined, as reflected in the latest guidelines for the treatment of systolic heart failure.17
Guideline-directed medical therapy for systolic heart failure with the agents and devices mentioned above improves quality of life and extends survival. It was therefore hard to imagine that any new additive therapy could offer significant incremental improvement. However, more than 5 years ago, in an ambitious effort, the largest global clinical trial ever performed in chronic heart failure was launched with a novel agent.18
THE PARADIGM-HF TRIAL
In this issue of the Journal, Sabe et al19 describe the results of the Prospective Comparison of ARNI With ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure (PARADIGM-HF) trial of the novel combination drug sacubitril-valsartan, designated LCZ696 during its development and now available as Entresto.20
The mean age of the 8,442 patients in PARADIGM-HF was 64, and 78% were men. Despite guideline-directed medical therapy (93% of the patients were receiving a beta-blocker, and 60% were receiving a mineralocorticoid receptor antagonist), patients had persistent symptoms and signs of heart failure, diminished health-related quality of life, reduced ejection fraction (mean 29%), and elevated n-terminal pro-B-type natriuretic peptide levels (median 1,608 pg/mL, interquartile range 886–3,221).
The investigators reported a remarkable 20% reduction in the primary outcome of death from cardiovascular causes or hospitalization for heart failure in the patients who received sacubitril-valsartan compared with enalapril.20
Sacubitril-valsartan was reviewed under a US Food and Drug Administration (FDA) program that provides expedited review of drugs that are intended to treat a serious disease or condition and that may provide a significant improvement over available therapy. It was also granted a fast-track designation, which supports FDA efforts to facilitate the development and expedite the review of drugs to treat serious and life-threatening conditions and fill an unmet medical need. The FDA approved sacubitril-valsartan on July 7, 2015, for use in place of an ACE inhibitor or ARB in patients with New York Heart Association class II, III, or IV heart failure with reduced ejection fraction.21
WHAT WE STILL NEED TO KNOW
The results of PARADIGM-HF are generalizable, and sacubitril-valsartan was well tolerated in patients whose blood pressure was acceptable and who were able to tolerate ACE inhibitors in target doses. More than 90% of patients were receiving a beta-blocker. The dosing of enalapril (target 10 mg twice a day) is the guideline-directed target dose, and ACE inhibition is considered the gold standard for heart failure with reduced ejection fraction. Sacubitril-valsartan vs enalapril was a very appropriate comparison.
Far fewer PARADIGM-HF patients outside the United States had an ICD than those in the United States, which is a common finding in global clinical trials. However, Desai et al reported that sacubitril-valsartan reduced rates of cardiovascular mortality both from worsening heart failure and from sudden cardiac death, independent of whether the patient had an ICD.22
Sacubitril-valsartan is taken twice a day, but most heart failure patients already take medications at several times during the day, so this should not pose a problem.
More information is needed on the use of this new drug in patients with New York Heart Association class IV symptoms, as only 60 patients with class IV symptoms were included in the PARADIGM-HF trial. Also, the efficacy of the drug in patients unable to tolerate a full dose will need to be analyzed.
PARADIGM-HF was conducted in stable, nonhospitalized patients with chronic heart failure; the use of the drug in new-onset heart failure and its initiation in hospitalized patients will require further study. In addition, the PARAGON-HF trial23 will examine the efficacy of sacubitril-valsartan in patients with heart failure and an ejection fraction of 45% or higher.
Sacubitril-valsartan ushers in a new era in heart failure treatment for patients with reduced ejection fraction and will certainly prompt quick revision of heart failure guidelines.
- Captopril Multicenter Research Group. A placebo-controlled trial of captopril in refractory chronic congestive heart failure. J Am Coll Cardiol 1983; 2:755–763.
- Effects of enalapril on mortality in severe congestive heart failure. Results of the Cooperative North Scandinavian Enalapril Survival Study (CONSENSUS). The CONSENSUS Trial Study Group. N Engl J Med 1987; 316:1429–1435.
- The SOLVD Investigators. Effect of enalapril on survival in patients with reduced left ventricular ejection fraction and congestive heart failure. N Engl J Med 1991; 325:293–302.
- Cohn JN, Johnson G, Ziesche S, et al. A comparison of enalapril with hydralazine-isosorbide dinitrate in the treatment of chronic congestive heart failure. N Engl J Med 1991; 325:303–310.
- Pfeffer MA, Braunwald E, Moyé LA, et al. Effect of captopril on mortality and morbidity in patients with left ventricular dysfunction after myocardial infarction. Results of the survival and ventricular enlargement trial. The SAVE Investigators. N Engl J Med 1992; 327:669–677.
- Packer M, Coats AJ, Fowler MB, et al. Effect of carvedilol on survival in severe chronic heart failure. N Engl J Med 2001; 344:1651–1658.
- Effect of metoprolol CR/XL in chronic heart failure: Metoprolol CR/XL Randomised Intervention Trial in Congestive Heart Failure (MERIT-HF). Lancet 1999; 353:2001–2007.
- Brophy JM, Joseph L, Rouleau JL. Beta-blockers in congestive heart failure. A Bayesian meta-analysis. Ann Intern Med 2001; 134:550–560.
- Buxton AE, Lee KL, Fisher JD, et al. A randomized study of the prevention of sudden death in patients with coronary artery disease. Multicenter Unsustained Tachycardia Trial Investigators. N Engl J Med 1999; 341:1882–1890.
- Moss AJ, Zareba W, Hall WJ, et al. Prophylactic implantation of a defibrillator in patients with myocardial infarction and reduced ejection fraction. N Engl J Med 2002; 346:877–883.
- Abraham WT, Fisher WG, Smith AL, et al; MIRACLE Study Group. Multicenter InSync Randomized Clinical Evaluation. Cardiac resynchronization in chronic heart failure. N Engl J Med 2002; 346:1845–1853.
- McAlister FA, Ezekowitz J, Hooton N, et al. Cardiac resynchronization therapy for patients with left ventricular systolic dysfunction: a systematic review. JAMA 2007; 297:2502–2514.
- Pitt B, Zannad F, Remme WJ, et al. The effect of spironolactone on morbidity and mortality in patients with severe heart failure. Randomized Aldactone Evaluation Study Investigators. N Engl J Med 1999; 341:709–717.
- Pitt B, Remme W, Zannad F, et al; Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival Study Investigators. Eplerenone, a selective aldosterone blocker, in patients with left ventricular dysfunction after myocardial infarction. N Engl J Med 2003; 348:1309–1321.
- Pitt B, White H, Nicolau J, et al; EPHESUS Investigators. Eplerenone reduces mortality 30 days after randomization following acute myocardial infarction in patients with left ventricular systolic dysfunction and heart failure. J Am Coll Cardiol 2005; 46:425–431.
- Zannad F, McMurray JJ, Krum H, et al; EMPHASIS-HF Study Group. Eplerenone in patients with systolic heart failure and mild symptoms. N Engl J Med 2011; 364:11–21.
- Yancy CW, Jessup M, Bozkurt B, et al; American College of Cardiology Foundation; American Heart Association Task Force on Practice Guidelines. 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2013; 62:e147–e239.
- McMurray JJ, Packer M, Desai AS, et al; PARADIGM-HF Committees and Investigators. Dual angiotensin receptor and neprilysin inhibition as an alternative to angiotensin-converting enzyme inhibition in patients with chronic systolic heart failure: rationale for and design of the Prospective comparison of ARNI with ACEI to Determine Impact on Global Mortality and morbidity in Heart Failure trial (PARADIGM-HF). Eur J Heart Fail 2013; 15:1062–1073.
- Sabe IA, Jacob MS, Taylor DO. A new class of drugs for systolic heart failure: The PARADIGM-HF study. Cleve Clin J Med 2015; 82:693–701.
- McMurray JJ, Packer M, Desai AS, et al; PARADIGM-HF Investigators and Committees. Angiotensin-neprilysin inhibition versus enalapril in heart failure. N Engl J Med 2014; 371:993–1004.
- US Food and Drug Administration. FDA approves new drug to treat heart failure. www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm453845.htm. Accessed September 2, 2015.
- Desai AS, McMurray JJ, Packer M, et al. Effect of the angiotensin-receptor-neprilysin inhibitor LCZ696 compared with enalapril on mode of death in heart failure patients. Eur Heart J 2015; 36:1990–1997.
- ClinicalTrials.gov. Efficacy and Safety of LCZ696 Compared to Valsartan, on Morbidity and Mortality in Heart Failure Patients With Preserved Ejection Fraction (PARAGON-HF). https://clinicaltrials.gov/ct2/show/NCT01920711. Accessed September 2, 2015.
Why do clinicians continue to order ‘routine preoperative tests’ despite the evidence?
Guidelines and practice advisories issued by several medical societies, including the American Society of Anesthesiologists,1 American Heart Association (AHA) and American College of Cardiology (ACC),2 and Society of General Internal Medicine,3 advise against routine preoperative testing for patients undergoing low-risk surgical procedures. Such testing often includes routine blood chemistry, complete blood cell counts, measures of the clotting system, and cardiac stress testing.
In this issue of the Cleveland Clinic Journal of Medicine, Dr. Nathan Houchens reviews the evidence against these measures.4
Despite a substantial body of evidence going back more than 2 decades that includes prospective randomized controlled trials,5–10 physicians continue to order unnecessary, ineffective, and costly tests in the perioperative period.11 The process of abandoning current medical practice—a phenomenon known as medical reversal12—often takes years,13 because it is more difficult to convince physicians to discontinue a current behavior than to implement a new one.14 The study of what makes physicians accept new therapies and abandon old ones began more than half a century ago.15
More recently, Cabana et al16 created a framework to understand why physicians do not follow clinical practice guidelines. Among the reasons are lack of familiarity or agreement with the contents of the guideline, lack of outcome expectancy, inertia of previous practice, and external barriers to implementation.
The rapid proliferation of guidelines in the past 20 years has led to numerous conflicting recommendations, many of which are based primarily on expert opinion.17 Guidelines based solely on randomized trials have also come under fire.18,19
In the case of preoperative testing, the recommendations are generally evidence-based and consistent. Why then do physicians appear to disregard the evidence? We propose several reasons why they might do so.
SOME PHYSICIANS ARE UNFAMILIAR WITH THE EVIDENCE
The complexity of the evidence summarized in guidelines has increased dramatically in the last decade, but the time physicians have to assess that evidence has not. The number of references in the executive summary of the ACC/AHA perioperative guidelines, for example, increased from 96 in 2002 to 252 in 2014. Most of the recommendations are backed by substantial amounts of high-quality evidence: 17 prospective and 13 retrospective studies demonstrate that routine testing with the prothrombin time and the partial thromboplastin time is not helpful in asymptomatic patients.20
Although compliance with medical evidence varies among specialties,21 most physicians do not have time to keep up with the ever-increasing amount of information. In cardiac risk assessment specifically, the number of available tests has proliferated rapidly.22–28 In a 2008 Harris Interactive survey, physicians reported that they do not routinely apply medical evidence; one-third believed they would do so more often if they had the time.29 Without information technology support to provide medical information at the point of care,30 especially in small practices, using evidence may not be practical. Simply making the information available online without actively promoting it does not improve utilization.31
As a consequence, physicians continue to order unnecessary tests, even though they may not feel confident interpreting the results.32
PHYSICIANS MAY NOT BELIEVE THE EVIDENCE
A lack of transparency in how evidence-based guidelines are developed and, sometimes, a lack of flexibility and relevance to clinical practice are important barriers to physicians' acceptance of and adherence to them.30
Even experts who write guidelines may not be swayed by the evidence. For example, a randomized prospective trial of almost 6,000 patients reported that coronary artery revascularization before elective major vascular surgery does not affect long-term mortality rates.33 Based on this study, the 2014 ACC/AHA guidelines2 advised against revascularization before noncardiac surgery when it is performed exclusively to reduce perioperative cardiac events. Yet the same guidelines recommend assessing for myocardial ischemia with a pharmacologic stress test in patients with elevated risk and poor or unknown functional capacity. Based on the extent of the stress test abnormalities, coronary angiography and revascularization are then suggested for patients willing to undergo coronary artery bypass grafting (CABG) or percutaneous coronary intervention.2
The 2014 European Society of Cardiology and European Society of Anaesthesiology guidelines directly recommend revascularization before high-risk surgery, depending on the extent of a stress-induced perfusion defect.34 This recommendation relies on data from the Coronary Artery Surgery Study registry, which included almost 25,000 patients who underwent coronary angiography from 1975 through 1979. Of these, 1,961 patients underwent high-risk surgery during a mean follow-up of 4.1 years, and in this observational cohort those who had undergone CABG had a lower risk of death and myocardial infarction after surgery.35 These data are more than 30 years old, operative mortality rates and the treatment of coronary artery disease have changed substantially in the interim, and the registry study did not test whether preoperative revascularization reduces postoperative mortality. That medical societies34 continue to rely on them reflects a certain resistance to accepting the results of the more recent and more relevant randomized trial.33
Other physicians may also prefer to rely on selective data or to simply defer to guidelines that support their beliefs. Some physicians find that evidence-based guidelines are impractical and rigid and reduce their autonomy.36 For many physicians, trials that use surrogate end points and short-term outcomes are not sufficiently compelling to make them abandon current practice.37 Finally, when members of the guideline committees have financial associations with the pharmaceutical industry, or when corporations interested in the outcomes provide financial support for a trial’s development, the likelihood of a recommendation being trusted and used by physicians is drastically reduced.38
PRACTICING DEFENSIVELY
Even if physicians are familiar with the evidence and believe it, they may choose not to act on it. One reason is fear of litigation.
In court, attorneys can use guidelines as well as articles from medical journals as both exculpatory and inculpatory evidence. But they more frequently rely on the standard of care, ie, what most physicians would do under similar circumstances. If a patient has a bad outcome, such as a perioperative myocardial infarction or life-threatening bleeding, the defendant may assert that testing was unwarranted because guidelines do not recommend it or because the probability of such an outcome was low. However, because the outcome did occur, the jury may not believe that the probability was low enough to justify forgoing the test, especially if expert witnesses testify that the standard of care would have been to order it.
In areas of controversy, physicians generally believe that erring on the side of more testing is more defensible in court.39 Indeed, following established practice traditions, learned during residency,11,40 may absolve physicians in negligence claims if the way medical care was delivered is supported by recognized and respected physicians.41
As a consequence, physicians prefer to practice the same way their peers do rather than follow the evidence. Unfortunately, the more often physicians order these tests for low-risk patients, the more likely the tests are to become accepted as the legal standard of care.42 In this vicious circle, the new standard of care can increase the risk of litigation for others.43 Although unnecessary testing that leads to harmful invasive tests or procedures can also result in malpractice litigation, physicians may not consider this possibility.
FINANCIAL INCENTIVES
The threat of malpractice litigation provides a negative financial incentive to keep performing unnecessary tests, but there are a number of positive incentives as well.
First, physicians often feel compelled to order tests when they believe that physicians referring the patients want the tests done, or when they fear that not completing the tests could delay or cancel the scheduled surgery.40 Refusing to order the test could result in a loss of future referrals. In contrast, ordering tests allows them to meet expectations, preserve trust, and appear more valuable to referring physicians and their patients.
Insurance companies are complicit in these practices. Paying for unnecessary tests can create direct financial incentives for physicians or institutions that own on-site laboratories or diagnostic imaging equipment, and evidence shows that under those circumstances physicians do order more tests. Self-referral and referral to facilities in which physicians have a financial interest are associated with increased healthcare costs.44 In addition to direct revenues for the tests performed, physicians may also bill for test interpretation, follow-up visits, and additional procedures generated by test results.
This may be one reason why the ordering of cardiac tests (stress testing, echocardiography, vascular ultrasonography) by US physicians varies widely from state to state.45
RECOMMENDATIONS TO REDUCE INAPPROPRIATE TESTING
To counter these influences, we propose a multifaceted intervention that includes the following:
- Establish preoperative clinics staffed by experts. Despite the large volume of potentially relevant evidence, the number of articles directly supporting or refuting preoperative laboratory testing is small enough that physicians who routinely perform preoperative assessment should easily master it.
- Identify local leaders who can convince colleagues of the evidence. Distribute evidence summaries or guidelines with references to major articles that support each recommendation.
- Work with clinical practice committees to establish new standards of care within the hospital. Establish hospital care paths to dictate and support local standards of care. Measure individual physician performance and offer feedback with the goal of reducing utilization.
- Encourage national societies to recommend that insurance companies remove inappropriate financial incentives. If companies deny payment for inappropriate testing, physicians will stop ordering it. Even requiring preauthorization of tests should reduce utilization. The Choosing Wisely campaign (www.choosingwisely.org) would be a good place to start.
- Committee on Standards and Practice Parameters, Apfelbaum JL, Connis RT, Nickinovich DG, et al. Practice advisory for preanesthesia evaluation. An updated report by the American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Anesthesiology 2012; 116:522–538.
- Fleisher LA, Fleischmann KE, Auerbach AD, et al; American College of Cardiology and American Heart Association. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. J Am Coll Cardiol 2014; 64:e77–e137.
- Society of General Internal Medicine. Don’t perform routine pre-operative testing before low-risk surgical procedures. Choosing Wisely. An initiative of the ABIM Foundation. September 12, 2013. www.choosingwisely.org/clinician-lists/society-general-internal-medicine-routine-preoperative-testing-before-low-risk-surgery/. Accessed August 31, 2015.
- Houchens N. Should healthy patients undergoing low-risk, elective, noncardiac surgery undergo routine preoperative laboratory testing? Cleve Clin J Med 2015; 82:664–666.
- Rohrer MJ, Michelotti MC, Nahrwold DL. A prospective evaluation of the efficacy of preoperative coagulation testing. Ann Surg 1988; 208:554–557.
- Eagle KA, Coley CM, Newell JB, et al. Combining clinical and thallium data optimizes preoperative assessment of cardiac risk before major vascular surgery. Ann Intern Med 1989; 110:859–866.
- Mangano DT, London MJ, Tubau JF, et al. Dipyridamole thallium-201 scintigraphy as a preoperative screening test. A reexamination of its predictive potential. Study of Perioperative Ischemia Research Group. Circulation 1991; 84:493–502.
- Stratmann HG, Younis LT, Wittry MD, Amato M, Mark AL, Miller DD. Dipyridamole technetium 99m sestamibi myocardial tomography for preoperative cardiac risk stratification before major or minor nonvascular surgery. Am Heart J 1996; 132:536–541.
- Schein OD, Katz J, Bass EB, et al. The value of routine preoperative medical testing before cataract surgery. Study of Medical Testing for Cataract Surgery. N Engl J Med 2000; 342:168–175.
- Hashimoto J, Nakahara T, Bai J, Kitamura N, Kasamatsu T, Kubo A. Preoperative risk stratification with myocardial perfusion imaging in intermediate and low-risk non-cardiac surgery. Circ J 2007; 71:1395–1400.
- Smetana GW. The conundrum of unnecessary preoperative testing. JAMA Intern Med 2015; 175:1359–1361.
- Prasad V, Cifu A. Medical reversal: why we must raise the bar before adopting new technologies. Yale J Biol Med 2011; 84:471–478.
- Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA 2007; 298:2517–2526.
- Moscucci M. Medical reversal, clinical trials, and the “late” open artery hypothesis in acute myocardial infarction. Arch Intern Med 2011; 171:1643–1644.
- Coleman J, Menzel H, Katz E. Social processes in physicians’ adoption of a new drug. J Chronic Dis 1959; 9:1–19.
- Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282:1458–1465.
- Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009; 301:831–841.
- Moher D, Hopewell S, Schulz KF, et al; CONSORT. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg 2012; 10:28–55.
- Gattinoni L, Giomarelli P. Acquiring knowledge in intensive care: merits and pitfalls of randomized controlled trials. Intensive Care Med 2015; 41:1460–1464.
- Levy JH, Szlam F, Wolberg AS, Winkler A. Clinical use of the activated partial thromboplastin time and prothrombin time for screening: a review of the literature and current guidelines for testing. Clin Lab Med 2014; 34:453–477.
- Dale W, Hemmerich J, Moliski E, Schwarze ML, Tung A. Effect of specialty and recent experience on perioperative decision-making for abdominal aortic aneurysm repair. J Am Geriatr Soc 2012; 60:1889–1894.
- Underwood SR, Anagnostopoulos C, Cerqueira M, et al; British Cardiac Society, British Nuclear Cardiology Society, British Nuclear Medicine Society, Royal College of Physicians of London. Myocardial perfusion scintigraphy: the evidence. Eur J Nucl Med Mol Imaging 2004; 31:261–291.
- Das MK, Pellikka PA, Mahoney DW, et al. Assessment of cardiac risk before nonvascular surgery: dobutamine stress echocardiography in 530 patients. J Am Coll Cardiol 2000; 35:1647–1653.
- Meijboom WB, Mollet NR, Van Mieghem CA, et al. Pre-operative computed tomography coronary angiography to detect significant coronary artery disease in patients referred for cardiac valve surgery. J Am Coll Cardiol 2006; 48:1658–1665.
- Russo V, Gostoli V, Lovato L, et al. Clinical value of multidetector CT coronary angiography as a preoperative screening test before non-coronary cardiac surgery. Heart 2007; 93:1591–1598.
- Schuetz GM, Zacharopoulou NM, Schlattmann P, Dewey M. Meta-analysis: noninvasive coronary angiography using computed tomography versus magnetic resonance imaging. Ann Intern Med 2010; 152:167–177.
- Bluemke DA, Achenbach S, Budoff M, et al. Noninvasive coronary artery imaging: magnetic resonance angiography and multidetector computed tomography angiography: a scientific statement from the American Heart Association Committee on Cardiovascular Imaging and Intervention of the Council on Cardiovascular Radiology and Intervention, and the Councils on Clinical Cardiology and Cardiovascular Disease in the Young. Circulation 2008; 118:586–606.
- Nagel E, Lehmkuhl HB, Bocksch W, et al. Noninvasive diagnosis of ischemia-induced wall motion abnormalities with the use of high-dose dobutamine stress MRI: comparison with dobutamine stress echocardiography. Circulation 1999; 99:763–770.
- Taylor H. Physicians’ use of clinical guidelines—and how to increase it. Healthcare News 2008; 8:32–55. www.harrisinteractive.com/vault/HI_HealthCareNews2008Vol8_Iss04.pdf. Accessed August 31, 2015.
- Kenefick H, Lee J, Fleishman V. Improving physician adherence to clinical practice guidelines. Barriers and strategies for change. New England Healthcare Institute, February 2008. www.nehi.net/writable/publication_files/file/cpg_report_final.pdf. Accessed August 31, 2015.
- Williams J, Cheung WY, Price DE, et al. Clinical guidelines online: do they improve compliance? Postgrad Med J 2004; 80:415–419.
- Wians F. Clinical laboratory tests: which, why, and what do the results mean? Lab Medicine 2009; 40:105–113.
- McFalls EO, Ward HB, Moritz TE, et al. Coronary-artery revascularization before elective major vascular surgery. N Engl J Med 2004; 351:2795–2804.
- Kristensen SD, Knuuti J, Saraste A, et al; Authors/Task Force Members. 2014 ESC/ESA guidelines on non-cardiac surgery: cardiovascular assessment and management: The Joint Task Force on non-cardiac surgery: cardiovascular assessment and management of the European Society of Cardiology (ESC) and the European Society of Anaesthesiology (ESA). Eur Heart J 2014; 35:2383–2431.
- Eagle KA, Rihal CS, Mickel MC, Holmes DR, Foster ED, Gersh BJ. Cardiac risk of noncardiac surgery: influence of coronary disease and type of surgery in 3368 operations. CASS Investigators and University of Michigan Heart Care Program. Coronary Artery Surgery Study. Circulation 1997; 96:1882–1887.
- Farquhar CM, Kofa EW, Slutsky JR. Clinicians’ attitudes to clinical practice guidelines: a systematic review. Med J Aust 2002; 177:502–506.
- Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307:37–38.
- Steinbrook R. Guidance for guidelines. N Engl J Med 2007; 356:331–333.
- Sirovich BE, Woloshin S, Schwartz LM. Too little? Too much? Primary care physicians’ views on US health care: a brief report. Arch Intern Med 2011; 171:1582–1585.
- Brown SR, Brown J. Why do physicians order unnecessary preoperative tests? A qualitative study. Fam Med 2011; 43:338–343.
- LeCraw LL. Use of clinical practice guidelines in medical malpractice litigation. J Oncol Pract 2007; 3:254.
- Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA 2005; 293:2609–2617.
- Budetti PP. Tort reform and the patient safety movement: seeking common ground. JAMA 2005; 293:2660–2662.
- Bishop TF, Federman AD, Ross JS. Laboratory test ordering at physician offices with and without on-site laboratories. J Gen Intern Med 2010; 25:1057–1063.
- Rosenthal E. Medical costs rise as retirees winter in Florida. The New York Times, Jan 31, 2015. http://nyti.ms/1vmjfa5. Accessed August 31, 2015.
Guidelines and practice advisories issued by several medical societies, including the American Society of Anesthesiologists,1 American Heart Association (AHA) and American College of Cardiology (ACC),2 and Society of General Internal Medicine,3 advise against routine preoperative testing for patients undergoing low-risk surgical procedures. Such testing often includes routine blood chemistry, complete blood cell counts, measures of the clotting system, and cardiac stress testing.
In this issue of the Cleveland Clinic Journal of Medicine, Dr. Nathan Houchens reviews the evidence against these measures.4
Despite a substantial body of evidence going back more than 2 decades that includes prospective randomized controlled trials,5–10 physicians continue to order unnecessary, ineffective, and costly tests in the perioperative period.11 The process of abandoning current medical practice—a phenomenon known as medical reversal12—often takes years,13 because it is more difficult to convince physicians to discontinue a current behavior than to implement a new one.14 The study of what makes physicians accept new therapies and abandon old ones began more than half a century ago.15
More recently, Cabana et al16 created a framework to understand why physicians do not follow clinical practice guidelines. Among the reasons are lack of familiarity or agreement with the contents of the guideline, lack of outcome expectancy, inertia of previous practice, and external barriers to implementation.
The rapid proliferation of guidelines in the past 20 years has led to numerous conflicting recommendations, many of which are based primarily on expert opinion.17 Guidelines based solely on randomized trials have also come under fire.18,19
In the case of preoperative testing, the recommendations are generally evidence-based and consistent. Why then do physicians appear to disregard the evidence? We propose several reasons why they might do so.
SOME PHYSICIANS ARE UNFAMILIAR WITH THE EVIDENCE
The complexity of the evidence summarized in guidelines has increased exponentially in the last decade, but physician time to assess the evidence has not increased. For example, the number of references in the executive summary of the ACC/AHA perioperative guidelines increased from 96 in 2002 to 252 in 2014. Most of the recommendations are backed by substantial amounts of high-quality evidence. For example, there are 17 prospective and 13 retrospective studies demonstrating that routine testing with the prothrombin time and the partial thromboplastin time is not helpful in asymptomatic patients.20
Although compliance with medical evidence varies among specialties,21 most physicians do not have time to keep up with the ever-increasing amount of information. Specifically in the area of cardiac risk assessment, there has been a rapid proliferation of tests that can be used to assess cardiac risk.22–28 In a Harris Interactive survey from 2008, physicians reported not applying medical evidence routinely. One-third believed they would do it more if they had the time.29 Without information technology support to provide medical information at the point of care,30 especially in small practices, using evidence may not be practical. Simply making the information available online and not promoting it actively does not improve utilization.31
As a consequence, physicians continue to order unnecessary tests, even though they may not feel confident interpreting the results.32
PHYSICIANS MAY NOT BELIEVE THE EVIDENCE
A lack of transparency in evidence-based guidelines and, sometimes, a lack of flexibility and relevance to clinical practice are important barriers to physicians’ acceptance of and adherence to evidence-based clinical practice guidelines.30
Even experts who write guidelines may not be swayed by the evidence. For example, a randomized prospective trial of almost 6,000 patients reported that coronary artery revascularization before elective major vascular surgery does not affect long-term mortality rates.33 Based on this study, the 2014 ACC/AHA guidelines2 advised against revascularization before noncardiac surgery exclusively to reduce perioperative cardiac events. Yet the same guidelines do recommend assessing for myocardial ischemia in patients with elevated risk and poor or unknown functional capacity, using a pharmacologic stress test. Based on the extent of the stress test abnormalities, coronary angiography and revascularization are then suggested for patients willing to undergo coronary artery bypass grafting (CABG) or percutaneous coronary intervention.2
The 2014 European Society of Cardiology and European Society of Anaesthesiology guidelines directly recommend revascularization before high-risk surgery, depending on the extent of a stress-induced perfusion defect.34 This recommendation relies on data from the Coronary Artery Surgery Study registry, which included almost 25,000 patients who underwent coronary angiography from 1975 through 1979. At a mean follow-up of 4.1 years, 1,961 patients underwent high-risk surgery. In this observational cohort, patients who underwent CABG had a lower risk of death and myocardial infarction after surgery.35 The reliance of medical societies34 on data that are more than 30 years old—when operative mortality rates and the treatment of coronary artery disease have changed substantially in the interim and despite the fact that this study did not test whether preoperative revascularization can reduce postoperative mortality—reflects a certain resistance to accept the results of the more recent and relevant randomized trial.33
Other physicians may also prefer to rely on selective data or to simply defer to guidelines that support their beliefs. Some physicians find that evidence-based guidelines are impractical and rigid and reduce their autonomy.36 For many physicians, trials that use surrogate end points and short-term outcomes are not sufficiently compelling to make them abandon current practice.37 Finally, when members of the guideline committees have financial associations with the pharmaceutical industry, or when corporations interested in the outcomes provide financial support for a trial’s development, the likelihood of a recommendation being trusted and used by physicians is drastically reduced.38
PRACTICING DEFENSIVELY
Even if physicians are familiar with the evidence and believe it, they may choose not to act on it. One reason is fear of litigation.
In court, attorneys can use guidelines as well as articles from medical journals as both exculpatory and inculpatory evidence. But they more frequently rely on the standard of care, or what most physicians would do under similar circumstances. If a patient has a bad outcome, such as a perioperative myocardial infarction or life-threatening bleeding, the defendant may assert that testing was unwarranted because guidelines do not recommend it or because the probability of such an outcome was low. However, because the outcome occurred, the jury may not believe that the probability was low enough not to consider, especially if expert witnesses testify that the standard of care would be to order the test.
In areas of controversy, physicians generally believe that erring on the side of more testing is more defensible in court.39 Indeed, following established practice traditions, learned during residency,11,40 may absolve physicians in negligence claims if the way medical care was delivered is supported by recognized and respected physicians.41
As a consequence, physicians prefer to practice the same way their peers do rather than follow the evidence. Unfortunately, the more procedures physicians perform for low-risk patients, the more likely these tests will become accepted as the legal standard of care.42 In this vicious circle, the new standard of care can increase the risk of litigation for others.43 Although unnecessary testing that leads to harmful invasive tests or procedures can also result in malpractice litigation, physicians may not consider this possibility.
FINANCIAL INCENTIVES
The threat of malpractice litigation provides a negative financial incentive to keep performing unnecessary tests, but there are a number of positive incentives as well.
First, physicians often feel compelled to order tests when they believe that physicians referring the patients want the tests done, or when they fear that not completing the tests could delay or cancel the scheduled surgery.40 Refusing to order the test could result in a loss of future referrals. In contrast, ordering tests allows them to meet expectations, preserve trust, and appear more valuable to referring physicians and their patients.
Insurance companies are complicit in these practices. Paying for unnecessary tests can create direct financial incentives for physicians or institutions that own on-site laboratories or diagnostic imaging equipment. Evidence shows that under those circumstances physicians do order more tests. Self-referral and referral to facilities where physicians have a financial interest is associated with increased healthcare costs.44 In addition to direct revenues for the tests performed, physicians may also bill for test interpretation, follow-up visits, and additional procedures generated from test results.
This may be one explanation why the ordering of cardiac tests (stress testing, echocardiography, vascular ultrasonography) by US physicians varies widely from state to state.45
RECOMMENDATIONS TO REDUCE INAPPROPRIATE TESTING
To counter these influences, we propose a multifaceted intervention that includes the following:
- Establish preoperative clinics staffed by experts. Despite the large volume of potentially relevant evidence, the number of articles directly supporting or refuting preoperative laboratory testing is small enough that physicians who routinely engage in preoperative assessment should easily master the evidence.
- Identify local leaders who can convince colleagues of the evidence. Distribute evidence summaries or guidelines with references to major articles that support each recommendation.
- Work with clinical practice committees to establish new standards of care within the hospital. Establish hospital care paths to dictate and support local standards of care. Measure individual physician performance and offer feedback with the goal of reducing utilization.
- National societies should recommend that insurance companies remove inappropriate financial incentives. If companies deny payment for inappropriate testing, physicians will stop ordering it. Even requirements for preauthorization of tests should reduce utilization. The Choosing Wisely campaign (www.choosingwisely.org) would be a good place to start.
Guidelines and practice advisories issued by several medical societies, including the American Society of Anesthesiologists,1 American Heart Association (AHA) and American College of Cardiology (ACC),2 and Society of General Internal Medicine,3 advise against routine preoperative testing for patients undergoing low-risk surgical procedures. Such testing often includes routine blood chemistry, complete blood cell counts, measures of the clotting system, and cardiac stress testing.
In this issue of the Cleveland Clinic Journal of Medicine, Dr. Nathan Houchens reviews the evidence against these measures.4
Despite a substantial body of evidence going back more than 2 decades that includes prospective randomized controlled trials,5–10 physicians continue to order unnecessary, ineffective, and costly tests in the perioperative period.11 The process of abandoning current medical practice—a phenomenon known as medical reversal12—often takes years,13 because it is more difficult to convince physicians to discontinue a current behavior than to implement a new one.14 The study of what makes physicians accept new therapies and abandon old ones began more than half a century ago.15
More recently, Cabana et al16 created a framework to understand why physicians do not follow clinical practice guidelines. Among the reasons are lack of familiarity or agreement with the contents of the guideline, lack of outcome expectancy, inertia of previous practice, and external barriers to implementation.
The rapid proliferation of guidelines in the past 20 years has led to numerous conflicting recommendations, many of which are based primarily on expert opinion.17 Guidelines based solely on randomized trials have also come under fire.18,19
In the case of preoperative testing, the recommendations are generally evidence-based and consistent. Why then do physicians appear to disregard the evidence? We propose several reasons why they might do so.
SOME PHYSICIANS ARE UNFAMILIAR WITH THE EVIDENCE
The complexity of the evidence summarized in guidelines has increased exponentially in the last decade, but physician time to assess the evidence has not increased. For example, the number of references in the executive summary of the ACC/AHA perioperative guidelines increased from 96 in 2002 to 252 in 2014. Most of the recommendations are backed by substantial amounts of high-quality evidence. For example, there are 17 prospective and 13 retrospective studies demonstrating that routine testing with the prothrombin time and the partial thromboplastin time is not helpful in asymptomatic patients.20
Although compliance with medical evidence varies among specialties,21 most physicians do not have time to keep up with the ever-increasing amount of information. Specifically in the area of cardiac risk assessment, there has been a rapid proliferation of tests that can be used to assess cardiac risk.22–28 In a Harris Interactive survey from 2008, physicians reported not applying medical evidence routinely. One-third believed they would do it more if they had the time.29 Without information technology support to provide medical information at the point of care,30 especially in small practices, using evidence may not be practical. Simply making the information available online and not promoting it actively does not improve utilization.31
As a consequence, physicians continue to order unnecessary tests, even though they may not feel confident interpreting the results.32
PHYSICIANS MAY NOT BELIEVE THE EVIDENCE
A lack of transparency in evidence-based guidelines and, sometimes, a lack of flexibility and relevance to clinical practice are important barriers to physicians’ acceptance of and adherence to evidence-based clinical practice guidelines.30
Even experts who write guidelines may not be swayed by the evidence. For example, a randomized prospective trial of almost 6,000 patients reported that coronary artery revascularization before elective major vascular surgery does not affect long-term mortality rates.33 Based on this study, the 2014 ACC/AHA guidelines2 advised against revascularization before noncardiac surgery exclusively to reduce perioperative cardiac events. Yet the same guidelines do recommend assessing for myocardial ischemia in patients with elevated risk and poor or unknown functional capacity, using a pharmacologic stress test. Based on the extent of the stress test abnormalities, coronary angiography and revascularization are then suggested for patients willing to undergo coronary artery bypass grafting (CABG) or percutaneous coronary intervention.2
The 2014 European Society of Cardiology and European Society of Anaesthesiology guidelines directly recommend revascularization before high-risk surgery, depending on the extent of a stress-induced perfusion defect.34 This recommendation relies on data from the Coronary Artery Surgery Study registry, which included almost 25,000 patients who underwent coronary angiography from 1975 through 1979. At a mean follow-up of 4.1 years, 1,961 patients underwent high-risk surgery. In this observational cohort, patients who underwent CABG had a lower risk of death and myocardial infarction after surgery.35 The reliance of medical societies34 on data that are more than 30 years old—when operative mortality rates and the treatment of coronary artery disease have changed substantially in the interim and despite the fact that this study did not test whether preoperative revascularization can reduce postoperative mortality—reflects a certain resistance to accept the results of the more recent and relevant randomized trial.33
Other physicians may also prefer to rely on selective data or to simply defer to guidelines that support their beliefs. Some physicians find that evidence-based guidelines are impractical and rigid and reduce their autonomy.36 For many physicians, trials that use surrogate end points and short-term outcomes are not sufficiently compelling to make them abandon current practice.37 Finally, when members of the guideline committees have financial associations with the pharmaceutical industry, or when corporations interested in the outcomes provide financial support for a trial’s development, the likelihood of a recommendation being trusted and used by physicians is drastically reduced.38
PRACTICING DEFENSIVELY
Even if physicians are familiar with the evidence and believe it, they may choose not to act on it. One reason is fear of litigation.
In court, attorneys can use guidelines as well as articles from medical journals as both exculpatory and inculpatory evidence. But they more frequently rely on the standard of care, or what most physicians would do under similar circumstances. If a patient has a bad outcome, such as a perioperative myocardial infarction or life-threatening bleeding, the defendant may assert that testing was unwarranted because guidelines do not recommend it or because the probability of such an outcome was low. However, because the outcome occurred, the jury may not believe that the probability was low enough not to consider, especially if expert witnesses testify that the standard of care would be to order the test.
In areas of controversy, physicians generally believe that erring on the side of more testing is more defensible in court.39 Indeed, following established practice traditions, learned during residency,11,40 may absolve physicians in negligence claims if the way medical care was delivered is supported by recognized and respected physicians.41
As a consequence, physicians prefer to practice the same way their peers do rather than follow the evidence. Unfortunately, the more procedures physicians perform for low-risk patients, the more likely these tests will become accepted as the legal standard of care.42 In this vicious circle, the new standard of care can increase the risk of litigation for others.43 Although unnecessary testing that leads to harmful invasive tests or procedures can also result in malpractice litigation, physicians may not consider this possibility.
FINANCIAL INCENTIVES
The threat of malpractice litigation provides a negative financial incentive to keep performing unnecessary tests, but there are a number of positive incentives as well.
First, physicians often feel compelled to order tests when they believe that physicians referring the patients want the tests done, or when they fear that not completing the tests could delay or cancel the scheduled surgery.40 Refusing to order the test could result in a loss of future referrals. In contrast, ordering tests allows them to meet expectations, preserve trust, and appear more valuable to referring physicians and their patients.
Insurance companies are complicit in these practices. Paying for unnecessary tests can create direct financial incentives for physicians or institutions that own on-site laboratories or diagnostic imaging equipment. Evidence shows that under those circumstances physicians do order more tests. Self-referral and referral to facilities where physicians have a financial interest is associated with increased healthcare costs.44 In addition to direct revenues for the tests performed, physicians may also bill for test interpretation, follow-up visits, and additional procedures generated from test results.
This may be one explanation why the ordering of cardiac tests (stress testing, echocardiography, vascular ultrasonography) by US physicians varies widely from state to state.45
RECOMMENDATIONS TO REDUCE INAPPROPRIATE TESTING
To counter these influences, we propose a multifaceted intervention that includes the following:
- Establish preoperative clinics staffed by experts. Despite the large volume of potentially relevant evidence, the number of articles directly supporting or refuting preoperative laboratory testing is small enough that physicians who routinely engage in preoperative assessment should easily master the evidence.
- Identify local leaders who can convince colleagues of the evidence. Distribute evidence summaries or guidelines with references to major articles that support each recommendation.
- Work with clinical practice committees to establish new standards of care within the hospital. Establish hospital care paths to dictate and support local standards of care. Measure individual physician performance and offer feedback with the goal of reducing utilization.
- National societies should recommend that insurance companies remove inappropriate financial incentives. If companies deny payment for inappropriate testing, physicians will stop ordering it. Even requirements for preauthorization of tests should reduce utilization. The Choosing Wisely campaign (www.choosingwisely.org) would be a good place to start.
- Committee on Standards and Practice Parameters, Apfelbaum JL, Connis RT, Nickinovich DG, et al. Practice advisory for preanesthesia evaluation. An updated report by the American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Anesthesiology 2012; 116:522–538.
- Fleisher LA, Fleischmann KE, Auerbach AD, et al; American College of Cardiology and American Heart Association. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. J Am Coll Cardiol 2014; 64:e77–e137.
- Society of General Internal Medicine. Don’t perform routine pre-operative testing before low-risk surgical procedures. Choosing Wisely. An initiative of the ABIM Foundation. September 12, 2013. www.choosingwisely.org/clinician-lists/society-general-internal-medicine-routine-preoperative-testing-before-low-risk-surgery/. Accessed August 31, 2015.
- Houchens N. Should healthy patients undergoing low-risk, elective, noncardiac surgery undergo routine preoperative laboratory testing? Cleve Clin J Med 2015; 82:664–666.
- Rohrer MJ, Michelotti MC, Nahrwold DL. A prospective evaluation of the efficacy of preoperative coagulation testing. Ann Surg 1988; 208:554–557.
- Eagle KA, Coley CM, Newell JB, et al. Combining clinical and thallium data optimizes preoperative assessment of cardiac risk before major vascular surgery. Ann Intern Med 1989; 110:859–866.
- Mangano DT, London MJ, Tubau JF, et al. Dipyridamole thallium-201 scintigraphy as a preoperative screening test. A reexamination of its predictive potential. Study of Perioperative Ischemia Research Group. Circulation 1991; 84:493–502.
- Stratmann HG, Younis LT, Wittry MD, Amato M, Mark AL, Miller DD. Dipyridamole technetium 99m sestamibi myocardial tomography for preoperative cardiac risk stratification before major or minor nonvascular surgery. Am Heart J 1996; 132:536–541.
- Schein OD, Katz J, Bass EB, et al. The value of routine preoperative medical testing before cataract surgery. Study of Medical Testing for Cataract Surgery. N Engl J Med 2000; 342:168–175.
- Hashimoto J, Nakahara T, Bai J, Kitamura N, Kasamatsu T, Kubo A. Preoperative risk stratification with myocardial perfusion imaging in intermediate and low-risk non-cardiac surgery. Circ J 2007; 71:1395–1400.
- Smetana GW. The conundrum of unnecessary preoperative testing. JAMA Intern Med 2015; 175:1359–1361.
- Prasad V, Cifu A. Medical reversal: why we must raise the bar before adopting new technologies. Yale J Biol Med 2011; 84:471–478.
- Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA 2007; 298:2517–2526.
- Moscucci M. Medical reversal, clinical trials, and the “late” open artery hypothesis in acute myocardial infarction. Arch Intern Med 2011; 171:1643–1644.
- Coleman J, Menzel H, Katz E. Social processes in physicians’ adoption of a new drug. J Chronic Dis 1959; 9:1–19.
- Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282:1458–1465.
- Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009; 301:831–841.
- Moher D, Hopewell S, Schulz KF, et al; CONSORT. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg 2012; 10:28–55.
- Gattinoni L, Giomarelli P. Acquiring knowledge in intensive care: merits and pitfalls of randomized controlled trials. Intensive Care Med 2015; 41:1460–1464.
- Levy JH, Szlam F, Wolberg AS, Winkler A. Clinical use of the activated partial thromboplastin time and prothrombin time for screening: a review of the literature and current guidelines for testing. Clin Lab Med 2014; 34:453–477.
- Dale W, Hemmerich J, Moliski E, Schwarze ML, Tung A. Effect of specialty and recent experience on perioperative decision-making for abdominal aortic aneurysm repair. J Am Geriatr Soc 2012; 60:1889–1894.
- Underwood SR, Anagnostopoulos C, Cerqueira M, et al; British Cardiac Society, British Nuclear Cardiology Society, British Nuclear Medicine Society, Royal College of Physicians of London, Royal College of Physicians of London. Myocardial perfusion scintigraphy: the evidence. Eur J Nucl Med Mol Imaging 2004; 31:261–291.
- Das MK, Pellikka PA, Mahoney DW, et al. Assessment of cardiac risk before nonvascular surgery: dobutamine stress echocardiography in 530 patients. J Am Coll Cardiol 2000; 35:1647–1653.
- Meijboom WB, Mollet NR, Van Mieghem CA, et al. Pre-operative computed tomography coronary angiography to detect significant coronary artery disease in patients referred for cardiac valve surgery. J Am Coll Cardiol 2006; 48:1658–1665.
- Russo V, Gostoli V, Lovato L, et al. Clinical value of multidetector CT coronary angiography as a preoperative screening test before non-coronary cardiac surgery. Heart 2007; 93:1591–1598.
- Schuetz GM, Zacharopoulou NM, Schlattmann P, Dewey M. Meta-analysis: noninvasive coronary angiography using computed tomography versus magnetic resonance imaging. Ann Intern Med 2010; 152:167–177.
- Bluemke DA, Achenbach S, Budoff M, et al. Noninvasive coronary artery imaging: magnetic resonance angiography and multidetector computed tomography angiography: a scientific statement from the American Heart Association Committee on Cardiovascular Imaging and Intervention of the Council on Cardiovascular Radiology and Intervention, and the Councils on Clinical Cardiology and Cardiovascular Disease in the Young. Circulation 2008; 118:586–606.
- Nagel E, Lehmkuhl HB, Bocksch W, et al. Noninvasive diagnosis of ischemia-induced wall motion abnormalities with the use of high-dose dobutamine stress MRI: comparison with dobutamine stress echocardiography. Circulation 1999; 99:763–770.
- Taylor H. Physicians’ use of clinical guidelines—and how to increase it. Healthcare News 2008; 8:32–55. www.harrisinteractive.com/vault/HI_HealthCareNews2008Vol8_Iss04.pdf. Accessed August 31, 2015.
- Kenefick H, Lee J, Fleishman V. Improving physician adherence to clinical practice guidelines. Barriers and stragies for change. New England Healthcare Institute, February 2008. www.nehi.net/writable/publication_files/file/cpg_report_final.pdf. Accessed August 31, 2015.
- Williams J, Cheung WY, Price DE, et al. Clinical guidelines online: do they improve compliance? Postgrad Med J 2004; 80:415–419.
- Wians F. Clinical laboratory tests: which, why, and what do the results mean? Lab Medicine 2009; 40:105–113.
- McFalls EO, Ward HB, Moritz TE, et al. Coronary-artery revascularization before elective major vascular surgery. N Engl J Med 2004; 351:2795–2804.
- Kristensen SD, Knuuti J, Saraste A, et al; Authors/Task Force Members. 2014 ESC/ESA guidelines on non-cardiac surgery: cardiovascular assessment and management: The Joint Task Force on non-cardiac surgery: cardiovascular assessment and management of the European Society of Cardiology (ESC) and the European Society of Anaesthesiology (ESA). Eur Heart J 2014; 35:2383–2431.
- Eagle KA, Rihal CS, Mickel MC, Holmes DR, Foster ED, Gersh BJ. Cardiac risk of noncardiac surgery: influence of coronary disease and type of surgery in 3368 operations. CASS Investigators and University of Michigan Heart Care Program. Coronary Artery Surgery Study. Circulation 1997; 96:1882–1887.
- Farquhar CM, Kofa EW, Slutsky JR. Clinicians’ attitudes to clinical practice guidelines: a systematic review. Med J Aust 2002; 177:502–506.
- Prasad V, Cifu A, Ioannidis JP. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307:37–38.
- Steinbrook R. Guidance for guidelines. N Engl J Med 2007; 356:331–333.
- Sirovich BE, Woloshin S, Schwartz LM. Too little? Too much? Primary care physicians’ views on US health care: a brief report. Arch Intern Med 2011; 171:1582–1585.
- Brown SR, Brown J. Why do physicians order unnecessary preoperative tests? A qualitative study. Fam Med 2011; 43:338–343.
- LeCraw LL. Use of clinical practice guidelines in medical malpractice litigation. J Oncol Pract 2007; 3:254.
- Studdert DM, Mello MM, Sage WM, et al. Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA 2005; 293:2609–2617.
- Budetti PP. Tort reform and the patient safety movement: seeking common ground. JAMA 2005; 293:2660–2662.
- Bishop TF, Federman AD, Ross JS. Laboratory test ordering at physician offices with and without on-site laboratories. J Gen Intern Med 2010; 25:1057–1063.
- Rosenthal E. Medical costs rise as retirees winter in Florida. The New York Times, January 31, 2015. http://nyti.ms/1vmjfa5. Accessed August 31, 2015.
- Apfelbaum JL, Connis RT, Nickinovich DG, et al; Committee on Standards and Practice Parameters. Practice advisory for preanesthesia evaluation: an updated report by the American Society of Anesthesiologists Task Force on Preanesthesia Evaluation. Anesthesiology 2012; 116:522–538.
- Fleisher LA, Fleischmann KE, Auerbach AD, et al; American College of Cardiology and American Heart Association. 2014 ACC/AHA guideline on perioperative cardiovascular evaluation and management of patients undergoing noncardiac surgery: a report of the American College of Cardiology/American Heart Association Task Force on practice guidelines. J Am Coll Cardiol 2014; 64:e77–e137.
- Society of General Internal Medicine. Don’t perform routine pre-operative testing before low-risk surgical procedures. Choosing Wisely. An initiative of the ABIM Foundation. September 12, 2013. www.choosingwisely.org/clinician-lists/society-general-internal-medicine-routine-preoperative-testing-before-low-risk-surgery/. Accessed August 31, 2015.
- Houchens N. Should healthy patients undergoing low-risk, elective, noncardiac surgery undergo routine preoperative laboratory testing? Cleve Clin J Med 2015; 82:664–666.
- Rohrer MJ, Michelotti MC, Nahrwold DL. A prospective evaluation of the efficacy of preoperative coagulation testing. Ann Surg 1988; 208:554–557.
- Eagle KA, Coley CM, Newell JB, et al. Combining clinical and thallium data optimizes preoperative assessment of cardiac risk before major vascular surgery. Ann Intern Med 1989; 110:859–866.
- Mangano DT, London MJ, Tubau JF, et al. Dipyridamole thallium-201 scintigraphy as a preoperative screening test. A reexamination of its predictive potential. Study of Perioperative Ischemia Research Group. Circulation 1991; 84:493–502.
- Stratmann HG, Younis LT, Wittry MD, Amato M, Mark AL, Miller DD. Dipyridamole technetium 99m sestamibi myocardial tomography for preoperative cardiac risk stratification before major or minor nonvascular surgery. Am Heart J 1996; 132:536–541.
- Schein OD, Katz J, Bass EB, et al. The value of routine preoperative medical testing before cataract surgery. Study of Medical Testing for Cataract Surgery. N Engl J Med 2000; 342:168–175.
- Hashimoto J, Nakahara T, Bai J, Kitamura N, Kasamatsu T, Kubo A. Preoperative risk stratification with myocardial perfusion imaging in intermediate and low-risk non-cardiac surgery. Circ J 2007; 71:1395–1400.
- Smetana GW. The conundrum of unnecessary preoperative testing. JAMA Intern Med 2015; 175:1359–1361.
- Prasad V, Cifu A. Medical reversal: why we must raise the bar before adopting new technologies. Yale J Biol Med 2011; 84:471–478.
- Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA 2007; 298:2517–2526.
- Moscucci M. Medical reversal, clinical trials, and the “late” open artery hypothesis in acute myocardial infarction. Arch Intern Med 2011; 171:1643–1644.
- Coleman J, Menzel H, Katz E. Social processes in physicians’ adoption of a new drug. J Chronic Dis 1959; 9:1–19.
- Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282:1458–1465.
- Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009; 301:831–841.
- Moher D, Hopewell S, Schulz KF, et al; CONSORT. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg 2012; 10:28–55.