Clear writing, clear thinking and the disappearing art of the problem list
My hospital's electronic medical record helpfully informs me after 1 week on service that there are 524 data available for my attention, a statistic that would be paralyzing without a cognitive framework for organizing and interpreting them in a manner that can be shared among my colleagues. Accurate information flow among clinicians was identified early on as an imperative of hospital medicine. Much attention has been focused on communication during transitions of care, such as that between inpatient and outpatient services and between inpatient teams, taking the form of the discharge summary and the sign‐out, respectively. But communication among physicians, consultants, and allied therapists must and inevitably does occur continuously day by day during even the most uneventful hospital stay. On academic services the need to keep multiple and ever‐rotating team members on the same page, so to speak, is particularly pressing.
The succinct and accurate problem list, formulated at the end of the history and physical examination and propagated through daily progress notes, is a powerful tool for promoting clear diagnostic and therapeutic planning and is ideally suited to meeting the need for continuous information flow among clinicians. Sadly, this inexpensive and potentially elegant device has fallen into disuse and disrepair and is in need of restoration.
In the 1960s, Dr. Lawrence Weed, the inventor of the SOAP note and a pioneer of medical informatics, wrote of the power of the problem list to impose order on the chaos of clinical information and to aid clear diagnostic thinking, in contrast with the simply chronological record popular in earlier years:
It is this multiplicity of problems with which the physician must deal in his daily work. … [T]he multiplicity is inevitable but a random approach to the difficulties it creates is not. The instruction of physicians should be based on a system that helps them to define and follow clinical problems one by one and then systematically to relate and resolve them. … [T]he basic criterion of the physician is how well he can identify the patient's problems and organize them for solution.1
Weed proposed that the product of our diagnostic thinking and investigations should be a concise list of diagnoses, as precisely as we are able to identify them, or, in their absence, a clear understanding of the specific problems awaiting resolution and a clear appreciation of the interrelationships among these entities:
The list should … state the problems at a level of refinement consistent with the physician's understanding, running the gamut from the precise diagnosis to the isolated, unexplained finding. Each item should be classified as one of the following: (1) a diagnosis, e.g., ASHD, followed by the principal manifestation that requires management; (2) a physiological finding, e.g., heart failure, followed by either the phrase "etiology unknown" or "secondary to" a diagnosis; (3) a symptom or physical finding, e.g., shortness of breath; or (4) an abnormal laboratory finding, e.g., an abnormal EKG. If a given diagnosis has several major manifestations, each of which requires individual management and separate, carefully delineated progress notes, then the second manifestation is presented as a second problem and designated as secondary to the major diagnosis.1
These principles were widely praised and adopted. An editorial in the New England Journal of Medicine proclaimed that his system is "the essence of education itself,"3 and it reigned throughout my own formal medical education.
In the decade that has seen our specialty flourish, with the attendant imperatives of clear thinking and communication, in teaching hospitals the problem list seems to have become an endangered species. The general pattern of its decline is that it is often supplanted by a list of organs, or worse, medical subspecialties, each followed by some assessment of its condition, whether diseased or not. The format resembles that used in critical care units for patients with multiple vital functions in jeopardy, on which survival depends from minute to minute, sometimes regardless of the original etiology of their failure. It is not clear how these notes began to spread from the ICU to the medical floor, where puzzles are solved and progress has goals more varied than mere survival. None of the residents I have queried over the years seem to know. The prevalence of this habit is also unknown, but it is widespread at both institutions at which I have been recently affiliated, and from the generation of notes in this format by trainees freshly graduated from medical schools across the land, I infer that it is no mere regional phenomenon. There may be an unspoken assumption that if this format is used for the sickest patients, it must be the superior format to use for all patients. Perhaps it reflects subspecialists teaching inpatient medicine, equipping trainees with vast technical knowledge of specific diseases and placing less emphasis on formulating coherent assessments. I believe its effects are pernicious and far‐reaching, affecting not only the quality of information flow among clinicians, but also the quality and rigor of diagnostic thinking of those in our training programs.
The history and physical examination properly culminate in the formulation of a problem list that establishes the framework for subsequent investigations and therapy. For each problem a narrative thread is initiated that can be followed in progress notes to resolution and succinctly reviewed in the discharge summary. It is now common to see diagnostic formulations arranged not by problem but by organ or subspecialty, for example, "Endocrine: DKA." As everyone understands DKA to be an endocrine problem, the organ system preface adds nothing useful and only serves to bury the diagnosis in text. More tortured prose follows attempts to cram into the header all organs or specialties touched by the problem; hence pneumonia is often preceded by "pulmonary/ID." A more egregious recent example was an esophageal variceal hemorrhage designated "GI/Heme." And efforts to force an undifferentiated problem into an organ group can reach absurdity: "Heme: Asymmetric leg swelling raised concern for DVT, but ultrasound was negative."
The organ preface at best merely adds clutter; the difficulty is compounded when the actual diagnosis or problem is omitted entirely in favor of mention of the organs, for example, for pneumonia: "Pulm/ID: begin antibiotics." The reader may be left to guess exactly what is being treated, as with "CV: begin heparin and beta‐blocker." The assessment and subsequent notes become even more unwieldy when the unifying diagnosis is approached circuitously on paper by way of its component elements, as with a recent patient with typical lobar pneumonia who was assessed by the house officer as having (1) "ID: fever probably due to pneumonia"; (2) "Pulm: Hypoxia, sputum production and infiltrate on CXR consistent with pneumonia"; and (3) "Heme: leukocytosis likely due to pneumonia as well." Synthesis, the holy grail of the H&P, is thus replaced by analysis. Each tree is closely inspected, but we are lost in the forest. Weed wrote of such notes:
Failure to integrate findings into a valid single entity can almost always be traced to incomplete understanding. … If a beginner puts cardiomegaly, edema, hepatomegaly and shortness of breath as four separate problems, it is his way of clearly admitting that he does not recognize cardiac failure when he sees it.2
Often, however, as in the example above, the physician fully understands the unifying diagnosis but nonetheless insists on addressing involved systems separately. Each feature is then apt to be separately followed in isolation through the progress notes, sometimes without any further mention of pneumonia as such. Many progress notes thus omit stating what is actually thought to be wrong with the patient.
The failure to commit to a diagnosis on paper, even when having done so in practice, ultimately can make its way to the discharge summary, propagating confusion to the outpatient department and ricocheting it into future admissions. It also robs us of the satisfaction of declaring a puzzle solved. I was compelled to write this piece in part by the recent case of a young woman who presented with fever and dyspnea. Through an elegant series of imaging studies and serologic tests, a diagnosis of lupus pericarditis was established, and steroid therapy produced dramatic remission of her symptoms: a diagnostic triumph by any measure. How disheartening then to read the resident's final diagnosis for posterity in the discharge summary: "fever and dyspnea."
The disembodied organ list thus sows confusion and redundant, convoluted prose throughout the medical record. Perhaps even more destructive is its effect on diagnostic thinking when applied to undifferentiated symptoms or problems, the general internist's pièce de résistance. Language shapes thought, and premature assignment of symptoms to a single organ or subspecialty constrains the imagination needed to puzzle things out. Examples are everywhere. Fever of unknown origin may be peremptorily designated "ID," by implication excluding inflammatory, neoplastic, and iatrogenic causes from consideration. The asymmetrically swollen legs cited earlier are not hematologic, but they are still swollen. Undiagnosed problems should be labeled as such, with comment as to the differential diagnosis as it stands at the time and the status of the investigation. When a diagnosis is established, it should replace the undifferentiated symptom or abnormal finding in the list, with cardinal manifestations addressed as such when necessary. Thus, for example, fever in an intravenous drug user becomes endocarditis, and anasarca becomes nephrotic syndrome becomes glomerulonephritis as the diagnosis is established and refined. Weed saw the promise of the well‐groomed, problem‐based record in teaching diagnostic thinking:
The education of a physician … should be based on his clinical experience and should be reflected in the records he maintains on his patients. … The education … becomes defective not when he is given too much or too little training in basic science … but rather when he is allowed to ignore or slight the elementary definition and the progressive adjustment of the problems that comprise his clinical experience. The teacher who ultimately benefits students the most is the one who is willing to establish parameters of discipline in the not unsophisticated but often unappreciated task of preventing this imprecision and disorganization.1
Hospitalists as generalist clinician‐educators have an opportunity to teach fundamental principles of medicine that span subspecialties. These principles must include clear organization and prioritization of complex medical information to enable coherent diagnostic and therapeutic planning and smooth continuity of care. The sign‐out and the all‐important discharge summary can be only as clear and as logical as the diagnoses that inform them. To these ends, let us maintain and reinvigorate the art of the problem list. As an exercise at morning report and attending rounds, we should emphasize the development of an accurate, comprehensive list of active problems before moving on to detailed discussion of any single issue, as Weed suggested nearly 40 years ago:
A serious mistake in teaching medicine is to expose the student, the house officer, or the physician to an analytical discussion of the diagnosis and management of one problem before establishing whether or not he is capable of identifying and defining all of the patient's problems at the outset.1
We should expect this list to be formulated at the end of the admission history and physical examination. We must ensure that trainees can correctly identify the level of resolution achieved for each item. They must learn to distinguish among undifferentiated symptoms, for example, "passed out"; undifferentiated problems, expressed by medical terms with precise meaning, such as syncope; and precise etiologic diagnoses, such as ventricular tachycardia. Daily progress notes and sign‐out documents must reflect the progressive refinement in classification of each item and give the current status of the diagnostic evaluation. When therapy has been established, daily notes must reflect its precise status relative to its end points; examples include place in the timeline for antibiotics or, for a bleeding patient, a tally of blood products and their impact. In the end, we must ensure that the discharge summary reflects the highest level of diagnostic resolution achieved for each problem we have identified. In so doing, we will help to ensure coherent and efficient care for our patients, save time and spare confusion for our colleagues, and teach our trainees to think and communicate clearly about our collective efforts.
- Weed LL. Medical Records, Medical Education and Patient Care. Cleveland, OH: Press of Case Western Reserve University; 1971.
- Weed LL. Medical records that guide and teach (concluded). N Engl J Med. 1968;278:593–600.
- Hurst JW. Ten reasons why Lawrence Weed is right. N Engl J Med. 1971;284:51–52.
My hospital's electronic medical record helpfully informs me after 1 week on service that there are 524 data available for my attention, a statistic that would be paralyzing without a cognitive framework for organizing and interpreting them in a manner that can be shared among my colleagues. Accurate information flow among clinicians was identified early on as an imperative of hospital medicine. Much attention has been focused on communication during transitions of care, such as that between inpatient and outpatient services and between inpatient teams, taking the form of the discharge summary and the sign‐out, respectively. But communication among physicians, consultants, and allied therapists must and inevitably does occur continuously day by day during even the most uneventful hospital stay. On academic services the need to keep multiple and ever‐rotating team members on the same page, so to speak, is particularly pressing.
The succinct and accurate problem list, formulated at the end of the history and physical examination and propagated through daily progress notes, is a powerful tool for promoting clear diagnostic and therapeutic planning and is ideally suited to meeting the need for continuous information flow among clinicians. Sadly, this inexpensive and potentially elegant device has fallen into disuse and disrepair and is in need of restoration.
In the 1960s, Dr. Lawrence Weed, the inventor of the SOAP note and a pioneer of medical informatics, wrote of the power of the problem list to impose order on the chaos of clinical information and to aid clear diagnostic thinking, in contrast with the simply chronological record popular in earlier years:
It is this multiplicity of problems with which the physician must deal in his daily work.[T]he multiplicity is inevitable but a random approach to the difficulties it creates is not. The instruction of physicians should be based on a system that helps them to define and follow clinical problems one by one and then systematically to relate and resolve them.[T]the basic criterion of the physician is how well he can identify the patient's problems and organize them for solution.1
Weed proposed that the product of our diagnostic thinking and investigations should be a concise list of diagnoses, as precisely as we are able to identify them, or, in their absence, a clear understanding of the specific problems awaiting resolution and a clear appreciation of the interrelationships among these entities:
The list shouldstate the problems at a level of refinement consistent with the physician's understanding, running the gamut from the precise diagnosis to the isolated, unexplained finding. Each item should be classified as one of the following: (1) a diagnosis, e.g., ASHD, followed by the principal manifestation that requires management; (2) a physiological finding, e.g., heart failure, followed by either the phrase etiology unknown or secondary to a diagnosis; (3) a symptom or physical finding, e.g., shortness of breath; or (4) an abnormal laboratory finding, e.g., an abnormal EKG. If a given diagnosis has several major manifestations, each of which requires individual management and separate, carefully delineated progress notes, then the second manifestation is presented as a second problem and designated as secondary to the major diagnosis.1
These principles were widely praised and adopted. An editorial in the New England Journal of Medicine proclaimed that his system is the essence of education itself,3 and it reigned throughout my own formal medical education.
In the decade that has seen our specialty flourish, with the attendant imperatives of clear thinking and communication, in teaching hospitals the problem list seems to have become an endangered species. The general pattern of its decline is that it is often supplanted by a list of organs, or worse, medical subspecialties, each followed by some assessment of its condition, whether diseased or not. The format resembles that used in critical care units for patients with multiple vital functions in jeopardy, on which survival depends from minute to minute, sometimes regardless of the original etiology of their failure. It is not clear how these notes began to spread from the ICU to the medical floor, where puzzles are solved and progress has goals more varied than mere survival. None of the residents I have queried over the years seem to know. The prevalence of this habit is also unknown, but it is widespread at both institutions at which I have been recently affiliated, and from the generation of notes in this format by trainees freshly graduated from medical schools across the land, I infer that it is no mere regional phenomenon. There may be an unspoken assumption that if this format is used for the sickest patients, it must be the superior format to use for all patients. Perhaps it reflects subspecialists teaching inpatient medicine, equipping trainees with vast technical knowledge of specific diseases and placing less emphasis on formulating coherent assessments. I believe its effects are pernicious and far‐reaching, affecting not only the quality of information flow among clinicians, but also the quality and rigor of diagnostic thinking of those in our training programs.
The history and physical examination properly culminate in the formulation of a problem list that establishes the framework for subsequent investigations and therapy. For each problem a narrative thread is initiated that can be followed in progress notes to resolution and succinctly reviewed in the discharge summary. It is now common to see diagnostic formulations arranged not by problem but by organ or subspecialty, for example, Endocrine: DKA. As everyone understands DKA to be an endocrine problem, the organ system preface adds nothing useful and only serves to bury the diagnosis in text. More tortured prose follows attempts to cram into the header all organs or specialties touched by the problem; hence pneumonia is often preceded by pulmonary/ID. A more egregious recent example was an esophageal variceal hemorrhage designated GI/Heme. And efforts to force an undifferentiated problem into an organ group can reach absurdity: Heme: Asymmetric leg swelling raised concern for DVT, but ultrasound was negative.
The organ preface at best merely adds clutter; the difficulty is compounded when the actual diagnosis or problem is omitted entirely in favor of mention of the organs, for example, for pneumonia: "Pulm/ID: begin antibiotics." The reader may be left to guess exactly what is being treated, as with "CV: begin heparin and beta‐blocker." The assessment and subsequent notes become even more unwieldy when the unifying diagnosis is approached circuitously on paper by way of its component elements, as with a recent patient with typical lobar pneumonia who was assessed by the house officer as having (1) "ID: fever probably due to pneumonia"; (2) "Pulm: hypoxia, sputum production and infiltrate on CXR consistent with pneumonia"; and (3) "Heme: leukocytosis likely due to pneumonia as well." Synthesis, the holy grail of the H&P, is thus replaced by analysis. Each tree is closely inspected, but we are lost in the forest. Weed wrote of such notes:
Failure to integrate findings into a valid single entity can almost always be traced to incomplete understanding. … If a beginner puts cardiomegaly, edema, hepatomegaly and shortness of breath as four separate problems, it is his way of clearly admitting that he does not recognize cardiac failure when he sees it.2
Often, however, as in the example above, the physician fully understands the unifying diagnosis but nonetheless insists on addressing involved systems separately. Each feature is then apt to be separately followed in isolation through the progress notes, sometimes without any further mention of pneumonia as such. Many progress notes thus omit stating what is actually thought to be wrong with the patient.
The failure to commit to a diagnosis on paper, even when having done so in practice, ultimately can make its way to the discharge summary, propagating confusion to the outpatient department and ricocheting it into future admissions. It also robs us of the satisfaction of declaring a puzzle solved. I was compelled to write this piece in part by the recent case of a young woman who presented with fever and dyspnea. Through an elegant series of imaging studies and serologic tests, a diagnosis of lupus pericarditis was established, and steroid therapy produced dramatic remission of her symptoms: a diagnostic triumph by any measure. How disheartening, then, to read the resident's final diagnosis for posterity in the discharge summary: "fever and dyspnea."
The disembodied organ list thus sows confusion and redundant, convoluted prose throughout the medical record. Perhaps even more destructive is its effect on diagnostic thinking when applied to undifferentiated symptoms or problems, the general internist's pièce de résistance. Language shapes thought, and premature assignment of symptoms to a single organ or subspecialty constrains the imagination needed to puzzle things out. Examples are everywhere. Fever of unknown origin may be peremptorily designated "ID," by implication excluding inflammatory, neoplastic, and iatrogenic causes from consideration. The asymmetrically swollen legs cited earlier are not hematologic, but they are still swollen. Undiagnosed problems should be labeled as such, with comment as to the differential diagnosis as it stands at the time and the status of the investigation. When a diagnosis is established, it should replace the undifferentiated symptom or abnormal finding in the list, with cardinal manifestations addressed as such when necessary. Thus, for example, fever in an intravenous drug user becomes endocarditis, and anasarca becomes nephrotic syndrome becomes glomerulonephritis as the diagnosis is established and refined. Weed saw the promise of the well‐groomed, problem‐based record in teaching diagnostic thinking:
The education of a physician … should be based on his clinical experience and should be reflected in the records he maintains on his patients. … The education … becomes defective not when he is given too much or too little training in basic science … but rather when he is allowed to ignore or slight the elementary definition and the progressive adjustment of the problems that comprise his clinical experience. The teacher who ultimately benefits students the most is the one who is willing to establish parameters of discipline in the not unsophisticated but often unappreciated task of preventing this imprecision and disorganization.1
Hospitalists as generalist clinician‐educators have an opportunity to teach fundamental principles of medicine that span subspecialties. These principles must include clear organization and prioritization of complex medical information to enable coherent diagnostic and therapeutic planning and smooth continuity of care. The sign‐out and the all‐important discharge summary can be only as clear and as logical as the diagnoses that inform them. To these ends, let us maintain and reinvigorate the art of the problem list. As an exercise at morning report and attending rounds, we should emphasize the development of an accurate, comprehensive list of active problems before moving on to detailed discussion of any single issue, as Weed suggested nearly 40 years ago:
A serious mistake in teaching medicine is to expose the student, the house officer, or the physician to an analytical discussion of the diagnosis and management of one problem before establishing whether or not he is capable of identifying and defining all of the patient's problems at the outset.1
We should expect this list to be formulated at the end of the admission history and physical examination. We must ensure that trainees can correctly identify the level of resolution achieved for each item. They must learn to distinguish among undifferentiated symptoms, for example, "passed out"; undifferentiated problems, expressed by medical terms with precise meaning, such as "syncope"; and precise etiologic diagnoses, such as "ventricular tachycardia." Daily progress notes and sign‐out documents must reflect the progressive refinement in classification of each item and give the current status of the diagnostic evaluation. When therapy has been established, daily notes must reflect its precise status relative to its end points; examples include place in the timeline for antibiotics or, for a bleeding patient, a tally of blood products and their impact. In the end, we must ensure that the discharge summary reflects the highest level of diagnostic resolution achieved for each problem we have identified. In so doing, we will help to ensure coherent and efficient care for our patients, save time and spare confusion for our colleagues, and teach our trainees to think and communicate clearly about our collective efforts.
- Medical Records, Medical Education and Patient Care. Cleveland, OH: Press of Case Western Reserve University; 1971.
- Medical records that guide and teach (concluded). N Engl J Med. 1968;278:593–600.
- Ten reasons why Lawrence Weed is right. N Engl J Med. 1971;284:51–52.
Implications of the Spine Patient Outcomes Research Trial in the clinical management of lumbar disk herniation
Handoffs
When I was four years old, Grandpa always cut off the crust of the bread before I ate my peanut butter and jelly sandwich.
When I was seven years old, Grandpa took me to the circus and bought me cotton candy. He didn't care when I got the sticky stuff all over my face and dress.
When I was nine years old, Grandpa took me out on my birthday for a chocolate ice cream cone with rainbow sprinkles on top.
I didn't know he had high blood pressure. And neither did he.
He made me laugh. He made me feel so good deep down inside.
At age eleven I returned home from school to find Grandpa had been taken to the hospital with a stroke.
I cut the crust off his bread, got him cotton candy and an ice cream cone so he would feel better.
I went with Mommy to see him. She was stopped at the nurses' station. They wanted to talk to her.
I broke away and ran down the hall to his room. His bed was empty. Grandpa had died. No one told me.
Grandpa never got to eat the peanut butter and jelly sandwich with the crust cut off.
Maybe if he had, things would have turned out differently.
Editorial
Founded in 1997 by 2 community‐based hospitalists, Win Whitcomb and John Nelson, the National Association of Inpatient Physicians was renamed the Society of Hospital Medicine in 2003 and celebrates its 10th anniversary this year. Evolving from the enthusiastic engagement of attendees at the first hospital medicine CME meeting in the spring of 1997,1 this new organization has grown into a robust voice for improving the care of hospitalized patients. The Society has actively sought to be a "big tent," welcoming participation from everyone involved in hospital care. The name change to the Society of Hospital Medicine (SHM) reflected the recognition that a team is needed to achieve the goal of optimizing care of the hospitalized patient. Merriam‐Webster defines society as "companionship or association with one's fellows" and "a voluntary association of individuals for common ends; especially an organized group working together or periodically meeting because of common interests, beliefs, or profession."2 The hospital medicine team includes nurses, pharmacists, case managers, social workers, physicians, and administrators, in addition to dieticians, respiratory therapists, and physical and occupational therapists. With a focus on patient‐centered care and quality improvement, SHM eagerly anticipates future changes in health care, seeking to help its membership adapt to and manage the expected change.
As an integral component of the hospital care delivery team, physicians represent the bulk of SHM's membership. Thus, the development of hospital medicine as a medical specialty has concerned many of its members. Fortunately, progress is being made, and Bob Wachter is chairing an American Board of Internal Medicine task force on the question.3 Certainly, content in the field is growing exponentially, with textbooks (including possibly 3 separate general references for adult and pediatric hospital medicine), multiple printed periodicals, and this successful peer‐reviewed journal, listed in MEDLINE and PubMed. In addition, most academic medical centers now have thriving groups of hospitalists, and many have established or plan separate divisions within their respective departments of medicine (eg, Northwestern, UC San Francisco, UC San Diego, Duke, Mayo Clinic). These developments confirm that hospital medicine has become a true specialty of medicine and justify the publication of its own set of core competencies.4 We believe some form of certification is inevitable and will be supported by the development of residency tracks and fellowships in hospital medicine.5
Most remarkable about the Society of Hospital Medicine has been its ability to collaborate with multiple medical societies, governmental agencies, foundations, and organizations seeking to improve care for hospitalized patients (see Table 1). These relationships represent the teamwork approach that hospitalists take into their hospitals on a daily basis. We hope to build on these collaborations and work toward more interactive efforts to identify optimal delivery of health care in the hospital setting, while also reaching out to ambulatory‐based providers to ensure smooth transitions of care. Such efforts will require innovative approaches to educating SHM members and altering the standard approach to continuing medical education (CME). Investment in the concept of hospitalists by the John A. Hartford Foundation with a $1.4 million grant to improve the discharge process (Improving Hospital Care Transitions for Older Adults) exemplifies SHM's commitment to collaboration, with more than 10 organizations participating on the advisory board.
| Agency for Healthcare Research and Quality (AHRQ) |
| Alliance for Academic Internal Medicine |
| Ambulatory Pediatric Association |
| American Academy of Clinical Endocrinology |
| American Academy of Pediatrics |
| American Association of Critical Care Nurses |
| American Board of Internal Medicine |
| American College of Healthcare Executives |
| American College of Chest Physicians |
| American College of Emergency Physicians |
| American College of Physicians |
| American College of Physician Executives |
| American Diabetes Association |
| American Geriatrics Society |
| American Hospital Association |
| American Society of Health System Pharmacists |
| AMA's Physician Consortium for Performance Improvement |
| Association of American Medical Colleges |
| Case Management Society of America |
| Centers for Disease Control and Prevention (CDC) |
| Centers for Medicare & Medicaid Services (CMS) |
| The Hartford Foundation |
| Hospital Quality Alliance |
| Institute for Healthcare Improvement |
| The Joint Commission |
| National Quality Forum |
| Society of Critical Care Medicine |
| Society of General Internal Medicine |
As SHM and its growing membership, which now exceeds 6500, stride into the future, we embrace advances in educational approaches to enhancing health care delivery and expect to play a leadership role in applying them. Increasingly, pay‐for‐performance (P4P) programs will attempt to align payment incentives to promote better quality care by rewarding providers that perform well.6 SHM aims to train hospitalists through "knowledge translation," which combines the right educational tools with involvement of the entire health care team, yielding truly effective CME.7 This reinvention of CME, linking it to care delivery and improved performance, is supported by governmental health care leaders.8 The approach moves CME to where hospitalists deliver care, targets all participants (patients, nurses, pharmacists, and doctors), and bases its content on initiatives to improve health care.
Such a quality improvement model would take advantage of SHM's Quality Improvement Resource Rooms (hospitalmedicine.org), marking an important shift toward translating evidence into practice. SHM will also continue its efforts to lead in nonclinical training, as exemplified by its popular biannual leadership training courses, which we expect will expand to provide much‐needed QI training in the future.
In its first 10 years, SHM has already accomplished much, but the best days for hospital medicine lie ahead. There will be more than 30,000 hospitalists practicing at virtually every hospital in the United States, with high expectations for teams of health professionals providing patient‐centered care to documented quality standards. SHM is poised to work with all our partner organizations to do our part to create the hospital of the future. Our patients are counting on all of us.
- Reflections: the hospitalist movement a decade later. J Hosp Med. 2006;1:248–252.
- Available at: www.merriam‐webster.com. Accessed April 2, 2007.
- What will board certification be—and mean—for hospitalists? J Hosp Med. 2007;2:102–104.
- Core competencies of hospital medicine: development and methodology. J Hosp Med. 2006;1:48–56.
- Hospital medicine fellowships: in progress. Am J Med. 2006;119:72.e1–e7.
- Committee on Redesigning Health Insurance Performance Measures, Payment, and Performance Improvement Programs. Rewarding Provider Performance: Aligning Incentives in Medicare. Washington, DC: National Academies Press; 2007.
- The case for knowledge translation: shortening the journey from evidence to effect. BMJ. 2003;327:33–35.
- Commentary: reinventing continuing medical education. BMJ. 2004;4:181.
Founded in 1997 by 2 community‐based hospitalists, Win Whitcomb and John Nelson, the National Association of Inpatient Physicians was renamed the Society of Hospital Medicine in 2003 and celebrates its 10th anniversary this year. Evolving from the enthusiastic engagement by the attendees at the first hospital medicine CME meeting in the spring of 1997,1 this new organization has grown into a robust voice for improving the care of hospitalized patients. The Society has actively attempted to represent a big tent welcoming participation from everyone involved in hospital care. The name change to the Society of Hospital Medicine (SHM) reflected the recognition that a team is needed to achieve the goal of optimizing care of the hospitalized patient. Merriam‐Webster defines society as companionship or association with one's fellows and a voluntary association of individuals for common ends; especially an organized group working together or periodically meeting because of common interests, beliefs, or profession.2 The hospital medicine team working together includes nurses, pharmacists, case managers, social workers, physicians, and administrators in addition to dieticians, respiratory therapists, and physical and occupational therapists. With a focus on patient‐centered care and quality improvement, SHM eagerly anticipates future changes in health care, seeking to help its membership adapt to and manage the expected change.
As an integral component of the hospital care delivery team, physicians represent the bulk of membership in SHM. Thus, development of hospital medicine as a medical specialty has concerned many of its members. Fortunately, progress is being made, and Bob Wachter is chairing a task force on this for the American Board of Internal Medicine.3 Certainly, content in the field is growing exponentially, with textbooks (including possibly 3 separate general references for adult and pediatric hospital medicine), multiple printed periodicals, and this successful peer‐reviewed journal listed in MEDLINE and PubMed. In addition, most academic medical centers now have thriving groups of hospitalists, and many are establishing or plan separate divisions within their respective departments of medicine (eg, Northwestern, UCSan Francisco, UCSan Diego, Duke, Mayo Clinic). These events confirm how hospital medicine has progressed to become a true specialty of medicine and justify the publication of its own set of core competencies.4 We believe some form of certification is inevitable. This will be supported by development of residency tracks and fellowships in hospital medicine.5
Most remarkable about the Society of Hospital Medicine has been its ability to collaborate with multiple medical societies, governmental agencies, foundations, and organizations seeking to improve care for hospitalized patients (see Table 1). These relationships represent the teamwork approach that hospitalists take into their hospitals on a daily basis. We hope to build on these collaborations and work toward more interactive efforts to identify optimal delivery of health care in the hospital setting, while also reaching out to ambulatory‐based providers to ensure smooth transitions of care. Such efforts will require innovative approaches to educating SHM members and altering the standard approach to continuing medical education (CME). Investment in the concept of hospitalists by the John A. Hartford Foundation with a $1.4 million grant to improve the discharge process (Improving Hospital Care Transitions for Older Adults) exemplifies SHM's commitment to collaboration, with more than 10 organizations participating on the advisory board.
| Agency for Healthcare Research and Quality (AHRQ) |
| Alliance of Academic Internal Medicine |
| Ambulatory Pediatric Association |
| American Academy of Clinical Endocrinology |
| American Academy of Pediatricians |
| American Association of Critical Care Nurses |
| American Board of Internal Medicine |
| American College of Health Executives |
| American College of Chest Physicians |
| American College of Emergency Physicians |
| American College of Physicians |
| American College of Physician Executives |
| American Diabetes Association |
| American Geriatric Society |
| American Hospital Association |
| American Society of Health System Pharmacists |
| AMA's Physician Consortium for Performance Improvement |
| Association of American Medical Colleges |
| Case Management Society of America |
| Centers for Disease Control and Prevention (CDC) |
| Centers for Medicare & Medicaid Services (CMS) |
| The Hartford Foundation |
| Hospital Quality Alliance |
| Institute of Healthcare Improvement |
| The Joint Commission |
| National Quality Forum |
| Society of Critical Care Medicine |
| Society of General Internal Medicine |
As SHM and its growing membership, which now exceeds 6500, stride into the future, we embrace advances in educational approaches to enhancing health care delivery and expect to play a leadership role in applying them. Increasingly, use of pay‐for‐performance (P4P) will attempt to align payment incentives to promote better quality care by rewarding providers that perform well.6 SHM aims to train hospitalists through use of knowledge translation which combines the right educational tools with involvement of the entire health care team, yielding truly effective CME.7 A reinvention of CME that links it to care delivery and improving performance, it is supported by governmental health care leaders.8 This approach moves CME to where hospitalists deliver care, targets all participants (patients, nurses, pharmacists, and doctors), and has content based around initiatives to improve health care.
Such a quality improvement model would take advantage of SHM's Quality Improvement Resource Rooms (hospitalmedicine.org), marking an important shift toward translating evidence into practice. SHM will also continue with its efforts to lead in nonclinical training, as exemplified by its popular biannual leadership training courses. We expect this will expand to provide much‐needed QI training in the future.
In its first 10 years SHM has accomplished much already, but the best days for hospital medicine lie ahead of us. There will be more than 30,000 hospitalists practicing at virtually every hospital in the United States, with high expectations for teams of health professionals providing patient‐centered care with documented quality standards. SHM is poised to work with all our partner organizations to do our part to create the hospital of the future. Our patients are counting on all of us.
Founded in 1997 by 2 community‐based hospitalists, Win Whitcomb and John Nelson, the National Association of Inpatient Physicians was renamed the Society of Hospital Medicine in 2003 and celebrates its 10th anniversary this year. Evolving from the enthusiastic engagement by the attendees at the first hospital medicine CME meeting in the spring of 1997,1 this new organization has grown into a robust voice for improving the care of hospitalized patients. The Society has actively attempted to represent a big tent welcoming participation from everyone involved in hospital care. The name change to the Society of Hospital Medicine (SHM) reflected the recognition that a team is needed to achieve the goal of optimizing care of the hospitalized patient. Merriam‐Webster defines society as companionship or association with one's fellows and a voluntary association of individuals for common ends; especially an organized group working together or periodically meeting because of common interests, beliefs, or profession.2 The hospital medicine team working together includes nurses, pharmacists, case managers, social workers, physicians, and administrators in addition to dieticians, respiratory therapists, and physical and occupational therapists. With a focus on patient‐centered care and quality improvement, SHM eagerly anticipates future changes in health care, seeking to help its membership adapt to and manage the expected change.
As an integral component of the hospital care delivery team, physicians represent the bulk of membership in SHM. Thus, development of hospital medicine as a medical specialty has concerned many of its members. Fortunately, progress is being made, and Bob Wachter is chairing a task force on this for the American Board of Internal Medicine.3 Certainly, content in the field is growing exponentially, with textbooks (including possibly 3 separate general references for adult and pediatric hospital medicine), multiple printed periodicals, and this successful peer‐reviewed journal listed in MEDLINE and PubMed. In addition, most academic medical centers now have thriving groups of hospitalists, and many are establishing or plan separate divisions within their respective departments of medicine (eg, Northwestern, UCSan Francisco, UCSan Diego, Duke, Mayo Clinic). These events confirm how hospital medicine has progressed to become a true specialty of medicine and justify the publication of its own set of core competencies.4 We believe some form of certification is inevitable. This will be supported by development of residency tracks and fellowships in hospital medicine.5
Most remarkable about the Society of Hospital Medicine has been its ability to collaborate with multiple medical societies, governmental agencies, foundations, and organizations seeking to improve care for hospitalized patients (see Table 1). These relationships represent the teamwork approach that hospitalists take into their hospitals on a daily basis. We hope to build on these collaborations and work toward more interactive efforts to identify optimal delivery of health care in the hospital setting, while also reaching out to ambulatory‐based providers to ensure smooth transitions of care. Such efforts will require innovative approaches to educating SHM members and altering the standard approach to continuing medical education (CME). Investment in the concept of hospitalists by the John A. Hartford Foundation with a $1.4 million grant to improve the discharge process (Improving Hospital Care Transitions for Older Adults) exemplifies SHM's commitment to collaboration, with more than 10 organizations participating on the advisory board.
| Agency for Healthcare Research and Quality (AHRQ) |
| Alliance of Academic Internal Medicine |
| Ambulatory Pediatric Association |
| American Academy of Clinical Endocrinology |
| American Academy of Pediatricians |
| American Association of Critical Care Nurses |
| American Board of Internal Medicine |
| American College of Health Executives |
| American College of Chest Physicians |
| American College of Emergency Physicians |
| American College of Physicians |
| American College of Physician Executives |
| American Diabetes Association |
| American Geriatric Society |
| American Hospital Association |
| American Society of Health System Pharmacists |
| AMA's Physician Consortium for Performance Improvement |
| Association of American Medical Colleges |
| Case Management Society of America |
| Centers for Disease Control and Prevention (CDC) |
| Centers for Medicare & Medicaid Services (CMS) |
| The Hartford Foundation |
| Hospital Quality Alliance |
| Institute of Healthcare Improvement |
| The Joint Commission |
| National Quality Forum |
| Society of Critical Care Medicine |
| Society of General Internal Medicine |
As SHM and its growing membership, which now exceeds 6500, stride into the future, we embrace advances in educational approaches to enhancing health care delivery and expect to play a leadership role in applying them. Increasingly, use of pay‐for‐performance (P4P) will attempt to align payment incentives to promote better quality care by rewarding providers that perform well.6 SHM aims to train hospitalists through use of knowledge translation which combines the right educational tools with involvement of the entire health care team, yielding truly effective CME.7 A reinvention of CME that links it to care delivery and improving performance, it is supported by governmental health care leaders.8 This approach moves CME to where hospitalists deliver care, targets all participants (patients, nurses, pharmacists, and doctors), and has content based around initiatives to improve health care.
Such a quality improvement model would take advantage of SHM's Quality Improvement Resource Rooms (hospitalmedicine.org), marking an important shift toward translating evidence into practice. SHM will also continue with its efforts to lead in nonclinical training, as exemplified by its popular biannual leadership training courses. We expect this will expand to provide much‐needed QI training in the future.
In its first 10 years SHM has accomplished much already, but the best days for hospital medicine lie ahead of us. There will be more than 30,000 hospitalists practicing at virtually every hospital in the United States, with high expectations for teams of health professionals providing patient‐centered care with documented quality standards. SHM is poised to work with all our partner organizations to do our part to create the hospital of the future. Our patients are counting on all of us.
- .Reflections: the hospitalist movement a decade later.J Hosp Med.2006;1:248–252.
- Available at: www.merriam‐webster.com. accessed April 2,2007.
- .What will board certification be—and mean—for hospitalists?J Hosp Med.2007;2:102–104.
- ,,,,.Core competencies of hospital medicine: development and methodology.J Hosp Med.2006;1:48–56
- ,,,.Hospital medicine fellowships: in progress.Am J Med.2006;119:72.e1–e7.
- Committee on Redesigning Health Insurance Performance Measures Payment and Performance Improvement Programs.Rewarding Provider Performance: Aligning Incentives in Medicare.Washington, DC:National Academies Press;2007.
- ,,, et al.The case for knowledge translation: shortening the journey from evidence to effect.BMJ.2003;327:33–35.
- .Commentary: reinventing continuing medical education.BMJ.2004;4:181.
Editorial
Why measure hospital quality? One popular premise is that measurement and transparency will inform consumer decision making and drive volume to high‐quality programs, providing incentives for improvement and raising the bar nationally. In this issue of the Journal of Hospital Medicine, Halasyamani and Davis report that there is relatively poor correlation between the Hospital Compare scores of the Centers for Medicare and Medicaid Services (CMS) and U.S. News and World Report's Best Hospitals rankings.1 The authors note that this is not necessarily surprising, as the methodologies of these rating systems are quite different, although their purposes are functionally similar.
Clearly, these 2 popular quality evaluation systems reflect different underlying constructs (which may or may not actually describe quality). And therein lies a central dilemma for health care professionals and academics: we haven't agreed among ourselves on reliable and meaningful quality metrics; so how can we, or even should we, expect the public to use available data to make health care decisions?
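The discordance between the two rating systems can be made concrete. As a purely illustrative sketch (all hospital scores below are invented, not drawn from Hospital Compare or Best Hospitals), a Spearman rank correlation quantifies how well two rating schemes agree on the ordering of the same hospitals:

```python
# Hypothetical illustration: two rating systems can each look sensible
# yet order the same hospitals quite differently. Spearman's rank
# correlation is one simple way to quantify their (dis)agreement.

def rankdata(values):
    """Assign 1-based ranks; ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented process-measure scores vs. invented reputation-based scores
# for the same five hospitals:
process_scores = [92, 88, 95, 70, 85]
reputation_scores = [60, 90, 55, 80, 85]
print(round(spearman(process_scores, reputation_scores), 2))  # prints -0.6
```

In this invented example the two constructs are strongly anticorrelated: a hospital near the top of one list sits near the bottom of the other, which is exactly the situation that leaves a consumer with no clear guidance.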
The 2 constructs in this particular comparison are certainly divergent in design. For the Hospital Compare ratings, the CMS used detailed process‐of‐care measures, expensively abstracted from the medical record, for just 3 medical conditions: acute myocardial infarction, congestive heart failure, and community‐acquired pneumonia. The U.S. News Best Hospitals rankings used reputation (based on a survey of physicians), severity‐adjusted mortality rate, staffing ratio, and key technologies offered by hospitals. Halasyamani and Davis conclude that consumers may be left to wonder how to reconcile these discordant rating systems. At the same time, they acknowledge that it is not yet clear whether public reporting will affect consumers' health care choices. Available evidence suggests that when making choices about health care, patients are much more likely to consult family and friends than an Internet site that posts quality information.2 There is as yet no conclusive evidence that quality data drive consumer decision making. Furthermore, acute myocardial infarction patients rarely have the opportunity to choose a hospital, even if they have access to the data.
The assessment of hospital quality is a challenge not only for patients; it remains perplexing even for those of us immersed in health care. The scope of quality measures is both broad and incomplete. At the microsystem and individual clinical syndrome level, we have a plethora of process measures that are evidence based (such as the CMS Hospital Compare measures) but appear to move meaningful outcomes only slightly, if at all. The evidence linking the pneumonia measures, for instance, to significant outcomes such as lower mortality or (rarely studied) better functional outcomes is extremely limited or nonexistent.3, 4
At the other end of the continuum are sweeping metrics such as risk‐adjusted in‐hospital mortality, a measure that may be important yet has 2 significant limitations. First, mortality rates in acute care are generally so low that mortality is not a useful outcome of interest for most clinical conditions; its utility is really limited to well‐studied procedures such as cardiac surgery. Second, a mortality rate reduction is extraordinarily difficult to link meaningfully to specific process interventions with available information and tools. For high‐volume complex medical conditions, such as pneumonia, nonsurgically managed cardiac disease, and oncology, we cannot yet reliably use the in‐hospital mortality rate as a descriptor for quality of care because the populations are so diverse and the statistical tools so crude. The public reporting of these data is even more complex because it often lags behind current data by years and may be significantly affected by sample size.
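The sample‐size caveat is worth quantifying. As a hedged illustration with hypothetical patient volumes and a hypothetical 2% true mortality rate (not data from any reporting program), a simple normal‐approximation confidence interval shows how noisy a low baseline mortality rate is at realistic hospital volumes:

```python
# Hypothetical illustration of the sample-size problem: with a true
# in-hospital mortality of 2%, a hospital treating only a few hundred
# such patients a year has a 95% confidence interval so wide that its
# observed rate says little about quality. Uses the Wald (normal)
# approximation for a binomial proportion.
import math

def wald_ci(p, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), p + half

for n in (200, 2000, 20000):
    lo, hi = wald_ci(0.02, n)
    print(f"n={n}: an observed 2.0% rate is consistent with {lo:.1%}-{hi:.1%}")
```

At 200 patients the interval spans roughly 0% to 4%, twice the true rate in either direction; only at volumes few single hospitals see for one condition does the estimate tighten enough to distinguish hospitals.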
Even when we settle on a few, well‐defined process metrics, we have problems with complete and accurate reporting of data. In Halasyamani and Davis's study, only 2.9% of hospitals reported all 14 Hospital Compare core performance measures used in their analysis.1 Evidence suggests that poor performance is a strong disincentive to voluntarily report quality measures to the public.5 And because there is no evidence that this type of transparency initiative will drive volume to higher‐quality programs, publicly reporting quality measures may not provide a strong enough incentive for hospitals to allocate resources to the improvement of the quality of care they deliver in these specific areas.
The CMS has introduced financial incentives to encourage hospitals to report performance measures (regardless of the actual level of performance reported); providing financial rewards to top‐performing hospitals, or to hospitals that demonstrate substantial improvement, may have a greater impact. The results of early studies suggested that pay‐for‐performance did improve the quality of health care.6 Lindenauer et al. recently published the results of a large study evaluating adherence to quality measures in hospitals that voluntarily reported measures compared with those participating in a pay‐for‐performance demonstration project funded by the CMS. Hospitals engaged in both public reporting and pay‐for‐performance achieved modestly greater improvements in quality compared with those that only did public reporting.7 It is notable that this demonstration project generally produced modest financial rewards for those hospitals that improved performance.8 The optimal model to reward performance remains to be determined.7, 9, 10
There are a number of potentially harmful unintended consequences of poorly designed quality measures and associated transparency and incentive programs. The most obvious is opportunity cost. As the incentives become more tangible and meaningful, hospital quality leaders will be expected to step up efforts to improve performance in the specific process of care measures for which they are rewarded. Without caution, however, hospital quality leaders may develop a narrow focus in deciding where to apply their limited resources and may become distracted from other areas in dire need of improvement. Their boards of directors might appropriately argue that it is their fiduciary responsibility to focus on improving those aspects of quality that the payer community has highlighted as most important. If the metrics are excellent and the underlying constructs are in fact the right ones to advance quality in American acute care, this is a direction to be applauded. If the metrics are flawed and limited, which is the case today, then the risk is that resources will be wasted and diverted from more important priorities.
Even worse, an overly narrow focus may have unintended adverse clinical consequences. Recently, Wachter discussed several real‐world examples of unintended consequences of quality improvement efforts, including giving patients multiple doses of pneumococcal vaccines and inappropriately treating patients with symptoms that might indicate community‐acquired pneumonia with antibiotics.11 As hospitals attempt to improve their report cards, a significant risk exists that patients will receive excessive or unnecessary care in an attempt to meet specified timeliness goals.
The most important issue that has still not been completely addressed is whether improvements in process‐of‐care measures will actually improve patient outcomes. In a recent issue of this journal, Seymann concluded that there is strong evidence for influenza vaccination and the use of appropriate antibiotics for community‐acquired pneumonia12 but that other pneumonia quality measures were of less obvious clinical benefit. Controversy continues over whether the optimal window for initiating antibiotic treatment of community‐acquired pneumonia is 4 hours, as the measure currently stands, or 8 hours. Patients hospitalized with pneumonia may be motivated to quit smoking, but CMS requirements for smoking cessation advice/counseling can be satisfied with a simple pamphlet or a video rather than with the interventions more likely to succeed: counseling by specifically trained professionals and the use of pharmacotherapy. Although smoking cessation is an admirable goal, whether such counseling is performed will not affect the quality of care a patient with pneumonia receives during the index admission. Indeed, it would be more important to counsel all patients about the hazards of smoking, in an attempt to prevent pneumonia and acute myocardial infarction as well as a host of other smoking‐related illnesses.
In another example, Fonarow and colleagues examined the association between heart failure clinical outcomes and performance measures in a large observational cohort.13 The study found that current heart failure performance measures, aside from prescribing an angiotensin‐converting enzyme inhibitor or an angiotensin receptor blocker at discharge, had little relationship to mortality in the first 60‐90 days following discharge. On the other hand, the team found that being discharged on a beta blocker was associated with a significant reduction in mortality; however, beta blocker use is not part of the current CMS core measures. In addition, many patients hospitalized for heart failure may benefit from implantable cardioverter‐defibrillator therapy and/or cardiac resynchronization therapy,14 yet referral to a cardiologist to evaluate patients who may be suitable for these therapies is not a CMS core measure.
A similar, more comprehensive study recently evaluated whether performance on CMS quality measures for acute myocardial infarction, heart failure, and pneumonia correlated with condition‐specific inpatient, 30‐day, and 1‐year risk‐adjusted mortality rates.15 The study found that the best hospitals, those performing at the 75th percentile on quality measures, did have lower mortality rates than did hospitals performing at the 25th percentile, but the absolute risk reduction was small. Specifically, the absolute risk reduction for 30‐day mortality was 0.6%, 0.1%, and 0.1% for acute myocardial infarction, heart failure, and pneumonia, respectively. In attempting to explain their findings, the authors noted that current quality measures include only a subset of activities involved in the care of hospitalized patients. In addition, mortality rates are likely influenced by factors not included in current quality measures, such as the use of electronic health records, staffing levels, and other activities of quality oversight committees.
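The reported absolute risk reductions can be translated into a number needed to treat (NNT = 1/ARR), which makes the smallness of these differences concrete. The ARR figures below are the ones cited in the text; the arithmetic is only illustrative:

```python
# Number needed to treat (NNT = 1 / ARR) for the 30-day mortality
# differences reported between 75th- and 25th-percentile hospitals.
# The ARR values are those cited in the study discussed in the text.
arr_30_day = {
    "acute myocardial infarction": 0.006,  # 0.6%
    "heart failure": 0.001,                # 0.1%
    "pneumonia": 0.001,                    # 0.1%
}

for condition, arr in arr_30_day.items():
    # Patients who would need care at a top-quartile rather than a
    # bottom-quartile hospital to avert one 30-day death:
    nnt = round(1 / arr)
    print(f"{condition}: ARR {arr:.1%}, NNT ~ {nnt}")
```

On these figures, roughly 167 myocardial infarction patients, and on the order of 1000 heart failure or pneumonia patients, would need to be cared for at a top‐quartile rather than a bottom‐quartile hospital to prevent a single 30‐day death, underscoring how weakly the current measures separate hospitals on outcomes.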
The era of measurement and accountability for providing high‐quality health care is upon us. Public reporting may lead to improvement in quality measures, but it is incumbent on the academic and provider communities as well as the payer community to ensure that the metrics are meaningful, reliable, and reproducible and, equally important, that they make a difference in essential clinical outcomes such as mortality, return to function, and avoidance of adverse events.10 Emerging evidence suggests the measures may need to be linked to meaningful financial incentives to the provider in order to accelerate change. Incentives directed at patients appear to be ineffective, clumsy, and slow to produce results.16
The time is right to revisit the quality measures currently used for transparency and incentives. We need a tighter, more reliable set of metrics that actually correlate with meaningful outcomes. Some evidence‐based measures appear to be missing from the current leading lists and some remain inadequately defined with regard to compliance. As a system, the measurement program contains poorly understood risks of unintended consequences. Above all else, local and national quality leaders need to be mindful that improving patient outcomes must be the central goal in our efforts to improve performance on process‐of‐care measures.
- Conflicting measures of hospital quality: ratings from "Hospital Compare" versus "Best Hospitals." J Hosp Med. 2007;2:128–134.
- Kaiser Family Foundation and Agency for Health Care Research and Quality. National Survey on Consumers' Experiences with Patient Safety and Quality Information. Washington, DC: Kaiser Family Foundation; 2004.
- et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA. 1997;278:2080–2084.
- Process of care, illness severity, and outcomes in the management of community‐acquired pneumonia at academic hospitals. Arch Intern Med. 2001;161:2099–2104.
- Relationship between low quality‐of‐care scores and HMOs' subsequent public disclosure of quality‐of‐care scores. JAMA. 2002;288:1484–1490.
- Does pay‐for‐performance improve the quality of health care? Ann Intern Med. 2006;145:265–272.
- et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486–496.
- The CMS demonstration project methodology provides a 2% incremental payment for the top 10% of hospitals and 1% for the second decile. See CMS press release, available at: http://www.cms.hhs.gov/apps/media/. Accessed January 26, 2007.
- Pay for performance and accountability: related themes in improving health care. Ann Intern Med. 2006;145:695–699.
- Institute of Medicine Committee on Redesigning Health Insurance Performance Measures, Payment, and Performance Improvement Programs. Rewarding Provider Performance: Aligning Incentives in Medicare (Pathways to Quality Health Care Series). Washington, DC: National Academies Press; 2007.
- Expected and unanticipated consequences of the quality and information technology revolutions. JAMA. 2006;295:2780–2783.
- Community‐acquired pneumonia: defining quality care. J Hosp Med. 2006;1:344–353.
- et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61–70.
- et al. ACC/AHA 2005 guideline update for the diagnosis and management of chronic heart failure in the adult: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2005;112:e154–e235.
- Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296:2694–2702.
- Employee Benefit Research Institute. 2nd Annual EBRI/Commonwealth Fund Consumerism in Health Care Survey, 2006: early experience with high‐deductible and consumer‐driven health plans. December 2006. Available at: http://www.ebri.org/pdf/briefspdf/EBRI_IB_12‐20061.pdf. Accessed February 23, 2007.
Why measure hospital quality? One popular premise is that measurement and transparency will inform consumer decision making and drive volume to high‐quality programs, providing incentives for improvement and raising the bar nationally. In this issue of the Journal of Hospital Medicine, Halasyamani and Davis report that there is relatively poor correlation between the Hospital Compare scores of the Centers for Medicare and Medicaid Services (CMS) and U.S. News and World Report's Best Hospitals rankings.1 The authors note that this is not necessarily surprising, as the methodologies of these rating systems are quite different, although their purposes are functionally similar.
Clearly, these 2 popular quality evaluation systems reflect different underlying constructs (which may or may not actually describe quality). And therein lies a central dilemma for health care professionals and academics: we haven't agreed among ourselves on reliable and meaningful quality metrics; so how can we, or even should we, expect the public to use available data to make health care decisions?
The 2 constructs in this particular comparison are certainly divergent in design. For the Hospital Compare ratings, the CMS used detailed process‐of‐care measures, expensively abstracted from the medical record, for just 3 medical conditions: acute myocardial infarction, congestive heart failure, and community‐acquired pneumonia. The U.S. News Best Hospitals rankings used reputation (based on a survey of physicians), severity‐adjusted mortality rate, staffing ratio, and key technologies offered by hospitals. Halasyamani and Davis conclude that consumers may be left to wonder how to reconcile these discordant rating systems. At the same time, they acknowledge that it is not yet clear whether public reporting will affect consumers' health care choices. Available evidence suggests that when making choices about health care, patients are much more likely to consult family and friends than an Internet site that posts quality information.2 There is as yet no conclusive evidence that quality data drive consumer decision making. Furthermore, acute myocardial infarction patients rarely have the opportunity to choose a hospital, even if they had access to the data.
The assessment of hospital quality is not only a challenge for patients, it's still perplexing for those of us immersed in health care. The scope of measures of quality is both broad and incomplete. At the microsystem and individual clinical syndrome level, we have a plethora of process measures that are evidence based (such as the CMS Hospital Compare measures) but appear to move meaningful outcomes only slightly, if at all. The evidence linking the pneumonia measures, for instance, to significant outcomes such as lower mortality or (rarely studied) better functional outcomes is extremely limited or nonexistent.3, 4
At the other end of the continuum are sweeping metrics such as risk‐adjusted in‐hospital mortality, which may be important and yet has 2 significant limitations. First, mortality rates in acute care are generally so low that this is not a useful outcome of interest for most clinical conditions. Its utility is really limited to well‐studied procedures such as cardiac surgery. Second, mortality rate reduction is extraordinarily difficult to link meaningfully to specific process interventions with available information and tools. For high‐volume complex medical conditions, such as pneumonia, nonsurgically‐managed cardiac disease, and oncology, we cannot as yet reliably use in‐hospital mortality rate as a descriptor for quality of care because the populations are so diverse and the statistical tools so crude. The public reporting of these data is even more complex because it often lags behind current data by years and may be significantly affected by sample size.
Even when we settle on a few, well‐defined process metrics, we have problems with complete and accurate reporting of data. In Halasyamani and Davis's study, only 2.9% of hospitals reported all 14 Hospital Compare core performance measures used in their analysis.1 Evidence suggests that poor performance is a strong disincentive to voluntarily report quality measures to the public.5 And because there is no evidence that this type of transparency initiative will drive volume to higher‐quality programs, publicly reporting quality measures may not provide a strong enough incentive for hospitals to allocate resources to the improvement of the quality of care they deliver in these specific areas.
The CMS has introduced financial incentives to encourage hospitals to report performance measures (regardless of the actual level of performance which is reported), providing financial rewards to top‐performing hospitals and/or to hospitals that actually demonstrate that strong performance may have a greater impact. The results of early studies suggested that that pay‐for‐performance did improve the quality of health care.6 Lindenauer et al. recently published the results of a large study evaluating adherence to quality measures in hospitals that voluntarily reported measures compared with those participating in a pay‐for‐performance demonstration project funded by the CMS. Hospitals engaged in both public reporting and pay‐for‐performance achieved modestly greater improvements in quality compared with those that only did public reporting.7 It is notable that this demonstration project generally produced modest financial rewards to those hospitals that improved performance.8 The optimal model to reward performance remains to be determined.7, 9, 10
There are a number of potentially harmful unintended consequences of poorly designed quality measures and associated transparency and incentive programs. The most obvious is opportunity cost. As the incentives become more tangible and meaningful, hospital quality leaders will be expected to step up efforts to improve performance in the specific process of care measures for which they are rewarded. Without caution, however, hospital quality leaders may develop a narrow focus in deciding where to apply their limited resources and may become distracted from other areas in dire need of improvement. Their boards of directors might appropriately argue that it is their fiduciary responsibility to focus on improving those aspects of quality that the payer community has highlighted as most important. If the metrics are excellent and the underlying constructs are in fact the right ones to advance quality in American acute care, this is a direction to be applauded. If the metrics are flawed and limited, which is the case today, then the risk is that resources will be wasted and diverted from more important priorities.
Even worse, an overly narrow focus may have unintended adverse clinical consequences. Recently, Wachter discussed several real‐world examples of unintended consequences of quality improvement efforts, including giving patients multiple doses of pneumococcal vaccines and inappropriately treating patients with symptoms that might indicate community‐acquired pneumonia with antibiotics.11 As hospitals attempt to improve their report cards, a significant risk exists that patients will receive excessive or unnecessary care in an attempt to meet specified timeliness goals.
The most important issue that has still not been completely addressed is whether improvements in process‐of‐care measures will actually improve patient outcomes. In a recent issue of this journal, Seymann concluded that there is strong evidence for influenza vaccination and the use of appropriate antibiotics for community‐acquired pneumonia12 but that other pneumonia quality measures were of less obvious clinical benefit. Controversy continues over whether the optimal timing of the initial treatment of community‐acquired pneumonia with antibiotics is 4 hours, as it currently stands, or 8 hours. Patients hospitalized with pneumonia may be motivated to quit smoking, but CMS requirements for smoking cessation advice/counseling can be satisfied with a simple pamphlet or a video, rather than interventions that involve counseling by specifically trained professionals and the use of pharmacotherapy, which are more likely to succeed. Although smoking cessation is an admirable goal, whether this is performed will not affect the quality of care that a patient with pneumonia receives during the index admission. In fact, it would be more important to counsel all patients about the hazards of smoking in an attempt to prevent pneumonia and acute myocardial infarction as well as a host of other smoking‐related illnesses.
In another example, Fonarow and colleagues examined the association between heart failure clinical outcomes and performance measures in a large observational cohort.13 The study found that current heart failure performance measures, aside from prescribing angiotensin‐converting inhibitor or angiotensin receptor blocker at discharge, had little relationship to mortality in the first 60‐90 days following discharge. On the other hand, the team found that being discharged on a beta blocker was associated with a significant reduction in mortality; however, beta blocker use is not part of the current CMS core measures. In addition, many patients hospitalized for heart failure may benefit from implantable cardioverter‐defibrillator therapy and/or cardiac resynchronization therapy,14 yet referral to a cardiologist to evaluate patients who may be suitable for these therapies is not a CMS core measure.
A similar, more comprehensive study recently evaluated whether performance on CMS quality measures for acute myocardial infarction, heart failure, and pneumonia correlated with condition‐specific inpatient, 30‐day, and 1‐year risk‐adjusted mortality rates.15 The study found that the best hospitals, those performing at the 75th percentile on quality measures, did have lower mortality rates than did hospitals performing at the 25th percentile, but the absolute risk reduction was small. Specifically, the absolute risk reduction for 30‐day mortality was 0.6%, 0.1%, and 0.1% for acute myocardial infarction, heart failure, and pneumonia, respectively. In attempting to explain their findings, the authors noted that current quality measures include only a subset of activities involved in the care of hospitalized patients. In addition, mortality rates are likely influenced by factors not included in current quality measures, such as the use of electronic health records, staffing levels, and other activities of quality oversight committees.
The era of measurement and accountability for providing high‐quality health care is upon us. Public reporting may lead to improvement in quality measures, but it is incumbent on the academic and provider communities as well as the payer community to ensure that the metrics are meaningful, reliable, and reproducible and, equally important, that they make a difference in essential clinical outcomes such as mortality, return to function, and avoidance of adverse events.10 Emerging evidence suggests the measures may need to be linked to meaningful financial incentives to the provider in order to accelerate change. Incentives directed at patients appear to be ineffective, clumsy, and slow to produce results.16
The time is right to revisit the quality measures currently used for transparency and incentives. We need a tighter, more reliable set of metrics that actually correlate with meaningful outcomes. Some evidence‐based measures appear to be missing from the current leading lists and some remain inadequately defined with regard to compliance. As a system, the measurement program contains poorly understood risks of unintended consequences. Above all else, local and national quality leaders need to be mindful that improving patient outcomes must be the central goal in our efforts to improve performance on process‐of‐care measures.
Why measure hospital quality? One popular premise is that measurement and transparency will inform consumer decision making and drive volume to high‐quality programs, providing incentives for improvement and raising the bar nationally. In this issue of the Journal of Hospital Medicine, Halasyamani and Davis report that there is relatively poor correlation between the Hospital Compare scores of the Centers for Medicare and Medicaid Services (CMS) and U.S. News and World Report's Best Hospitals rankings.1 The authors note that this is not necessarily surprising, as the methodologies of these rating systems are quite different, although their purposes are functionally similar.
Clearly, these 2 popular quality evaluation systems reflect different underlying constructs (which may or may not actually describe quality). And therein lies a central dilemma for health care professionals and academics: we haven't agreed among ourselves on reliable and meaningful quality metrics; so how can we, or even should we, expect the public to use available data to make health care decisions?
The 2 constructs in this particular comparison are certainly divergent in design. For the Hospital Compare ratings, the CMS used detailed process‐of‐care measures, expensively abstracted from the medical record, for just 3 medical conditions: acute myocardial infarction, congestive heart failure, and community‐acquired pneumonia. The U.S. News Best Hospitals rankings used reputation (based on a survey of physicians), severity‐adjusted mortality rate, staffing ratio, and key technologies offered by hospitals. Halasyamani and Davis conclude that consumers may be left to wonder how to reconcile these discordant rating systems. At the same time, they acknowledge that it is not yet clear whether public reporting will affect consumers' health care choices. Available evidence suggests that when making choices about health care, patients are much more likely to consult family and friends than an Internet site that posts quality information.2 There is as yet no conclusive evidence that quality data drive consumer decision making. Furthermore, acute myocardial infarction patients rarely have the opportunity to choose a hospital, even if they had access to the data.
The assessment of hospital quality is not only a challenge for patients, it's still perplexing for those of us immersed in health care. The scope of measures of quality is both broad and incomplete. At the microsystem and individual clinical syndrome level, we have a plethora of process measures that are evidence based (such as the CMS Hospital Compare measures) but appear to move meaningful outcomes only slightly, if at all. The evidence linking the pneumonia measures, for instance, to significant outcomes such as lower mortality or (rarely studied) better functional outcomes is extremely limited or nonexistent.3, 4
At the other end of the continuum are sweeping metrics such as risk‐adjusted in‐hospital mortality, which may be important and yet has 2 significant limitations. First, mortality rates in acute care are generally so low that this is not a useful outcome of interest for most clinical conditions. Its utility is really limited to well‐studied procedures such as cardiac surgery. Second, mortality rate reduction is extraordinarily difficult to link meaningfully to specific process interventions with available information and tools. For high‐volume complex medical conditions, such as pneumonia, nonsurgically‐managed cardiac disease, and oncology, we cannot as yet reliably use in‐hospital mortality rate as a descriptor for quality of care because the populations are so diverse and the statistical tools so crude. The public reporting of these data is even more complex because it often lags behind current data by years and may be significantly affected by sample size.
Even when we settle on a few, well‐defined process metrics, we have problems with complete and accurate reporting of data. In Halasyamani and Davis's study, only 2.9% of hospitals reported all 14 Hospital Compare core performance measures used in their analysis.1 Evidence suggests that poor performance is a strong disincentive to voluntarily report quality measures to the public.5 And because there is no evidence that this type of transparency initiative will drive volume to higher‐quality programs, publicly reporting quality measures may not provide a strong enough incentive for hospitals to allocate resources to the improvement of the quality of care they deliver in these specific areas.
The CMS has introduced financial incentives to encourage hospitals to report performance measures (regardless of the actual level of performance which is reported), providing financial rewards to top‐performing hospitals and/or to hospitals that actually demonstrate that strong performance may have a greater impact. The results of early studies suggested that that pay‐for‐performance did improve the quality of health care.6 Lindenauer et al. recently published the results of a large study evaluating adherence to quality measures in hospitals that voluntarily reported measures compared with those participating in a pay‐for‐performance demonstration project funded by the CMS. Hospitals engaged in both public reporting and pay‐for‐performance achieved modestly greater improvements in quality compared with those that only did public reporting.7 It is notable that this demonstration project generally produced modest financial rewards to those hospitals that improved performance.8 The optimal model to reward performance remains to be determined.7, 9, 10
There are a number of potentially harmful unintended consequences of poorly designed quality measures and associated transparency and incentive programs. The most obvious is opportunity cost. As the incentives become more tangible and meaningful, hospital quality leaders will be expected to step up efforts to improve performance in the specific process of care measures for which they are rewarded. Without caution, however, hospital quality leaders may develop a narrow focus in deciding where to apply their limited resources and may become distracted from other areas in dire need of improvement. Their boards of directors might appropriately argue that it is their fiduciary responsibility to focus on improving those aspects of quality that the payer community has highlighted as most important. If the metrics are excellent and the underlying constructs are in fact the right ones to advance quality in American acute care, this is a direction to be applauded. If the metrics are flawed and limited, which is the case today, then the risk is that resources will be wasted and diverted from more important priorities.
Even worse, an overly narrow focus may have unintended adverse clinical consequences. Recently, Wachter discussed several real‐world examples of unintended consequences of quality improvement efforts, including giving patients multiple doses of pneumococcal vaccine and inappropriately administering antibiotics to patients with symptoms that might indicate community‐acquired pneumonia.11 As hospitals attempt to improve their report cards, a significant risk exists that patients will receive excessive or unnecessary care in an attempt to meet specified timeliness goals.
The most important issue that has still not been completely addressed is whether improvements in process‐of‐care measures will actually improve patient outcomes. In a recent issue of this journal, Seymann concluded that there is strong evidence for influenza vaccination and the use of appropriate antibiotics for community‐acquired pneumonia12 but that other pneumonia quality measures were of less obvious clinical benefit. Controversy continues over whether the optimal timing of the initial treatment of community‐acquired pneumonia with antibiotics is 4 hours, as it currently stands, or 8 hours. Patients hospitalized with pneumonia may be motivated to quit smoking, but CMS requirements for smoking cessation advice/counseling can be satisfied with a simple pamphlet or a video, rather than interventions that involve counseling by specifically trained professionals and the use of pharmacotherapy, which are more likely to succeed. Although smoking cessation is an admirable goal, whether this is performed will not affect the quality of care that a patient with pneumonia receives during the index admission. In fact, it would be more important to counsel all patients about the hazards of smoking in an attempt to prevent pneumonia and acute myocardial infarction as well as a host of other smoking‐related illnesses.
In another example, Fonarow and colleagues examined the association between heart failure clinical outcomes and performance measures in a large observational cohort.13 The study found that current heart failure performance measures, aside from prescribing an angiotensin‐converting enzyme inhibitor or angiotensin receptor blocker at discharge, had little relationship to mortality in the first 60‐90 days following discharge. On the other hand, the team found that being discharged on a beta blocker was associated with a significant reduction in mortality; however, beta blocker use is not part of the current CMS core measures. In addition, many patients hospitalized for heart failure may benefit from implantable cardioverter‐defibrillator therapy and/or cardiac resynchronization therapy,14 yet referral to a cardiologist to evaluate patients who may be suitable for these therapies is not a CMS core measure.
A similar, more comprehensive study recently evaluated whether performance on CMS quality measures for acute myocardial infarction, heart failure, and pneumonia correlated with condition‐specific inpatient, 30‐day, and 1‐year risk‐adjusted mortality rates.15 The study found that the best hospitals, those performing at the 75th percentile on quality measures, did have lower mortality rates than did hospitals performing at the 25th percentile, but the absolute risk reduction was small. Specifically, the absolute risk reduction for 30‐day mortality was 0.6%, 0.1%, and 0.1% for acute myocardial infarction, heart failure, and pneumonia, respectively. In attempting to explain their findings, the authors noted that current quality measures include only a subset of activities involved in the care of hospitalized patients. In addition, mortality rates are likely influenced by factors not included in current quality measures, such as the use of electronic health records, staffing levels, and other activities of quality oversight committees.
The era of measurement and accountability for providing high‐quality health care is upon us. Public reporting may lead to improvement in quality measures, but it is incumbent on the academic and provider communities as well as the payer community to ensure that the metrics are meaningful, reliable, and reproducible and, equally important, that they make a difference in essential clinical outcomes such as mortality, return to function, and avoidance of adverse events.10 Emerging evidence suggests the measures may need to be linked to meaningful financial incentives to the provider in order to accelerate change. Incentives directed at patients appear to be ineffective, clumsy, and slow to produce results.16
The time is right to revisit the quality measures currently used for transparency and incentives. We need a tighter, more reliable set of metrics that actually correlate with meaningful outcomes. Some evidence‐based measures appear to be missing from the current leading lists and some remain inadequately defined with regard to compliance. As a system, the measurement program contains poorly understood risks of unintended consequences. Above all else, local and national quality leaders need to be mindful that improving patient outcomes must be the central goal in our efforts to improve performance on process‐of‐care measures.
- Conflicting measures of hospital quality: ratings from “Hospital Compare” versus “Best Hospitals.” J Hosp Med. 2007;2:128–134.
- Kaiser Family Foundation and Agency for Health Care Research and Quality. National Survey on Consumers' Experiences with Patient Safety and Quality Information. Washington, DC: Kaiser Family Foundation; 2004.
- et al. Quality of care, process, and outcomes in elderly patients with pneumonia. JAMA. 1997;278:2080–2084.
- Process of care, illness severity, and outcomes in the management of community‐acquired pneumonia at academic hospitals. Arch Intern Med. 2001;161:2099–2104.
- Relationship between low quality‐of‐care scores and HMOs' subsequent public disclosure of quality‐of‐care scores. JAMA. 2002;288:1484–1490.
- Does pay‐for‐performance improve the quality of health care? Ann Intern Med. 2006;145:265–272.
- et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486–496.
- The CMS demonstration project methodology provides a 2% incremental payment for the top decile of hospitals and 1% for the second decile. See CMS press release, available at: http://www.cms.hhs.gov/apps/media/. Accessed January 26, 2007.
- Pay for performance and accountability: related themes in improving health care. Ann Intern Med. 2006;145:695–699.
- Institute of Medicine Committee on Redesigning Health Insurance Performance Measures, Payment, and Performance Improvement Programs. Rewarding Provider Performance: Aligning Incentives in Medicare (Pathways to Quality Health Care Series). Washington, DC: National Academies Press; 2007.
- Expected and unanticipated consequences of the quality and information technology revolutions. JAMA. 2006;295:2780–2783.
- Community‐acquired pneumonia: defining quality care. J Hosp Med. 2006;1:344–353.
- et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61–70.
- et al. ACC/AHA 2005 guideline update for the diagnosis and management of chronic heart failure in the adult: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2005;112:e154–e235.
- Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296:2694–2702.
- Employee Benefit Research Institute. 2nd Annual EBRI/Commonwealth Fund Consumerism in Health Care Survey, 2006: early experience with high‐deductible and consumer‐driven health plans. December 2006. Available at: http://www.ebri.org/pdf/briefspdf/EBRI_IB_12‐20061.pdf. Accessed February 23, 2007.
Editorial
See one, do one, teach one is a refrain familiar to all physicians. Historically, most procedural training has occurred at the bedside. In this model, senior residents, subspecialty fellows, or faculty members would demonstrate procedural skills to junior trainees, who would subsequently practice the procedures on patients, often with uneven, risky results. Acquisition of procedural skills by residents and fellows on inpatient wards is suboptimal for at least 2 reasons beyond the risks to patient safety: (1) clinical priorities are more important than educational priorities in this setting, and (2) the patient, not the medical learner, is the most important person in the room.
Recently, several new factors have challenged the traditional medical education model. For a variety of reasons, general internists currently perform far fewer invasive procedures than they used to.1 A heightened focus on patient safety and quality raises questions about the qualifications needed to perform invasive procedures. Assessment requirements have also become more stringent. The Accreditation Council for Graduate Medical Education (ACGME) now requires the use of measures that yield reliable and valid data to document the competence of trainees performing invasive procedures.2 In 2006 these factors, and the challenge to educate, assess, and certify residents, prompted the American Board of Internal Medicine to revise its certification requirements and remove the need for technical proficiency in several procedures including paracentesis, central venous catheter placement, and thoracentesis.3, 4
Two studies reported in this issue of the Journal of Hospital Medicine highlight important issues about preparing residents to perform invasive procedures. These include the educational limits of routine clinical care and the challenge to design rigorous educational interventions that improve residents' skills. Miranda and colleagues5 designed a clinical trial to evaluate an educational intervention in which residents practiced insertion of subclavian and internal jugular venous catheters under the supervision of a hospitalist faculty member. The goal was to reduce the frequency of femoral venous catheters placed at their institution. Although residents demonstrated increased knowledge and confidence after the educational intervention, the actual number of subclavian and internal jugular venous catheter insertions was lower in the intervention group, and was rare overall. The intervention did not achieve the stated goal of reducing the number of femoral venous catheters placed by residents. This research highlights that residents cannot be trained to perform invasive procedures through clinical experience alone. In addition, it demonstrates that brief educational interventions are also insufficient. Whether a longer and more robust educational intervention might have shown different results is uncertain, but many experts believe that opportunities for deliberate practice6 using standardized and sustained treatments7 can be a powerful tool to boost the procedural skills of physicians.
At the same institution, Lucas and colleagues studied the impact of a procedural service on the number of invasive procedures performed on a general medicine inpatient service.8 They found a 48% increase in procedure attempts when the procedure service staffed by an experienced faculty member was available. However, no improvement in success rate or reduction in complications was demonstrated. Thus, opportunities for trainees to perform procedures increased, but the presence of a faculty member to provide direct supervision did not improve the quality of the procedures accomplished.
Together these reports highlight challenges and opportunities in training residents to perform invasive procedures. Both studies involved the procedural skills of residents. One used an educational intervention, the other featured faculty supervision. Both studies produced outcomes that suggest improved procedural training, but neither improved the actual quality of delivered care. A brief educational intervention increased resident confidence and knowledge but did not increase the quality or number of procedures performed by residents. Opportunities to perform invasive procedures increased dramatically when an experienced attending physician was available to supervise residents. However, more education was not provided, and the quality of procedures performed did not improve.
Given these limitations, how should physicians learn to perform invasive procedures? We endorse a systematic approach to achieve high levels of procedural skills in resident physicians. First, procedures should be carefully selected. Only those essential to future practice should be required. If possible, opportunities should be available for selected trainees to develop skills in performing additional procedures relevant to their future careers. An example would be the opportunity for residents in a hospitalist track to develop proficiency in central venous catheter insertion through clinical experience, didactic education, and rigorous skill assessment. Second, dedicated programs are needed to train and assess residents in procedural skills. Reliance on clinical experience alone is inadequate because of the low frequency at which most procedures are performed and the inability to standardize assessments in routine clinical practice.
Simulation technology is a powerful adjunct to traditional clinical training and has been demonstrated to be highly effective in developing procedural skills in disciplines such as endoscopy9 and laparoscopic surgery.10 At our institution, a simulation‐based training program has been used to help residents achieve11 and maintain12 a high level of skill in performing advanced cardiac life support procedures. We use simulation to provide opportunities for deliberate practice in a controlled environment in which immediate feedback is emphasized and mastery levels are reached. The rigorous curriculum is standardized, but learner progress is individualized depending on the practice time needed to achieve competency standards.
Most important, when training physicians to perform invasive procedures, it is critical to use interventions and training programs that can be linked to improvements in actual clinical care. The studies by Miranda et al. and Lucas et al. highlight the utility of focused educational programs to complement clinical training as well as the positive impact of direct faculty supervision. These results are important starting points for programs to consider as they train and certify residents in required procedural skills. However, much work remains to be done. These studies have revealed that improvements in patient care outcomes are not likely to occur unless robust, learner‐centered educational programs are combined with adequate opportunities for residents to perform procedures under appropriate supervision.
- The declining number and variety of procedures done by general internists: a resurvey of members of the American College of Physicians. Ann Intern Med. 2007;146:355–360.
- Accreditation Council for Graduate Medical Education. Outcome project: general competencies. Available at: http://www.acgme.org/outcome/comp/compFull.asp#1. Accessed January 28, 2007.
- American Board of Internal Medicine. Requirements for certification in internal medicine. Available at: http://www.abim.org/cert/policiesim.shtm. Accessed January 28, 2007.
- What procedures should internists do? Ann Intern Med. 2007;146:392–393.
- Firm‐based trial to improve central venous catheter insertion practices. J Hosp Med. 2007;2:135–142.
- Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 Suppl):S70–S81.
- Treatment strength and integrity: models and methods. In: Bootzin RR, McKnight PE, eds. Strengthening Research Methodology: Psychological Measurement and Evaluation. Washington, DC: American Psychological Association; 2006:103–124.
- et al. Impact of a bedside procedure service on general medicine inpatients: a firm‐based trial. J Hosp Med. 2007;2:143–149.
- et al. Multicenter, randomized, controlled trial of virtual‐reality simulator training in acquisition of competency in colonoscopy. Gastrointest Endosc. 2006;64:361–368.
- et al. Laparoscopic skills are improved with LapMentor training: results of a randomized, double‐blinded study. Ann Surg. 2006;243:854–860.
- et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med. 2006;21:251–256.
- et al. A longitudinal study of internal medicine residents' retention of advanced cardiac life support skills. Acad Med. 2006;81(10 Suppl):S9–S12.
1. The declining number and variety of procedures done by general internists: a resurvey of members of the American College of Physicians. Ann Intern Med. 2007;146:355–360.
2. Accreditation Council for Graduate Medical Education. Outcome project: general competencies. Available at: http://www.acgme.org/outcome/comp/compFull.asp#1. Accessed January 28, 2007.
3. American Board of Internal Medicine. Requirements for certification in internal medicine. Available at: http://www.abim.org/cert/policiesim.shtm. Accessed January 28, 2007.
4. What procedures should internists do? Ann Intern Med. 2007;146:392–393.
5. Firm‐based trial to improve central venous catheter insertion practices. J Hosp Med. 2007;2:135–142.
6. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 Suppl):S70–S81.
7. Treatment strength and integrity: models and methods. In: Bootzin RR, McKnight PE, eds. Strengthening Research Methodology: Psychological Measurement and Evaluation. Washington, DC: American Psychological Association; 2006:103–124.
8. Impact of a bedside procedure service on general medicine inpatients: a firm‐based trial. J Hosp Med. 2007;2:143–149.
9. Multicenter, randomized, controlled trial of virtual‐reality simulator training in acquisition of competency in colonoscopy. Gastrointest Endosc. 2006;64:361–368.
10. Laparoscopic skills are improved with LapMentor training: results of a randomized, double‐blinded study. Ann Surg. 2006;243:854–860.
11. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med. 2006;21:251–256.
12. A longitudinal study of internal medicine residents' retention of advanced cardiac life support skills. Acad Med. 2006;81(10 Suppl):S9–S12.