Making and using guidelines
Modern medicine increasingly relies on the adoption and use of guidelines.
Forty years ago, medicine was like free-form rhythmic gymnastics: physicians developed an artisanal treatment plan for each patient. Now, medicine frequently involves recognizing when we need to do a triple-twisting, double-back somersault (the Biles II) and then performing it. The belief is that better outcomes flow from reduced variability in diagnostic and treatment plans, based on guidelines developed through evidence-based medicine and large meta-analyses. This dogma, still unproven in real life, probably works best for 95% of patients. The physician must not omit the step of deciding whether a particular patient is one of the 5% to whom the guideline does not apply.
To be useful, the guidelines must be based on accurate science, produce a significantly positive cost-benefit-risk analysis, be wisely constructed, and be clearly written.
Alas, many guidelines fall far short of this ideal, and when they fail, they impugn all of medical care, lower the credibility of the organizations that issue them, and erode the public’s trust in medicine, thereby impeding improvements in public health.
The science matters. Nutritional guidelines have been particularly rickety, as John P.A. Ioannidis, MD, wrote in a JAMA op-ed 1 year ago.1 For instance, previous dietary recommendations to reduce cholesterol by avoiding eggs have since been shown to be wrong. The recommendation for reducing salt intake has been heavily criticized. Now the decades-long condemnation of red meat has been challenged. New “guidelines,” suggested by one group (let’s view it as a minority report that contradicts many official guidelines) in the October 1, 2019, issue of Annals of Internal Medicine, say that red meat and processed meats aren’t the boogeyman.2 The authors of the accompanying editorial are from the Center for Pediatric and Adolescent Comparative Effectiveness Research at Indiana University, Indianapolis.3 The editorial supports the new study, criticizing past recommendations because “the field of nutritional epidemiology is plagued by observational studies that have conducted inappropriate analyses, accompanied by likely erroneous conclusions.”
Clarity also matters. One factor in the current opiate epidemic was guidance in the mid-1990s making pain the “fifth vital sign.” This certainly was not the only factor, nor was it necessarily the primary one. Most disasters, like most codes on the ward, proceed from multiple smaller failures and missteps. The emphasis on assessing pain in hospitalized patients was not intended to require that all pain be eliminated with strong medication, but that was the practical consequence. In response to the epidemic of overdose deaths, guidelines were promulgated in 2016 recommending reduced doses for chronic opiate regimens. Some patients with chronic pain feared, and soon experienced, the consequences of those changes. In October 2019, those guidelines were revised to tell physicians to go slower.4 In explaining the revision, one government official is quoted as saying: “Clearly we believe that there has been misinterpretation of the guidelines, which were very clear.”5 F. Scott Fitzgerald once wrote that “the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.” I reread that governmental doublespeak three times and my brain broke.
Clinical practice guidelines are an important part of modern medicine. But we need to be wiser about their creation. The science needs to be rigorous. The committees need to contain skeptics rather than just research scientists and clinicians with a vested interest in the field. The purported benefits of the guideline must be weighed against costs, risks, and unintended consequences. Humility is important. All physicians are taught the principle: “First, do no harm.” In explaining medical ethics to students, I rephrase that principle as: “Be cautious and humble. You are not as smart as you think you are.” Consider this food for thought the next time you read or create a guideline.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
References
1. JAMA. 2018;320(10):969-70.
2. Ann Intern Med. 2019 Oct 1. doi: 10.7326/M19-1621.
3. Ann Intern Med. 2019 Oct 1. doi: 10.7326/M19-2620.
4. U.S. Department of Health & Human Services. HHS guide for clinicians on the appropriate dosage reduction or discontinuation of opioid analgesics. https://www.hhs.gov/opioids/sites/default/files/2019-10/Dosage_Reduction_Discontinuation.pdf.
5. “New guidelines on opioid tapering tell doctors to go slow.” Washington Post. 2019 Oct 10.
Recalling a medical education
As I look back, there have been many changes during my 25 years of clinical practice. I always assumed there would be advancements in medical research during my career. I expected those advancements to produce progress rather than a random walk.
One area of positive change has been the recommendations for safe sleep practices for young infants. The Back to Sleep program of the mid-1990s reversed prior advice. It recommended that babies sleep on their backs to avoid accidental suffocation; prior advice had been that they sleep on their stomachs to avoid aspiration. The new advice cut infant deaths by 50%.
Over the years, treatment of gastroesophageal reflux has changed significantly. Polysomnograms are ordered much less frequently. Medications to reduce stomach acid have been associated with side effects and now are discouraged. Raising the head of the crib was common advice in the 2000s that was contradicted in the 2010s. For 2 decades I wrote orders in the hospital to elevate the head of the crib. More often, the nurses did it without my orders whenever they found a spitty baby.
In May 2019, there was a product recall of inclined infant sleepers. The Fisher-Price Rock ‘n Play was one product recalled; 4.7 million of them were sold in the United States over the past 10 years. Because they are used only by infants, and because there are about 4 million births per year in the United States, there are enough of these items stored in basements and garages for every infant born this year to have one.
Investigative reporting by the Washington Post yielded an article highly critical of the product and the way it was originally created and designed. There is outrage in the author’s description of events. Because I have degrees in both engineering and pediatric medicine, I reviewed his assertions and tried to compare his ideal of the medical research world with my reality.
There are 3,600 infant deaths per year in the United States attributed to SUID/SIDS (sudden unexplained infant death/sudden infant death syndrome). From that perspective, I don’t know what 30 deaths in a decade associated with the sleeper really means. There is a high potential for recall bias and confirmation bias. It doesn’t surprise me that there was a delay in assigning blame to a ubiquitous consumer product. The article assumes that medical opinion is monolithic and synchronized rather than undergoing a diffusion of innovation, as described by Everett M. Rogers. Sorting out who knew what and when they knew it will take the courts many years.
Some of my columns earlier this year have appraised medical information in social media, and particularly on Facebook, as being harmfully unreliable.
An example of the unreliability of modern medical research was documented in an article in Hospital Pediatrics in July 2019.
The authors performed a meta-analysis to determine whether respiratory viral (RV) detection tests help reduce length of stay or unnecessary antibiotic use. To me, that is a much simpler issue, scientifically, than safe sleep practices. The authors found 23 relevant studies that met their criteria for inclusion. Their overall conclusion was that the quality of the studies, their heterogeneity, and the statistically significant but contradictory results between them made it impossible to prove that RV testing is beneficial. However, as I read the article, they cannot – for a litany of reasons – rule out such a benefit. Twenty-three published articles in total yielded no reliable medical knowledge.
RV testing already has been widely adopted, particularly in emergency rooms. It is expensive. Clinical guidelines discourage RV testing, but those guidelines are based on RV testing from the 2003-2006 time frame, which used now-obsolete technology. The author of the article on the infant sleepers expressed shock at what he considered to be inadequate medical research supporting the development of the inclined infant sleeper. RV testing is a product in widespread use, with lots of research, and it has no better proof of efficacy or safety.
I expected, when I first started practice, that when I was older and grayer I would look back and recall many advances. I anticipated my recall would be of fond memories and of many patients helped. What I didn’t expect was so much of the advice that I provided to be wrong. Perhaps my medical education and parts of the academic research system should be subject to a product recall.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
The right of conscientious objection
This is particularly true for those who consider medicine a vocation, or a calling, rather than just a job.
On May 2, 2019, the Department of Health and Human Services made public a 440-page document known as the Final Conscience Rule.1 It isn’t quite final. And the state of California already is suing to stop it.2 But the document represents the culmination of years of legal wrangling over whether physicians are allowed to have consciences or whether they must function as automatons providing any legally permitted care that a patient might demand. This comprehensive document provides a history of the issues, but it is written in dense legalese, as if its authors expect to be answering challenges in court.
The short answer in the United States is that religious liberty continues to triumph over editorials in the New England Journal of Medicine. Consciences are allowed. The Final Conscience Rule begins with “The United States has a long history of providing protections in health care for individuals and entities on the basis of religious beliefs or moral convictions.” That history includes the Religious Freedom Restoration Act of 1993.3 RFRA was introduced into the Senate by Sen. Ted Kennedy (D-Mass.), a bastion of liberal health care policies, and passed by a 97-3 vote. It was introduced into the House by then-Rep. Chuck Schumer (D-N.Y.) and passed by a unanimous voice vote. RFRA is not the invention of Republican fundamentalists.
For my colleagues in Canada, the Court of Appeal for Ontario (ONCA, the highest court in the province) decided on May 15, 2019, that the opposite is true in Canada. A recent Ontario policy concerning medical assistance in dying (also known as physician-assisted suicide) requires Ontario physicians either to provide the assistance when requested or to make an effective referral, defined as “a referral made in good faith, to a non-objecting, available, and accessible physician, other health-care professional, or agency.” Some Canadian physicians objected to this requirement as a violation of their consciences and their Hippocratic Oaths. They lost. The ONCA decision is 74 readable, double-spaced pages and spells out the ethical and legal principles. In summary, the ONCA said the policies requiring an effective referral “strike a reasonable balance between patients’ interests and physicians’ Charter-protected religious freedom. In short, they are reasonable limits prescribed by law that are demonstrably justified in a free and democratic society.”4
The California physician-assisted dying law, known as the End of Life Option Act, which became effective in 2016, takes a very different approach from Ontario’s. The California law has clear protections for the consciences of physicians and empowers them to avoid being compelled or coerced into cooperating with these deaths. “Participation in activities authorized pursuant to this part shall be voluntary. … A person or entity that elects, for reasons of conscience, morality, or ethics, not to engage in activities authorized pursuant to this part is not required to take any action in support of an individual’s decision under this part.”5 If it seems strange that California would strongly protect conscience with its own statute but challenge the new federal regulations, welcome to tribal politics.
The point is that the role of physicians in abortion, physician aid in dying, and other controversial practices is not going to be decided by philosophical discussions about the ideal scope and purpose of medicine. Compromises are involved that reflect the values of society. Canada is more anticlerical than the United States, and Ontario chose a different path. French culture is even more extreme. Recently, mayors in two towns in France told their elementary schools to stop offering alternative entrées on days when pork was served for hot lunches. Secular schools were not to provide accommodation for students (Muslim and Jewish) who religiously objected to pork. Since the French Revolution, the emphasis has been on assimilation and laïcité (France’s principle of secularism in public affairs). The cathedral Notre-Dame de Paris – recently damaged by fire – is owned by the state, not the Catholic Church. The United States has a different history and culture. It has supported religious liberty and reasonable accommodations. That is the loving thing to do. But as a reminder, the Peace of Westphalia in 1648, which ended European religious wars between Protestants and Catholics, was not a result of enlightened thinking and agapeic love. The fighting parties looked into the abyss of mutual annihilation and opted for coexistence instead.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
References
1. Department of Health and Human Services, “HHS Announces Final Conscience Rule Protecting Health Care Entities and Individuals,” May 2, 2019.
2. “California sues Trump administration over ‘conscience rule’ that could limit abortions,” Los Angeles Times, May 21, 2019.
3. Wikipedia, “Religious Freedom Restoration Act of 1993.”
4. Christian Medical and Dental Society of Canada v. College of Physicians and Surgeons of Ontario, 2019 ONCA 393.
5. California Assembly Bill No. 15, End of Life Option Act.
The article was updated on June 21, 2019.
The type II error and black holes
An international group of scientists has announced the first image of a black hole. This feat of scientific achievement and teamwork is another giant step in humankind’s understanding of the universe. It isn’t easy to find something that isn’t there. Black holes exist, and this one is about 6.5 billion times more massive than Earth’s sun. That is a lot of “there.”
In medical research, most articles are about discovering something new. Lately, it is also common to publish studies that claim that something doesn’t exist. No difference is found between treatment A and treatment B. Two decades ago those negative studies rarely were published, but there was merit in the idea that more of them should be published. However, that merit presupposed that the negative studies worthy of publication would be well designed, robust, and, most importantly, contain a power calculation showing that the methodology would have detected the phenomenon if the phenomenon were large enough to be clinically important. Alas, the literature has been flooded with negative studies finding no effect because the studies were hopelessly underpowered and never had a realistic chance of detecting anything. This fake news pollutes our medical knowledge.
To clarify, let me provide a simple example. With my myopia, at 100 yards and without my glasses, I can’t tell the difference between LeBron James and Megan Rapinoe, although I know Megan is better at corner kicks.
Now let me give a second, more complex example that obfuscates the same detection issue. Are there moons circling Jupiter? I go out each night, find Jupiter, take a picture with my trusty cell phone, and examine the picture for any evidence of objects circling the planet. I do this many times. How many? Well, if I only do it three times, people will doubt my science, but doing it 1,000 times would take too long. In my experience, most negative studies seem to involve about 30-50 patients, so one picture a week for a year will produce 52 observations. That is a lot of cold nights under the stars. I will use my scientific knowledge and ability to read sky charts to locate Jupiter. (There is an app for that.) I will use my experience to distinguish Jupiter from Venus and Mars. There will be cloudy days, so maybe only 30 clear pictures will be obtained. I will have a second observer examine the photos. We will calculate a kappa statistic for inter-rater agreement. There will be pictures and tables of numbers. When I’m done, I will publish an article saying that Jupiter doesn’t have moons because I didn’t find any. Trust me, I’m a doctor.
Science doesn’t work that way. Science doesn’t care how smart I am, how dedicated I am, how expensive my cell phone is, or how much work I put into the project; science wants empiric proof. My failure to find moons does not refute their existence. A claim that something does NOT exist cannot be correctly made simply by showing that the P value is greater than .05. A statistically nonsignificant P value might also mean that my experiment, despite all my time, effort, commitment, and data collection, is simply inadequate to detect the phenomenon. My cell phone has enough pixels to see Jupiter but not its moons. The phone isn’t powerful enough. My claim is a type II error.
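To make the type II error concrete, here is a minimal simulation sketch in Python. The numbers are assumptions chosen for illustration, not figures from any study discussed in this column: 40 patients per arm and a true treatment effect of 0.3 standard deviations. Even though a real effect exists in the simulated data, a study of this size “finds nothing” roughly three times out of four.

```python
# Illustrative sketch: how often does a small two-arm trial miss a real effect?
# Assumed numbers for illustration: 40 patients per arm, a true treatment
# effect of 0.3 standard deviations, and a two-sided alpha of 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm, true_effect, n_trials = 40, 0.3, 10_000

significant = 0
for _ in range(n_trials):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treated = rng.normal(loc=true_effect, scale=1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(treated, control)
    significant += p_value < 0.05

power = significant / n_trials
print(f"Estimated power: {power:.2f}")            # roughly 0.26
print(f"Type II error rate: {1 - power:.2f}")     # the real effect is missed about 3 times in 4
```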
You need to specify the threshold size of a clinically important effect and then show that your methods and results were powerful enough to have detected something that small. Only then may you correctly publish a conclusion that there is nothing there, a donut hole in the black void of space.
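For readers who want to see what that check looks like in practice, here is a sketch of a conventional power calculation using the statsmodels library. The 0.3-standard-deviation threshold for a clinically important effect and the 40-per-arm study size are assumed values for illustration.

```python
# Illustrative sketch: analytic power calculation for a two-sample t-test,
# assuming a clinically important effect of 0.3 standard deviations.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power actually achieved by a hypothetical 40-patients-per-arm study.
achieved = analysis.solve_power(effect_size=0.3, nobs1=40, alpha=0.05)
print(f"Power with 40 patients per arm: {achieved:.2f}")        # about 0.26

# Sample size per arm needed to reach the customary 80% power.
needed = analysis.solve_power(effect_size=0.3, power=0.80, alpha=0.05)
print(f"Patients per arm needed for 80% power: {needed:.0f}")   # about 175
```

A negative study that cannot report something like this, namely an effect size it was powered to detect, has not shown that nothing is there.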
I invite you to do your own survey. As you read journal articles, identify the next 10 times you read a conclusion that claims no effect was found. Scour that article carefully for any indication of the size of effect that those methods and results would have been able to detect. Look for a power calculation. Grade the article with a simple pass/fail on that point. Did the authors provide that information in a way you can understand, or do you just have to trust them? Take President Reagan’s advice, “Trust, but verify.” Most of the 10 articles will lack the calculation and many negative claims are type II errors.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
An international group of scientists have announced they have an image of a black hole. This feat of scientific achievement and teamwork is another giant step in humankind’s understanding of the universe. It isn’t easy to find something that isn’t there. Black holes exist and this one is about 6.5 billion times more massive than Earth’s sun. That is a lot of “there.”
In medical research, most articles are about discovering something new. Lately, it is also common to publish studies that claim that something doesn’t exist. No difference is found between treatment A and treatment B. Two decades ago those negative studies rarely were published, but there was merit in the idea that more of them should be published. However, that merit presupposed that the negative studies worthy of publication would be well designed, robust, and, most importantly, contain a power calculation showing that the methodology would have detected the phenomenon if the phenomenon were large enough to be clinically important. Alas, the literature has been flooded with negative studies finding no effect because the studies were hopelessly underpowered and never had a realistic chance of detecting anything. This fake news pollutes our medical knowledge.
To clarify, let me provide a simple example. With my myopia, at 100 yards and without my glasses, I can’t detect the difference between Lebron James and Megan Rapinoe, although I know Megan is better at corner kicks.
Now let me give a second, more complex example that obfuscates the same detection issue. Are there moons circling Jupiter? I go out each night, find Jupiter, take a picture with my trusty cell phone, and examine the picture for any evidence of an object(s) circling the planet. I do this many times. How many? Well, if I only do it three times, people will doubt my science, but doing it 1,000 times would take too long. In my experience, most negative studies seem to involve about 30-50 patients. So one picture a week for a year will produce 52 observations. That is a lot of cold nights under the stars. I will use my scientific knowledge and ability to read sky charts to locate Jupiter. (There is an app for that.) I will use my experience to distinguish Jupiter from Venus and Mars. There will be cloudy days, so maybe only 30 clear pictures will be obtained. I will have a second observer examine the photos. We will calculate a kappa statistic for inter-rater agreement. There will be pictures and tables of numbers. When I’m done, I will publish an article saying that Jupiter doesn’t have moons because I didn’t find any. Trust me, I’m a doctor.
Science doesn’t work that way. Science doesn’t care how smart I am, how dedicated I am, how expensive my cell phone is, or how much work I put into the project, science wants empiric proof. My failure to find moons does not refute their existence. A claim that something does NOT exist cannot be correctly made by simply showing that the P value is greater than .05. A statistically insignificant P value also might also mean that my experiment, despite all my time, effort, commitment, and data collection, is simply inadequate to detect the phenomenon. My cell phone has enough pixels to see Jupiter but not its moons. The phone isn’t powerful enough. My claim is a type II error.
One needs to specify the threshold size of a clinically important effect and then show that your methods and results were powerful enough to have detected something that small. Only then may you correctly publish a conclusion that there is nothing there, a donut hole in the black void of space.
I invite you to do your own survey. As you read journal articles, identify the next 10 times you read a conclusion that claims no effect was found. Scour that article carefully for any indication of the size of effect that those methods and results would have been able to detect. Look for a power calculation. Grade the article with a simple pass/fail on that point. Did the authors provide that information in a way you can understand, or do you just have to trust them? Take President Reagan’s advice, “Trust, but verify.” Most of the 10 articles will lack the calculation and many negative claims are type II errors.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
An international group of scientists have announced they have an image of a black hole. This feat of scientific achievement and teamwork is another giant step in humankind’s understanding of the universe. It isn’t easy to find something that isn’t there. Black holes exist and this one is about 6.5 billion times more massive than Earth’s sun. That is a lot of “there.”
In medical research, most articles are about discovering something new. Lately, it is also common to publish studies that claim that something doesn’t exist. No difference is found between treatment A and treatment B. Two decades ago those negative studies rarely were published, but there was merit in the idea that more of them should be published. However, that merit presupposed that the negative studies worthy of publication would be well designed, robust, and, most importantly, contain a power calculation showing that the methodology would have detected the phenomenon if the phenomenon were large enough to be clinically important. Alas, the literature has been flooded with negative studies finding no effect because the studies were hopelessly underpowered and never had a realistic chance of detecting anything. This fake news pollutes our medical knowledge.
To clarify, let me provide a simple example. With my myopia, at 100 yards and without my glasses, I can’t detect the difference between Lebron James and Megan Rapinoe, although I know Megan is better at corner kicks.
Now let me give a second, more complex example that obfuscates the same detection issue. Are there moons circling Jupiter? I go out each night, find Jupiter, take a picture with my trusty cell phone, and examine the picture for any evidence of an object(s) circling the planet. I do this many times. How many? Well, if I only do it three times, people will doubt my science, but doing it 1,000 times would take too long. In my experience, most negative studies seem to involve about 30-50 patients. So one picture a week for a year will produce 52 observations. That is a lot of cold nights under the stars. I will use my scientific knowledge and ability to read sky charts to locate Jupiter. (There is an app for that.) I will use my experience to distinguish Jupiter from Venus and Mars. There will be cloudy days, so maybe only 30 clear pictures will be obtained. I will have a second observer examine the photos. We will calculate a kappa statistic for inter-rater agreement. There will be pictures and tables of numbers. When I’m done, I will publish an article saying that Jupiter doesn’t have moons because I didn’t find any. Trust me, I’m a doctor.
Science doesn’t work that way. Science doesn’t care how smart I am, how dedicated I am, how expensive my cell phone is, or how much work I put into the project, science wants empiric proof. My failure to find moons does not refute their existence. A claim that something does NOT exist cannot be correctly made by simply showing that the P value is greater than .05. A statistically insignificant P value also might also mean that my experiment, despite all my time, effort, commitment, and data collection, is simply inadequate to detect the phenomenon. My cell phone has enough pixels to see Jupiter but not its moons. The phone isn’t powerful enough. My claim is a type II error.
One needs to specify the threshold size of a clinically important effect and then show that your methods and results were powerful enough to have detected something that small. Only then may you correctly publish a conclusion that there is nothing there, a donut hole in the black void of space.
I invite you to do your own survey. As you read journal articles, identify the next 10 times you read a conclusion that claims no effect was found. Scour that article carefully for any indication of the size of effect that those methods and results would have been able to detect. Look for a power calculation. Grade the article with a simple pass/fail on that point. Did the authors provide that information in a way you can understand, or do you just have to trust them? Take President Reagan’s advice: “Trust, but verify.” Most of the 10 articles will lack the calculation, and many negative claims are type II errors.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
Pseudoscience redux
My most recent column discussed the problem of pseudoscience that pervades some corners of the Internet. Personally, I respond to pseudoscience primarily by trying to provide accurate and less-biased information. I recognize that not everyone approaches decision making by seeking more information. When dealing with a diverse public, a medical professional needs to have other approaches in the armamentarium.1 When dealing with other physicians, I am less flexible. Either the profession of medicine believes in science or it doesn’t.
Since that column was published, there have been major developments. There are measles outbreaks in the states of Washington and New York, and more than 100 deaths from a measles epidemic in the Philippines. The World Health Organization has made vaccine hesitancy one of its ten threats to global health in 2019.
Facebook has indicated that it might demote the priority and frequency with which it recommends articles that promulgate anti-vax information and conspiracy theories.2 Facebook isn’t doing this because it has had an epiphany; it has come under pressure for its role in the spread of misinformation. Current legislation was written before the rise of social media, when Internet Service Providers were primarily conduits to transfer bits and bytes between computers. Those ISPs were not liable for the content of the transmitted Web pages. Facebook, by producing what it called a newsfeed and by making personalized suggestions for other websites to browse, doesn’t fit the passive model of an ISP.
For alleged violations of users’ privacy, Facebook might be subject to billion-dollar fines, according to a Washington Post article.3 Still, for a company whose revenue is $4 billion per month and whose stock market value is $400 billion, paying a billion-dollar fine for years of alleged misbehaviors that have enabled it to become a giant empire is, “in the scheme of things ... a speeding ticket,” in the parlance of the penultimate scene of the movie The Social Network. The real financial risk is people deciding they can’t trust the platform and going elsewhere.
Authorities in the United Kingdom in February 2019 released a highly critical, 108-page report about fake news, which said, “Facebook should not be allowed to behave like ‘digital gangsters’ in the online world.”4 The U.K. report urges new regulations to deal with privacy breaches and with fake news. It endeavors to create a duty for social media companies to combat the spread of misinformation.
Then the Wall Street Journal reported that Pinterest has stopped returning results for searches related to vaccination.5 Pinterest realized that most of the shared images on its platform cautioned against vaccination, which contradicts the recommendations of medical experts. Unable to otherwise combat the flow of misinformation, the company apparently has decided to eliminate returning results, pro or con, for any search terms related to vaccines.
While lamenting the public’s inability to recognize misinformation on the Internet, I’ve also been observing the factors that lead physicians astray. I expect physicians, as trained scientists and as professionals, to be able to assimilate new information and change their practices accordingly. Those who do research on the translation of technology find that this doesn’t happen with any regularity.
The February 2019 issue of Hospital Pediatrics has four items on the topic of treating bronchiolitis: two research articles, a brief report, and a commentary. That is obviously a relevant topic this time of year. The impression after reading those four items is that hospitalists don’t really know how best to treat the most common illness they encounter. And even when they “know” how to do it, many factors distort the science. Those factors are highlighted in the article on barriers to minimizing viral testing.6
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
References
1. “Discussing immunization with vaccine-hesitant parents requires caring, individualized approach,” by Jeff Craven, Pediatric News, Nov. 7, 2018; “How do you get anti-vaxxers to vaccinate their kids? Talk to them – for hours,” by Nadine Gartner, Washington Post, Feb. 19, 2019.
2. “Facebook will consider removing or demoting anti-vaccination recommendations amid backlash,” by Taylor Telford, Washington Post, Feb. 15, 2019.
3. “U.S. regulators have met to discuss imposing a record-setting fine against Facebook for privacy violations,” by Tony Romm and Elizabeth Dwoskin, Washington Post, Jan. 18, 2019; “Report: Facebook, FTC discussing ‘multibillion dollar’ fine,” by Associated Press.
4. “Disinformation and ‘fake news’: Final Report,” House of Commons, Feb. 18, 2019, p. 42, item 139.
5. “Pinterest blocks vaccination searches in move to control the conversation,” by Robert McMillan and Daniela Hernandez, The Wall Street Journal, Feb. 20, 2019.
6. “Barriers to minimizing respiratory viral testing in bronchiolitis: Physician perceptions on testing practices,” by MZ Huang et al. Hospital Pediatrics 2019 Feb. doi: 10.1542/hpeds.2018-0108.
Responding to pseudoscience
The Internet has been a transformative means of transmitting information. Alas, the information is often not vetted, so the effects on science, truth, and health literacy have been mixed. Unfortunately, Facebook spawned a billion-dollar industry that transmits gossip. Twitter distributes information based on celebrity rather than intelligence or expertise.
Listservs and Google groups have allowed small communities to form unrestricted by the physical locations of the members. A listserv for pediatric hospitalists, with 3,800 members, provides quick access to a vast body of knowledge, an extensive array of experience, and insightful clinical wisdom. Discussions on this listserv resource have inspired several of my columns, including this one. The professionalism of the listserv members ensures the accuracy of the messages. Because many of the members work nights, it is possible to post a question and receive five consults from peers, even at 1 a.m. When I first started office practice in rural areas, all I had available was my memory, Rudolph’s Pediatrics textbook, and The Harriet Lane Handbook.
Misinformation has led to vaccine hesitancy and the reemergence of diseases such as measles that had been essentially eliminated. Because people haven’t seen these diseases, they are prone to believing any critique about the risk of vaccines. More recently, parents have been refusing the vitamin K shot that is provided to all newborns to prevent hemorrhagic disease of the newborn, now called vitamin K deficiency bleeding. This bleeding disorder is relatively rare. However, when it occurs, the results can be disastrous, with life-threatening gastrointestinal bleeds and disabling brain hemorrhages. As with vaccine hesitancy, the corruption of scientific knowledge has led to bad outcomes that once were nearly eliminated by modern health care.
Part of being a professional is communicating in a manner that helps parents understand small risks. I compare newborn vitamin K deficiency to the risk of driving the newborn around for the first 30 days of life without a car seat. The vast majority of people will not have an accident in that time and their babies will be fine. But emergency department doctors would see so many preventable cases of injury that they would strongly advocate for car seats. I also note that if the baby has a stroke due to vitamin K deficiency, we can’t catch it early and fix it.
One issue that comes up in the nursery is whether the physician should refuse to perform a circumcision on a newborn who has not received vitamin K. The risk of bleeding is increased further when circumcisions are done as outpatient procedures a few days after birth. When this topic was discussed on the hospitalists’ listserv, most respondents took a hard line and would not perform the procedure. I am more ambivalent because of my strong personal value of accommodating diverse views and perhaps because I have never experienced a severe case of postop bleeding. The absolute risk is low.
The ethical issues are similar to those involved in maintaining or dismissing families from your practice panel if they refuse vaccines. Some physicians think the threat of having to find another doctor is the only way to appear credible when advocating the use of vaccines. Actions speak louder than words. Other physicians are dedicated to accommodating diverse viewpoints. They try to persuade over time. This is a complex subject, and the American Academy of Pediatrics’ position changed 2 years ago to consider dismissal a viable option, as long as the dismissal adheres to relevant state laws that prohibit abandonment of patients.1
Respect for science has diminished since the era when men walked on the moon. There are myriad reasons for this. They exceed what can be covered here. All human endeavors wax and wane in their prestige and credibility. The 1960s was an era of great technological progress in many areas, including space flight and medicine. Since then, the credibility of science has been harmed by mercenary scientists who do research not to illuminate truth but to sow doubt.2 This doubt has impeded educating the public about the risks of smoking, lead paint, and climate change.
Physicians themselves have contributed to this diminished credibility of scientists. Recommendations have been published and later withdrawn in areas such as dietary cholesterol, salt, and saturated fats; estrogen replacement therapy; and screening for prostate and breast cancers. In modern America, even small inconsistencies and errors get blown up into conspiracy plots.
The era of expecting patients to blindly follow a doctor’s orders has long since passed. Parents will search the Internet for answers. The modern physician needs to guide them to good ones.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
References
1. Pediatrics. 2016 Aug. doi: 10.1542/peds.2016-2146.
2. “Doubt is Their Product,” by David Michaels, Oxford University Press, 2008, and “Merchants of Doubt,” by Naomi Oreskes and Erik M. Conway, Bloomsbury Press, 2011.
How much more proof do you need?
One piece of wisdom I was given in medical school was never to be the first or the last to adopt a new treatment. The history of medicine is full of new discoveries that don’t work out as well as the first report suggested. It also is full of long-standing dogmas that later were proven false. This balancing act is part of being a professional and being an advocate for your patient. There is science behind this art. Everett Rogers identified innovators, early adopters, and laggards as new ideas are diffused into practice.1
A 2007 French study2 that investigated oral amoxicillin for early-onset group B streptococcal (GBS) disease is one of the few times in the past 3 decades that I changed my practice based on a single article. It was a large, conclusive study with 222 patients, so it didn’t need the meta-analysis that American research often requires. The research showed that most of what I had been taught about oral amoxicillin was false. Amoxicillin is absorbed well even at doses above 50 mg/kg per day. It is absorbed reliably by full-term neonates, even mildly sick ones. It does adequately cross the blood-brain barrier. The French researchers measured serum levels and proved all this using both scientific principles and a clinical trial.
I have used this oral protocol (10 days total after 2-3 days of IV therapy) on two occasions to treat GBS sepsis when I had the informed consent of the parents and buy-in from the primary care pediatrician to be early adopters. I expected the Red Book would update its recommendations. That didn’t happen.
Meanwhile, I have seen other babies kept for 10 days in the hospital for IV therapy with resultant wasted costs (about $20 million/year in the United States) and income loss for the parents. I’ve treated complications and readmissions caused by peripherally inserted central catheter (PICC) line issues. One baby at home got a syringe of gentamicin given as an IV push instead of a normal saline flush. Mistakes happen at home and in the hospital.
Because late-onset GBS can be acquired environmentally, there always will be recurrences. Unless you are practicing defensive medicine, the issue isn’t the rate of recurrence; it is whether the more invasive intervention of prolonged IV therapy reduces that rate. Then balance any measured reduction (which apparently is zero) against the adverse effects of the invasive intervention, such as PICC line infections. This Bayesian decision making is hard for some risk-averse humans to assimilate. (I’m part Borg.)
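To illustrate that balancing act, here is a minimal sketch of my own; the complication rate below is a placeholder for illustration, not a figure from Coon et al. or any other source.

```python
# A minimal sketch of the risk-benefit arithmetic described above.
# Every rate and weight below is a placeholder assumption for illustration;
# none of these numbers come from Coon et al. or any other study.
def net_harm_of_prolonged_iv(recurrence_reduction, picc_complication_rate,
                             recurrence_harm=1.0, picc_harm=1.0):
    """Expected net harm of prolonged IV therapy relative to a short course:
    complications incurred minus recurrences prevented, each weighted by
    how bad we judge that outcome to be."""
    return picc_complication_rate * picc_harm - recurrence_reduction * recurrence_harm

# If the measured reduction in recurrence is essentially zero and the PICC
# route carries any nonzero complication rate (placeholder: 5%), the prolonged
# course comes out net harmful no matter how the harms are weighted.
print(net_harm_of_prolonged_iv(recurrence_reduction=0.0,
                               picc_complication_rate=0.05))  # 0.05 > 0
```

The point is not the particular numbers; it is that once the measured benefit is zero, any real harm from the more invasive route tips the balance.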
Coon et al.3 have confirmed, using big data, that prolonged IV therapy of uncomplicated, late-onset GBS bacteremia does not generate a clinically significant benefit. It certainly is possible to sow doubt by asking for proof in a variety of subpopulations. Even in the era of intrapartum antibiotic prophylaxis, which has halved the incidence of GBS disease, GBS disease occurs in about 2,000 babies per year in the United States. However, most are treated in community hospitals and are not included in the database used in this new report. With no more than 2-3 cases of GBS bacteremia per hospital per year, a multicenter, randomized controlled trial would be an unprecedented undertaking, would be ethically problematic, and is not realistically happening soon. So these observational data, skillfully acquired and analyzed, are and will remain the best available data.
This new article is in the context of multiple articles over the past decade that have disproven the myth of the superiority of IV therapy. Given the known risks and costs of PICC lines and prolonged IV therapy, the default should be, absent a credible rationale to the contrary, that oral therapy at home is better.
Coon et al. show that, by 2015, 5 of 49 children’s hospitals (10%) were early adopters and had already made the switch to mostly using short treatment courses for uncomplicated GBS bacteremia; 14 of 49 (29%) hadn’t changed at all from the obsolete Red Book recommendation. Given this new analysis, what are you laggards4 waiting for?
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
References
1. “Diffusion of Innovations,” 5th ed. (New York: Free Press, 2003).
2. Eur J Clin Pharmacol. 2007 Jul;63(7):657-62.
3. Pediatrics. 2018;142(5):e20180345.
4. https://en.wikipedia.org/wiki/Diffusion_of_innovations.
Promoting confrontation
The optimist says the glass is half-full. The pessimist says it is half-empty. An engineer says the glass is twice as large as needed to contain the specified amount of fluid. To some people, that mindset makes engineers negative people. We focus on weaknesses and inefficiencies. A chain is only as strong as its weakest link. There is no partial credit when building a bridge. Being 98% right is still wrong.
When I worked as an engineer, critiquing ideas was a daily activity. I am used to conflicting opinions. Industry trains people to be professional and act appropriately when disagreeing with a colleague. Tact is the art of making a point without making an enemy. Engineering has a strong culture of focusing on a problem rather than on personalities. Upper management made it clear that in any turf war, both sides will lose. Academia has a different culture. Turf wars in academia are so bitter because the stakes are so small.
Pediatrics has less confrontation and competitiveness than do other subspecialties. That makes the work environment more pleasant, as long as every other group in the hospital isn’t walking all over you. Pediatricians often view themselves as dedicated to doing what is right for the children, even to the point of martyrdom. Some early pediatric hospitalist programs got into economic trouble because they adopted tasks that benefited the children but that weren’t being performed by other physicians precisely because those tasks were neither valued nor compensated. Learning to say “No” is hard but necessary.
As a clinical ethics consultant, I was consulted when conflict had developed between providers and patients/parents or between different specialties. Ethics consults are rarely about what philosophers would call ethics. They are mostly about miscommunication, empowering voices to be heard, and clarifying values. Practical skills in de-escalation and mediation are more important than either law or philosophy degrees.
There are downsides to avoiding confrontation. Truth suffers. Integrity is lost. Goals become corrupted. I will give two examples. One ED had a five-level triage system. Level 1 was reserved for life-threatening situations such as gunshot wounds and resuscitations. So I was surprised to see a “bili” baby triaged at Level 1. He was a good baby with normal vitals. Admission for phototherapy was reasonable, but the urgency of a bilirubin of 19 mg/dL did not match that of a gunshot wound. A colleague warned me not to even consider challenging the practice. A powerful physician at that institution had made it policy years earlier.
I witnessed a similar dynamic many times at that institution. Residents are even better than 4-year-olds at noticing hypocritical behavior. Once they perceive that the dynamic is political power and not science, they adapt quickly. A couple of days later, I asked a resident if he really thought an IV was necessary for a toddler we were admitting. He replied no, but if he hadn’t put an IV in, the hospital wouldn’t get paid for the admission. To him, that was the unspoken policy. The action didn’t even cause him moral distress. I worry about that much cynicism so early in a career. Cognitive dissonance starts small and slowly creeps its way into everything.
The art of managing conflict is particularly important in pediatric hospital medicine because of its heavy investment in reducing overdiagnosis and overtreatment. Many pediatric hospitalists are located at academic institutions and are more subject to their turf wars than are outpatient colleagues practicing in small groups. The recent conference for pediatric hospital medicine was held in Atlanta, a few blocks from the Center for Civil and Human Rights. That museum evokes powerful images of struggles around the world. My two takeaway lessons: Silence is a form of collaboration. Tyrannical suppression of dissent magnifies suffering.
In poorly managed academic institutions, it can be harmful to one’s career to ask questions, challenge assumptions, and seek truth. A recent report found that the Department of Veterans Affairs health system also has a culture that punishes whistle-blowers. Nationally, politics has become polarized. Splitting, once considered a dysfunctional behavior, has become normalized. So I understand the reluctance to speak up. One must choose one’s battles.
Given the personal and career risks, why confront inaccurate research, wasteful practices, and unjust policies? I believe that there is a balance and a choice each person must make. Canadian engineers wear an iron ring to remind themselves of their professional responsibilities. Doctors wear white coats. Personally, I share a memory with other engineers of my generation. In January 1986, NASA engineers could not convince their managers about a risk. The space shuttle Challenger exploded. I heard about it in the medical school’s cafeteria. So for me, disputation is part of the vocation.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
The optimist says the glass is half-full. The pessimist says it is half-empty. An engineer says the glass is twice as large as needed to contain the specified amount of fluid. To some people, that mindset makes engineers negative people. We focus on weaknesses and inefficiencies. A chain is only as strong as its weakest link. There is no partial credit when building a bridge. 98% right is still wrong.
When I worked as an engineer, critiquing ideas was a daily activity. I am used to conflicting opinions. Industry trains people to be professional and act appropriately when disagreeing with a colleague. Tact is the art of making a point without making an enemy. Engineering has a strong culture of focusing on a problem rather than on personalities. Upper management made it clear that in any turf war, both sides will lose. Academia has a different culture. Turf wars in academia are so bitter because the stakes are so small.
Pediatrics has less confrontation and competitiveness than do other subspecialties. That makes the work environment more pleasant, as long as every other group in the hospital isn’t walking all over you. Pediatricians often view themselves as dedicated to doing what is right for the children, even to the point of martyrdom. Some early pediatric hospitalist programs got into economic trouble because they adopted tasks that benefited the children but that weren’t being performed by other physicians precisely because those tasks were neither valued nor compensated. Learning to say “No” is hard but necessary.
As a clinical ethics consultant, I was consulted when conflict had developed between providers and patients/parents or between different specialties. Ethics consults are rarely about what philosophers would call ethics. They are mostly about miscommunication, empowering voices to be heard and clarifying values. Practical skills in de-escalation and mediation are more important than either law or philosophy degrees.
There are downsides to avoiding confrontation. Truth suffers. Integrity is lost. Goals become corrupted. I will give two examples. One ED had a five-level triage system. Level 1 was reserved for life-threatening situations such as gunshot wounds and resuscitations. So I was surprised to see a “bili” baby triaged at Level 1. He was a good baby with normal vitals. Admission for phototherapy was reasonable, but the urgency of a bilirubin of 19 did not match that of a gunshot wound. A colleague warned me not to even consider challenging the practice. A powerful physician at that institution had made it policy years earlier.
I witnessed a similar dynamic many times at that institution. Residents are even better than 4-year-olds at noticing hypocritical behavior. Once they perceive that the dynamic is political power and not science, they adapt quickly. A couple days later, I asked a resident if he really thought an IV was necessary for a toddler we were admitting. He replied no, but if he hadn’t put an IV in, the hospital wouldn’t get paid for the admission. To him, that was the unspoken policy. The action didn’t even cause him moral distress. I worry about that much cynicism so early in a career. Cognitive dissonance starts small and slowly creeps its way into everything.
The art of managing conflict is particularly important in pediatric hospital medicine because of its heavy investment in reducing overdiagnosis and overtreatment. Many pediatric hospitalists are located at academic institutions and more subject to its turf wars than outpatient colleagues practicing in small groups. The recent conference for pediatric hospital medicine was held in Atlanta, a few blocks from the Center for Civil and Human Rights. That museum evokes powerful images of struggles around the world. My two takeaway lessons: Silence is a form of collaboration. Tyrannical suppression of dissent magnifies suffering.
In poorly managed academic institutions, it can be harmful to one’s career to ask questions, challenge assumptions, and seek truth. A recent report found that the Department of Veterans Affairs health system also has a culture that punishes whistle-blowers. Nationally, politics has become polarized. Splitting, once considered a dysfunctional behavior, has become normalized. So I understand the reluctance to speak up. One must choose one’s battles.
Given the personal and career risks, why confront inaccurate research, wasteful practices, and unjust policies? I believe that there is a balance and a choice each person must make. Canadian engineers wear an iron ring to remind themselves of their professional responsibilities. Doctors wear white coats. Personally, I share a memory with other engineers of my generation. In January 1986, NASA engineers could not convince their managers about a risk. The space shuttle Challenger exploded. I heard about it in the medical school’s cafeteria. So for me, disputation is part of the vocation.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
The optimist says the glass is half-full. The pessimist says it is half-empty. An engineer says the glass is twice as large as needed to contain the specified amount of fluid. To some people, that mindset makes engineers negative people. We focus on weaknesses and inefficiencies. A chain is only as strong as its weakest link. There is no partial credit when building a bridge. 98% right is still wrong.
When I worked as an engineer, critiquing ideas was a daily activity. I am used to conflicting opinions. Industry trains people to be professional and act appropriately when disagreeing with a colleague. Tact is the art of making a point without making an enemy. Engineering has a strong culture of focusing on a problem rather than on personalities. Upper management made it clear that in any turf war, both sides will lose. Academia has a different culture. Turf wars in academia are so bitter because the stakes are so small.
Pediatrics has less confrontation and competitiveness than do other subspecialties. That makes the work environment more pleasant, as long as every other group in the hospital isn’t walking all over you. Pediatricians often view themselves as dedicated to doing what is right for the children, even to the point of martyrdom. Some early pediatric hospitalist programs got into economic trouble because they adopted tasks that benefited the children but that weren’t being performed by other physicians precisely because those tasks were neither valued nor compensated. Learning to say “No” is hard but necessary.
As a clinical ethics consultant, I was consulted when conflict had developed between providers and patients/parents or between different specialties. Ethics consults are rarely about what philosophers would call ethics. They are mostly about miscommunication, empowering voices to be heard and clarifying values. Practical skills in de-escalation and mediation are more important than either law or philosophy degrees.
There are downsides to avoiding confrontation. Truth suffers. Integrity is lost. Goals become corrupted. I will give two examples. One ED had a five-level triage system. Level 1 was reserved for life-threatening situations such as gunshot wounds and resuscitations. So I was surprised to see a “bili” baby triaged at Level 1. He was a good baby with normal vital signs. Admission for phototherapy was reasonable, but the urgency of a bilirubin of 19 mg/dL did not match that of a gunshot wound. A colleague warned me not to even consider challenging the practice. A powerful physician at that institution had made it policy years earlier.
I witnessed a similar dynamic many times at that institution. Residents are even better than 4-year-olds at noticing hypocritical behavior. Once they perceive that the dynamic is political power and not science, they adapt quickly. A couple of days later, I asked a resident if he really thought an IV was necessary for a toddler we were admitting. He replied no, but said that if he hadn’t placed the IV, the hospital wouldn’t get paid for the admission. To him, that was the unspoken policy. The action didn’t even cause him moral distress. I worry about that much cynicism so early in a career. Cognitive dissonance starts small and slowly creeps its way into everything.
The art of managing conflict is particularly important in pediatric hospital medicine because of its heavy investment in reducing overdiagnosis and overtreatment. Many pediatric hospitalists work at academic institutions and are more subject to their turf wars than are outpatient colleagues practicing in small groups. The recent conference for pediatric hospital medicine was held in Atlanta, a few blocks from the Center for Civil and Human Rights. That museum evokes powerful images of struggles around the world. My two takeaway lessons: Silence is a form of collaboration. Tyrannical suppression of dissent magnifies suffering.
In poorly managed academic institutions, it can be harmful to one’s career to ask questions, challenge assumptions, and seek truth. A recent report found that the Department of Veterans Affairs health system also has a culture that punishes whistle-blowers. Nationally, politics has become polarized. Splitting, once considered a dysfunctional behavior, has become normalized. So I understand the reluctance to speak up. One must choose one’s battles.
Given the personal and career risks, why confront inaccurate research, wasteful practices, and unjust policies? I believe that there is a balance and a choice each person must make. Canadian engineers wear an iron ring to remind themselves of their professional responsibilities. Doctors wear white coats. Personally, I share a memory with other engineers of my generation. In January 1986, engineers on the space shuttle program could not convince their managers of a risk. The Challenger exploded. I heard about it in the medical school’s cafeteria. So for me, disputation is part of the vocation.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].
Significant figures: The honesty in being precise
Physicists have strict rules about significant figures. Medical journals lack this professional discipline, and the result is distortions that mislead readers.
Whenever you measure and report something in physics, the precision of the measurement is reflected in how the value is written. Writing a result with more digits implies that a higher precision was achieved. If that precision was not actually achieved, you are falsely claiming skill and accomplishment. You’ve entered the zone of post-truth.
This point was taught by my high school physics teacher, Mr. Gunnar Overgaard, may he rest in peace. Suppose we measured the length of the lab table with a meter stick, repeated the measurement three times, and computed an average. Our table was 243.7 cm long. If we wrote 243.73 or 243.73333, we got a lower grade. A meter stick is marked only in 0.1-cm increments, so the precision of the reported measurement should reflect that limitation.
Researchers in medicine seem to have skipped that lesson in physics lab. In medical journals, the default seems to be to report measurements to two decimal places, such as 16.67%, which grossly overstates the precision when that figure really means that 2 of 12 patients had the finding.
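For readers who like to see the arithmetic, here is a minimal sketch in Python of the kind of rounding I mean; the helper name round_sig and the 2-of-12 example are my own illustrations, not anything a journal prescribes.

    import math

    def round_sig(x: float, sig: int = 2) -> float:
        """Round x to `sig` significant figures (illustrative helper)."""
        if x == 0:
            return 0.0
        exponent = math.floor(math.log10(abs(x)))  # position of the leading digit
        return round(x, sig - 1 - exponent)        # shift the rounding point accordingly

    # 2 of 12 patients: reporting 16.67% claims far more precision than the data support.
    print(round_sig(2 / 12 * 100))  # 17.0 -- i.e., "about 17%"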
This issue of precision came up recently in two papers published about the number of deaths caused by Hurricane Maria in Puerto Rico. The official death toll was 64. This number became a political hot potato when President Trump cited it as if it were evidence that he and the current local government had managed the emergency response better than George W. Bush did for Katrina.
On May 29, 2018, some researchers at the Harvard School of Public Health, a prestigious institution, published an article in The New England Journal of Medicine, a prestigious journal. You would presume that pair could report properly. The abstract said “This rate yielded a total of 4,645 excess deaths during this period (95% CI, 793 to 8,498).”1 Many newspapers published the number 4,645 in a headline. Most newspapers didn’t include all of the scientific mumbo jumbo about bias and confidence intervals.
However, the number 4,645 did not pass the sniff test at many newspapers, including the Washington Post. Their headline began “Harvard study estimates thousands died”2 and that story went on to clarify that “The Harvard study’s statistical analysis found that deaths related to the hurricane fell within a range of about 800 to more than 8,000.” That is one significant digit. Then the fact checkers went to work on it. They didn’t issue a Pinocchio score, but under a headline of “Did exactly 4,645 people die in Hurricane Maria? Nope”3 the fact checkers concluded that “it’s an egregious example of false precision to cite the ‘4,645’ number without explaining how fuzzy the number really is.”
The situation was compounded 3 days later when another news report had the Puerto Rico Department of Public Health putting the death toll at 1,397. Many assumptions go into determining what an excess death is. If the false precision makes it appear the scientists have a political agenda, it casts shade on whether the assumptions they made are objective and unbiased.
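To make the “many assumptions” point concrete, here is a deliberately oversimplified sketch of the arithmetic behind an excess-death estimate. Every input is a placeholder I invented for illustration; none of these numbers come from the New England Journal of Medicine study or from the Puerto Rico government.

    # Hypothetical, oversimplified excess-death arithmetic (placeholder inputs only).
    population = 3_300_000            # assumed island population
    months = 3.4                      # assumed post-storm analysis window, in months

    baseline_rate = 8.8 / 1000 / 12   # assumed pre-storm deaths per person per month
    observed_rate = 10.1 / 1000 / 12  # assumed post-storm deaths per person per month

    excess = (observed_rate - baseline_rate) * population * months
    print(f"excess deaths ~ {excess:.0f}")  # about 1,200; change any assumption and the answer moves

Change the baseline period, the population estimate, or the survey-derived death rate, and the point estimate shifts, which is exactly why quoting it to four digits overstates what is known.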
The result on social media was predictable. Outrage was expressed, as always. Lawsuits were filed. The reputations of all scientists were impugned. The implication is that, depending on your politics, you can choose 64, 1,000, 1,400, or 4,645, and any number is just as true as another. Worse, instead of focusing on the severity of the catastrophe and how we might have responded better then, better now, and with better planning for the future, the debate has focused on alternative facts and fake scientific news. Thanks, Harvard.
So in the spirit of thinking globally but acting locally, what can I do? I love my editor. I have hinted before that rounding the numbers we report makes them easier to read and more scientifically honest. We've done it a few times recently, but now that the Washington Post has done it on a major news story, should this practice become the norm for journalism? If medical journal editors won't handle precision honestly, other journalists must step up. I'm distressed when I review an article that says 14.6% agreed and 79.2% strongly agreed when I know those three-digit percentages really mean 7/48 and 38/48; they should be rounded to two significant figures. And isn’t it easier to read and comprehend that three treatment groups had positive findings of 4%, 12%, and 10% rather than 4.25%, 12.08%, and 9.84%?
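If a custom helper feels like overkill, Python’s built-in string formatting does the same rounding; the fractions below are the ones quoted above, and the comment shows what the loop prints.

    # ".2g" keeps two significant figures; ".1f" reproduces the falsely precise original.
    for num, den in [(7, 48), (38, 48)]:
        pct = 100 * num / den
        print(f"{num}/{den}: {pct:.1f}% -> {pct:.2g}%")

    # 7/48: 14.6% -> 15%
    # 38/48: 79.2% -> 79%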
Scientists using this false precision (and peer reviewers who allow it) need to be corrected. They are trying to sell their research as a Louis Vuitton handbag when we all know it is only a cheap knockoff.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected]
References
1. N Engl J Med. 2018 May 29. doi: 10.1056/NEJMsa1803972.
2. “Harvard study estimates thousands died in Puerto Rico because of Hurricane Maria,” by Arelis R. Hernández and Laurie McGinley, The Washington Post, May 29, 2018.
3. “Did exactly 4,645 people die in Hurricane Maria? Nope.” by Glenn Kessler, The Washington Post, June 1, 2018.
Privacy, propaganda, and polarization
Because I rarely perform surgery, my primary product is providing relevant information delivered with competence, compassion, and commitment. I must make the correct diagnosis and prescribe the correct treatment. I deliver that information with compassion to meet the emotional and spiritual needs of my patients and their parents. Parents can trust that I am committed to providing the best possible care for their child, rather than primarily seeking to enrich myself. After all, I chose pediatrics.
Three years ago I wrote a column about the use of Google as an alternative to physicians. The public can access a massive amount of medical information through the Internet. That information has been growing exponentially. But let’s look at what else has happened in the past 3 years that reflects the difference between my professionalism and the merchandising of the Internet.
I am a medical professional committed to my patients. The purveyors of information via the Internet are primarily dedicated to increased advertising revenue through click baiting and profiling. Apple’s CEO Tim Cook put it this way: “A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product.”
Facebook and Google learn from the content of people’s messages and search terms to build a personal profile that is valuable to advertisers. Soon that profile could include health information. Recently, 300,000 users were tempted to download a survey app via Facebook. The app developer used Facebook tools to scrape profile information not just on those 300,000 users, but on 87,000,000 contacts who did not give explicit consent. This massive leak of privacy was used to target people’s votes. Similar profiles could be used in focused advertising of health care products and services.
I have a professional and legal responsibility to provide accurate information to my patients. Years ago, Internet service providers lobbied for and obtained legal protections saying that they were not responsible for content transmitted over their networks. That idea made some sense when Facebook was primarily sharing information within families and friends. But then Facebook began a news feed without reporters vetting information and without the ethics of journalism and the fourth estate. A generation ago, three television broadcasting companies competed to provide daily evening news programs consisting of four to six stories carefully chosen to be important and relevant. Now a myriad of polarized blogs on unaccountable social media are designed to solicit clicks, spread advertising, and influence shoppers. The result has been a massive, toxic spill of false information into the noosphere. Given the already poor state of health literacy, this fake news contributes to ongoing problems with vaccine hesitancy, worthless cures, and distrust of the medical profession.
It makes the BP Deepwater Horizon oil spill into the Gulf of Mexico look small by comparison. The cleanup of this social media mess is going to be costly and require new technology. Chemical companies used to dump vast quantities of toxic waste and byproducts into rivers and landfills. Superfund sites involve billion-dollar cleanups. Efforts are made to trace where the chemicals came from and to bill the original companies. Under a “cradle to grave” concept, a chemical company cannot avoid liability by giving toxic waste to a fly-by-night waste disposal company. Two years ago, Volkswagen stock lost $15 billion in value overnight when fraud was exposed in diesel emissions testing. Fines and compensation exceeded $25 billion. The stock has since gained it all back. Facebook stock is worth about five times more than Volkswagen’s. So even billion-dollar fines would be a small cost of doing business in social media.
One information technology that has resisted pollution is Wikipedia. Google has been featuring Wikipedia entries in its search results for many years. Now even Facebook is contemplating using Wikipedia to combat fake news. I would not treat a patient solely on the basis of information I found on Wikipedia. But I do find it a convenient way to refresh information I learned in the past and to reassure myself that my memory is neither faulty nor outdated.
One senator said Facebook had problems with privacy and propaganda. He missed a third issue – polarization. Internet apps are designed to affirm people’s biases. By tailoring news feeds and advertisements to users’ prior search terms, likes, sites visited, and friends, Facebook reinforces people’s current beliefs. My own use of Google to search for health information is similarly tainted. Social media also has contaminated the ability of government to solicit public comments on legislative proposals. Similar issues make product reviews unreliable.
Overall, it is clear that the public’s ability to use the Internet to improve their health has been markedly compromised over the past 3 years. Professionalism is important. Three years ago I asked who you were going to believe – me or billionaire Elizabeth Holmes, CEO of Theranos? Since then, one of us has not signed an agreement with the Securities and Exchange Commission involving massive fraud.
Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. He said he had no relevant financial disclosures. Email him at [email protected].