Nutrition and Medical Education
How comfortable are you giving nutritional advice to your patients? When you offer it, are you basing your advice on something you learned during medical school or your training? Was it included in a course devoted to nutrition? Did you learn it later as part of a continuing medical education (CME) course? Or was it just something you picked up from your experience seeing patients (osmosis)? It is very unlikely that a significant portion, or any part for that matter, of your medical training was devoted to nutrition. It certainly wasn’t during my training.
I recently read an interview with Emily M. Broad Leib, JD, faculty director of the Harvard Law School Center for Health Law and Policy Innovation, Cambridge, Massachusetts, who would like to correct that deficiency. She feels doctors need to know more about food and that acquiring that knowledge should be a significant component of their formal training.
In the interview, Leib said that “roughly 86% of physicians report they do not feel adequately trained to answer basic questions on diet or nutrition.” She also notes that while “72% of entering medical students report they believe food is important to health,” less than 50% retained this belief after graduation.
Leib and associates feel they reached a milestone this fall in their efforts to bring nutrition into the mainstream of medical education, publishing a paper that demonstrates “consensus on doctor-approved nutritional standard for medical schools and residency programs.”
36 Recommended Competencies
Curious about what these nutrition experts chose to include in medical training, I decided to drill down into the list of 36 consensus-driven competencies they had agreed upon.
It was an interesting voyage into a forest of redundancies, many of which can be boiled down to having the student demonstrate that he/she understands that what we eat is important to our health and that there is a complex web of relationships connecting our society to the food we consume.
Some of the recommended competencies I found make perfect sense. For example, the student/trainee should be able to take a diet and food history, interpret lab values and anthropometric measurements, and discuss the patient’s weight and diet with sensitivity while keeping in mind his/her own biases about food.
Some other recommendations are more problematic, for example, “performs a comprehensive nutrition-focused physical examination” or “demonstrates knowledge of how to create culinary nutrition SMART [Specific, Measurable, Achievable, Relevant, and Time-Bound] goals for personal use and for patient care” or “provides brief counseling interventions to help patients decrease visceral adiposity or reduce the risk of metabolic syndrome.” Including competencies like these demonstrates a lack of understanding of the time constraints and realities of a primary care physician’s life and training.
Instead of simply reinforcing the prospective physician’s preexisting assumption that food and health are entwined and discussing when and how to consult a nutrition expert, these 36 competencies seem to be an attempt to create fast-tracked part-time dietitians and nutrition advocates out of medical students and trainees who already believe that nutrition is important for health but also have a very full plate of clinical responsibilities ahead of them.
The study that Leib quotes — that 72% of medical students believed food was important to health while after graduation only 50% agreed — doesn’t necessarily mean that professors are preaching that food is unimportant. It is more likely that by the end of medical school the students have seen that food must share the spotlight with numerous other factors that influence their patients’ health.
‘A More Appropriate Focus’
In my experience, diet and lifestyle counseling done well is extremely time consuming and best done by people for whom that is their specialty. A more appropriate focus for a list of nutritional competencies for physicians in training would be for the student to achieve an understanding of when and how to consult a dietitian and then how to support and evaluate the dietitian’s recommendations to the patient.
Finally, I don’t think we can ignore a serious public relations problem that hangs like a cloud over the nutrition advocacy community. It is the same one that casts a shadow on the medical community as well. It is a common perception among the lay public that nutritionists (and physicians) are always changing their recommendations when it comes to food. What is believable? Just think about eggs, red wine, or introducing peanuts to infants, to name just a few. And what about the food pyramids that seem to have been rebuilt every several years? The problem is compounded when some “credentialed” nutritionists and physicians continue to make dietary pronouncements with only a shred of evidence or poorly documented anecdotal observations.
The first of the 36 competencies I reviewed reads: “Provide evidence-based, culturally sensitive nutrition and food recommendations for the prevention and treatment of disease.” When it comes to nutrition, the “evidence” can be tough to come by. The natural experiments in which individuals and populations had extremely limited access to a certain nutrient (eg, scurvy) don’t occur very often. Animal studies don’t always extrapolate to humans. And observational studies concerning diet often have co-factors that are difficult to control and must run over time courses that can tax even the most patient researchers.
I certainly applaud Leib and associates for promoting their primary goal of including more about the relationship between food and health in the medical school and trainee curriculum. But I must voice a caution to keep it truly evidence-based and in a format that acknowledges the realities of the life and education of a primary care provider.
The best nutritional advice I ever received in my training was from an older pediatric professor who suggested that a healthy diet consisted of everything in moderation.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Obesity: A Social Vulnerability
Sometime in the last year or 2 I wrote that, despite my considerable reservations, I had finally come to the conclusion that the American Medical Association’s decision to designate obesity as a disease was appropriate. My rationalization was that the disease label would open more opportunities for funding obesity treatments. However, the explosive growth and popularity of glucagon-like peptide 1 (GLP-1) agonists over the last year has had me rethinking my decision to suppress my long-held reservations about the disease designation.
So, if it’s not a disease, then what should we call it? How do we explain its surge in high-income countries that began in the 1980s? While there are still some folks who see obesity as a character flaw, I think you and I as healthcare providers have difficulty explaining the increased prevalence of obesity as either a global breakdown of willpower or a widespread genetic shift resulting from a burst of radiation from solar flares.
However, if we want to continue our search and finger-pointing, we need a better definition of exactly what obesity is. If we’re going to continue calling it a disease, we have done a pretty sloppy job of creating diagnostic criteria. To be honest, we aren’t doing such a hot job with “long COVID” either.
A recent article in the New York Times makes it clear that I’m not the only physician who is feeling uncomfortable with this lack of diagnostic specificity.
We know that using body mass index (BMI) as a criterion is imprecise. There are healthy individuals with elevated BMIs, and there are others who are carrying an unhealthy amount of fat who have normal BMIs. And there are individuals who have what might appear to be an excess amount of fat who are fit and healthy by other criteria.
Some investigators feel that a set of measurements that includes a waist and/or hip measurement may be a more accurate way of estimating visceral adipose tissue. However, this body roundness index (BRI) currently relies on a tape measurement. Until the technique can be performed by an inexpensive and readily available scanner, the BRI cannot be considered a practical tool for determining obesity.
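For readers curious about the arithmetic behind these two indices, here is a minimal Python sketch of BMI alongside one published formulation of the BRI (Thomas et al., 2013, which models the body as an ellipse built from waist circumference and height). The function names and example measurements are illustrative, not from the article:

```python
import math

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bri(waist_m: float, height_m: float) -> float:
    """Body roundness index, per one published formulation (Thomas et al., 2013).

    Derives an 'eccentricity' for the body from the radius implied by the
    waist circumference and half the standing height, both in meters.
    """
    waist_radius = waist_m / (2 * math.pi)  # radius of the circle the waist tape describes
    eccentricity_sq = 1 - (waist_radius ** 2) / ((0.5 * height_m) ** 2)
    return 364.2 - 365.5 * math.sqrt(eccentricity_sq)

# Illustrative example: a 70 kg, 1.75 m adult with an 85 cm waist
print(round(bmi(70, 1.75), 1))   # BMI ≈ 22.9
print(round(bri(0.85, 1.75), 1)) # BRI ≈ 3.1
```

Note that both numbers come from the same tape-and-scale measurements; the BRI simply weights the waist more heavily, which is why it tracks visceral adiposity better than BMI in the studies its proponents cite.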
Dr. Francesco Rubino, the chair of metabolic and bariatric surgery at King’s College London, England, has been quoted as saying that “if one defines a disease inaccurately, everything that stems from that – from diagnosis to treatment to policies – will be distorted and biased.”
Denmark has been forced to relabel obesity as a risk factor because the disease designation was stressing the financial viability of its healthcare system as more and more patients were being prescribed GLP-1 agonists, sometimes off label. A rationing strategy was resulting in suboptimal treatment of a significant portion of the obese population.
Spearheaded by Dr. Rubino, a Lancet Commission composed of physicians has tasked itself with defining an “evidence-based diagnosis for obesity.” Instead of relying on a single metric such as the BMI or BRI, diagnosing “clinical obesity” would involve a broad array of observations, including a history, physical examination, and standard laboratory and additional testing, “naming signs and symptoms, organ by organ, tissue by tissue, with plausible mechanisms for each one.” In other words, treating each patient as an individual, using evidence-based criteria to make a diagnosis. While likely to be time consuming, this strategy feels like a more scientific approach. I suspect that once clinical obesity is more rigorously defined it could be divided into several subtypes. For example, there would be a few conditions that are genetic, Prader-Willi syndrome being the best known.
However, I think the Lancet Commission’s strategy will find that the majority of individuals who make up this half-century global surge have become clinically obese because they have been unable to adapt to the obesogenic forces in our society, which include diet, autocentricity, and attractive sedentary forms of entertainment, to name just three.
In some cases these unfortunate individuals are more vulnerable because they were born into an economically disadvantaged situation. In other scenarios a lack of foresight and/or political will may have left individuals with no choice but to rely on automobiles to get around. Still others may find themselves living in a nutritional desert because all of the grocery stores have closed.
I recently encountered a descriptor in a story about the Federal Emergency Management Agency that could easily be adapted to describe this large and growing subtype of individuals with clinical obesity. “Social vulnerability” is a measure of how well a community can withstand external stressors that impact human health. For example, the emergency management folks are thinking in terms of natural disasters such as hurricanes, floods, and tornadoes and are asking how well a given community can meet the challenges such a disaster would create.
But the term social vulnerability can easily be applied to individuals living in a society in which unhealthy food is abundant, the infrastructure discourages or outright prevents non-motorized travel, and the temptation of sedentary entertainment options is unavoidable. Fortunately, not every citizen living in an obesogenic society becomes obese. What factors have protected the non-obese individuals from these obesogenic stressors? What are the characteristics of the unfortunate “vulnerables” living in the same society who end up being obese?
It is time to shift our focus away from a poorly defined disease model to one in which we begin looking at our society to find out why we have so many socially vulnerable individuals. The toll of obesity as it is currently defined is many orders of magnitude greater than that of any natural disaster. We have become communities that can no longer withstand the obesogenic stressors, many of which we have created and/or allowed to accumulate over the last century.
Sometime in the last year or 2 I wrote that, despite my considerable reservations, I had finally come to the conclusion that the American Medical Association’s decision to designate obesity as a disease was appropriate. My rationalization was that the disease label would open more opportunities for funding obesity treatments. However, the explosive growth and popularity of glucagon-like peptide 1 (GLP-1) agonists over the last year has had me rethinking my decision to suppress my long-held reservations about the disease designation.
So, if it’s not a disease, then what should we call it? How do we explain its surge in high-income countries that began in the 1980s? While there are still some folks who see obesity as a character flaw, I think you and I as healthcare providers have difficulty explaining the increase prevalence of obesity as either global breakdown of willpower or a widespread genetic shift as the result of burst of radiation from solar flares.
However, if we want to continue our search and finger-pointing we need to have a better definition of exactly what obesity is. If we’re going to continue calling it a disease we have done a pretty sloppy job of creating diagnostic criteria. To be honest, we aren’t doing such a hot job with “long COVID” either.
A recent article in the New York Times makes it clear that I’m not the only physician who is feeling uncomfortable with this lack of diagnostic specificity.
We know that using body mass index (BMI) as a criteria is imprecise. There are healthy individuals with elevated BMIs and there are others who are carrying an unhealthy amount of fat who have normal BMIs. And, there are individuals who have what might appear to be an excess amount of fat who are fit and healthy by other criteria.
Some investigators feel that a set of measurements that includes a waist and/or hip measurement may be a more accurate way of determining visceral adipose tissue. However, this body roundness index (BRI) currently relies on a tape measurement. Until the technique can be preformed by an inexpensive and readily available scanner, the BRI cannot be considered a practical tool for determining obesity.
Dr. Francisco Rubino, the chair of metabolic and bariatric surgery at Kings College in London, England, has been quoted as saying that, “if one defines a disease inaccurately, everything that stems from that – from diagnosis to treatment to policies – will be distorted and biased.”
Denmark has been forced to relabel obesity as a risk factor because the disease designation was stressing the financial viability of their healthcare system as more and more patients were being prescribe GLP-1 agonists, sometimes off label. A rationing strategy was resulting in suboptimal treatment of a significant portion of the obese population.
Spearheaded by Dr. Rubino, a Lancet Commission composed of physicians has tasked itself to define an “evidence-based diagnosis for obesity. Instead of relying on a single metric such as the BMI or BRI, diagnosing “clinical obesity” would involve a broad array of observations including a history, physical examination, standard laboratory and additional testing, “naming signs and symptoms, organ by organ, tissue by tissue, with plausible mechanisms for each one.” In other words, treating each patient as an individual using evidence-based criteria to make a diagnosis. While likely to be time consuming, this strategy feels like a more scientific approach. I suspect once clinical obesity is more rigorously defined it could be divided into several subtypes. For example, there would be a few conditions that were genetic; Prader-Willi syndrome being the best known.
However, I think the Lancet Commission’s strategy will find that the majority of individuals who make up this half-century global surge have become clinically obese because they have been unable to adapt to the obeseogenic forces in our society, which include diet, autocentricity, and attractive sedentary forms of entertainment, to name just three.
In some cases these unfortunate individuals are more vulnerable because there were born into an economically disadvantaged situation. In other scenarios a lack of foresight and/or political will may have left individuals with no other choice but to rely on automobiles to get around. Still others may find themselves living in a nutritional desert because all of the grocery stores have closed.
I recently encountered a descriptor in a story about the Federal Emergency Management Agency which could easily be adapted to describe this large and growing subtype of individuals with clinical obesity. “Social vulnerability” is measure of how well a community can withstand external stressors that impact human health. For example, the emergency management folks are thinking in terms of natural disaster such as hurricanes, floods, and tornadoes and are asking how well a given community can meet the challenges one would create.
But, the term social vulnerability can easily be applied to individuals living in a society in which unhealthy food is abundant, an infrastructure that discourages or outright prevents non-motorized travel, and the temptation of sedentary entertainment options is unavoidable. Fortunately, not every citizen living in an obesogenic society becomes obese. What factors have protected the non-obese individuals from these obeseogenic stressors? What are the characteristics of the unfortunate “vulnerables” living in the same society who end up being obese?
It is time to shift our focus away from a poorly defined disease model to one in which we begin looking at our society to find out why we have so many socially vulnerable individuals. The toll of obesity as it is currently defined is many order of magnitudes greater than any natural disaster. We have become communities that can no longer withstand the its obesogenic stressors many of which we have created and/or allowed to accumulate over the last century.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Sometime in the last year or 2 I wrote that, despite my considerable reservations, I had finally come to the conclusion that the American Medical Association’s decision to designate obesity as a disease was appropriate. My rationalization was that the disease label would open more opportunities for funding obesity treatments. However, the explosive growth and popularity of glucagon-like peptide 1 (GLP-1) agonists over the last year has had me rethinking my decision to suppress my long-held reservations about the disease designation.
So, if it’s not a disease, then what should we call it? How do we explain its surge in high-income countries that began in the 1980s? While there are still some folks who see obesity as a character flaw, I think you and I as healthcare providers have difficulty explaining the increased prevalence of obesity as either a global breakdown of willpower or a widespread genetic shift resulting from a burst of radiation from solar flares.
However, if we want to continue our search and finger-pointing, we need to have a better definition of exactly what obesity is. If we’re going to continue calling it a disease, we have done a pretty sloppy job of creating diagnostic criteria. To be honest, we aren’t doing such a hot job with “long COVID” either.
A recent article in the New York Times makes it clear that I’m not the only physician who is feeling uncomfortable with this lack of diagnostic specificity.
We know that using body mass index (BMI) as a criterion is imprecise. There are healthy individuals with elevated BMIs and there are others who are carrying an unhealthy amount of fat who have normal BMIs. And, there are individuals who have what might appear to be an excess amount of fat who are fit and healthy by other criteria.
Some investigators feel that a set of measurements that includes a waist and/or hip measurement may be a more accurate way of determining visceral adipose tissue. However, this body roundness index (BRI) currently relies on a tape measure. Until the technique can be performed by an inexpensive and readily available scanner, the BRI cannot be considered a practical tool for determining obesity.
Dr. Francisco Rubino, the chair of metabolic and bariatric surgery at King’s College London, England, has been quoted as saying that, “if one defines a disease inaccurately, everything that stems from that – from diagnosis to treatment to policies – will be distorted and biased.”
Denmark has been forced to relabel obesity as a risk factor because the disease designation was stressing the financial viability of their healthcare system as more and more patients were being prescribed GLP-1 agonists, sometimes off label. A rationing strategy was resulting in suboptimal treatment of a significant portion of the obese population.
Spearheaded by Dr. Rubino, a Lancet Commission composed of physicians has tasked itself to define an “evidence-based diagnosis for obesity.” Instead of relying on a single metric such as the BMI or BRI, diagnosing “clinical obesity” would involve a broad array of observations including a history, physical examination, standard laboratory and additional testing, “naming signs and symptoms, organ by organ, tissue by tissue, with plausible mechanisms for each one.” In other words, treating each patient as an individual using evidence-based criteria to make a diagnosis. While likely to be time consuming, this strategy feels like a more scientific approach. I suspect once clinical obesity is more rigorously defined it could be divided into several subtypes. For example, there would be a few conditions that were genetic, Prader-Willi syndrome being the best known.
However, I think the Lancet Commission’s strategy will find that the majority of individuals who make up this half-century global surge have become clinically obese because they have been unable to adapt to the obesogenic forces in our society, which include diet, autocentricity, and attractive sedentary forms of entertainment, to name just three.
In some cases these unfortunate individuals are more vulnerable because they were born into an economically disadvantaged situation. In other scenarios a lack of foresight and/or political will may have left individuals with no other choice but to rely on automobiles to get around. Still others may find themselves living in a nutritional desert because all of the grocery stores have closed.
I recently encountered a descriptor in a story about the Federal Emergency Management Agency that could easily be adapted to describe this large and growing subtype of individuals with clinical obesity. “Social vulnerability” is a measure of how well a community can withstand external stressors that impact human health. For example, the emergency management folks are thinking in terms of natural disasters such as hurricanes, floods, and tornadoes and are asking how well a given community can meet the challenges such an event would create.
But the term social vulnerability can easily be applied to individuals living in a society in which unhealthy food is abundant, the infrastructure discourages or outright prevents non-motorized travel, and the temptation of sedentary entertainment options is unavoidable. Fortunately, not every citizen living in an obesogenic society becomes obese. What factors have protected the non-obese individuals from these obesogenic stressors? What are the characteristics of the unfortunate “vulnerables” living in the same society who end up being obese?
It is time to shift our focus away from a poorly defined disease model to one in which we begin looking at our society to find out why we have so many socially vulnerable individuals. The toll of obesity as it is currently defined is many orders of magnitude greater than that of any natural disaster. We have become communities that can no longer withstand our obesogenic stressors, many of which we have created and/or allowed to accumulate over the last century.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Preventing Pediatric Migraine
I suspect you all have some experience with childhood migraine. It can mean a painful several hours for the patient, arriving often without warning, with recurrences spaced months or sometimes even years apart. It may be accompanied by vomiting, which in some cases overshadows the severity of the headache. It can result in lost days from school and ruin family activities. It can occur so infrequently that the family can’t recall accurately when the last episode happened. In some ways it is a different animal than the adult version.
Most of the pediatric patients with migraine I have seen have experienced attacks that were occurring so infrequently that the families and I seldom discussed medication as an option. Back then imipramine was the only choice. However, currently there are more than a half dozen medications and combinations that have been tried. Recently a review of 45 clinical trials of these medications was published in JAMA Network Open.
I will let you review for yourself the details of these Iranian investigators’ network meta-analysis, but the bottom line is that some medications were associated with a reduction in migraine frequency. Others were associated with reduced headache intensity. “However, no treatments were associated with significant improvements in quality of life or reduction of the duration of migraine attacks.”
Obviously, this paper illustrates clearly that we have not yet discovered the medicinal magic bullet for pediatric migraine prophylaxis. This doesn’t surprise me. After listening to scores of families tell their migraine stories, it became apparent to me that there was often a pattern in which the child’s headache had arrived after a period of acute sleep deprivation. For example, a trip to an amusement park in which travel or excitement may have resulted in the child going to bed later and/or getting up earlier. By afternoon the child’s reserves of something (currently unknown) were depleted to a point that the headache and/or vomiting struck.
Because these episodes were often so infrequent, separated by months, taking a history that demonstrated a recurring pattern could require considerable patience on the part of the family and the provider, even for a physician like myself who believes that better sleep is the answer for everything. However, once I could convince a family of the connection between the sleep deprivation and the headaches, they could often recall other episodes in the past that substantiated my explanation.
In some cases there was no obvious history of acute sleep deprivation, or at least it was so subtle that even a history taker with a sleep obsession couldn’t detect it. However, in these cases I could usually elicit a history of chronic sleep deprivation. For example, falling asleep instantly on automobile rides, difficulty with waking in the morning, or unhealthy bedtime routines. With this underlying vulnerability of chronic sleep deprivation, a slightly more exciting or vigorous day was all that was necessary to trigger the headache.
For those of you who don’t share my contention that childhood migraine is usually the result of sleep deprivation, consider its similarity to an epileptic seizure, which can also be triggered by fatigue. Both events are usually followed by a deep sleep from which the child wakes refreshed and symptom free.
I think it is interesting that this recent meta-analysis could find no quality-of-life benefit for any of the medications. The explanation may be that the child with migraine already had a somewhat diminished quality of life as a result of the sleep deprivation, either acute or chronic.
When speaking with parents of migraine sufferers, I would tell them that once the headache had started there was little I had to offer to forestall the inevitable pain and vomiting. Certainly not in the form of an oral medication. While many adults will have an aura that warns them of the headache onset, I have found that most children don’t describe an aura. It may be they simply lack the ability to express it. Occasionally an observant parent may detect pallor or a behavior change that indicates a migraine is beginning. On rare occasions a parent may be able to abort the attack by quickly getting the child to a quiet, dark, and calm environment.
Although this recent meta-analysis of treatment options is discouraging, it may be providing a clue to effective prophylaxis. Some of the medications that decrease the frequency of the attacks may be doing so because they improve the patient’s sleep patterns. Those that decrease the intensity of the pain are probably working on a pain pathway that is not specific to migraine.
Continuing the search for a prophylactic medication is a worthy goal, particularly for those patients whose migraines are debilitating. However, based on my experience, enhanced by my bias, the safest and most effective prophylaxis results from increasing the family’s awareness of the role that sleep deprivation plays in the illness. Even when the family buys into the message and attempts to avoid situations that will tax their vulnerable children, parents will need to accept that sometimes stuff happens even though siblings and peers may be able to tolerate the situation. Spontaneous activities can converge on a day when for whatever reason the migraine-prone child is overtired and the headache and vomiting will erupt.
A lifestyle change is always preferable to a pharmacological intervention. However, that doesn’t mean it is always easy to achieve.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Nonalcoholic Beer and Underage Drinking
Several months ago in a letter about healthcare providers and the decision to use alcohol and other mind-altering substances on the job, I waxed enthusiastically about the new wave of no alcohol (NA) and zero (00) alcohol beers that have come on the market. In the last 2 years our local grocery store’s cooler space for nonalcoholic beer has grown from less than 24 inches to something approaching the height of the average sixth grader.
In a bold act of chivalry at the beginning of the pandemic I accepted the mantle of designated grocery shopper and over the last 3 years have become uncommonly proud of my ability to bring home the groceries efficiently and cost effectively, without catching COVID in the process. I have developed a sixth sense of choosing which human checker/bagger combination is fastest or whether the self-checkout is the way to go.
For obvious reasons the human checkers don’t ask for my ID when I am buying adult beverages. However, the self-check register freezes up instantly when I scan my 12-pack of Run Wild nonalcoholic. This necessitates a search for the MIA store person assigned to patrol the self-check corral, ever on the lookout for shoplifters, underage drinkers, and other generally shifty looking characters.
When I find one of the grocery store detectives (who is likely to have been a former patient), I say: “You know, this doesn’t have any alcohol in it.” They invariably reply with a shrug. “I know. But, the rules are the rules.” Occasionally, they may add: “It doesn’t make sense, does it?”
At first blush checking IDs for a nonalcoholic beverage may sound dumb, certainly to someone who is just a few years on either side of the legal drinking age. Why are we trying to protect some crazy teenager from the futility of getting high on a six-pack of something that at worst will make him spend most of the next couple of hours peeing?
But, there is concern in some corners that nonalcoholic drinks pose a significant threat to teenagers. Two PhDs at Stanford University have recently published a paper in which they worry that the dramatic rise in US sales of nonalcoholic drinks from 15% to 30% since 2018 may be socializing “users of alcohol drinking experiences by exposing them to the taste, look, and even brands of alcoholic beverages”.
Is there evidence to support their concern? I could only find one brief report in the Japanese literature that states that among young people “who experienced the nonalcoholic beverage intake, interest in or motivation for drinking alcoholic beverages, and/or smoking is higher than [among] those who did not.” The study didn’t appear to clearly separate the exposure in a family setting from the actual intake.
Beer is an acquired taste. If someone offered you your first taste of beer after a hot-weather set of tennis, most of you would reject it and ask for water or lemonade. I can recall my first taste of beer. For some reason my father thought that, at age 11 or 12, I might like to try some from his glass. I’m not sure of his motivation, but he tried the same thing with oysters. I didn’t drink beer again until I was 16, motivated at that time by a group dynamic. The oyster trial, however, backfired on him and from then on he had to share his coveted dozen with me. Alcohol, unless heavily disguised by a mixer, is also not a taste that most young people find appealing.
It is unlikely that the average thrill-seeking teenager is going to ask his older-appearing buddy with a fake ID to buy him some nonalcoholic beer. Nor would he go to the effort or risk of acquiring his own fake ID just to see how it tastes. It just doesn’t compute, especially to a self-check corral patroller.
I guess one could envision a scenario in which a teenager wanting to fit in with the fast crowd would ask a trusted adult (or clueless parent) to buy him some nonalcoholic beer to bring to a party. He is running a serious risk of being laughed at by his friends if they find he’s drinking the fake stuff. It also seems unlikely that a parent would buy nonalcoholic beer to introduce his teenager to the taste of beer.
So, although it runs counter to my usual commitment to evidence-based decisions, making it difficult for adolescents to buy nonalcoholic beverages feels like the right thing to do. As long as alcoholic and nonalcoholic beverages share the same display space and are packaged in nearly identical containers, there is ample opportunity for confusion. Recent evidence suggesting that even small amounts of alcohol increase some health risks should strengthen our resolve to minimize that confusion.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Several months ago in a letter about healthcare providers and the decision to use alcohol and other mind-altering substances on the job, I waxed enthusiastically about the new wave of no alcohol (NA) and zero (00) alcohol beers that have come on the market. In the last 2 years our local grocery store’s cooler space for nonalcoholic beer has grown from less than 24 inches to something approaching the height of the average sixth grader.
In a bold act of chivalry at the beginning of the pandemic I accepted the mantle of designated grocery shopper and over the last 3 years have become uncommonly proud of my ability to bring home the groceries efficiently and cost effectively, without catching COVID in the process. I have developed a sixth sense of choosing which human checker/bagger combination is fastest or whether the self-checkout is the way to go.
For obvious reasons the human checkers don’t ask for my ID when I am buying adult beverages. However, the self-check register freezes up instantly when I scan my 12-pack of Run Wild nonalcoholic. This necessitates a search for the MIA store person assigned to patrol the self-check corral, ever on the lookout for shoplifters, underage drinkers, and other generally shifty looking characters.
Several months ago, in a letter about healthcare providers and the decision to use alcohol and other mind-altering substances on the job, I waxed enthusiastic about the new wave of no-alcohol (NA) and zero-alcohol (0.0) beers that have come on the market. In the last 2 years our local grocery store’s cooler space for nonalcoholic beer has grown from less than 24 inches to something approaching the height of the average sixth grader.
In a bold act of chivalry at the beginning of the pandemic I accepted the mantle of designated grocery shopper and over the last 3 years have become uncommonly proud of my ability to bring home the groceries efficiently and cost effectively, without catching COVID in the process. I have developed a sixth sense of choosing which human checker/bagger combination is fastest or whether the self-checkout is the way to go.
For obvious reasons the human checkers don’t ask for my ID when I am buying adult beverages. However, the self-check register freezes up instantly when I scan my 12-pack of Run Wild nonalcoholic. This necessitates a search for the MIA store person assigned to patrol the self-check corral, ever on the lookout for shoplifters, underage drinkers, and other generally shifty looking characters.
When I find one of the grocery store detectives (who is likely to have been a former patient), I say: “You know, this doesn’t have any alcohol in it.” They invariably reply with a shrug. “I know. But, the rules are the rules.” Occasionally, they may add: “It doesn’t make sense, does it?”
At first blush checking IDs for a nonalcoholic beverage may sound dumb, certainly to someone who is just a few years on either side of the legal drinking age. Why are we trying to protect some crazy teenager from the futility of getting high on a six-pack of something that at worst will make him spend most of the next couple of hours peeing?
But, there is concern in some corners that nonalcoholic drinks pose a significant threat to teenagers. Two PhDs at Stanford University have recently published a paper in which they worry that the dramatic rise in US sales of nonalcoholic drinks from 15% to 30% since 2018 may be socializing “users of alcohol drinking experiences by exposing them to the taste, look, and even brands of alcoholic beverages.”
Is there evidence to support their concern? I could only find one brief report in the Japanese literature that states that among young people “who experienced the nonalcoholic beverage intake, interest in or motivation for drinking alcoholic beverages, and/or smoking is higher than [among] those who did not.” The study didn’t appear to clearly separate the exposure in a family setting from the actual intake.
Beer is an acquired taste. If you were offered your first taste of beer after a hot-weather set of tennis, most of you would reject it and ask for water or lemonade. I can recall my first taste of beer. For some reason my father thought that, at age 11 or 12, I might like to try some from his glass. I’m not sure of his motivation, but he tried the same thing with oysters. I didn’t drink beer again until I was 16, motivated at that time by a group dynamic. The oyster trial, however, backfired on him, and from then on he had to share his coveted dozen with me. Alcohol, unless heavily disguised by a mixer, is also not a taste that most young people find appealing.
It is unlikely that the average thrill-seeking teenager is going to ask his older-appearing buddy with a fake ID to buy him some nonalcoholic beer. Nor would he go to the effort or risk of acquiring his own fake ID just to see how it tastes. It just doesn’t compute, especially to a self-check corral patroller.
I guess one could envision a scenario in which a teenager wanting to fit in with the fast crowd would ask a trusted adult (or clueless parent) to buy him some nonalcoholic beer to bring to a party. He is running a serious risk of being laughed at by his friends if they find he’s drinking the fake stuff. It also seems unlikely that a parent would buy nonalcoholic beer to introduce his teenager to the taste of beer.
So, although it runs counter to my usual commitment to evidence-based decisions, making it difficult for adolescents to buy nonalcoholic beverages feels like the right thing to do. As long as alcoholic and nonalcoholic beverages share the same display space and are packaged in nearly identical containers, there is ample opportunity for confusion. Recent evidence suggesting that even small amounts of alcohol increase some health risks should strengthen our resolve to minimize that confusion.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
The Patient Encounter Is Changing
Over the last few decades the patient encounter has changed dramatically. Most recently fueled by the COVID pandemic, face-to-face events between patients and providers have become less frequent. The shift began years before with the slow acceptance of telemedicine by third-party payers.
Still, among the growing collection of options, I think it is fair to say that a live face-to-face encounter remains the gold standard in the opinions of both patients and providers. Patients may have become increasingly critical and vocal when they feel their provider appears rushed or is overly focused on the desktop computer screen. However, given all of the options, I suspect that for the moment patients feel a face-to-face meeting continues to offer them the best chance of being heard and having their concerns answered.
Even when the image on the video screen is sharp and the audio feed is crystal clear, I bet most providers feel they can learn more about the patient during a live face-to-face encounter than during a Zoom-style encounter.
Nonetheless, there are hints that face-to-face visits may be losing their place in the pantheon of patient-provider encounters. A recent study from England found that a significant number of patients were more forthcoming in reporting their preferences for social care-related quality of life when they were surveyed by internet rather than face-to-face. It is unclear what was behind this observation; however, it may be that patients were embarrassed and viewed these questions about their social neediness as too sensitive to share face-to-face.
There is ample evidence of situations in which the internet can provide a level of anonymity that emboldens the user to say things that are cruel and hurtful, using words they might be afraid to voice in a live setting. This license to act in an uncivil manner is behind much of the harm generated by chat rooms and other social media sites. While in these cases the ability to hide behind the video screen is a negative, this study from England suggests that we should be looking for more opportunities to use this emboldening feature with certain individuals and populations who may be intimidated during a face-to-face encounter. It is likely that a hybrid approach, tailored to the individual patient, will be the most beneficial strategy.
One advantage of a face-to-face visit is that each participant can read the body language of the other. This, of course, can be a disadvantage for the provider who has failed to master the art of disguising his “I’m running behind” stress level, when he should be replacing it with an “I’m ready to listen” posture.
Portals have opened up a whole other can of worms, particularly when the provider has failed to clearly delineate what sort of questions are appropriate for an online forum, who will be providing the answer, and roughly when that answer will arrive. It may take several trips up the learning curve for patients and providers to develop a style of writing that makes optimal use of the portal format and fits the needs of the practice and the patients.
Regardless of what kind of visit platform we are talking about, a lot hinges on the provider’s choice of words. I recently reviewed some of the work of Jeffrey D. Robinson, PhD, a professor of communication at Portland State University, Portland, Oregon. He offers the example of the difference between “some” and “any.” When the patient was asked “Is there something else you would like to address today?” almost 80% of the patient’s unmet questions were addressed. However, when the question was “Is there anything else ...” very few of the patient’s unmet questions were addressed. Dr. Robinson has also found that posing the question early in the visit rather than at the end improves the chances of having the patient’s unmet concerns addressed.
I suspect that the face-to-face patient encounter will survive, but it will continue to lose market share as other platforms emerge. We can be sure there will be change, and we need look no further than generative AI for the next step. A well-crafted question could help the patient and the provider choose the most appropriate patient encounter format given the patient’s demographic, chief complaint, and prior history, and match this with the provider’s background and strengths.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Burnout and Vacations
How many weeks of vacation do you take each year? Does it feel like enough? What prevents you from taking more time off? Is it a contractual obligation to your employer? Or a concern about the lack of income while you are away? Is it the difficulty of finding coverage for your patient care responsibilities? How much of it is the dread of facing your unattended or poorly attended EHR inbox when you return?
A recent survey of more than 3000 US physicians found that almost 60% took 3 weeks or less of vacation per year. The investigators also learned that 70% of the respondents did patient-related tasks while they were on vacation and less than half had full EHR coverage while they were away. Not surprisingly, providers who expressed concerns about finding someone to cover clinical responsibilities, or who had financial concerns, were less likely to take more than 3 weeks’ vacation.
As one might hope, taking more than 3 weeks’ vacation and having full EHR coverage were associated with decreased rates of burnout. On the other hand, spending more than 30 minutes per day doing patient-related work while on vacation was associated with higher rates of burnout.
In their conclusion, the authors suggest that if we hope to reduce physician burnout, employers should introduce system-level initiatives to ensure that physicians take adequate vacation and have adequate coverage for their clinical responsibilities — including EHR inbox management.
I will readily admit that I was one of those physicians who took less than 3 weeks of vacation and can’t recall ever taking more than 2 weeks. Since most of our vacations were staycations, I would usually round on the newborns first thing in the morning when I was in town to keep the flow of new patients coming into the practice.
I’m sure there was some collateral damage to my family, but our children continue to reassure me that they weren’t envious of their peers who went away on “real” vacations. As adults, two of them take their families on the kind of vacations that make me envious. The third has married someone who shares what I might call a “robust commitment” to showing up in the office. But they seem to be a happy couple.
At the root of my vacation style was an egotistical delusion that there weren’t any clinicians in the community who could look after my patients as well as I did. Unfortunately, I had done little to discourage those patients who shared my distorted view.
I was lucky to have spent nearly all my career without the added burden of an EHR inbox. However, in the lead-up to our infrequent vacations, the rush to tie up the loose ends of those patients for whom we had not achieved diagnostic closure was stressful and time-consuming. Luckily, as a primary care pediatrician, most of their problems were short lived. But, leaving the ship battened down could be exhausting.
I can fully understand why the physicians who are taking less than 3 weeks’ vacation and continue to be burdened by patient-related tasks while they are “away” are more likely to experience burnout. However, I wonder why I seemed to have been resistant considering my vacation style, which the authors of the above-mentioned article feel would have placed me at high risk.
I think the answer may lie in my commitment to making decisions that allowed me to maintain equilibrium in my life. In other words, if there were things in my day-to-day activities so taxing or distasteful that I was counting the hours and days until I could escape them, then I needed to make the necessary changes promptly and not count on a vacation to repair the accumulating damage. That may have required cutting back some responsibilities, or it may have meant that I needed to be in better mental and physical shape to be able to maintain that equilibrium. Maybe it was more sleep, more exercise, less television, or not investing as much in time-wasting meetings. This doesn’t mean that I didn’t have bad days. Stuff happens. But if I was putting together two or three bad days a week, something had to change. A vacation wasn’t going to solve the inherent or systemic problems that were making day-to-day life so intolerable that I needed to escape for some respite.
In full disclosure, I will share that at age 55 I took a leave of 2½ months and, with my wife and another couple, bicycled across America. This was a goal I had harbored since childhood, and in anticipation I had over several decades banked considerable coverage equity by doing extra coverage for other providers to minimize my guilt feelings at being away. This was not an escape from a job I didn’t enjoy going to every day. It was an exercise in goal fulfillment.
I think the authors of this recent study should be applauded for providing some numbers to support the obvious. However, encouraging a clinician to take a bit more vacation may help only so much. Having someone to properly manage the EHR inbox would do a lot more. If your coverage is telling everyone to “Wait until Dr. Away has returned,” it is only going to make things worse.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
How many weeks of vacation do you take each year? Does it feel like enough? What prevents you from taking more time off? Is it a contractual obligation to your employer? Or a concern about the lack of income while your are away? Is it the difficulty of finding coverage for your patient care responsibilities? How much of it is the dread of facing your unattended or poorly attended EHR box when you return?
A recent survey of more than 3000 US physicians found that almost 60% took 3 weeks or less vacation per year? The investigators also learned that 70% of the respondents did patient-related tasks while they were on vacation and less than half had full EHR coverage while they were away. Not surprisingly, providers who expressed concerns about finding someone to cover clinical responsibilities and financial concerns were less likely to take more than 3 weeks’ vacation.
As one might hope, taking more than 3 weeks’ vacation and having full EHR coverage were associated with decreased rates of burnout. On the other hand, spending more than 30 minutes per day doing patient-related work while on vacation was associated with higher rates of burnout.
In their conclusion, the authors suggest that if we hope to reduce physician burnout, employers should introduce system-level initiatives to ensure that physicians take adequate vacation and have adequate coverage for their clinical responsibilities — including EHR inbox management.
I will readily admit that I was one of those physicians who took less than 3 weeks of vacation and can’t recall ever taking more than 2 weeks. Since most of our vacations were staycations, I would usually round on the newborns first thing in the morning when I was in town to keep the flow of new patients coming into the practice.
I’m sure there was some collateral damage to my family, but our children continue to reassure me that they weren’t envious of their peers who went away on “real” vacations. As adults two of them take their families on the kind of vacations that make me envious. The third has married someone who shares, what I might call, a “robust commitment” to showing up in the office. But they seem to be a happy couple.
At the root of my vacation style was an egotistical delusion that there weren’t any clinicians in the community who could look after my patients as well as I did. Unfortunately, I had done little to discourage those patients who shared my distorted view.
I was lucky to have spent nearly all my career without the added burden of an EHR inbox. However, in the lead up to our infrequent vacations, the rush to tie up the loose ends of those patients for whom we had not achieved diagnostic closure was stressful and time consuming. Luckily, as a primary care pediatrician most of their problems were short lived. But, leaving the ship battened down could be exhausting.
I can fully understand why the physicians who are taking less than 3 weeks’ vacation and continue to be burdened by patient-related tasks while they are “away” are more likely to experience burnout. However, I wonder why I seemed to have been resistant considering my vacation style, which the authors of the above-mentioned article feel would have placed me at high risk.
I think the answer may lie in my commitment to making decisions that allowed me to maintain equilibrium in my life. In other words, if there were things in my day-to-day activities that were so taxing or distasteful that I am counting the hours and days until I can escape them, then I needed to make the necessary changes promptly and not count on a vacation to repair the accumulating damage. That may have required cutting back some responsibilities or it may have meant that I needed to be in better mental and physical shape to be able to maintain that equilibrium. Maybe it was more sleep, more exercise, less television, not investing as much in time-wasting meetings. This doesn’t mean that I didn’t have bad days. Stuff happens. But if I was putting together two or three bad days a week, something had to change. A vacation wasn’t going solve the inherent or systemic problems that are making day-to-day life so intolerable that I needed to escape for some respite.
In full disclosure, I will share that at age 55 I took a leave of 2 1/2 months and with my wife and another couple bicycled across America. This was a goal I had harbored since childhood and in anticipation over several decades had banked considerable coverage equity by doing extra coverage for other providers to minimize my guilt feelings at being away. This was not an escape from I job I didn’t enjoy going to everyday. It was an exercise in goal fulfillment.
I think the authors of this recent study should be applauded for providing some numbers to support the obvious. However,
Encouraging a clinician to take a bit more vacation may help. But, having someone to properly manage the EHR inbox would do a lot more. If your coverage is telling everyone to “Wait until Dr. Away has returned” it is only going to make things worse.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
How many weeks of vacation do you take each year? Does it feel like enough? What prevents you from taking more time off? Is it a contractual obligation to your employer? Or a concern about the lack of income while your are away? Is it the difficulty of finding coverage for your patient care responsibilities? How much of it is the dread of facing your unattended or poorly attended EHR box when you return?
A recent survey of more than 3000 US physicians found that almost 60% took 3 weeks or less of vacation per year. The investigators also learned that 70% of the respondents did patient-related tasks while they were on vacation and that less than half had full EHR coverage while they were away. Not surprisingly, providers who expressed concerns about finding someone to cover clinical responsibilities, or about finances, were less likely to take more than 3 weeks’ vacation.
As one might hope, taking more than 3 weeks’ vacation and having full EHR coverage were associated with decreased rates of burnout. On the other hand, spending more than 30 minutes per day doing patient-related work while on vacation was associated with higher rates of burnout.
In their conclusion, the authors suggest that if we hope to reduce physician burnout, employers should introduce system-level initiatives to ensure that physicians take adequate vacation and have adequate coverage for their clinical responsibilities — including EHR inbox management.
I will readily admit that I was one of those physicians who took less than 3 weeks of vacation and can’t recall ever taking more than 2 weeks. Since most of our vacations were staycations, I would usually round on the newborns first thing in the morning when I was in town to keep the flow of new patients coming into the practice.
I’m sure there was some collateral damage to my family, but our children continue to reassure me that they weren’t envious of their peers who went away on “real” vacations. As adults, two of them take their families on the kind of vacations that make me envious. The third has married someone who shares what I might call a “robust commitment” to showing up in the office. But they seem to be a happy couple.
At the root of my vacation style was an egotistical delusion that there weren’t any clinicians in the community who could look after my patients as well as I did. Unfortunately, I had done little to discourage those patients who shared my distorted view.
I was lucky to have spent nearly all of my career without the added burden of an EHR inbox. However, in the lead-up to our infrequent vacations, the rush to tie up the loose ends for those patients for whom we had not achieved diagnostic closure was stressful and time-consuming. Luckily, as a primary care pediatrician, most of my patients’ problems were short-lived. But leaving the ship battened down could be exhausting.
I can fully understand why the physicians who are taking less than 3 weeks’ vacation and continue to be burdened by patient-related tasks while they are “away” are more likely to experience burnout. However, I wonder why I seemed to have been resistant considering my vacation style, which the authors of the above-mentioned article feel would have placed me at high risk.
I think the answer may lie in my commitment to making decisions that allowed me to maintain equilibrium in my life. In other words, if there were things in my day-to-day activities that were so taxing or distasteful that I was counting the hours and days until I could escape them, then I needed to make the necessary changes promptly and not count on a vacation to repair the accumulating damage. That may have required cutting back on some responsibilities, or it may have meant that I needed to be in better mental and physical shape to maintain that equilibrium. Maybe it was more sleep, more exercise, less television, or not investing as much time in time-wasting meetings. This doesn’t mean that I didn’t have bad days. Stuff happens. But if I was putting together two or three bad days a week, something had to change. A vacation wasn’t going to solve the inherent or systemic problems that were making day-to-day life so intolerable that I needed to escape for some respite.
In full disclosure, I will share that at age 55 I took a leave of 2 1/2 months and, with my wife and another couple, bicycled across America. This was a goal I had harbored since childhood, and in anticipation I had over several decades banked considerable coverage equity by doing extra coverage for other providers to minimize my guilt feelings at being away. This was not an escape from a job I didn’t enjoy going to every day. It was an exercise in goal fulfillment.
I think the authors of this recent study should be applauded for providing some numbers to support the obvious. However, encouraging a clinician to take a bit more vacation may help only so much. Having someone to properly manage the EHR inbox would do a lot more. If your coverage is telling everyone to “Wait until Dr. Away has returned,” it is only going to make things worse.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Playing the ‘Doctor’ Card: A Lesson in Three Hypotheticals
Scenario I. Let’s say you wake with a collection of symptoms. None of them is concerning, but the combination seems a bit unusual, or at least confusing. You would like to speak to your PCP, whom you have known for a long time, and ask for either reassurance or advice on whether you should make an appointment. However, your experience with the front office’s organization tells you that the quick 4-minute conversation you’re looking for is not going to happen easily.
You have that robotic phone message memorized. It begins by suggesting that if you think you have an emergency you should call 911. Then it reminds you that if you have a question about COVID you should press “2,” which will take you to a recorded message and eventually link you to a triage nurse if the recording doesn’t answer your questions. If you need a prescription refill, you should press “3.” If you are a doctor’s office and wish to speak to the doctor, press “4.” If you know you need an appointment, press “5.” And finally, if you have a question, press “6” and leave a message, and a nurse will get back to you before the end of the day.
The good news is that your PCP’s office is good to its word and will return your call the same day, but the bad news is that it is likely to be well into the afternoon. And, while you don’t consider your symptoms life-threatening, you don’t want getting an answer to be an exercise in schedule disruption.
You were a doctor before you retired and you still have an “office.” It’s really more of a combination den and studio. So, technically you are a doctor’s office wanting to speak to the doctor. And, you know that pressing “4” will get you the answer you are looking for in a matter of minutes.
Scenario II. Your spouse, or your aunt, or the elderly widow next door asks you to accompany her to an upcoming doctor’s visit because she has been having trouble understanding the physician’s plan regarding further diagnosis and possible treatment. She believes having you along as a kind of interpreter/advocate would be a big help. Do you agree, and do you make any stipulations?
Scenario III. Your PCP has referred you to a specialist. You are filling out the previsit form(s). Do you list your occupation as “retired physician” or just “retired”? Or just leave it blank?
Whether you deserve it or not, graduating from medical school has conferred on you a specialness in the eyes of many people. It is assumed you are smarter than the average bear and that in taking the Hippocratic oath you have joined an elite club. And with that membership come some special, undefined privileges.
But with that specialness there are some downsides. For example, in some states being a physician once allowed you to have a license plate with “MD” in the number sequence. Sometimes that helped you avoid the occasional parking ticket. That is, until folks realized the “MD” made you a target for car thieves and drug seekers who mistakenly believe we all carry drugs in our glove compartments.
So what about that first scenario? Do you press “4” to jump to the head of the queue and avoid the inconvenience of having to wait for a reasonably timely response from your PCP? After all, you are fellow physicians and you’ve known her for a decade or two. If you are retired, is your time any more valuable than that of her other patients? If you are still in active practice, you can argue that getting special attention will benefit your patients. But if it’s a weekend and you are off, it’s a bit harder to rationalize special treatment. Playing the doctor card in this situation is your own decision, but you must be prepared to shoulder the perceptions of your PCP and her staff as well as your own sense of fairness.
The other two scenarios are much different. In neither are you risking the impression that you are asking for a favor. But they each have their downsides. In the second scenario you are doing someone a favor by acting as an interpreter. How could this have a downside? Unfortunately, what happens too often in situations like this is that when the patient’s physician learns that you are a fellow physician, the rest of the visit becomes a dialogue in doctor-speak between the two physicians, with the patient sitting by as an observer. In the end this discussion may benefit the patient by creating a treatment plan that the patient can understand, either because they overheard it or, more likely, because you eventually explained it to them.
On the other hand, this doctor-to-doctor chat has done nothing to build a doctor-patient relationship that had obviously been lacking something. In situations like this it is probably better to keep the doctor card up your sleeve, to be played at the end of the visit or maybe not at all. Before agreeing to be an interpreter/advocate, ask the patient to avoid mentioning that you are a physician. Instead, ask that she introduce you as a friend or relative whom she has asked to come along to serve as a memory bank. During the visit it may be helpful to occasionally interject and suggest that the patient ask a question that hasn’t been adequately addressed. While some physicians may be upset when they belatedly find you have not revealed up front that you are a physician, I find this a harmless omission that has the benefit of improving patient care.
The final scenario — in which you are the patient — is likely to occur more often as you get older. When filling out a previsit form, I often simply put “retired” or leave it blank. But how I answer the question often seems to be irrelevant, because I have learned that physicians and their staff read those boilerplate forms so cursorily that even when I report my status as “retired physician” everyone seems surprised if and when it later comes to light.
My rationale in keeping the doctor card close to my vest in these situations is that I want to be addressed without any assumptions regarding my medical knowledge, which in my situation is well over half a century old and spotty at best. I don’t want my physicians to say “I’m sure you understand.” Because I often don’t. I would like them to learn about who I am just as I hope they would other patients. I won’t be offended if they “talk down” to me. If this specialist is as good as I’ve heard she is, I want to hear her full performance, not one edited for fellow and former physicians.
The doctor card doesn’t arrive gold-edged with a list of special privileges. If it comes with any extras, they are risks that must be avoided.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Baby-Led Weaning
I first heard the term “baby-led weaning” about 20 years ago, which it turns out was just a few years after the concept was introduced to the public by a public health nurse/midwife in Britain. Starting infants on solid foods when they could feed themselves didn’t sound as off-the-wall to me as it did to most other folks, but I chose not to include it in my list of standard recommendations at the 4- and 6-month well-child visits. If any parent had asked me my opinion, I would have told them to give it a try, with a few specific cautions about what and how. But I don’t recall any parents asking me. The ones who knew me well, or had read or at least heard about my book on picky eating, must have already figured out what my answer would be. The parents who didn’t know me may have been afraid I would tell them it was a crazy idea.
Twelve years ago I retired from office practice and hadn’t heard a peep about baby-led weaning until last week when I encountered a story in The New York Times. It appears that while I have been reveling in my post-practice existence, baby-led weaning has become a “thing.” As the author of the article observed: “The concept seems to appeal to millennials who favor parenting philosophies that prioritize child autonomy.”
Baby-led weaning’s traction has been so robust that the largest manufacturer of baby food in this country has been labeling some of its products “baby-led friendly” since 2021. There are several online businesses that have tapped into the growing market. One offers a very detailed free directory that lists almost any edible you can imagine, with recommendations on when and how each can be presented in a safe and appealing manner to little hand-feeders. Of course, the company has also figured out a way to monetize the product.
Not surprisingly, the American Academy of Pediatrics (AAP) has remained silent on baby-led weaning. However, in The New York Times article, Dr. Mark R. Corkins, chair of the AAP nutrition committee, is quoted as describing baby-led weaning as “a social media–driven invention.”
While I was interested to learn about the concept’s growth and commercialization, I was troubled to find that like co-sleeping, sleep training, and exclusive breastfeeding, baby-led weaning has become one of those angst-producing topics that is torturing new parents who live every day in fear that they “aren’t doing it right.” We pediatricians might deserve a small dose of blame for not vigorously emphasizing that there are numerous ways to skin that cat known as parenting. However, social media websites and Mom chat rooms are probably more responsible for creating an atmosphere in which parents are afraid of being ostracized for the decisions they have made in good faith whether it is about weaning or when to start toilet training.
In isolated cultures, weaning a baby to solids was probably never a topic for discussion or debate. New parents did what their parents did, or more likely a child’s grandmother advised or took over the process herself. The child was fed what the rest of the family ate. If it was something the infant could handle himself you gave it to him. If not you mashed it up or maybe you chewed it for him into a consistency he could manage.
However, most new parents have become so distanced from their own parents’ childrearing practices, geographically, temporally, and philosophically, that they must rely on folks like us and on others whom they believe are, or at least claim to be, experts. Young adults are no longer hesitant to cross ethnic thresholds when they decide to become co-parents, meaning that any remnant of family tradition is either diluted or lost outright. In the void created by this abandonment of tradition, corporations were happy to step in with easy-to-prepare baby food that lacks nutritional and dietary variety. Baby-led weaning is just one more logical step in the metamorphosis of our society’s infant feeding patterns.
I still have no problem with baby-led weaning as an option for parents, particularly if with just a click of a mouse they can access safe and healthy advice to make up for the generations of grandmotherly experience acquired over hundreds of years. However, it is one thing when parents hoping to encourage the process of self-feeding offer their infants an edible that may not be in the family’s usual diet. It is a totally different matter when a family allows itself to become dietary contortionists to accommodate a 4-year-old whose diet consists of a monotonous rotation of three pasta shapes topped with grated Parmesan cheese and, on a good day, a raw carrot slice or two. Parents living in this nutritional wasteland may have given up on managing their children’s pickiness and may find it less stressful to join the child and eat a few forkfuls of pasta to preserve some semblance of a family dinner. Then, after the child has been put to bed, they have their own balanced meal.
Almost by definition, family meals are a compromise. Even adults without children negotiate often-unspoken menu patterns with their partners: “This evening we’ll have your favorite; I may have my favorite next week.”
Most parents of young children understand that the family diet may be a bit heavier on pasta than they might prefer and a little less varied when it comes to vegetables. It is just part of the deal. However, when mealtimes become totally dictated by the pickiness of a child, there is a problem. While a poorly structured child-led family diet may be nutritionally deficient, the bigger problem is that it is expensive in time and labor, two resources usually in short supply in young families.
Theoretically, infants who have led their own weaning are more likely to have been introduced to a broad variety of flavors and textures and this may carry them into childhood as more adventuresome eaters. Picky eating can be managed successfully and result in a family that can enjoy the psychological and emotional benefits of nutritionally balanced family meals, but it requires a combination of parental courage and patience.
It is unclear exactly how we got into a situation in which a generation of parents makes things more difficult for themselves by favoring practices that overemphasize child autonomy. It may be that the parents had suffered under autocratic parents themselves, or more likely they have read too many novels or watched too many movies and TV shows in which the parents were portrayed as overbearing or controlling. Or, it may simply be that they haven’t had enough exposure to young children to realize that they all benefit from clear limits to a varying degree.
In the process of watching tens of thousands of parents, it has become clear to me that those who are the most successful are leaders and that they lead primarily by example. They have learned to be masters in the art of deception by creating a safe environment with sensible limits while at the same time fostering an atmosphere in which the child sees himself as participating in the process.
The biblical prophet Isaiah (11:6-9) in his description of how things will be different after the Lord acts to help his people predicts: “and a little child shall lead them.” This prediction fits nicely as the last in a string of crazy situations that includes a wolf living with a lamb and a leopard lying down with a calf.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
I first heard the term “baby-led weaning” about 20 years ago, which it turns out was just a few years after the concept was introduced to the public by a public health nurse/midwife in Britain. Starting infants on solid foods when they could feed themselves didn’t sound as off-the-wall to me as it did to most other folks, but I chose not to include it in my list of standard recommendations at the 4- and 6-month well child visits. If any parent had asked my opinion, I would have told them to give it a try with a few specific cautions about what and how. But I don’t recall any parents asking me. The ones who knew me well, or had read or at least heard about my book on picky eating, must have already figured out what my answer would be. The parents who didn’t know me may have been afraid I would tell them it was a crazy idea.
Twelve years ago I retired from office practice and hadn’t heard a peep about baby-led weaning until last week when I encountered a story in The New York Times. It appears that while I have been reveling in my post-practice existence, baby-led weaning has become a “thing.” As the author of the article observed: “The concept seems to appeal to millennials who favor parenting philosophies that prioritize child autonomy.”
Baby-led weaning’s traction has been so robust that the largest manufacturer of baby food in this country has been labeling some of its products as “baby-led friendly” since 2021. There are several online businesses that have tapped into the growing market. One offers a very detailed free directory that lists almost any edible you can imagine, with recommendations on when and how each can be presented in a safe and appealing manner to little hand feeders. Of course, the company has also figured out a way to monetize the product.
Not surprisingly, the American Academy of Pediatrics (AAP) has remained silent on baby-led weaning. However, in The New York Times article, Dr. Mark R. Corkins, chair of the AAP nutrition committee, is quoted as describing baby-led weaning as “a social media–driven invention.”
While I was interested to learn about the concept’s growth and commercialization, I was troubled to find that like co-sleeping, sleep training, and exclusive breastfeeding, baby-led weaning has become one of those angst-producing topics that is torturing new parents who live every day in fear that they “aren’t doing it right.” We pediatricians might deserve a small dose of blame for not vigorously emphasizing that there are numerous ways to skin that cat known as parenting. However, social media websites and Mom chat rooms are probably more responsible for creating an atmosphere in which parents are afraid of being ostracized for the decisions they have made in good faith whether it is about weaning or when to start toilet training.
In isolated cultures, weaning a baby to solids was probably never a topic for discussion or debate. New parents did what their parents did, or more likely a child’s grandmother advised or took over the process herself. The child was fed what the rest of the family ate. If it was something the infant could handle himself you gave it to him. If not you mashed it up or maybe you chewed it for him into a consistency he could manage.
However, most new parents have become so distanced from their own parents’ childrearing practices geographically, temporally, and philosophically that they must rely on folks like us and others whom they believe are, or at least claim to be, experts. Young adults are no longer hesitant to cross ethnic thresholds when they decide to be co-parents, meaning that any remnant of family tradition is either diluted or lost outright. In the void created by this abandonment of tradition, corporations were happy to step in with easy-to-prepare baby food lacking in nutritional and dietary variety. Baby-led weaning is just one more logical step in the metamorphosis of our society’s infant feeding patterns.
I still have no problem with baby-led weaning as an option for parents, particularly if with just a click of a mouse they can access safe and healthy advice to make up for generations of grandmotherly experience acquired over hundreds of years.
However, it is one thing when parents hoping to encourage the process of self-feeding offer their infants an edible that may not be in the family’s usual diet. It is a totally different matter when a family allows itself to become dietary contortionists to accommodate a 4-year-old whose diet consists of a monotonous rotation of three pasta shapes topped with grated Parmesan cheese and, on a good day, a raw carrot slice or two. Parents living in this nutritional wasteland may have given up on managing their children’s pickiness and may find it less stressful to join the child and eat a few forkfuls of pasta to preserve some semblance of a family dinner. Then, after the child has been put to bed, they have their own balanced meal.
Almost by definition, family meals are a compromise. Even adults without children negotiate often unspoken menu patterns with their partners: “This evening we’ll have your favorite; I may have mine next week.”
Most parents of young children understand that their diet may be a bit heavier on pasta than they might prefer and a little less varied when it comes to vegetables. It is just part of the deal. However, when mealtimes become totally dictated by the pickiness of a child, there is a problem. While a poorly structured child-led family diet may be nutritionally deficient, the bigger problem is that it is expensive in time and labor, two resources usually in short supply in young families.
Theoretically, infants who have led their own weaning are more likely to have been introduced to a broad variety of flavors and textures and this may carry them into childhood as more adventuresome eaters. Picky eating can be managed successfully and result in a family that can enjoy the psychological and emotional benefits of nutritionally balanced family meals, but it requires a combination of parental courage and patience.
It is unclear exactly how we got into a situation in which a generation of parents makes things more difficult for themselves by favoring practices that overemphasize child autonomy. It may be that the parents had suffered under autocratic parents themselves, or more likely they have read too many novels or watched too many movies and TV shows in which the parents were portrayed as overbearing or controlling. Or, it may simply be that they haven’t had enough exposure to young children to realize that they all benefit from clear limits to a varying degree.
In the process of watching tens of thousands of parents, it has become clear to me that those who are the most successful are leaders and that they lead primarily by example. They have learned to be masters in the art of deception by creating a safe environment with sensible limits while at the same time fostering an atmosphere in which the child sees himself as participating in the process.
The biblical prophet Isaiah (11:6-9) in his description of how things will be different after the Lord acts to help his people predicts: “and a little child shall lead them.” This prediction fits nicely as the last in a string of crazy situations that includes a wolf living with a lamb and a leopard lying down with a calf.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Long COVID and Blame Hunting
I suspect that many of you have seen or read about a recent study regarding the “long COVID” enigma. The investigators surveyed the records of more than 4000 pediatric patients who had been infected and nearly 1400 who had not. The researchers then developed models in which 14 symptoms were more common in previously SARS-CoV-2–infected individuals of all age groups, compared with the uninfected. There were four additional symptoms in children only and three additional symptoms in adolescents.
Using these data, the investigators created research indices that “correlated with poor overall health and quality of life” and emphasized “neurocognitive, pain, and gastrointestinal symptoms in school-age children” and a “change or loss in smell or taste, pain, and fatigue/malaise-related symptoms in adolescents.”
So now, thanks to these investigators, we have research indices for characterizing PASC (post-acute sequelae of SARS-CoV-2, aka long COVID). What should we do with them? I’m not sure these results move us any further if our goal is finding something to help patients who believe, or have been told, that they have long COVID.
Even to a non-statistician like myself, there appear to be some problems with this study. In an editorial accompanying the study, Suchitra Rao, MBBS, MSCS, of the Department of Pediatrics, University of Colorado School of Medicine, Aurora, noted that the study has the potential for ascertainment bias. For example, the researchers’ subject recruitment procedure resulted in a higher “proportion of neurocognitive/behavioral manifestations,” which may have skewed the results.
Also, some of the patient evaluations were not done at a consistent interval after the initial infection, which could result in recall bias. And, more importantly, because there were no baseline measurements to determine preinfection status, the investigators had no way of determining to what degree the patients’ underlying conditions may have influenced the quality-of-life scores.
Although I wouldn’t consider it a bias, I wonder if the investigators have a preconceived vision of what long COVID is going to look like once it is better understood. The fact that they undertook this project suggests that they believe the truth about the phenomenon will be discoverable using data based on collections of vague symptoms.
Or do the researchers share my vision of long COVID: that if it exists, it will be something akin to the burst of Parkinson’s disease seen decades later in survivors of the 1918-1920 flu pandemic? Or maybe it is something like post-polio syndrome, in which childhood survivors develop atrophy and muscle weakness as they age. Do the researchers believe that COVID survivors are harboring some remnant of SARS-CoV-2 or its genome inside their bodies, ticking like a time bomb ready to surface in the future? Think shingles.
I suspect that there are some folks who may or may not share my ticking-time-bomb vision but who, like me, wonder whether there is really such a thing as long COVID – at least one in the form characterized by the work of these investigators. Unfortunately, the $1 billion the National Institutes of Health has invested in the Researching COVID to Enhance Recovery (RECOVER) initiative is not going to discover delayed sequelae until time is ready to tell us. What researchers are looking at now is a collection of patients, some of whom were not well to begin with but now describe a collection of vague symptoms, some of which are unique to COVID, but most of which are not. The loss of taste and smell is the one notable and important exception.
It is easy to understand why patients and their physicians would like to have a diagnosis like “long COVID” to at least validate their symptoms that up until now have eluded explanation or remedy. Not surprisingly, they may feel that, if researchers can’t find a cure, let’s at least have something we can lay the blame on.
A major flaw in this current attempt to characterize long COVID is the lack of a true control group. Yes, the subjects the researchers labeled as “uninfected” lived contemporaneously with the patients unfortunate enough to have acquired the virus. However, this illness was mysterious from its first appearance, continued to be more frightening as we struggled to learn more about it, and was clumsily managed in a way that turned our way of life upside down. This was particularly true for school-age children. It unmasked previously unsuspected underlying conditions and quickly acquired a poorly documented reputation for having a “long” variety.
Of course the “uninfected” also lived through these same tumultuous times. But knowing that you harbored, and may still harbor, this mysterious invader moves the infected and their families into a whole new level of concern and anxiety the rest of us who were more fortunate don’t share.
We must not ignore the fact that patients and their caregivers may receive some comfort when they have something to blame for their symptoms. However, we must shift our focus away from blame hunting, which up to this point has been fruitless. Instead, each patient should be treated as an individual and not part of a group with similar symptoms cobbled together with data acquired under a cloud of bias.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
I suspect that many of you have seen or read about a recent study regarding the “long COVID” enigma. The investigators surveyed the records of more than 4000 pediatric patients who had been infected and nearly 1400 who had not. The researchers then developed models in which 14 symptoms were more common in previous SARS-CoV2–infected individuals in all age groups, compared with the uninfected. There were four additional symptoms in children only and three additional symptoms in the adolescents.
Using these data, the investigators created research indices that “correlated with poor overall health and quality of life” and emphasized “neurocognitive, pain, and gastrointestinal symptoms in school-age children” and a “change or loss in smell or taste, pain, and fatigue/malaise-related symptoms in adolescents.”
So now thanks to these investigators we have research indices for characterizing PASC (post-acute sequelae of SARS-CoV-2, aka. long COVID). What should we to do with them? I’m not sure these results move us any further if our goal is finding something to help patients who believe, or have been told, that they have long COVID.
Even to a non-statistician like myself there appear to be some problems with this study. In an editorial accompanying this study, Suchitra Rao, MBBS, MSCS in the Department of Pediatrics, University of Colorado School of Medicine, Aurora, noted the study has the potential for ascertainment bias. For example, the researchers’ subject recruitment procedure resulted in a higher “proportion of neurocognitive/behavioral manifestations” may have skewed the results.
Also, some of the patient evaluations were not done at a consistent interval after the initial infection, which could result in recall bias. And, more importantly, because there were no baseline measurements to determine preinfection status, the investigators had no way of determining to what degree the patients’ underlying conditions may have reflected the quality of life scores.
Although I wouldn’t consider it a bias, I wonder if the investigators have a preconceived vision of what long COVID is going to look like once it is better understood. The fact that they undertook this project suggests that they believe the truth about the phenomenon will be discoverable using data based on collections of vague symptoms.
Or, do the researchers share my vision of long COVID that if it exists it will be something akin to the burst of Parkinson’s disease seen decades later in survivors of the 1918-1920 flu pandemic. Or, maybe it is something like post-polio syndrome, in which survivors in childhood develop atrophy and muscle weakness as they age. Do the researchers believe that COVID survivors are harboring some remnant of SARS-CoV-2 or its genome inside their bodies ticking like a time bomb ready to surface in the future? Think shingles.
I suspect that there are some folks who may or not share my ticking time bomb vision, but who, like me, wonder if there is really such a thing as long COVID – at least one in the form characterized by the work of these investigators. Unfortunately, the $1 billion the National Institutes of Health has invested in the Researching COVID to Enhance Recovery (RECOVER) initiative is not going to discover delayed sequelae until time is ready to tell us. What researchers are looking at now is a collection of patients, some who were not well to begin with but now describe a collection of vague symptoms, some of which are unique to COVID, but most are not. The loss of taste and smell being the one notable and important exception.
It is easy to understand why patients and their physicians would like to have a diagnosis like “long COVID” to at least validate their symptoms that up until now have eluded explanation or remedy. Not surprisingly, they may feel that, if researchers can’t find a cure, let’s at least have something we can lay the blame on.
A major flaw in this current attempt to characterize long COVID is the lack of a true control group. Yes, the subjects the researchers labeled as “uninfected” lived contemporaneously with the patients unfortunate enough to have acquired the virus. However, this illness was mysterious from its first appearance, continued to be more frightening as we struggled to learn more about it, and was clumsily managed in a way that turned our way of life upside down. This was particularly true for school-age children. It unmasked previously unsuspected underlying conditions and quickly acquired a poorly documented reputation for having a “long” variety.
Of course the “uninfected” also lived through these same tumultuous times. But knowing that you harbored, and may still harbor, this mysterious invader moves the infected and their families into a whole new level of concern and anxiety the rest of us who were more fortunate don’t share.
We must not ignore the fact that patients and their caregivers may receive some comfort when they have something to blame for their symptoms. However, we must shift our focus away from blame hunting, which up to this point has been fruitless. Instead, each patient should be treated as an individual and not as part of a group with similar symptoms cobbled together with data acquired under a cloud of bias.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Being An Outsider
Our son works for a Maine-based company that produces and sells clothing and outdoor recreation equipment. One of its tag lines is “Be an Outsider.” In his role as chief marketing officer, he was recently given an app for his phone that can calculate how many minutes he spends outside each day. He assured me: “Dad, you don’t need one of these on your phone. Your weather-beaten skin says you are already logging in way more than enough minutes outdoors.”
But it got me thinking about several avenues of research where an app like that would be useful. As luck would have it, the following week I stumbled across a paper describing just such a study.
Researchers in Shanghai, China, placed smartwatches with technology similar to my son’s phone on nearly 3000 children and found “that outdoor exposure patterns characterized by a continuous period of at least 15 minutes, accompanied by a sunlight intensity of more than 2000 lux, were associated with less myopic shift.” In other words, children getting more time outside were less likely to become nearsighted. Whether this was an effect of being outside instead of staring at a screen indoors is an interesting question.
I have always suspected that being outdoors was important for wellness, and this paper meshed nicely with an article I had recently read in The Washington Post titled, “How time in nature builds happier, healthier and more social children” (Jamie Friedlander Serrano, 2024 Aug 4). The reporter quotes numerous experts in child health and includes links to several articles that tout the benefits of outdoor experiences, particularly ones in a natural environment. There are the vitamin D effects on growth and bone health. There are studies suggesting that being out in nature can reduce stress, anxiety, and aggression, and improve working memory and attention.
In this country there is a small but growing group of schools modeling themselves after the “Forest kindergartens” that have become popular in Europe in which a large portion of the students’ days are spent outside surrounded by nature. It will be interesting to see how robustly this trend grows here in the United States. However, in a nation like ours in which the Environmental Protection Agency estimates that the average American spends 90% of his day indoors, it’s going to require a seismic shift in our societal norms.
I think my mother always knew that being outdoors was healthy for children. I also suspect that she and most of my friends’ mothers were primarily motivated by a desire to have the house to themselves. This was primarily to allow them to get the housework done unimpeded by pestering children. But there may have been times when a busy housewife simply needed to sit down with a book in the peace and quiet of a childless environment. We kids were told to get out of the house and return for lunch and dinner, hopefully not in the tow of a police officer. There were few rules, and for the most part we were left to invent our own amusement.
Yes, you’ve heard this old-fogey legend before. But it was true. Those were the halcyon days of the 1950s in a small suburban town of 5000 people, a little more than 1 square mile, with its own swimming pool. My particular idyll was aptly named Pleasantville, but I know we were not the only community where children were allowed – or let’s say “encouraged” – to be outdoors if they weren’t in school. It was a different time.
I am not so naive as to believe that we will ever return to those good old days when children roamed free, but it is worth considering what has changed to drive children inside and away from all the health benefits of being outdoors. Is there anything we can do to reverse this unfortunate trend?
First, we must face up to the reality that our society has become so focused on the potential downsides of everything that we seem to be driven primarily by risk avoidance. We hear how things can go terribly wrong in the world outside, a world we can’t control. Although the data from the pandemic don’t support it, more of us believe children are safer indoors. Parents in particular seem to worry more now than they did 75 years ago. I don’t think we can point to a single event such as the tragedies of September 11 to explain the shift.
While bad news has always traveled fast, today (with communication being almost instantaneous) a story about a child abduction at 6 in the morning in Nevada can be on my local TV channel by lunchtime here in Maine. Parents worry that if bad stuff can happen to a child in Mount Elsewhere, it could happen to my child playing in the backyard across the street.
I think we pediatricians should consider how large a role we may be playing in driving parental anxiety with our frequent warnings about the dangers a child can encounter outdoors, whether they come in the form of accidents or exposure to the elements.
While parents have grown more hesitant to send their children outside to play, as a society we have failed to adequately acknowledge and respond to the role that the unhealthy attraction of indoor alternatives to outdoor play may be playing in this indoorism. Here we’re talking about television, smartphones, and the internet.
So, what can we do as pediatricians to get our patients outside? First, we can set an example and cover our office walls with pictures of ourselves and our families enjoying the outdoors. We can be vocal advocates for creating and maintaining accessible outdoor spaces in our community. We can advocate for more outside time during recess in school and encourage the school officials to consider having more courses taught outside.
We can be more diligent in asking families about their screen use and not be afraid to express our concern when we hear how little outdoor time their child is getting. Finally, we can strive for more balance in our messaging. For example, for every warning we give about playing outside on poor air quality days there should be a reminder of the health benefits of being outdoors on the other days. Every message about the importance of sunscreen should be preceded by a few sentences promoting outdoor activities in wooded environments where sun exposure is less of a concern.
Being an outsider is just as important as getting enough sleep, eating the right food, and staying physically active.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littman stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].