Impostor syndrome is a risk for doctors of all ages


Feelings of inadequacy regarding one’s skills and the expectations of an ever-changing system are emotions that many doctors have experienced since the start of the pandemic.

COVID-19 imposed challenges on health care professionals and systems by forcing changes in how doctors organize themselves professionally, in their relationships with patients, and in their expectations (realistic or not) of their roles. The situation was bound to generate high rates of frustration and discomfort among both younger and older physicians. It was compounded by a generational transition of the profession that the pandemic accelerated, a transition that decision-makers failed to manage and that proved painful for doctors and patients alike.

Impostor syndrome (IS) is a psychological construct characterized by the persistent belief that one’s success is undeserved rather than stemming from personal effort, skill, and ability. The phenomenon is common among physicians for various reasons, including professional burnout. Recent studies have helped to better define the extent and characteristic features of the syndrome, as well as efforts to combat it.
 

Doctors and burnout

Although occupational burnout among physicians is a systemic issue primarily attributable to problems in the practice environment, professional norms and aspects of medical culture often contribute to the distress that individual physicians experience.

These dimensions have been well characterized and include suggestions that physicians should be impervious to normal human limitations (that is, superhuman), that work should always come first, and that seeking help is a sign of weakness. In aggregate, these attitudes lead many physicians to engage in unhealthy levels of self-sacrifice, manifested by excessive work hours, anxiety about missing something that would benefit their patients, and prioritizing work over personal health. These factors are familiar to many hospital-based and family physicians.
 

The impostor phenomenon

The impostor phenomenon (IP) is a psychological experience of intellectual and professional fraudulence. Individuals who suffer from it believe that others hold inflated perceptions of their abilities, and they fear being judged. This fear persists despite continual proof of their successes. These people dismiss praise, are highly self-critical, and attribute their successes to external factors, such as luck, hard work, or help from others, rather than to qualities such as skill, intelligence, or ability.

IP is common among men and women. Some studies suggest it may be more prevalent among women. Studies across industries suggest that the phenomenon is associated with personal consequences (for example, low emotional well-being, problems with work-life integration, anxiety, depression, suicide) and professional consequences (for example, impaired job performance, occupational burnout). Studies involving U.S. medical students have revealed that more than one in four medical students experience IP and that those who experience it are at higher risk for burnout.
 

Surveying IS

IS, which is not a formal psychiatric diagnosis, is defined as having feelings of uncertainty, inadequacy, and being undeserving of one’s achievements despite evidence to the contrary. There are five subtypes of IS:

  • Perfectionist: insecurity related to self-imposed, unachievable goals
  • Expert: feeling inadequate from lacking sufficient knowledge
  • Superperson: assuming excessive workloads just to feel okay among peers
  • Natural genius: experiencing shame when it takes effort to develop a skill
  • Soloist: believing that requesting help is a sign of weakness
 

 

Risk factors

Studies suggest that IS is a problem early in the physician training process. There is limited information on IS among physicians in practice.

Because transitions represent a risk factor for IP, the frequent rotation between clerkships and being a “perpetual novice” during medical school training may contribute to the high prevalence. Qualitative studies suggest that, once in practice, other professional experiences (for example, unfavorable patient outcomes, patient complaints, rejection of grants or manuscripts, and poor teaching evaluations or patient satisfaction scores) may contribute to IP.
 

Impact on doctors

Several methods have been used to classify how much the phenomenon interferes with a person’s life. The Clance Impostor Phenomenon Scale is a 20-item scale that asks respondents to indicate how well each item characterizes their experience on a 5-point scale. Options range from “not at all” to “very true.” The sum of responses to the individual items is used to create an aggregate score (IP score). The higher the score, the more frequently and seriously IP interferes with a person’s life.
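
To make the scoring concrete, here is a minimal sketch, in Python, of how such an aggregate score is formed; the item responses below are hypothetical, and the actual Clance items and published interpretation bands are not reproduced.

    # Hypothetical answers to a 20-item questionnaire, each rated 1 ("not at all")
    # through 5 ("very true"); these values are illustrative, not real scale data.
    responses = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5,
                 4, 3, 2, 4, 5, 3, 4, 2, 3, 4]

    assert len(responses) == 20 and all(1 <= r <= 5 for r in responses)

    # The aggregate IP score is the sum of the item responses, so the possible
    # range runs from 20 (least interference) to 100 (greatest interference).
    ip_score = sum(responses)
    print(f"Aggregate IP score: {ip_score}")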

A simplified version of the IP score was used in a study of 3,237 U.S. doctors that investigated the association between IS and burnout and compared physicians’ rates of IS with those of other professionals.

Mean IP scores were higher for female physicians than for male physicians (mean, 10.91 vs. 9.12; P < .001). Scores decreased with age and were lower among those who were married or widowed.

With respect to professional characteristics, IP scores were greater among those in academic practice or who worked in the Veterans Affairs medical system and decreased with years in practice.

The highest IP scores were among pediatric subspecialists, general pediatricians, and emergency medicine physicians. Scores were lowest among ophthalmologists, radiologists, and orthopedic surgeons. IP has been independently associated with the risk of burnout and low professional fulfillment.
 

Lessening the impact

An article commenting on the study highlighted the following expert-recommended strategies that doctors can use to reduce the impact of IS in their professional lives.

  • Review and celebrate feats that have led to your professional role.
  • Share concerns with trusted colleagues who can validate your accomplishments and normalize your feelings by reporting their own struggles with IS.
  • Combat perfectionism by accepting that it is okay to be good enough when meeting the challenges of a demanding profession.
  • Exercise self-compassion as an alternative to relying on an external locus of self-worth.
  • Understand that IS may be common, especially during transitions, such as entering medical school, starting graduate medical training, or beginning a new career.

This article was translated from Univadis Italy. A version appeared on Medscape.com.


What new cardiovascular disease risk factors have emerged?


Cardiovascular disease (CVD) is the main cause of premature death and disability in the general population, and according to the World Health Organization, the incidence of CVD is increasing throughout the world. Conventional risk factors that contribute to the occurrence and worsening of CVD have been identified and widely studied. They include high cholesterol levels, high blood pressure, diabetes, obesity, smoking, and lack of physical activity. Despite the introduction of measures to prevent and treat these risk factors with lipid-lowering drugs, antihypertensives, antiplatelet drugs, and anticoagulants, the mortality rate related to CVD remains high.
 

Despite the effectiveness of many currently available treatment options, there are still significant gaps in risk assessment and treatment of CVD.

In the past few years, new coronary risk factors have emerged. They are detailed in an editorial published in The American Journal of Medicine that describes their role and their impact on our cardiovascular health.
 

Systemic inflammation

The new coronary risk factors include the following diseases characterized by systemic inflammation:

  • Gout – Among patients who have experienced a recent gout flare, the probability of an acute cardiovascular event such as a myocardial infarction or stroke is increased.
  • Rheumatoid arthritis and systemic lupus erythematosus – Patients with one or both of these conditions have higher odds of premature and extremely premature coronary artery disease.
  • Inflammatory bowel disease (Crohn’s disease or ulcerative colitis) – Patients with these conditions have increased odds of developing coronary artery disease.
  • Psoriasis – Patients with psoriasis are up to 50% more likely to develop CVD.

Maternal and childhood factors

The following maternal and childhood factors are associated with an increased risk of developing coronary artery disease: gestational diabetes; preeclampsia; delivering a child of low birth weight; preterm delivery; and premature or surgical menopause. The factor or factors that increase coronary artery disease risk in each of these conditions are not known but may involve increased cytokine levels and oxidative stress.

An unusual and as yet unexplained association has been observed between migraine headaches with aura in women and incident CVD.

Also of interest is the association between early-life trauma and the risk of adverse cardiovascular outcomes in young and middle-aged individuals with a history of myocardial infarction.

Transgender patients who present for gender-affirming care are also at increased cardiovascular risk. Among these patients, the increase in coronary artery disease risk may be related to high rates of anxiety and depression.
 

Environmental factors

Low socioeconomic status has emerged as a risk factor. Increased psychosocial stressors, limited educational and economic opportunities, and a lack of peer influence favoring healthier lifestyle choices may be causative elements behind the higher rates of coronary artery disease among individuals living in low socioeconomic conditions.

Air pollution was estimated to have caused 9 million deaths worldwide in 2019, with 62% due to CVD and 31.7% to coronary artery disease. Severely polluted environmental aerosols contain several toxic metals, such as lead, mercury, arsenic, and cadmium. Transient exposure to various air pollutants may trigger the onset of an acute coronary syndrome.
 

Lifestyle factors

Among patients who have experienced a first myocardial infarction, long working hours increase the risk of a recurrent event, possibly because of prolonged exposure to work stressors.

Skipping breakfast has been linked to increased cardiovascular and all-cause mortality.

Long-term consumption of drinks containing sugar and artificial sweeteners has also been associated with increased cardiovascular mortality.

Recognizing the presence of one or more of these new risk factors could prompt and reinforce behaviors aimed at reducing the more conventional cardiovascular risk factors to a minimum.

This article was translated from Univadis Italy, which is part of the Medscape Professional Network.

A version of this article first appeared on Medscape.com.


How does salt intake relate to mortality?


Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.


Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered: “What is the relationship between mortality and adding salt to foods?” and “How much does a reduction in salt intake influence people’s health?”
 

Cardiovascular disease and death

Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or being a risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.

In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).

In addition, the researchers made the following observations:

  • For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of cardiovascular disease mortality and cancer mortality (P-trend < .001 and P-trend < .001, respectively).
  • Always adding salt to foods was associated with lower life expectancy at the age of 50 years, by 1.50 (95% confidence interval [CI], 0.72-2.30) years for women and 2.28 (95% CI, 1.66-2.90) years for men, compared with participants who never or rarely added salt to foods.

The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
 

 

 

Salt sensitivity

Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).
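
Because the discussion moves between “salt” and “sodium,” a quick conversion helps put these figures on the same footing. The short Python sketch below uses the molar-mass ratio of sodium chloride to sodium (about 2.54); the intake values are the ones quoted above.

    # Convert grams of sodium to the approximate equivalent in salt (NaCl).
    # The factor ~2.54 comes from the molar masses of NaCl (~58.4 g/mol) and Na (~23.0 g/mol).
    NA_TO_NACL = 58.44 / 22.99

    def sodium_to_salt_grams(sodium_g: float) -> float:
        return sodium_g * NA_TO_NACL

    for sodium_g in (2.3, 3.0, 5.0):  # the 2,300 mg guideline and the 3-5 g/day moderate range
        print(f"{sodium_g:.1f} g sodium is roughly {sodium_to_salt_grams(sodium_g):.1f} g salt")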

The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular disease. Most studies show that, in individuals both with and without hypertension, blood pressure is reduced by consuming less sodium; however, it is not necessarily lowered further by cutting intake below the moderate range (3-5 g/day). With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure, whereas for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by more than 10 mm Hg in response to high sodium intake.
 

The effect of potassium

Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.

In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).

The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.
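
As a rough check on how those event rates relate to the reported rate ratio, the arithmetic below divides the crude rates; the reported 0.86 presumably reflects the trial’s adjusted analysis, so the crude quotient differs slightly.

    # Crude stroke rates reported in the trial, in events per 1,000 person-years.
    rate_salt_substitute = 29.14
    rate_regular_salt = 33.65

    crude_rate_ratio = rate_salt_substitute / rate_regular_salt
    print(f"Crude rate ratio: {crude_rate_ratio:.2f}")  # about 0.87; the adjusted estimate was 0.86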

Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.

A lot depends on the kind of diet consumed by a particular population. Processed food is rarely consumed in rural areas such as those involved in the above-mentioned trial; instead, sodium chloride is added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.

In much of the world, commercial food preservation introduces a great deal of sodium chloride into the diet, so total salt intake cannot be fully addressed through the use of salt substitutes alone. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items, the sodium content is much higher than the benchmarks, especially for flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.

This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.

Krisana Antharith / EyeEm / Getty Images

Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered: “What is the relationship between mortality and adding salt to foods?” and “How much does a reduction in salt intake influence people’s health?”
 

Cardiovascular disease and death

Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or a being risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.

In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).

In addition, the researchers made the following observations:

  • For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of cardiovascular disease mortality and cancer mortality (P-trend < .001 and P-trend < .001, respectively).
  • Always adding salt to foods was associated with the lower life expectancy at the age of 50 years by 1.50 (95% confidence interval, 0.72-2.30) and 2.28 (95% CI, 1.66-2.90) years for women and men, respectively, compared with participants who never or rarely added salt to foods.

The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
 

 

 

Salt sensitivity

Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).

The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular diseases. Most studies show that both in individuals with hypertension and those without, blood pressure is reduced by consuming less sodium. However, it is not necessarily lowered by reducing sodium intake (< 3-5 g/day). With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure; for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by > 10 mm Hg in response to high sodium intake.
 

The effect of potassium

Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.

In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).

The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.

Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.

A lot depends upon the kind of diet consumed by a particular population. Processed food is rarely used in rural areas, such as those involved in the above-mentioned trial, with dietary sodium chloride being added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.

In much of the world, commercial food preservation introduces a lot of sodium chloride into the diet, and most salt intake could not be fully attributed to the use of salt substitutes. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items, the sodium content is much higher than the benchmarks, especially with flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.

This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.

Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.

Krisana Antharith / EyeEm / Getty Images

Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered: “What is the relationship between mortality and adding salt to foods?” and “How much does a reduction in salt intake influence people’s health?”
 

Cardiovascular disease and death

Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or a being risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.

In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).

In addition, the researchers made the following observations:

  • For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of cardiovascular disease mortality and cancer mortality (P-trend < .001 and P-trend < .001, respectively).
  • Always adding salt to foods was associated with the lower life expectancy at the age of 50 years by 1.50 (95% confidence interval, 0.72-2.30) and 2.28 (95% CI, 1.66-2.90) years for women and men, respectively, compared with participants who never or rarely added salt to foods.

The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
 

 

 

Salt sensitivity

Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).

The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular diseases. Most studies show that both in individuals with hypertension and those without, blood pressure is reduced by consuming less sodium. However, it is not necessarily lowered by reducing sodium intake (< 3-5 g/day). With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure; for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by > 10 mm Hg in response to high sodium intake.
 

The effect of potassium

Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.

In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).

The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.

Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.

A lot depends upon the kind of diet consumed by a particular population. Processed food is rarely used in rural areas, such as those involved in the above-mentioned trial, with dietary sodium chloride being added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.

In much of the world, commercial food preservation introduces a lot of sodium chloride into the diet, and most salt intake could not be fully attributed to the use of salt substitutes. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items, the sodium content is much higher than the benchmarks, especially with flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.

This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.


Hormone therapy and breast cancer: An overview

Article Type
Changed
Mon, 07/11/2022 - 08:39

It is projected that by 2050, 1.6 billion women in the world will have reached menopause or the postmenopausal period, a significant increase, compared with a billion women in 2020. Of all menopausal women, around 75% are affected by troublesome menopause symptoms, such as hot flashes and night sweats.

Around 84% of postmenopausal women experience genitourinary symptoms, such as vulvovaginal atrophy and incontinence.

Menopausal hormone therapy (MHT) is the most effective treatment for managing these symptoms; however, its effects on numerous aspects of female health remain uncertain, in particular with regard to breast cancer. Findings from observational studies and randomized clinical trials are discordant, and this uncertainty affects the decisions doctors make about hormone therapy in menopausal women.
 

Background

Conjugated equine estrogens (CEEs) were introduced into clinical practice in the 1940s, and for decades MHT was the main treatment in conventional medicine for the symptoms of menopause. Starting in the 1970s, MHT was used by about 600 million women in Western countries, and its use increased progressively during the 1990s. Professional organizations recommended MHT for the prevention of osteoporosis and coronary heart disease (CHD), and a third of prescriptions were for women older than 60 years.

Against this background, the National Institutes of Health launched randomized trials of MHT through the Women’s Health Initiative (WHI) to test whether the association with reduced risk for CHD found in observational studies was real and to obtain reliable information on the overall risks and benefits regarding the prevention of chronic disease for postmenopausal women aged 50-79 years.

The WHI trials tested standard-dose oral CEEs with and without standard-dose continuous medroxyprogesterone acetate (the combination being estrogen-progestin therapy, or EPT). In 2002, the results of the WHI studies raised a series of concerns about the long-term safety of MHT, in particular the finding of an increased risk of breast cancer in women undergoing therapy, a risk that exceeded the benefits from reductions in hip fractures and colorectal cancer.

The WHI findings received wide attention. Prescriptions for MHT dropped precipitously after 2002 and continued to decline in subsequent years. Declines were most marked for standard-dose EPT and in older women. The results of the CEE study were less negative, compared with those for EPT, as they showed no effect on CHD, a nonsignificant reduction in the risk of breast cancer, and a more favorable risk-benefit ratio for younger women, compared with older women. A decade later, it had become widely accepted that MHT should not be used for the prevention of chronic disease in older women; however, short-term use for treatment of vasomotor symptoms remains an accepted indication.
 

Risks and outcomes

A complex picture of the effect of hormone therapy on the risk and outcome of breast cancer emerges from a series of WHI reports. In one study, women with an intact uterus received CEEs plus medroxyprogesterone acetate (MPA). An increase in the risk of breast cancer was observed over a median of 5.6 years of treatment, followed by a moderate reduction, with the risk increasing after 13 years of cumulative follow-up. For women treated with CEE alone, the reduction in risk observed over an average of 7.2 years of treatment was maintained for 13 years of follow-up.

Results from observational studies contrast with those from randomized controlled trials, particularly those concerning the use of estrogens only. A meta-analysis by the Collaborative Group on Hormonal Factors in Breast Cancer showed that both EPT and CEE were associated with a higher risk of breast neoplasia, and results of the Million Women Study showed a higher death rate from breast cancer.
 

Treatment methods and duration

Information from prospective studies on the effects of commencing MHT at various ages between 40 and 59 years shows that, for women who commenced treatment at any time within this age range, the relative risk of breast cancer was similarly and significantly increased at all ages. Few women had started MHT well after menopause, at ages 60-69 years, and their excess risks during years 5-14 of current use were significant for estrogen-progestogen but not for estrogen-only MHT.

If these associations are largely causal, then for women of average weight in developed countries, 5 years of MHT, starting at age 50 years, would increase breast cancer incidence at ages 50-69 years by about 1 in every 50 users of estrogen plus daily progestogen preparations; 1 in every 70 users of estrogen plus intermittent progestogen preparations; and 1 in every 200 users of estrogen-only preparations. The corresponding excesses from 10 years of MHT would be about twice as great.
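
To make these "1 in N users" figures concrete, the short Python sketch below converts them into excess cases per 1,000 users; it is a purely illustrative calculation based only on the numbers quoted above, not on the underlying study data.

    # Excess breast cancer cases attributable to 5 years of MHT started at age 50,
    # expressed per 1,000 users, derived from the "1 in N users" figures above.
    one_in_n = {
        "estrogen + daily progestogen": 50,
        "estrogen + intermittent progestogen": 70,
        "estrogen only": 200,
    }
    for regimen, n in one_in_n.items():
        excess_per_1000 = 1000 / n
        # The article states that 10 years of use roughly doubles the excess.
        print(f"{regimen}: ~{excess_per_1000:.0f} extra cases per 1,000 users over 5 years "
              f"(~{2 * excess_per_1000:.0f} over 10 years)")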

During 5-14 years of MHT use, the relative risks (RRs) were similarly increased whether MHT use had started at ages 40-44, 45-49, 50-54, or 55-59 years; RRs appeared to be attenuated if MHT use had started after age 60 years. They were also attenuated by adiposity, particularly for estrogen-only MHT (which had little effect in obese women). After MHT use ceased, some excess risk of breast cancer persisted for more than a decade, and the size of this persistent excess correlates directly with the duration of treatment.

Therefore, it can be expected that the effects of MHT will vary between individuals on the basis of age or time since menopause, as well as treatment characteristics (MHT type, dose, formulation, duration of use, and route of administration). Regarding the effect of formulation on breast cancer risk, newer evidence points to an increase of about 28% and suggests that progestogens are differentially associated with breast cancer (micronized progesterone: odds ratio, 0.99; 95% confidence interval, 0.55-1.79; synthetic progestins: OR, 1.28; 95% CI, 1.22-1.35). When prescribing MHT, micronized progesterone may therefore be the safer progestogen to use.
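
Because breast cancer is relatively uncommon over the time frame studied, an odds ratio of this size approximates the relative risk. The minimal sketch below applies the standard Zhang-Yu conversion with a hypothetical baseline risk; the 5% figure is an assumption chosen for illustration, not a value reported above.

    # Rough conversion of an odds ratio to an approximate relative risk
    # (Zhang & Yu formula), using a hypothetical baseline risk p0.
    def or_to_rr(odds_ratio: float, p0: float) -> float:
        return odds_ratio / (1 - p0 + p0 * odds_ratio)

    p0 = 0.05  # hypothetical baseline risk over the study window
    print(round(or_to_rr(1.28, p0), 2))  # ~1.26: with a low baseline, OR 1.28 ~ a 26%-28% relative increase
    print(round(or_to_rr(0.99, p0), 2))  # ~0.99: essentially no increase for micronized progesterone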

In conclusion, MHT has a complex balance of benefits and risks across various health outcomes, and some effects differ qualitatively between estrogen-only therapy (ET) and EPT. The overall quality of existing systematic reviews is moderate to poor, and clinicians should evaluate their scientific strength before applying their results in clinical practice. Decisions about any hormone therapy regimen should consider the full range of risks and benefits, along with the patient's values and preferences, and should involve shared decision-making. It should also be recognized that the risk-benefit balance is altered by factors such as age, time since menopause, oophorectomy status, and prior hysterectomy, and that some effects persist, with some attenuation, after stopping use.

This article was translated from Univadis Italy.

A version of the article appeared on Medscape.com.


What are the signs of post–acute infection syndromes?

Article Type
Changed
Thu, 06/23/2022 - 16:40

The long-term health consequences of COVID-19 have refocused attention on post–acute infection syndromes (PAIS) and started a discussion on the need for a complete understanding of their multisystemic pathophysiology, clinical indicators, and epidemiology, which remain a significant blind spot in medicine. A better understanding of these persistent symptom profiles, not only for post-acute sequelae of SARS-CoV-2 infection (PASC), better known as long COVID, but also for other infections with unexplained post-acute sequelae, would allow doctors to fine-tune the diagnostic criteria. A clear definition and better understanding of post–acute infection symptoms is a necessary step toward developing an evidence-based, multidisciplinary management approach.

PAIS, PASC, or long COVID

The occurrence of unexplained chronic sequelae after SARS-CoV-2 infection is known as PASC, or long COVID.

Long COVID has been reported as a syndrome in survivors of serious and critical disease, but the effects also persist over time for subjects who experienced a mild infection that did not require admission to hospital. This means that PASC, especially when occurring after a mild or moderate COVID-19 infection, shares many of the same characteristics as chronic diseases triggered by other pathogenic organisms, many of which have not been sufficiently clarified.

PAIS are characterized by a set of core symptoms centering on the following:

  • Exertion intolerance
  • Disproportionate levels of fatigue
  • Neurocognitive and sensory impairment
  • Flu-like symptoms
  • Unrefreshing sleep
  • Myalgia/arthralgia

A plethora of nonspecific symptoms is often also present, to varying degrees.

These similarities suggest a unifying pathophysiology that needs to be elucidated to properly understand and manage postinfectious chronic disability.
 

Overview of PAIS

A detailed review of what is currently known about PAIS was published in Nature Medicine. It provided various useful pieces of information to help address the poor recognition of these conditions in clinical practice, as a result of which patients may experience delayed clinical care or none at all.

The following consolidated postinfection sequelae are mentioned:

  • Q fever fatigue syndrome, which follows infection by the intracellular bacterium Coxiella burnetii
  • Post-dengue fatigue syndrome, which can follow infection by the mosquito-borne dengue virus
  • Fatiguing and rheumatic symptoms in a subset of individuals infected with chikungunya virus, a mosquito-borne virus that causes fever and joint pain in the acute phase
  • Post-polio syndrome, which can emerge as many as 15-40 years after an initial poliomyelitis attack (similarly, some other neurotropic microbes, such as West Nile virus, might lead to persistent effects)
  • Prolonged, debilitating, chronic symptoms have long been reported in a subset of patients after common and typically nonserious infections, for example, after mononucleosis, a condition generally caused by Epstein-Barr virus (EBV), and after an outbreak of Giardia lamblia, an intestinal parasite that usually causes acute intestinal illness; indeed, several studies linked that giardiasis outbreak to chronic fatigue, irritable bowel syndrome (IBS), and fibromyalgia persisting for many years.
  • Views expressed in the literature regarding the frequency and the validity of posttreatment Lyme disease syndrome are divided. Although substantial evidence points to persistence of arthralgia, fatigue, and subjective neurocognitive impairments in a minority of patients with Lyme disease after the recommended antibiotic treatment, some of the early studies have failed to characterize the initial Lyme disease episode with sufficient rigor.
 

 

Symptoms and signs

Based on the available evidence, the symptoms and signs seen most frequently in clinical assessments can be characterized as follows:

  • Exertion intolerance, fatigue
  • Flu-like and ‘sickness behavior’ symptoms: fever, feverishness, muscle pain, feeling sick, malaise, sweating, irritability
  • Neurological/neurocognitive symptoms: brain fog, impaired concentration or memory, trouble finding words
  • Rheumatologic symptoms: chronic or recurrent joint pain
  • Trigger-specific symptoms: for example, eye problems post Ebola, IBS post Giardia, anosmia and ageusia post COVID-19, motor disturbances post polio and post West Nile virus

Myalgic encephalomyelitis/chronic fatigue syndrome

Patients with this disorder experience worsening of symptoms following physical, cognitive, or emotional exertion above their (very low) tolerated limit. Other prominent features frequently observed in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) are neurocognitive impairments (colloquially referred to as brain fog), unrefreshing sleep, pain, sensory disturbances, gastrointestinal issues, and various forms of dysautonomia. Up to 75% of ME/CFS cases report an infection-like episode preceding the onset of their illness. Postinfectious and postviral fatigue syndromes were originally postulated as subsets of chronic fatigue syndrome. However, there appears to be no clear consensus at present about whether these terms should be considered synonymous to the ME/CFS label or any of its subsets, or include a wider range of postinfectious fatigue conditions.

Practical diagnostic criteria

From a review of the available criteria, it emerges that the diagnostic criteria for a PAIS should include not only the presence of symptoms but ideally also their intensity, course, and constellation within an individual, because individual symptoms and symptom trajectories vary over time, making a mere comparison of symptom presence at a single time point misleading. Furthermore, when a diagnosis of ME/CFS is made, attention should be given to the choice of diagnostic criteria, with preference for the more conservative ones, so as not to risk overestimating the syndrome.

Asthenia is the cornerstone symptom in most epidemiological studies of PAIS, but it would be reductive to concentrate on it alone. Other characteristics, such as the exacerbation of symptoms following exertion, together with the other characteristic symptoms and signs, may allow better identification of the overall clinical picture of these postinfection syndromes, which have a significant impact on patients’ quality of life.

This article was translated from Univadis Italy. A version of this article appeared on Medscape.com.


Omicron BA.2: What do we know so far?

Article Type
Changed
Tue, 04/19/2022 - 16:31

Since November 2021, the Omicron variant of SARS-CoV-2 has quickly become the dominant variant worldwide. Early sequencing of Omicron in South Africa alerted researchers to the possibility that Omicron could be a cause for concern because of extensive mutations of the spike protein. Omicron has 30 mutations in the spike protein, compared with the original Wuhan-Hu-1 strain: 15 in the receptor-binding domain (linked to decreased antibody binding), mutations at the furin S1/S2 cleavage site (which improve furin binding and increase infectiousness), and mutations in the amino-terminal domain (the main binding site for some of the therapeutic antibodies used to treat COVID-19).

Omicron’s functional characteristics

Non–peer-reviewed studies have shown that Omicron replicates less efficiently in pulmonary epithelial cells than the Delta and Wuhan-Hu-1 variants: the number of viral copies in infected pulmonary epithelial cells was significantly lower with Omicron. In contrast, the number of viral copies was increased in Omicron-infected human epithelial cells taken from the nasal airways. Together, these findings support the understanding that Omicron is more transmissible but causes less severe disease.

As for the phenotypic expression of the infection, attention has focused on Omicron’s reduced capacity to form syncytia in pulmonary tissue cultures, a clinically relevant observation given that syncytium formation has been associated with more severe disease. Furthermore, it has emerged that Omicron can use different cellular entry routes, with a preference for endosomal fusion over fusion at the cell surface. This characteristic broadens the range of cell types that Omicron can infect.
 

Omicron BA.2 evolves

Between November and December 2021, Omicron progressed, evolving into a variant with characteristics similar to those of its predecessors (that is, it underwent a gradual and progressive increase in transmissibility). Early studies on the Omicron variant were mainly based on the BA.1 subvariant. Since the start of January 2022, there has been an unexpected increase in BA.2 in Europe and Asia. Since then, continued surveillance of the evolution of Omicron has shown an increased prevalence of two subvariants: BA.1 with an R346K mutation (BA.1 + R346K) and B.1.1.529.2 (BA.2); the latter contains eight unique spike mutations and lacks 13 of the spike mutations found in BA.1.

These differences alone do not establish whether the two subvariants’ antigenic properties are similar or different, but both appear to be antigenically equidistant from wild-type SARS-CoV-2, likely compromising the efficacy of current COVID-19 vaccines to a similar degree. Furthermore, BA.2 showed significant resistance to 17 of 19 neutralizing monoclonal antibodies tested in one study, demonstrating that current monoclonal antibody therapy may have significant limitations in terms of adequate coverage of all subvariants of Omicron.
 

Omicron BA.2 and reinfection

BA.2 initially represented only 13% of Omicron sequences at a global level, quickly becoming the dominant form in some countries, such as Denmark. At the end of 2021, BA.2 represented around 20% of all Danish cases of SARS-CoV-2. Halfway through January 2022, this had increased to around 45%, data that indicate that BA.2 carries an advantage over BA.1 within the highly vaccinated population of Denmark.
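
The speed of this shift can be expressed as a logistic growth advantage. The minimal sketch below infers an approximate daily advantage from the change in BA.2's share of sequences; the assumption that the rise from about 20% to about 45% took roughly 14 days is an illustrative one and is not stated above.

    import math

    # Hypothetical sketch: infer a daily logistic growth advantage from BA.2's
    # changing share of sequenced cases, assuming the change took about 14 days.
    share_start, share_end, days = 0.20, 0.45, 14  # the 14-day window is an assumption

    def logit(p: float) -> float:
        return math.log(p / (1 - p))

    daily_advantage = (logit(share_end) - logit(share_start)) / days
    print(f"~{daily_advantage:.2f} per day increase in the log-odds of a case being BA.2")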

BA.2 is associated with increased susceptibility to infection for unvaccinated individuals (odds ratio, 2.19; 95% confidence interval, 1.58-3.04), fully vaccinated individuals (OR, 2.45; 95% CI, 1.77-3.40), and booster-vaccinated individuals (OR, 2.99; 95% CI, 2.11-4.24), compared with BA.1. The pattern of increased transmissibility in BA.2 households was not observed for fully vaccinated and booster-vaccinated primary cases, for whom the OR of transmission was below 1 for BA.2, compared with BA.1. These data confirm the immune-evasive properties of BA.2, which further reduce the protective effect of vaccination against infection but do not increase transmissibility from vaccinated individuals with breakthrough infections.
 

Omicron, BA.2, and vaccination

Understanding serum neutralizing activity and how it correlates with vaccine efficacy is a research priority because of the growing epidemiological significance of BA.2. There is evidence that the immune-evasive nature of BA.2 is not as pronounced as that of BA.1, and other viral or host factors may be enabling BA.2’s rapid spread. A study published in Science Immunology investigated humoral and cellular immune responses to Omicron and other variants of concern (VOCs), looking to understand how, and to what degree, vaccinated individuals are protected against Omicron. The results showed very low or absent antibody cross-neutralization of Omicron, compared with the wild-type, Beta, and Delta variants, which could be partially restored by a third (booster) vaccination. Furthermore, T lymphocytes were shown to recognize Omicron with the same efficacy as the other VOCs, suggesting that vaccinated individuals maintain T-lymphocyte immunity, which can provide protection in the absence of neutralizing antibodies and limit the chance of serious disease.

These results are consistent with those of a study of 2,239,193 people in Qatar who had received at least two doses of the BNT162b2 or mRNA-1273 vaccine. The effectiveness of the booster against symptomatic Omicron infection, compared with the primary series alone, was 49.4% (95% CI, 47.1-51.6), and its effectiveness against COVID-19–related hospitalization and death from Omicron infection was 76.5% (95% CI, 55.9-87.5). For comparison, the effectiveness of the BNT162b2 booster against symptomatic infection with the Delta variant (B.1.617.2), compared with the primary series, was 86.1% (95% CI, 67.3-94.1).
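
At its core, a relative effectiveness figure of this kind compares infection rates in boosted versus primary-series-only individuals. The minimal sketch below illustrates the arithmetic with hypothetical incidence values chosen only to reproduce the ~49% figure; they are not data from the Qatar study, which used a more sophisticated matched design.

    # Minimal sketch of a relative vaccine effectiveness calculation.
    incidence_boosted = 5.1        # symptomatic infections per 1,000 person-weeks (hypothetical)
    incidence_primary_only = 10.0  # symptomatic infections per 1,000 person-weeks (hypothetical)

    relative_effectiveness = 1 - incidence_boosted / incidence_primary_only
    print(f"{relative_effectiveness:.1%}")  # 49.0%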

To summarize, the steady increase in the prevalence of BA.2 in a growing number of countries around the world has confirmed this subvariant’s growth advantage over others. BA.2 reduces the protective effect of vaccination against infection. Antibody cross-neutralization of Omicron can be partially restored by a third (booster) vaccination, which becomes problematic where vaccination rates are low and Omicron surges may increase the likelihood of infection among the elderly and other groups at higher risk of severe disease. Omicron BA.2 opens up new evolutionary pathways, but what do the experts think will happen next?

A version of this article was originally published in Italian on Univadis.




Natural, vaccine-induced, and hybrid immunity to COVID-19

Article Type
Changed
Wed, 03/23/2022 - 15:09

Seroprevalence surveys suggest that, from the beginning of the pandemic to 2022, more than a third of the global population had been infected with SARS-CoV-2. As large numbers of people continue to be infected, the efficacy and duration of natural immunity, in terms of protection against SARS-CoV-2 reinfections and severe disease, are of crucial significance. The virus’s epidemiologic trajectory will be influenced by the trends in vaccine-induced and hybrid immunity.

Omicron’s immune evasion

Cases of SARS-CoV-2 reinfection are increasing around the world. According to data from the U.K. Health Security Agency, 650,000 people in England have been infected twice, and most of them were reinfected in the past 2 months. Before mid-November 2021, reinfections accounted for about 1% of reported cases, but the rate has now increased to around 10%. The reinfection risk was 16 times higher between mid-December 2021 and early January 2022. Experts believe that this spike in reinfections is related to the spread of Omicron, which overtook Delta as the dominant variant. Nonetheless, other aspects should also be considered.

Omicron’s greater propensity to spread is related in part to its ability to evade the body’s immune defenses. This point was raised in a letter recently published in the New England Journal of Medicine: the authors reported that the effectiveness of previous infection in preventing reinfection was around 90% against the Alpha, Beta, and Delta variants but only 56% against Omicron.
 

Natural immunity

Natural immunity showed roughly similar effectiveness in protecting against reinfection across the different SARS-CoV-2 variants, with the exception of Omicron. The risk of hospitalization and death was also reduced with SARS-CoV-2 reinfection versus primary infection. Observational studies indicate that natural immunity may offer equal or greater protection against SARS-CoV-2 infection than immunization with two doses of an mRNA vaccine, but the data are not fully consistent.

Natural immunity seems to be relatively long-lasting. Data from Denmark and Austria show no evidence that protection against reinfection wanes after 6 months. Some investigations indicate that protection against reinfection is lowest 4-5 months after the initial infection and increases thereafter, a finding that might hypothetically be explained by persistent viral shedding, that is, by misclassification of prolonged SARS-CoV-2 infections as reinfections. Although no comparison was made with unvaccinated, not previously infected individuals, preliminary data from Israel suggest that protection against reinfection can decrease from 6 to more than 12 months after the first SARS-CoV-2 infection. Taken together, epidemiologic studies indicate that protection against reinfection from natural immunity lasts more than 1 year, with only moderate decline, if any, over this period. Among older individuals, immunocompromised patients, and those with certain comorbidities or higher exposure risk (for example, health care workers), rates of reinfection may be higher. It is plausible that reinfection risk is partly a function of exposure risk.

There is accumulating evidence that reinfections may be significantly less severe than primary infections with SARS-CoV-2. Reduced clinical severity of reinfections also makes sense biologically, since a previously primed immune system should be better prepared for a rechallenge with the virus.

Vaccine-induced immunity

The short-term (<4 months) efficacy of mRNA vaccines against SARS-CoV-2 is high, ranging from 94.1% (Moderna) to 95% (BioNTech/Pfizer). This was demonstrated in randomized controlled trials and subsequently confirmed in real-world effectiveness studies. Waning was observed for protection against SARS-CoV-2 infection (for example, to only approximately 20% after about half a year in Qatar), whereas protection against severe disease was either sustained or showed only a moderate decline.
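
As a reminder of where trial efficacy figures of this kind come from, the short sketch below computes vaccine efficacy as 1 minus the ratio of attack rates in the vaccine and placebo arms. The case counts and arm sizes are round, hypothetical numbers chosen to land near 95%; they are not the actual trial data.

# Hypothetical phase 3 trial with equally sized arms.
cases_vaccine, n_vaccine = 8, 20_000
cases_placebo, n_placebo = 160, 20_000

attack_rate_vaccine = cases_vaccine / n_vaccine
attack_rate_placebo = cases_placebo / n_placebo

# Vaccine efficacy = 1 - (attack rate in vaccinated / attack rate in placebo)
vaccine_efficacy = 1 - attack_rate_vaccine / attack_rate_placebo
print(f"Vaccine efficacy: {vaccine_efficacy:.1%}")  # 95.0% with these counts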

In individuals who had received two doses of the BioNTech/Pfizer vaccine at least 5 months earlier, an additional vaccine dose, a so-called booster, significantly lowered mortality and severe illness. These findings suggest that the booster restored, and probably exceeded, the short-term efficacy of the initial vaccination.

Data are still emerging on the efficacy of boosters against the Omicron variants. Preliminary data suggest that boosters are considerably less able to restore protection against infection with Omicron; however, hospitalizations and fatalities remain low.
 

Natural immunity vs. vaccine-induced immunity

Comparisons of natural immunity with vaccine-induced immunity are complicated by several overlapping biases: those of comparing infected with uninfected individuals, plus those of comparing vaccinated with unvaccinated individuals, with strong potential for selection bias and confounding. Of particular note, the proportion of people previously infected and/or vaccinated may influence estimates of effectiveness. On this point, one study compared unvaccinated patients with a prior SARS-CoV-2 infection and vaccinated individuals (followed from 1 week after the second vaccine dose onward) against a group of unvaccinated, not previously infected individuals. Compared with that reference group, the natural immunity group and the vaccinated group had similar protection: 94.8% and 92.8% against infection, 94.1% and 94.2% against hospitalization, and 96.4% and 94.4% against severe illness, respectively.
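
To illustrate how protection estimates of this kind are constructed, the sketch below expresses protection as 1 minus the infection risk in each group relative to the unvaccinated, not previously infected reference group. The counts are invented and were chosen so that the arithmetic lands on the reported 94.8% and 92.8% figures; the actual study additionally adjusted for confounders, which this sketch ignores.

def protection(cases, n, cases_ref, n_ref):
    """Protection = 1 - (risk in group / risk in reference group)."""
    return 1 - (cases / n) / (cases_ref / n_ref)

# Reference group: unvaccinated, not previously infected (hypothetical counts)
cases_ref, n_ref = 2_000, 100_000

# Natural immunity and vaccinated groups (hypothetical counts)
print(f"Natural immunity:          {protection(104, 100_000, cases_ref, n_ref):.1%}")
print(f"Vaccine-induced (2 doses): {protection(144, 100_000, cases_ref, n_ref):.1%}")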

Hybrid immunity

The combination of a previous SARS-CoV-2 infection and a respective vaccination is called hybrid immunity. This combination seems to confer the greatest protection against SARS-CoV-2 infections, but several knowledge gaps remain regarding this issue.

Data from Israel showed that, when the time since the last immunity-conferring event (either primary infection or vaccination) was the same, the rates of SARS-CoV-2 infections were similar in the following groups: individuals who had a previous infection and no vaccination, individuals who had an infection and were then vaccinated with a single dose after at least 3 months, and individuals who were vaccinated (two doses) and then infected. Severe disease was relatively rare overall.

Data on hybrid immunity point toward it being superior to either vaccine-induced immunity (without a booster) or natural immunity alone. The timing and mode of vaccination of previously infected individuals needed to achieve optimal hybrid immunity are central questions that remain to be addressed in future studies.

Given that vaccination rates are continuously increasing and that, by the beginning of 2022, perhaps half or more of the global population had already been infected with SARS-CoV-2, with the vast majority of this group not being officially detected, it would appear logical that future infection waves, even with highly transmissible variants of SARS-CoV-2, may be limited with respect to their maximum potential health burden. The advent of Omicron suggests that massive surges can occur even in populations with extremely high rates of previous vaccination and variable rates of prior infections. However, even then, the accompanying burden of hospitalizations and deaths is far less than what was seen in 2020 and 2021. One may argue that the pandemic has already transitioned to the endemic phase and that Omicron is an endemic wave occurring in the setting of already widespread population immunity.

A version of this article first appeared on Medscape.com.

