Risky business: Most cancer drugs don’t reach the market
Only about 6% of cancer drugs tested in phase 1 trials ultimately reach the market, a new analysis suggests.
The researchers also found that about 8% of approved agents were subsequently taken off the market.
“The 6% is not a big surprise to us, since a few other studies using different methodologies and foci have estimated similar percentages,” Alyson Haslam, PhD, University of California, San Francisco, told this news organization. “When you look at drug development, it makes sense that you have to test a lot of drugs to get one that works, but sometimes it is nice to quantify the actual percentage in order to fully appreciate the process.”
The fact that 8% were withdrawn, however, “elicits the question of how the approval process can be improved to avoid ineffective or harmful drugs from coming onto the market,” Dr. Haslam added.
The study was published online in the International Journal of Cancer.
More desirable features?
Monitoring trends over time helps oncologists assess whether more drugs are making it to market and if certain factors make some drugs more likely to get approved.
Prior published estimates put the likelihood of approval between 6.7% and 13.4%, but these estimates were for drugs tested more than a decade ago.
To provide updated estimates, the researchers searched the literature for all oncology drugs tested in phase 1 studies during 2015 and evaluated their fate in subsequent phase 2/3 studies through FDA clearance.
Overall, the team found 803 phase 1 studies that met initial inclusion criteria; 48 trials that included only Japanese participants were excluded because these studies often evaluated drugs already approved in the United States, leaving 755 studies for the analysis.
The most common tumor types were solid/multiple tumors (24.2%), leukemias (12.8%), and lung cancer (8.5%). Just under half (47%) of the trials tested a drug as monotherapy; 43% were combination trials with one dose-escalated drug; and about 10% were combination trials with both drugs dose-escalated.
The FDA approved 51 drugs during the study period. Four (7.8%) were subsequently withdrawn: nivolumab (Opdivo) and pembrolizumab (Keytruda) for small cell lung cancer, olaratumab (Lartruvo) for soft tissue sarcoma, and melflufen (Pepaxto) for multiple myeloma. These four were not counted in the overall number of approvals.
“We really wanted to look at the end fate of drugs (within a reasonable time frame), which is why we did not include the four drugs that were initially approved but later withdrawn, although this had little impact on the main finding,” Dr. Haslam explained.
The estimated probability that a drug or drug combination tested in a phase 1 trial published in 2015 would be approved was 1.7% by the end of that year and reached 6.2% by the end of 2021, the researchers found.
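As a rough arithmetic check (not the authors' exact time-to-event method), the 6.2% figure is consistent with dividing the 47 approvals that remained on the market (51 minus the 4 later withdrawals) by the 755 analyzed trials:

```python
# Rough consistency check of the headline approval rate.
# 51 FDA approvals minus the 4 subsequently withdrawn drugs = 47 lasting
# approvals, out of 755 phase 1 trials included in the analysis.
approvals_net = 51 - 4
trials = 755
rate = approvals_net / trials
print(f"{rate:.1%}")  # prints "6.2%"
```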
Monoclonal antibodies had a higher probability of being approved (15.3%), compared with inhibitors (5.1%) and chemotherapy drugs (4.2%).
The FDA was also more apt to green-light drugs tested as monotherapy than drug combinations (odds ratio, 0.22). Drugs tested as monotherapy had a 9.4% probability of approval, versus 5.6% for drugs tested in combination, whether a novel drug was paired with one or more established agents or two novel drugs were combined. The probability of approval was less than 1% for trials testing combinations of two established drugs.
Other factors that boosted the odds of FDA approval included a response rate over 40% in phase 1 testing, a demonstrated overall survival benefit in phase 3 testing, and sponsorship by a top-20 drug company rather than a non–top-20 drug company.
Dr. Haslam found the last finding rather surprising, given the recent trend for bigger companies to invest in smaller companies that are developing promising drugs, rather than doing all of the development themselves. “In fact, a recent analysis found that only 25% of new drugs are sponsored by larger companies,” she noted.
Reached for comment, Jeff Allen, PhD, who wasn’t involved in the study, noted that “these types of landscape analyses are quite helpful in understanding the current state of oncology science and drug development.”
When looking at a 6.2% success rate for phase 1–tested oncology drugs, “it can be difficult holistically to determine all factors for which development didn’t continue,” said Dr. Allen, president and CEO of the nonprofit Friends of Cancer Research.
For instance, lack of approval may not signal the drug was a failure “but rather an artifact of circumstances such as resource limitations or reprioritization,” Dr. Allen said.
Plus, he commented, “I don’t think that we should expect all these early studies to lead to eventual approvals, but it’s clear from the authors’ findings that continued efforts to improve the overall success rate in developing new cancer medicines are greatly needed.”
The study was funded by Arnold Ventures. Dr. Haslam and Dr. Allen have no relevant disclosures. Study author Vinay Prasad, MD, MPH, receives royalties from Arnold Ventures.
A version of this article first appeared on Medscape.com.
FROM INTERNATIONAL JOURNAL OF CANCER
U.S. hot, cold spots of young-onset CRC may help target interventions
The so-called hot and cold spots of mortality from young-onset CRC differed slightly for people younger than 50 and those younger than 35, report the researchers, who say such studies may lead to better understanding of the underlying factors as well as to targeted interventions.
The authors suggest that deaths in the youngest young-onset CRC individuals “may be driven by a distinct set of factors, compared with deaths among older young-onset CRC and average-onset CRC patients.”
They add that “unmeasured factors ... may drive anomalous young-onset CRC mortality rates, either independently or in conjunction with demographic [and] modifiable variables accounted for here.”
The research was published online in Gastroenterology.
Incidence, mortality rates on the rise
The incidence and mortality rates of young-onset CRC have been increasing for decades, the authors write, but the trend has only recently begun to attract public health attention.
Risk factors and prognostic indicators, such as smoking, obesity, alcohol consumption, diabetes, sex, race, and socioeconomic factors, have been implicated in the development of the condition.
Geospatial distribution of young-onset CRC adds an “important [layer] for understanding the underlying drivers of mortality and allocating public health resources,” the authors write.
It is “too soon” to draw conclusions about the cause of the hot and cold spots, cautioned senior author Stephanie L. Schmit, PhD, vice chair of the Genomic Medicine Institute at the Lerner Research Institute, Cleveland Clinic.
Speaking to this news organization, she said, “Additional factors like proximity to primary care, gastroenterology, and cancer care facilities or novel environmental exposures may contribute to hot spots.”
On the other hand, “lifestyle factors like diet and exercise might contribute to some extent to cold spots,” she added.
While Dr. Schmit said it would be “challenging” to replicate the findings nationally, “further analyses at more granular geographic levels would be incredibly helpful.”
Exploring the geographical distribution
To explore the geographical distribution of young-onset CRC mortality, the researchers gathered 20 years of data on more than 1 million CRC deaths from 3,036 U.S. counties. With aggregated county-level information from 1999 to 2019, they derived mortality rates from CDC WONDER underlying cause of death data.
Over the study period, there were 69,976 deaths from CRC among individuals diagnosed before age 50, including 7,325 in persons diagnosed when younger than 35. Most CRC deaths (1,033,541) occurred in people diagnosed at age 50 and older.
The researchers calculated an average county-level young-onset CRC mortality rate of 1.78 deaths per 100,000 population, compared with a CRC mortality rate of 56.82 per 100,000 population among individuals 50 and older.
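Crude mortality rates like these are computed by dividing deaths by the population at risk and scaling to 100,000. A minimal sketch, using hypothetical county figures (not from the study) chosen only to illustrate the calculation:

```python
# Sketch of a crude mortality-rate calculation per 100,000 population.
# Both numbers below are hypothetical, for illustration only.
deaths = 18                # hypothetical young-onset CRC deaths in one county
person_years = 1_010_000   # hypothetical county population summed over years
rate_per_100k = deaths / person_years * 100_000
print(f"{rate_per_100k:.2f} per 100,000")  # prints "1.78 per 100,000"
```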
Overall, for individuals younger than 50 at diagnosis, the researchers found two hot spots – in the Southeast (relative risk, 1.24) and in the Great Lakes region (RR, 1.10). They identified cold spots in lower Wisconsin (RR, 0.87), the Northeast (RR, 0.92), southwest Texas (RR, 0.90), and Western counties more broadly, including Alaska (RR, 0.82).
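A relative risk of 1.24 means the hot spot's mortality rate is 24% above the reference rate. Schematically, with hypothetical rates chosen only to reproduce that figure:

```python
# Relative risk compares a region's mortality rate to a reference rate.
# The two rates below are hypothetical, for illustration only.
region_rate = 2.21     # deaths per 100,000 (hypothetical hot-spot rate)
reference_rate = 1.78  # deaths per 100,000 (hypothetical baseline rate)
rr = region_rate / reference_rate
print(f"RR = {rr:.2f}")  # prints "RR = 1.24"
```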
Further analysis of those diagnosed when younger than 35 revealed two significant young-onset CRC mortality hot spots – in the Northeast (RR, 1.25) and the upper Midwest (RR, 1.11). In this youngest group, the team also found three significant cold spots – in the Southwest (RR, 0.74), in California (RR, 0.78), and in the Mountain West (RR, 0.82).
Among those aged 35-49 years at diagnosis, researchers found three hot spots – two in the Southeast (RR, 1.20 and 1.16) and one in the Great Lakes region (RR, 1.12). Several cold spots emerged from the mortality data on young-onset CRC in this age group – in the Pacific/Mountain West (RR, 0.90), in California (RR, 0.82), in southern Texas (RR, 0.89), and in the Southwest more broadly (RR, 0.86).
“Though cold spots were similar across strata, young-onset CRC hot spots shifted southward in the 35-49 age stratum in comparison to the less than 35 group,” the team notes.
They acknowledge several limitations to the study, including its “ecological nature” and the lack of adjustment for stage at diagnosis.
In comments to this news organization, Andrew T. Chan, MD, MPH, of Massachusetts General Hospital and Harvard Medical School, Boston, said the approach used by the researchers was “very interesting.”
Dr. Chan said that this is “one of the first studies that has given us insight into whether there is potential geographic variation in the incidence of young-onset colorectal cancer.”
This, he continued, is “very helpful in terms of thinking about potential risk factors for early-onset cancer and giving us more information about where we might want to focus our efforts in terms of prevention.”
Dr. Chan added that another interesting aspect of the study was that “the patterns might be different, depending on how you define early-onset cancer,” whether as “very-early onset,” defined as onset in those younger than 35, or the “less stringent definition” of 35-49 years.
He said that, “within the group that we’re calling very-early onset, there may be enriched factors,” compared with people who are “a little bit older.”
The research was supported by a National Cancer Institute of the National Institutes of Health grant to Case Comprehensive Cancer Center. Dr. Schmit reports no relevant financial relationships. Other authors have relationships with Exelixis, Tempus, Olympus, Anthos, Bayer, BMS, Janssen, Nektar Therapeutics, Pfizer, Sanofi, and WebMD/Medscape. Dr. Chan reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM GASTROENTEROLOGY
Liver cancer risk persists after direct-acting antiviral treatment for HCV
The risk of liver cancer persists for years after hepatitis C virus (HCV) infection is cured with direct-acting antiviral treatment, according to a new report.
Among patients with cirrhosis and fibrosis-4 (FIB-4) scores of 3.25 or higher, the incidence of hepatocellular carcinoma appeared to decline progressively each year up to 7 years after a sustained virologic response, although the rate remained above the 1% per year threshold that warrants screening.
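For context, the FIB-4 index used to stratify risk in the study is calculated from routine labs as (age × AST) / (platelets × √ALT). A minimal sketch of that standard formula follows; the lab values are invented for illustration and are not patient data from the study:

```python
import math

def fib4(age_years, ast_u_per_l, alt_u_per_l, platelets_10e9_per_l):
    """FIB-4 index: (age [y] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))."""
    return (age_years * ast_u_per_l) / (platelets_10e9_per_l * math.sqrt(alt_u_per_l))

# Illustrative values only: a 61-year-old with AST 80 U/L, ALT 60 U/L,
# and platelets 120 x 10^9/L.
score = fib4(age_years=61, ast_u_per_l=80, alt_u_per_l=60, platelets_10e9_per_l=120)
print(round(score, 2), score >= 3.25)  # ~5.25, above the 3.25 high-risk cutoff
```

Scores of 3.25 or higher are the conventional cutoff for advanced fibrosis, which is the threshold the study used to define its highest-risk group.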
“The majority of patients with hepatitis C have been treated and cured in the United States,” George Ioannou, MD, the senior study author and professor of medicine at the University of Washington, Seattle, said in an interview. “After hepatitis C eradication, these patients generally do very well from the liver standpoint, but the one thing they have to continue worrying about is development of liver cancer.”
Dr. Ioannou, who is also director of hepatology at the Veterans Affairs Puget Sound Health Care System, Seattle, noted that patients may be screened “indefinitely,” which places a burden on the patients and the health care system.
“We are still not sure to what extent the risk of liver cancer declines after hepatitis C eradication as more and more time accrues,” he said. “In those who had cirrhosis of the liver prior to hepatitis C cure, we are still not certain if there is a time point after hepatitis C cure when we can tell a patient that their risk of liver cancer is now very low and we no longer need to keep screening for liver cancer.”
The study was published online in Gastroenterology.
Risk calculations
In a previous study, Dr. Ioannou and colleagues found that hepatocellular carcinoma risk declined during the first 4 years of follow-up after a sustained virologic response from direct-acting antiviral medications. But the follow-up time wasn’t long enough to determine whether the cancer risk continues to decline to levels low enough to forgo screening.
In this study, Dr. Ioannou and colleagues extended the follow-up to 7 years. They were curious to see whether the cancer risk declines enough to drop the screening requirement, particularly as related to pretreatment cirrhosis and fibrosis-4 scores.
The research team analyzed electronic health records from the Veterans Affairs Corporate Data Warehouse, a national repository of Veterans Health Administration records developed specifically for research purposes.
The researchers included 29,033 patients in the Veterans Affairs health care system who had been infected with hepatitis C virus and were treated with direct-acting antivirals between January 2013 and December 2015. The patients had a sustained virologic response, which is defined as a viral load below the lower limit of detection at least 12 weeks after therapy completion.
The patients were followed for incident hepatocellular carcinoma until December 2021. The researchers then calculated the annual incidence during each year of follow-up after treatment.
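Annual incidence here is, in simplified terms, the share of patients still at risk who receive a first hepatocellular carcinoma diagnosis in a given year of follow-up. A minimal sketch of that calculation; the counts below are invented so that only the 3.8% and 1.4% rates reported later in the article are reproduced:

```python
def annual_incidence(events_by_year, at_risk_by_year):
    """Percent of still-at-risk patients with a first HCC diagnosis in each
    follow-up year (simplified: ignores partial person-time within a year)."""
    return [100.0 * events / at_risk
            for events, at_risk in zip(events_by_year, at_risk_by_year)]

# Hypothetical year-1 and year-7 counts chosen to reproduce the reported rates.
rates = annual_incidence(events_by_year=[38, 14], at_risk_by_year=[1000, 1000])
print(rates)  # [3.8, 1.4]
```

The study's actual estimates were computed per person-year of follow-up, which handles patients who die, are censored, or are diagnosed mid-year; the simplification above is only meant to show what "annual incidence above 1%" means as a screening threshold.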
About 96.6% of patients were men, and 52.2% were non-Hispanic White persons. The average age was 61 years. The most common conditions were alcohol use disorder (43.7%), substance use disorder (37.7%), and diabetes (28.9%).
Among the 7,533 patients with pretreatment cirrhosis, 948 (12.6%) developed hepatocellular carcinoma during a mean follow-up period of 4.9 years. Among those with FIB-4 scores of 3.25 or higher, the annual incidence decreased from 3.8% in the first year to 1.4% in the seventh year but remained substantial up to 7 years after sustained virologic response. Among patients with cirrhosis but a lower FIB-4 score, the annual rate ranged from 0.7% to 1.3% and didn't change significantly over time.
Among the 21,500 patients without pretreatment cirrhosis, 541 (2.5%) developed hepatocellular carcinoma during a mean follow-up period of 5.4 years. The incidence rate was significantly higher for patients with high FIB-4 scores. Among patients without cirrhosis but who had a high FIB-4 score, the annual rate remained stable but substantial (from 0.8% to 1.3%) for up to 7 years.
In a subgroup analysis that examined incidence according to changes in FIB-4 scores before and after treatment, the rate remained high among those with cirrhosis regardless of a score change. Among those without cirrhosis but who had a persistently high FIB-4 score, the incidence was high. In those without cirrhosis whose FIB-4 score dropped, the incidence was lower.
“The study demonstrates a clear decline in the risk of liver cancer over time after hepatitis C cure in the highest-risk group. This is very positive news for patients,” Dr. Ioannou said. “However, even with that decline in risk up to 7 years after eradication of hepatitis C with direct-acting antivirals, the risk is still high enough to warrant liver cancer screening.”
Future concerns
For a follow-up study, Dr. Ioannou and colleagues plan to adjust their analyses for other factors that influence the risk of liver cancer, such as age and nonalcoholic fatty liver disease. Other studies could increase the follow-up time beyond 7 years and assess how changes in diabetes, weight management, and alcohol use might affect liver cancer risk.
“With the availability of safe and effective direct-acting antiviral treatments, a growing number of patients have been or will be treated and cured of their hepatitis C infection,” Nicole Kim, MD, one of the lead authors and a transplant hepatology fellow at the University of Washington, Seattle, told this news organization.
“It is therefore important for us to develop a better understanding of how liver cancer risk might change after treatment, so we can improve the care we provide to this patient population,” she said.
The results require validation in nonveteran cohorts, the study authors write, as well as follow-up after the COVID-19 pandemic, when screening and diagnostic practices were restricted.
“Several studies have demonstrated that HCC [hepatocellular carcinoma] surveillance is underused in clinical practice, including in patients after [sustained virologic response],” Amit Singal, MD, clinical chief of hepatology and medical director of the liver tumor program at the University of Texas Southwestern Medical Center, told this news organization.
Dr. Singal, who wasn’t involved with this study, is evaluating several intervention strategies to increase surveillance utilization. His research group is conducting a multicenter randomized trial using mailed outreach invitations and is also evaluating a biomarker, PLSec-AFP, to identify patients with the highest risks who may warrant more intensive surveillance strategies.
“We have recently validated the performance of this biomarker in a large cohort of patients with cirrhosis, including some with cured hepatitis C virus infection,” he said.
The study was funded by an NIH/NCI grant and a VA CSR under Dr. Ioannou. The manuscript writing was supported by the NIH under Dr. Kim and co-author Philip Vutien. Dr. Singal has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM GASTROENTEROLOGY
Late summer heat may bring increased risk of miscarriage
Summer heat is notorious for making the strain of pregnancy worse. But for many pregnant people, sweltering temperatures are much worse than a sweaty annoyance.
A new study suggests that late summer heat may bring an increased risk of miscarriage: in late August, the risk of losing a pregnancy is 44% higher than in February, according to the findings.
“One of our hypotheses is that heat may trigger miscarriage, which is something that we are now exploring further,” says Amelia Wesselink, PhD, an assistant professor of epidemiology at Boston University School of Public Health, who led the study team. “Our next step is to dig into drivers of this seasonal pattern.”
She and her colleagues analyzed seasonal differences and pregnancy outcomes for over 12,000 women. Spontaneous abortion rates peaked in late August, especially for those living in the southern and midwestern United States.
Spontaneous abortion was defined as miscarriage, chemical pregnancy (a very early miscarriage where the embryo stops growing), or blighted ovum (the embryo stops developing or never develops).
From 2013 to 2020, 12,197 women living in the United States and Canada were followed for up to 1 year using Pregnancy Study Online (PRESTO), an internet-based fertility study from the Boston University School of Public Health. Those in the study answered questions about their income, education, race/ethnicity, and lifestyle, as well as follow-up questions about their pregnancy and/or loss of pregnancy.
Most of the people studied were non-Hispanic White (86%) and had at least a college degree (79%). Almost half earned more than $100,000 annually (47%). Those seeking fertility treatments were excluded from the study.
Half of the women (6,104) said they conceived in the first 12 months of trying to get pregnant, and almost one in five (19.5%) of those who conceived miscarried.
The risk of miscarriage within the first 8 weeks of pregnancy was 44% higher in late August than in late February, the month with the lowest rate of lost pregnancies; the seasonal trend was seen almost exclusively in these early pregnancies. For pregnancies at any stage, the risk of miscarriage was 31% higher in late August.
The link between miscarriage and extreme heat was strongest in the South and Midwest, with peaks in late August and early September, respectively.
“We know so little about the causes of miscarriage that it’s difficult to tie seasonal variation in risk to any particular cause,” says David Savitz, PhD, a professor of epidemiology and obstetrics, gynecology & pediatrics at Brown University, Providence, R.I., who helped conduct the study. “Exposures vary by summer, including a lower risk of respiratory infection in the warm season, changes in diet and physical activity, and physical factors such as temperature and sunlight.”
But another expert warned that extreme heat may not be the only culprit in summer’s observed miscarriage rates.
“You need to be careful when linking summer months to miscarriage, as women may pursue more outdoor activities during summer,” says Saifuddin Ahmed, PhD, a researcher at Johns Hopkins Bloomberg School of Public Health, Baltimore.
Although the paper suggested physical activity may play a role in miscarriage frequency, no analysis supported this claim, Dr. Ahmed says.
Also, participants in the study were mostly White and tended to be wealthier than the general population, so the findings may not apply to everyone, Dr. Wesselink says. Although the researchers saw some similarities between participants with income above $100,000 a year and those who earned less, socioeconomic status plays an important role in environmental exposures – including heat – so the results may not hold among lower-income populations, Dr. Wesselink says.
Dr. Wesselink and her colleagues published their findings in the journal Epidemiology.
A version of this article first appeared on WebMD.com.
Summer heat is notorious for making the strain of pregnancy worse. But for many pregnant people, sweltering temperatures are much worse than a sweaty annoyance.
In late August, for example, the risk of losing a pregnancy is 44% higher than in February, according to the findings.
“One of our hypotheses is that heat may trigger miscarriage, which is something that we are now exploring further,” says Amelia Wesselink, PhD, an assistant professor of epidemiology at Boston University School of Public Health, who led the study team. “Our next step is to dig into drivers of this seasonal pattern.”
She and her colleagues analyzed seasonal differences and pregnancy outcomes for over 12,000 women. Spontaneous abortion rates peaked in late August, especially for those living in the southern and midwestern United States.
Spontaneous abortion was defined as miscarriage, chemical pregnancy (a very early miscarriage where the embryo stops growing), or blighted ovum (the embryo stops developing or never develops).
From 2013 to 2020, 12,197 women living in the United States and Canada were followed for up to 1 year using Pregnancy Study Online (PRESTO), an internet-based fertility study from the Boston University School of Public Health. Those in the study answered questions about their income, education, race/ethnicity, and lifestyle, as well as follow-up questions about their pregnancy and/or loss of pregnancy.
Most of the people studied were non-Hispanic White (86%) and had at least a college degree (79%). Almost half earned more than $100,000 annually (47%). Those seeking fertility treatments were excluded from the study.
Half of the women (6,104) said they conceived in the first 12 months of trying to get pregnant, and almost one in five (19.5%) of those who conceived miscarried.
The risk of miscarriage was 44% higher in late August than it was in late February, the month with the lowest rate of lost pregnancies. This trend was almost exclusively seen for pregnancies in their first 8 weeks. The risk of miscarriage increased 31% in late August for pregnancies at any stage.
The link between miscarriage and extreme heat was strongest in the South and Midwest, with peaks in late August and early September, respectively.
“We know so little about the causes of miscarriage that it’s difficult to tie seasonal variation in risk to any particular cause,” says David Savitz, PhD, a professor of epidemiology and obstetrics, gynecology & pediatrics at Brown University, Providence, R.I., who helped conduct the study. “Exposures vary by summer, including a lower risk of respiratory infection in the warm season, changes in diet and physical activity, and physical factors such as temperature and sunlight.”
But another expert warned that extreme heat may not be the only culprit in summer’s observed miscarriage rates.
“You need to be careful when linking summer months to miscarriage, as women may pursue more outdoor activities during summer,” says Saifuddin Ahmed PhD, a researcher at Johns Hopkins Bloomberg School of Public Health, Baltimore.
Although the paper suggested physical activity may play a role in miscarriage frequency, no analysis supported this claim, Dr. Ahmed says.
Also, participants in the study were mostly White and tended to be wealthier than the general population, so the findings may not apply to everyone, Dr. Wesselink says. Although the researchers saw some similarities between participants with income above $100,000 a year and those who earned less, socioeconomic status plays an important role in environmental exposures – including heat – so the results may not hold among lower-income populations, Dr. Wesselink says.
Dr. Wesselink and her colleagues published their findings in the journal Epidemiology.
A version of this article first appeared on WebMD.com.
Summer heat is notorious for making the strain of pregnancy worse. But for many pregnant people, sweltering temperatures are much worse than a sweaty annoyance.
In late August, for example, the risk of losing a pregnancy is 44% higher than in February, according to the findings.
“One of our hypotheses is that heat may trigger miscarriage, which is something that we are now exploring further,” says Amelia Wesselink, PhD, an assistant professor of epidemiology at Boston University School of Public Health, who led the study team. “Our next step is to dig into drivers of this seasonal pattern.”
She and her colleagues analyzed seasonal differences and pregnancy outcomes for over 12,000 women. Spontaneous abortion rates peaked in late August, especially for those living in the southern and midwestern United States.
Spontaneous abortion was defined as miscarriage, chemical pregnancy (a very early miscarriage where the embryo stops growing), or blighted ovum (the embryo stops developing or never develops).
From 2013 to 2020, 12,197 women living in the United States and Canada were followed for up to 1 year using Pregnancy Study Online (PRESTO), an internet-based fertility study from the Boston University School of Public Health. Those in the study answered questions about their income, education, race/ethnicity, and lifestyle, as well as follow-up questions about their pregnancy and/or loss of pregnancy.
Most of the people studied were non-Hispanic White (86%) and had at least a college degree (79%). Almost half earned more than $100,000 annually (47%). Those seeking fertility treatments were excluded from the study.
Half of the women (6,104) said they conceived in the first 12 months of trying to get pregnant, and almost one in five (19.5%) of those who conceived miscarried.
The risk of miscarriage was 44% higher in late August than it was in late February, the month with the lowest rate of lost pregnancies. This trend was almost exclusively seen for pregnancies in their first 8 weeks. The risk of miscarriage increased 31% in late August for pregnancies at any stage.
The link between miscarriage and extreme heat was strongest in the South and Midwest, with peaks in late August and early September, respectively.
“We know so little about the causes of miscarriage that it’s difficult to tie seasonal variation in risk to any particular cause,” says David Savitz, PhD, a professor of epidemiology and obstetrics, gynecology & pediatrics at Brown University, Providence, R.I., who helped conduct the study. “Exposures vary by summer, including a lower risk of respiratory infection in the warm season, changes in diet and physical activity, and physical factors such as temperature and sunlight.”
But another expert warned that extreme heat may not be the only culprit in summer’s observed miscarriage rates.
“You need to be careful when linking summer months to miscarriage, as women may pursue more outdoor activities during summer,” says Saifuddin Ahmed, PhD, a researcher at Johns Hopkins Bloomberg School of Public Health, Baltimore.
Although the paper suggested physical activity may play a role in miscarriage frequency, no analysis supported this claim, Dr. Ahmed says.
Also, participants in the study were mostly White and tended to be wealthier than the general population, so the findings may not apply to everyone, Dr. Wesselink says. Although the researchers saw some similarities between participants with incomes above $100,000 a year and those who earned less, socioeconomic status plays an important role in environmental exposures – including heat – so the results may not hold among lower-income populations, she adds.
Dr. Wesselink and her colleagues published their findings in the journal Epidemiology.
A version of this article first appeared on WebMD.com.
Amazon involved with new cancer vaccine clinical trial
The trial is aimed at finding “personalized vaccines” to treat breast cancer and melanoma. The phase 1 trial is recruiting 20 people over the age of 18 to study the safety of the vaccines, according to CNBC.
The Fred Hutchinson Cancer Research Center and University of Washington Cancer Consortium are listed as the researchers of the clinical trial, and Amazon is listed as a collaborator, according to a filing on the ClinicalTrials.gov database.
“Amazon is contributing scientific and machine learning expertise to a partnership with Fred Hutch to explore the development of a personalized treatment for certain forms of cancer,” an Amazon spokesperson told CNBC.
“It’s very early, but Fred Hutch recently received permission from the U.S. Food and Drug Administration to proceed with a phase 1 clinical trial, and it’s unclear whether it will be successful,” the spokesperson said. “This will be a long, multiyear process – should it progress, we would be open to working with other organizations in health care and life sciences that might also be interested in similar efforts.”
In recent years, Amazon has grown its presence in the health care industry, CNBC reported. The company launched an online pharmacy in 2020, developed a telehealth service called Amazon Care, and released its own COVID-19 test during the pandemic.
A research and development group inside Amazon, known as Grand Challenge, oversaw the company’s early cancer vaccine effort, according to Business Insider. It’s now under the purview of a cancer research team that reports to Robert Williams, the company’s vice president of devices.
The study was first posted on ClinicalTrials.gov in October 2021 and began recruiting patients on June 9, according to the filing. The phase 1 trial is expected to run through November 2023.
The phase 1 trial will study the safety of personalized vaccines to treat patients with late-stage melanoma or hormone receptor-positive, HER2-negative breast cancer that has spread to other parts of the body or does not respond to treatment.
More information about the study can be found on ClinicalTrials.gov under the identifier NCT05098210.
A version of this article first appeared on WebMD.com.
Bevacizumab first matches aflibercept for diabetic macular edema
In a head-to-head, multicenter U.S. randomized trial that included 312 eyes in 270 adults with type 1 or type 2 diabetes, a cost-saving, stepwise approach to treating diabetic macular edema was as effective as, and at least as safe as, the standard approach of starting with the costlier treatment.
The findings validate a treatment regimen for diabetic macular edema that is already common in U.S. practice, as many health insurance providers require it because of the money it saves.
The step-therapy approach studied involves starting off-label treatment with the relatively inexpensive agent bevacizumab (Avastin), followed by a switch to the much pricier aflibercept (Eylea) when patients don’t adequately respond, following a prespecified algorithm that applies four criteria to determine when patients need to change agents.
These new findings build on a 2016 study that compared aflibercept monotherapy with bevacizumab monotherapy and showed that after 2 years of treatment aflibercept produced clearly better outcomes.
The new trial findings “are particularly relevant given the increasing frequency of insurers mandating step therapy with bevacizumab before the use of other drugs” such as aflibercept, noted Chirag D. Jhaveri, MD, and colleagues in the study published online in the New England Journal of Medicine.
Opportunity for ‘substantial cost reductions’
Dr. Jhaveri, a retina surgeon in Austin, Texas, and associates note that, based on Medicare reimbursement rates of $1,830 for a single dose of aflibercept and $70 for one dose of bevacizumab, starting treatment with bevacizumab could produce “substantial cost reductions for the health care system.”
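The scale of those savings can be sketched with simple per-dose arithmetic from the Medicare rates cited above. The number of bevacizumab versus aflibercept doses in a step-therapy course is not reported here, so the split below is a hypothetical assumption for illustration only:

```python
# Illustrative drug-cost arithmetic using the per-dose Medicare rates cited
# in the article: $1,830 for aflibercept and $70 for bevacizumab. The
# bevacizumab/aflibercept split per step-therapy course is a hypothetical
# assumption, not a figure from the trial.
AFLIBERCEPT_PER_DOSE = 1830  # dollars
BEVACIZUMAB_PER_DOSE = 70    # dollars

def course_cost(aflibercept_doses: int, bevacizumab_doses: int = 0) -> int:
    """Total drug cost of one treatment course."""
    return (aflibercept_doses * AFLIBERCEPT_PER_DOSE
            + bevacizumab_doses * BEVACIZUMAB_PER_DOSE)

# Aflibercept monotherapy: 15 doses (the trial averaged 14.6 over 2 years).
mono = course_cost(15)          # $27,450
# Step therapy: assume, hypothetically, 8 bevacizumab doses, then 8 aflibercept.
step = course_cost(8, 8)        # $15,200
print(mono, step, mono - step)  # savings of $12,250 under these assumptions
```

Even with most eyes eventually switching to aflibercept, front-loading the $70 agent cuts the drug bill roughly in half under these assumed dose counts.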
The authors of an accompanying editorial agree. Step therapy that starts with bevacizumab would probably result in “substantial” cost savings, and the findings document “similar outcomes” from the two tested regimens based on improvements in visual acuity and changes in the thickness measurement of the central retina during the 2-year trial, write David C. Musch, PhD, and Emily Y. Chew, MD.
Dr. Musch, a professor and ophthalmology epidemiologist at the University of Michigan in Ann Arbor, and Dr. Chew, director of the Division of Epidemiology and Clinical Applications at the National Eye Institute in Bethesda, Md., also laud the “rigorous” study for its design and conduct that was “beyond reproach,” and for producing evidence that “applies well to clinical practice.”
The only potential drawback to the step-therapy approach, they write, is that people with diabetes often have “numerous coexisting conditions that make it more difficult for them to adhere to frequent follow-up visits,” a key element of the tested step-care protocol, which mandated follow-up visits every 4 weeks during the first year and every 4-16 weeks during the second year.
312 eyes of 270 patients
The new trial, organized by the DRCR Retinal Network and the Jaeb Center for Health Research in Tampa, Fla., ran at 54 U.S. sites from December 2017 to November 2019. The study randomized 158 eyes in 137 patients to aflibercept monotherapy, and 154 eyes in 133 patients to the step-care regimen (both eyes were treated in several patients in each group, with each eye receiving a different regimen). Participants were around 60 years old, 48% were women, and 95% had type 2 diabetes.
To be eligible for enrollment, patients had at least one eye with a best-corrected visual-acuity letter score of 24-69 on an Electronic Early Treatment Diabetic Retinopathy Study chart (ranges from 0 to 100, with higher values indicating better visual acuity), which corresponds to Snellen chart values of 20/320-20/50, readings that encompass most patients with diabetic macular edema, noted study authors Adam R. Glassman and Jennifer K. Sun, MD, in an interview.
“Very few patients with diabetic macular edema have vision due to this alone that is worse than 20/320, which meets criteria for legal blindness,” said Dr. Glassman, who is executive director of the Jaeb Center for Health Research, and Dr. Sun, who is chief of the center for clinical eye research and trials at the Joslin Diabetes Center in Boston and chair of the DRCR Retinal Network.
The primary outcome was time-averaged change in visual-acuity letter score from baseline to 2 years, which improved by an average of 15.0 letters in the aflibercept monotherapy group and an average of 14.0 letters in the step-therapy group, an adjusted difference of 0.8 letters, which was not significant. An improvement from baseline of at least 15 letters occurred in 53% of the eyes in the aflibercept monotherapy group and in 58% of those who had step therapy, and 77% of eyes in both groups had improvements of at least 10 letters.
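The idea behind a "time-averaged" change can be illustrated with a simplified sketch: average the change from baseline across all follow-up visits. The trial's actual analysis (an area-under-the-curve approach over 2 years with covariate adjustment) is more involved, and the visit scores below are invented for illustration:

```python
# Simplified sketch of a time-averaged visual-acuity change: the mean of the
# letter-score changes from baseline across follow-up visits. The trial's
# actual analysis (area under the curve over 2 years, covariate-adjusted) is
# richer; the visit scores here are hypothetical.
def time_averaged_change(baseline: int, followup_scores: list[int]) -> float:
    changes = [score - baseline for score in followup_scores]
    return sum(changes) / len(changes)

# A hypothetical eye starting at 55 letters, measured at four visits:
print(time_averaged_change(55, [63, 68, 70, 71]))  # → 13.0
```

Averaging over the whole follow-up period rewards sustained improvement rather than a good reading at a single final visit.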
Central retinal thickness dropped from baseline by an average of 192 mcm with aflibercept monotherapy and 198 mcm with step therapy. The average number of total treatments (by intravitreous injection) was 14.6 in the aflibercept monotherapy group and 16.1 in the step-therapy group. After the first 24 weeks of the study, 39% of eyes in the step-therapy group had switched from bevacizumab to aflibercept injections; after 1 year, 60% of eyes had switched; and by study end, after 2 years, 70% had changed.
The bevacizumab-first group also showed comparable, if not better, safety: rates of prespecified ocular events were similar in the two groups, but serious systemic adverse events were significantly less frequent, occurring in 52% of patients in the aflibercept-monotherapy group and 36% of those who began treatment with bevacizumab. Serious systemic adverse events occurred in 43% of patients who had both eyes treated as part of the trial.
‘Bevacizumab first was noninferior’
The team that designed the trial opted for a superiority design rather than a noninferiority trial and powered the study based on the presumption that aflibercept monotherapy would prove superior, said Dr. Glassman and Dr. Sun. “We feel that the clinical interpretation of these results will be similar to the interpretation if we had conducted a noninferiority study, and we found that bevacizumab first was noninferior to aflibercept monotherapy,” they maintained in an interview.
Dr. Glassman and Dr. Sun said they and their coauthors are now analyzing the results to try to find patient characteristics that could identify eyes most likely to respond to the bevacizumab-first approach. “It would be clinically valuable” to use the results to identify characteristics that could help guide clinicians’ treatment approach and enhance patient counseling, they said.
The study received funding from the National Institutes of Health. Dr. Jhaveri has reported being a consultant for Genentech, Novartis, and Regenxbio. Dr. Glassman has reported receiving grants from Genentech and Regeneron. Dr. Sun has reported receiving grants from Boehringer Ingelheim, Janssen Biotech, KalVista, Optovue, and Physical Sciences, grants and travel support from Novartis and Novo Nordisk, travel support from Merck, writing support from Genentech, and equipment supplied by Adaptive Sensory and Boston Micromachines. Dr. Musch and Dr. Chew have reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM NEW ENGLAND JOURNAL OF MEDICINE
Interventional imagers take on central role and more radiation
Interventional echocardiographers have become an increasingly critical part of the structural heart team but may be paying the price in terms of radiation exposure, a new study suggests.
Results showed that interventional echocardiographers receive threefold higher head-level radiation doses than interventional cardiologists during left atrial appendage occlusion (LAAO) procedures and 11-fold higher doses during mitral valve transcatheter edge-to-edge repair (TEER).
“Over the last 5-10 years there’s been exponential growth in these two procedures, TEER and LAAO, and while that’s been very exciting, I think there hasn’t been as much research into how to protect these individuals,” lead author David A. McNamara, MD, MPH, Spectrum Health, Grand Rapids, Mich., told this news organization.
The study was published in JAMA Network Open.
Previous studies have focused largely on radiation exposure and mitigation efforts during coronary interventions, but the room set-up for LAAO and TEER and shielding techniques to mitigate radiation exposure are vastly different, he noted.
A 2017 study reported that radiation exposure was significantly higher for imaging specialists than structural heart specialists and varied by procedure type.
For the current study, Dr. McNamara, an echocardiographer by training, and colleagues collected data from 30 consecutive LAAO and 30 consecutive TEER procedures performed at their institution between July 2016 and January 2018.
Interventional imagers, interventional cardiologists, and sonographers all wore a lead skirt, apron, and thyroid collar, as well as a dosimeter to collect radiation data.
Interventional cardiologists stood immediately adjacent to the procedure table and used a ceiling-mounted, upper-body lead shield and a lower-body shield extending from the table to the floor. The echocardiographer stood at the patient’s head and used a mobile accessory shield raised to a height that allowed the imager to extend their arms over the shield to manipulate a transesophageal echocardiogram probe throughout the case.
The median fluoroscopy time was 9.2 minutes for LAAO and 20.9 minutes for TEER. The median air kerma was 164 mGy and 109 mGy, respectively.
Interventional echocardiographers received a median per case radiation dose of 10.6 µSv, compared with 2.1 µSv for interventional cardiologists. The result was similar for TEER (10.5 vs. 0.9 µSv) and LAAO (10.6 vs. 3.5 µSv; P < .001 for all).
The odds of interventional echocardiographers having a radiation dose greater than 20 µSv were 7.5 times greater than for interventional cardiologists (P < .001).
“It’s not the direction of the association, but really the magnitude is what surprised us,” observed Dr. McNamara.
The team was pleasantly surprised, he said, that sonographers, a “vastly understudied group,” received significantly lower median radiation doses than interventional imagers during LAAO (0.2 µSv) and TEER procedures (0.0 µSv; P < .001 for both).
The average distances from the radiation source were 26 cm (10.2 inches) for the echocardiographer, 36 cm (14.2 inches) for the interventional cardiologist, and 250 cm (8.2 feet) for the sonographer.
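The role that distance plays here can be illustrated with the standard inverse-square approximation for scatter radiation: dose falls off roughly with the square of distance from the source. The sketch below uses the average distances reported in the study; the falloff model, the reference point, and the function names are illustrative assumptions, not the study's methodology (shielding, geometry, and scatter angles all modify the real-world numbers).

```python
# Illustrative only: scatter dose is approximated as falling off with the
# inverse square of distance from the source. Distances (cm) are the study's
# reported averages; everything else here is a simplifying assumption.

def relative_dose(distance_cm: float, reference_cm: float = 26.0) -> float:
    """Dose at distance_cm relative to the dose at reference_cm,
    assuming pure inverse-square falloff."""
    return (reference_cm / distance_cm) ** 2

positions = {
    "echocardiographer": 26.0,
    "interventional cardiologist": 36.0,
    "sonographer": 250.0,
}

for role, d in positions.items():
    print(f"{role}: {relative_dose(d):.3f} of the echocardiographer's dose")
```

Under this toy model the sonographer at 250 cm would see roughly 1% of the echocardiographer's dose from distance alone, consistent in direction with the near-zero doses observed; the measured cardiologist-to-imager ratio is smaller than pure inverse-square predicts, which is plausibly the shielding at work.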
“These folks [sonographers] were much further away than both the physicians performing these cases, and that is what we hypothesize drove their very low rates, but that should also help inform our mitigation techniques for physicians and for all other cath lab members in the room,” Dr. McNamara said.
He noted that Spectrum Health has been at the forefront of research into radiation exposure and mitigation, provides good institutional radiation safety education, and used dose-lowering fluoroscopy systems (AlluraClarity, Philips) with real-time image noise reduction technology and a frame rate of 15 frames per second for the study. “So we’re hopeful that this actually represents a somewhat best-case scenario for what is being done at multiple institutions throughout the nation.”
Nevertheless, there is a huge amount of variability in radiation exposure, Dr. McNamara observed. “First and foremost, we really just have to identify our problem and highlight that this is something that needs some advocacy from our [professional] groups.”
Sunil Rao, MD, the newly minted president of the Society of Cardiovascular Angiography and Interventions (SCAI), said, “This is a really important study, because it expands the potential occupational hazards outside of what we traditionally think of as the team that does interventional procedures ... we have to recognize that the procedures we’re doing in the cath lab have changed.”
“Showing that our colleagues are getting 3-10 times radiation exposure is a really important piece of information to have out there. I think it’s really sort of a call to action,” Dr. Rao, professor of medicine at Duke University, Durham, N.C., told this news organization.
Nevertheless, he observed that practices have shifted somewhat since the study and that interventional cardiologists working with imaging physicians are more cognizant of radiation exposure issues.
“When I talk with our folks here that are doing structural heart procedures, they’re making sure that they’re not stepping on the fluoro pedal while the echocardiographer is manipulating the TE probe,” Dr. Rao said. “The echocardiographer is oftentimes using a much bigger shield than what was described in the study, and remember there’s an exponential decrease in the radiation exposure by distance, so they’re stepping back during the fluoroscopy time.”
Although the volume of TEER and LAAO procedures, as well as tricuspid interventions, will continue to climb, Dr. Rao said he expects radiation exposure to the imaging cardiologist will fall thanks to greater use of newer-generation imaging systems with dose-reduction features and better shielding strategies.
He noted that several of SCAI’s “best practices” documents call attention to radiation safety and that SCAI is creating a pathway where imaging cardiologists can become fellows of the society, which was traditionally reserved for interventionalists.
Still, imaging and cardiovascular societies have yet to endorse standardized safety procedures for interventional imagers, nor is radiation exposure routinely recorded in national registries.
“We just don’t have the budgets or the interest nationally to do that kind of thing, so it has to be done locally,” Dr. Rao said. “And the person who I think is responsible for that is really the cath lab director and the cath lab nurse manager, who really should work hand-in-glove to make sure that radiation safety is at the top of the priority list.”
The study was funded by the Frederik Meijer Heart & Vascular Institute, Spectrum Health, and by Corindus. The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, approval of the manuscript; and the decision to submit the manuscript for publication. Senior author Ryan Madder, MD, reports receiving research support, speaker honoraria, and grants, and serving on the advisory board of Corindus. No other disclosures were reported.
A version of this article first appeared on Medscape.com.
People really can get ‘hangry’ when hungry
The notion that people get ‘hangry’ – irritable and short-tempered when they’re hungry – is such an established part of modern folklore that the word has even been added to the Oxford English Dictionary. Although experimental studies in the past have shown that low blood glucose levels increase impulsivity, anger, and aggression, there has been little solid evidence that this translates to real-life settings.
Now new research has confirmed that the phenomenon does really exist in everyday life. The study, published in the journal PLOS ONE, is the first to investigate how hunger affects people’s emotions on a day-to-day level. Lead author Viren Swami, professor of social psychology at Anglia Ruskin University, Cambridge, England, said: “Many of us are aware that being hungry can influence our emotions, but surprisingly little scientific research has focused on being ‘hangry’.”
He and coauthors from Karl Landsteiner University of Health Sciences in Krems an der Donau, Austria, recruited 64 participants from Central Europe who completed a 21-day experience sampling phase, in which they were prompted to report their feelings on a smartphone app five times a day. At each prompt, they reported their levels of hunger, anger, irritability, pleasure, and arousal on a visual analog scale.
Participants were on average 29.9 years old (range = 18-60), predominantly (81.3%) women, and had a mean body mass index of 23.8 kg/m2 (range 15.8-36.5 kg/m2).
Anger was rated on a 5-point scale, but the team explained that the effects of hunger are unlikely to be unique to anger per se, so they also asked about irritability and, to obtain a more holistic view of emotionality, about pleasure and arousal, as indexed using Russell’s affect grid.
They also asked about eating behaviors over the previous 3 weeks, including frequency of main meals, snacking behavior, healthy eating, feeling hungry, and sense of satiety, and about dietary behaviors including restrictive eating, emotionally induced eating, and externally determined eating behavior.
Analysis of the resulting total of 9,142 responses showed that higher levels of self-reported hunger were associated with greater feelings of anger and irritability, and with lower levels of pleasure. These findings remained significant after accounting for participants’ sex, age, body mass index, dietary behaviors, and trait anger. However, associations with arousal were not significant.
The authors commented that the use of the app allowed data collection to take place in subjects’ everyday environments, such as their workplace and at home. “These results provide evidence that everyday levels of hunger are associated with negative emotionality and supports the notion of being ‘hangry.’ ”
‘Substantial’ effects
“The effects were substantial,” the team said, “even after taking into account demographic factors” such as age and sex, body mass index, dietary behavior, and individual personality traits. Hunger was associated with 37% of the variance in irritability, 34% of the variance in anger, and 38% of the variance in pleasure recorded by the participants.
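The "share of variance" figures above come from associating repeated hunger ratings with repeated emotion ratings across thousands of prompts. A minimal sketch of that idea follows, using simulated data; the sample generator, the effect size, and the plain Pearson r² calculation are all hypothetical stand-ins (the study itself used more sophisticated multilevel models that separate within-person from between-person variation).

```python
import random

# Hypothetical experience-sampling data: per-prompt hunger ratings (0-100)
# and anger ratings that track hunger plus noise. The 0.6 slope and noise
# level are made up for illustration, not taken from the study.
random.seed(1)

def simulate_prompts(n: int = 9142) -> tuple[list[float], list[float]]:
    hunger = [random.uniform(0, 100) for _ in range(n)]
    anger = [0.6 * h + random.gauss(0, 25) for h in hunger]
    return hunger, anger

def variance_explained(x: list[float], y: list[float]) -> float:
    """Squared Pearson correlation: the share of variance in y
    accounted for by a linear association with x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return (cov / (vx * vy) ** 0.5) ** 2

hunger, anger = simulate_prompts()
print(f"share of anger variance explained: {variance_explained(hunger, anger):.2f}")
```

With these made-up parameters the simulated r² lands in the same ballpark as the reported 34% for anger, which shows how a modest per-prompt association can still account for a third of the variance across a 3-week sampling window.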
The research also showed that the negative emotions – irritability, anger, and unpleasantness – were linked both to day-to-day fluctuations in hunger and to residual levels of hunger, measured as averages over the 3-week period.
The authors said their findings “suggest that the experience of being hangry is real, insofar as hunger was associated with greater anger and irritability, and lower pleasure, in our sample over a period of 3 weeks.
“These results may have important implications for understanding everyday experiences of emotions, and may also assist practitioners to more effectively ensure productive individual behaviors and interpersonal relationships (for example, by ensuring that no one goes hungry).”
Although the majority of participants (55%) said they paid attention to hunger pangs, only 23% said they knew when they were full and then stopped eating, whereas 63% said they could tell when they were full but sometimes continued to eat. A few (4.7%) said they could not tell when they were full and therefore based their eating on the size of the meal, while 9% described frequent overeating because of not feeling satiated, and 13% said they ate when they were stressed, upset, angry, or bored.
Professor Swami said: “Ours is the first study to examine being ‘hangry’ outside of a lab. By following people in their day-to-day lives, we found that hunger was related to levels of anger, irritability, and pleasure.
“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognizing that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviors in individuals.”
A version of this article first appeared on Medscape UK.
The notion that people get ‘hangry’ – irritable and short-tempered when they’re hungry – is such an established part of modern folklore that the word has even been added to the Oxford English Dictionary. Although experimental studies in the past have shown that low blood glucose levels increase impulsivity, anger, and aggression, there has been little solid evidence that this translates to real-life settings.
Now new research has confirmed that the phenomenon does really exist in everyday life. The study, published in the journal PLOS ONE, is the first to investigate how hunger affects people’s emotions on a day-to-day level. Lead author Viren Swami, professor of social psychology at Anglia Ruskin University, Cambridge, England, said: “Many of us are aware that being hungry can influence our emotions, but surprisingly little scientific research has focused on being ‘hangry’.”
He and coauthors from Karl Landsteiner University of Health Sciences in Krems an der Donau, Austria, recruited 64 participants from Central Europe who completed a 21-day experience sampling phase, in which they were prompted to report their feelings on a smartphone app five times a day. At each prompt, they reported their levels of hunger, anger, irritability, pleasure, and arousal on a visual analog scale.
Participants were on average 29.9 years old (range = 18-60), predominantly (81.3%) women, and had a mean body mass index of 23.8 kg/m2 (range 15.8-36.5 kg/m2).
Anger was rated on a 5-point scale but the team explained that the effects of hunger are unlikely to be unique to anger per se, so they also asked about experiences of irritability and, in order to obtain a more holistic view of emotionality, also about pleasure and arousal, as indexed using Russell’s affect grid.
They also asked about eating behaviors over the previous 3 weeks, including frequency of main meals, snacking behavior, healthy eating, feeling hungry, and sense of satiety, and about dietary behaviors including restrictive eating, emotionally induced eating, and externally determined eating behavior.
Analysis of the resulting total of 9,142 responses showed that higher levels of self-reported hunger were associated with greater feelings of anger and irritability, and with lower levels of pleasure. These findings remained significant after accounting for participants’ sex, age, body mass index, dietary behaviors, and trait anger. However, associations with arousal were not significant.
The authors commented that the use of the app allowed data collection to take place in subjects’ everyday environments, such as their workplace and at home. “These results provide evidence that everyday levels of hunger are associated with negative emotionality and supports the notion of being ‘hangry.’ ”
‘Substantial’ effects
“The effects were substantial,” the team said, “even after taking into account demographic factors” such as age and sex, body mass index, dietary behavior, and individual personality traits. Hunger was associated with 37% of the variance in irritability, 34% of the variance in anger, and 38% of the variance in pleasure recorded by the participants.
The research also showed that the negative emotions – irritability, anger, and unpleasantness – were caused by both day-to-day fluctuations in hunger and residual levels of hunger measured by averages over the 3-week period.
The authors said their findings “suggest that the experience of being hangry is real, insofar as hunger was associated with greater anger and irritability, and lower pleasure, in our sample over a period of 3 weeks.
“These results may have important implications for understanding everyday experiences of emotions, and may also assist practitioners to more effectively ensure productive individual behaviors and interpersonal relationships (for example, by ensuring that no one goes hungry).”
Although the majority of participants (55%) said they paid attention to hunger pangs, only 23% said that they knew when they were full and then stopped eating, whereas 63% said they could tell when they were full but sometimes continued to eat. Few (4.7%) people said they could not tell when they were full and therefore oriented their eating based on the size of the meal, but 9% described frequent overeating because of not feeling satiated, and 13% stated they ate when they were stressed, upset, angry, or bored.
Professor Swami said: “Ours is the first study to examine being ‘hangry’ outside of a lab. By following people in their day-to-day lives, we found that hunger was related to levels of anger, irritability, and pleasure.
“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognizing that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviors in individuals.”
A version of this article first appeared on Medscape UK.
The notion that people get ‘hangry’ – irritable and short-tempered when they’re hungry – is such an established part of modern folklore that the word has even been added to the Oxford English Dictionary. Although experimental studies in the past have shown that low blood glucose levels increase impulsivity, anger, and aggression, there has been little solid evidence that this translates to real-life settings.
Now new research has confirmed that the phenomenon does really exist in everyday life. The study, published in the journal PLOS ONE, is the first to investigate how hunger affects people’s emotions on a day-to-day level. Lead author Viren Swami, professor of social psychology at Anglia Ruskin University, Cambridge, England, said: “Many of us are aware that being hungry can influence our emotions, but surprisingly little scientific research has focused on being ‘hangry’.”
He and coauthors from Karl Landsteiner University of Health Sciences in Krems an der Donau, Austria, recruited 64 participants from Central Europe who completed a 21-day experience sampling phase, in which they were prompted to report their feelings on a smartphone app five times a day. At each prompt, they reported their levels of hunger, anger, irritability, pleasure, and arousal on a visual analog scale.
Participants were on average 29.9 years old (range = 18-60), predominantly (81.3%) women, and had a mean body mass index of 23.8 kg/m2 (range 15.8-36.5 kg/m2).
Anger was rated on a 5-point scale but the team explained that the effects of hunger are unlikely to be unique to anger per se, so they also asked about experiences of irritability and, in order to obtain a more holistic view of emotionality, also about pleasure and arousal, as indexed using Russell’s affect grid.
They also asked about eating behaviors over the previous 3 weeks, including frequency of main meals, snacking behavior, healthy eating, feeling hungry, and sense of satiety, and about dietary behaviors including restrictive eating, emotionally induced eating, and externally determined eating behavior.
Analysis of the resulting total of 9,142 responses showed that higher levels of self-reported hunger were associated with greater feelings of anger and irritability, and with lower levels of pleasure. These findings remained significant after accounting for participants’ sex, age, body mass index, dietary behaviors, and trait anger. However, associations with arousal were not significant.
The authors commented that the use of the app allowed data collection to take place in subjects’ everyday environments, such as their workplace and at home. “These results provide evidence that everyday levels of hunger are associated with negative emotionality and supports the notion of being ‘hangry.’ ”
‘Substantial’ effects
“The effects were substantial,” the team said, “even after taking into account demographic factors” such as age and sex, body mass index, dietary behavior, and individual personality traits. Hunger was associated with 37% of the variance in irritability, 34% of the variance in anger, and 38% of the variance in pleasure recorded by the participants.
The research also showed that the negative emotions – irritability, anger, and unpleasantness – were linked both to day-to-day fluctuations in hunger and to average hunger levels over the 3-week period.
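The within-person versus between-person distinction drawn here can be sketched with person-mean centering, a standard way of handling experience-sampling data. The numbers below are hypothetical, not the study’s data:

```python
# Sketch only: hypothetical hunger ratings for two participants,
# illustrating person-mean centering, which separates momentary
# (within-person) fluctuation from stable (between-person)
# differences in experience-sampling data.
from statistics import mean

hunger = {
    "p1": [20, 55, 70, 30, 45],  # 0-100 visual analog ratings
    "p2": [60, 80, 75, 90, 65],
}

grand_mean = mean(x for ratings in hunger.values() for x in ratings)

for pid, ratings in hunger.items():
    person_mean = mean(ratings)
    between = person_mean - grand_mean            # trait-like hunger level
    within = [x - person_mean for x in ratings]   # momentary fluctuations
    print(pid, f"between={between:+.1f}", [f"{w:+.1f}" for w in within])
```

Both components can then enter a mixed-effects model as separate predictors, which is how a day-level study can report distinct day-to-day and average-level associations.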
The authors said their findings “suggest that the experience of being hangry is real, insofar as hunger was associated with greater anger and irritability, and lower pleasure, in our sample over a period of 3 weeks.
“These results may have important implications for understanding everyday experiences of emotions, and may also assist practitioners to more effectively ensure productive individual behaviors and interpersonal relationships (for example, by ensuring that no one goes hungry).”
Although the majority of participants (55%) said they paid attention to hunger pangs, only 23% said that they knew when they were full and then stopped eating, whereas 63% said they could tell when they were full but sometimes continued to eat. Few (4.7%) people said they could not tell when they were full and therefore oriented their eating based on the size of the meal, but 9% described frequent overeating because of not feeling satiated, and 13% stated they ate when they were stressed, upset, angry, or bored.
Professor Swami said: “Ours is the first study to examine being ‘hangry’ outside of a lab. By following people in their day-to-day lives, we found that hunger was related to levels of anger, irritability, and pleasure.
“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognizing that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviors in individuals.”
A version of this article first appeared on Medscape UK.
FROM PLOS ONE
Some have heavier periods after COVID vaccine
Many women who got a COVID-19 vaccine have reported heavier bleeding during their periods since they had the shots.
A team of researchers investigated the trend and set out to find out who among the vaccinated were more likely to experience the menstruation changes.
The researchers were led by Katharine M.N. Lee, PhD, MS, of the division of public health sciences at Washington University in St. Louis. Their findings were published ahead of print in Science Advances.
The investigators analyzed more than 139,000 responses to an online survey of both currently and formerly menstruating women.
They found that, among people who have regular periods, about the same percentage had heavier bleeding after they got a COVID vaccine as had no change in bleeding after the vaccine (44% vs. 42%, respectively).
“A much smaller portion had lighter periods,” they write.
The phenomenon has been difficult to study because questions about changes in menstruation are not a standard part of vaccine trials.
Date of last period is often tracked in clinical trials to make sure a participant is not pregnant, but the questions about periods often stop there.
Additionally, periods are different for everyone and can be influenced by all sorts of environmental factors, so making associations regarding exposures is problematic.
No changes found to fertility
The authors emphasized that, generally, changes to menstrual bleeding are neither uncommon nor dangerous. They also emphasized that changes in bleeding don’t mean changes to fertility.
The uterine reproductive system is flexible when the body is under stress, they note.
“We know that running a marathon may influence hormone concentrations in the short term while not rendering that person infertile,” the authors write.
However, they acknowledge that investigating these reports is critical in building trust in medicine.
This report includes information that hasn’t been available through the clinical trial follow-up process.
For instance, the authors write, “To the best of our knowledge, our work is the first to examine breakthrough bleeding after vaccination in either pre- or postmenopausal people.”
Reports of changes to periods after vaccination started emerging in 2021. But without data, reports were largely dismissed, fueling criticism from those waging campaigns against COVID vaccines.
Dr. Lee and colleagues gathered data from those who responded to the online survey and detailed some trends.
People who bled more heavily after vaccination were more likely to be older, to be Hispanic, to have had vaccine side effects of fever or fatigue, to have been pregnant at some point, or to have given birth.
People with regular periods who had endometriosis, prolonged bleeding during their periods, polycystic ovarian syndrome (PCOS) or fibroids were also more likely to have increased bleeding after a COVID vaccine.
Breakthrough bleeding
For people who don’t menstruate but have not reached menopause, breakthrough bleeding happened more often in those who had been pregnant or had given birth.
Among respondents who were postmenopausal, breakthrough bleeding happened more often in younger people and/or those who are Hispanic.
More than a third of the respondents (39%) who use gender-affirming hormones that eliminate menstruation reported breakthrough bleeding after vaccination.
The majority of premenopausal people on long-acting, reversible contraception (71%) and the majority of postmenopausal respondents (66%) had breakthrough bleeding as well.
The authors note that you can’t compare the percentages who report these experiences in the survey with the incidence of those who would experience changes in menstrual bleeding in the general population.
The nature of the online survey means it may be naturally biased because the people who responded may be more often those who noted some change in their own menstrual experiences, particularly if that involved discomfort, pain, or fear.
Researchers also acknowledge that Black, Indigenous, Latinx, and other respondents of color are underrepresented in this research and that represents a limitation in the work.
Alison Edelman, MD, MPH, with the department of obstetrics and gynecology at Oregon Health & Science University in Portland, was not involved with Dr. Lee and associates’ study but has also studied the relationship between COVID vaccines and menstruation.
Her team’s study found that COVID vaccination is associated with a small change in time between periods but not length of periods.
She said about the work by Dr. Lee and colleagues, “This work really elevates the voices of the public and what they’re experiencing.”
The association makes sense, Dr. Edelman says, in that the reproductive system and the immune system talk to each other and inflammation in the immune system is going to be noticed by the system governing periods.
Lack of data on the relationship between exposures and menstruation didn’t start with COVID. “There has been a signal in the population before with other vaccines that’s been dismissed,” she said.
Tracking menstruation information in clinical trials can help physicians counsel women on what may be coming with any vaccine and alleviate fears and vaccine hesitancy, Dr. Edelman explained. It can also help vaccine developers know what to include in information about their product.
“When you are counseled about what to expect, it’s not as scary. That provides trust in the system,” she said. She likened it to original lack of data on whether COVID-19 vaccines would affect pregnancy.
“We have great science now that COVID vaccine does not affect fertility and [vaccine] does not impact pregnancy.”
Another important aspect of this paper is that it included subgroups not studied before regarding menstruation and breakthrough bleeding, such as those taking gender-affirming hormones, she added.
Menstruation has often been overlooked as an important exposure in clinical trials, but Dr. Edelman hopes the recent attention will prompt more research.
“I’m hoping with the immense outpouring from the public about how important this is, that future studies will look at this a little bit better,” she says.
She said when the National Institutes of Health opened up funding for trials on COVID-19 vaccines and menstruation, researchers got flooded with requests from women to share their stories.
“As a researcher – I’ve been doing research for over 20 years – that’s not something that usually happens. I would love to have that happen for every research project.”
The authors and Dr. Edelman declare that they have no competing interests. This research was supported in part by the University of Illinois Beckman Institute for Advanced Science and Technology, the University of Illinois Interdisciplinary Health Sciences Institute, the National Institutes of Health, the Foundation for Barnes-Jewish Hospital, and the Siteman Cancer Center.
FROM SCIENCE ADVANCES
Surprising ethnic difference in atherosclerosis burden in Harlem, N.Y.
Non-Hispanic Black young adults in a large, ethnically diverse underserved neighborhood in New York City have about twice the prevalence of subclinical atherosclerosis as Hispanic young adults, according to a new cross-sectional study. It was noteworthy for identifying subclinical cardiovascular (CV) disease in the cohorts using 3D intravascular ultrasound (3D IVUS).
The study’s 436 Black and Hispanic adults, 82% of them women, completed questionnaires regarding nutrition, lifestyle, medical history, weight, blood pressure, cholesterol levels, and other metrics.
Overall Framingham scores for 10-year risk for CV events were not statistically different between the two groups, at 4.6 and 3.6, respectively.
The presence of atherosclerosis in either the carotid or femoral arteries was identified with 3D IVUS in 8.7% of participants. But its prevalence was about twofold greater in Black than in Hispanic participants (12.9% vs. 6.6%), a finding that persisted after multivariable adjustment and appeared driven by a greater prevalence of carotid disease among Black participants (12.9% vs. 4.8%).
“For the same predicted CV risk, non-Hispanic Black individuals appear to be more vulnerable than people of Hispanic origin to early subclinical atherosclerosis, particularly in the carotid arteries, potentially placing them at increased risk of clinical CV disease,” concludes the report published in the Journal of the American College of Cardiology, with lead author Josep Iglesies-Grau, MD, Montreal Heart Institute.
International program
The current analysis from the FAMILIA study is part of a large international project called Science, Health, and Education (SHE), which is designed to promote early intervention in the lives of children, their caretakers, and teachers so they can develop lifelong heart-healthy habits, senior author Valentin Fuster, MD, PhD, physician-in-chief, Mount Sinai Hospital, New York, said in an interview.
The SHE program has been presented to more than 50,000 children worldwide, and FAMILIA has delivered successful interventions to more than 500 preschoolers, caretakers, and educators at Head Start schools in the Harlem neighborhood of New York, where the current study was conducted.
The analysis centered on the children’s adult caregivers, of whom one-third were non-Hispanic Black and two-thirds were Hispanic. “We wanted to know if this young population of parents and caregivers [would show] development or initiation of atherosclerotic disease,” Dr. Fuster said, “thinking that when we showed them that they had disease, it would further motivate them to change their lifestyle.”
Participants were assessed for seven basic CV risk factors – hypertension, smoking, body mass index, diabetes, dyslipidemia, low physical activity levels, and poor-quality diet – as well as socioeconomic descriptors. All participants also underwent 3D IVUS to evaluate the presence and extent of atherosclerosis in the carotid and femoral arteries.
‘Expected and unexpected’ findings
Black participants were considerably more likely than their Hispanic counterparts to be hypertensive, to be active smokers, and to have higher BMIs. The Black cohort reported higher consumption of fruits and vegetables (P < .001).
There were no between-group differences in the prevalence of diabetes or in mean fasting glucose or total cholesterol levels.
The mean 10-year Framingham CV risk score across the entire study population was 4.0%, with no significant differences between the two groups. In fact, 89% of participants were classified as low risk on the basis of the score.
The overall prevalence of subclinical atherosclerosis was 8.7%, with a mean global plaque burden of 5.0 mm3. But there were dramatic differences in atherosclerotic burden. Across all 10-year Framingham risk categories, Black participants had twice the odds of having subclinical atherosclerosis as Hispanic participants (odds ratio, 2.11; 95% confidence interval, 1.09-4.08; P = .026).
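For readers unfamiliar with the statistics, an odds ratio and its Wald 95% confidence interval can be computed from a 2×2 table as sketched below. The counts are hypothetical, chosen only to land near the study’s reported OR of 2.11; they are not the study’s raw data:

```python
# Sketch only: odds ratio and Wald 95% CI from a 2x2 table.
# Counts are hypothetical, for illustration.
import math

# rows: group; columns: (subclinical disease, no disease)
a, b = 18, 122   # group 1: with disease, without
c, d = 19, 277   # group 2: with disease, without

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# → OR = 2.15, 95% CI 1.09-4.24
```

The study’s multivariable-adjusted OR would come from logistic regression rather than a raw 2×2 table, but the interpretation of the interval is the same: if it excludes 1.0, the group difference is statistically significant at the 5% level.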
Black participants also had a greater atherosclerotic disease burden, with a higher mean total plaque volume (9.0 mm3 vs. 2.9 mm3; P = .028) and a higher prevalence of disease in both the carotid and femoral arteries (8.2% vs. 3.8%; P = .026).
“Our findings were both expected and completely unexpected,” Dr. Fuster commented. “It was expected that the non-Hispanic Black population would have more hypertension, obesity, and smoking, and might therefore have more [atherosclerotic] disease. But what was unexpected was when we adjusted for the seven risk factors and socioeconomic status, the Black population had three times the amount of disease,” he said.
“We need to take better care of the risk factors already known in the Black population, which is critical.” However, “our challenge today is to identify these new risk factors, which might be genetic or socioeconomic.” Dr. Fuster said his group is “already working with artificial intelligence to identify risk factors beyond the traditional risk factors that are already established.”
Socioeconomic differences?
“The fact that we’re uncovering and demonstrating that this is an issue – especially for African American women at a young age – and we could make a significant interdiction in terms of risk reduction if we have tools and invest the necessary time and effort, that is the important part of this paper,” Keith Churchwell, MD, Yale New Haven Hospital, and Yale School of Medicine, New Haven, Conn., said in an interview.
“If you’re going to evaluate African Americans in Harlem who are socially disadvantaged, I would want to know if there is a difference between them and other African Americans who have a different socioeconomic status, in terms of atherosclerotic disease,” added Dr. Churchwell, who was not involved with the study.
The Framingham 10-year risk score is “inadequate in assessing CV disease risk in all populations and is not generalizable to non-Whites,” contend Ramdas G. Pai, MD, and Vrinda Vyas, MBBS, of the University of California, Riverside, in an accompanying editorial.
“New data are emerging in favor of imaging-based classification of CV disease risk and has been shown to improve patient adherence to and compliance with risk-modifying interventions,” they write. “Subclinical atherosclerosis may help better stratify CV disease risk so that preventive measures can be instituted to reduce cardiovascular events at a population level.”
Dr. Fuster and coauthors, Dr. Pai and Dr. Vyas, and Dr. Churchwell report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Non-Hispanic Black young adults in a large, ethnically diverse underserved neighborhood in New York City have about twice the prevalence of subclinical atherosclerosis as Hispanic young adults, according to a new cross-sectional study. It was noteworthy for identifying subclinical cardiovascular (CV) disease in the cohorts using 3D intravascular ultrasound (3D IVUS).
The study’s 436 Black and Hispanic adults, 82% of them women, completed questionnaires regarding nutrition, lifestyle, medical history, weight, blood pressure, cholesterol levels, and other metrics.
(24.5% vs. 9.3%). Overall Framingham scores for 10-year risk for CV events were not statistically different, at 4.6 and 3.6, respectively.
The presence of atherosclerosis in either the carotid or femoral arteries was identified with 3D IVUS in 8.7% of participants. But its prevalence was about twofold greater in Black than in Hispanic participants (12.9% vs. 6.6%), a finding that persisted after multivariable adjustment and appeared driven by a greater prevalence of carotid disease among Black participants (12.9% vs. 4.8%).
“For the same predicted CV risk, non-Hispanic Black individuals appear to be more vulnerable than people of Hispanic origin to early subclinical atherosclerosis, particularly in the carotid arteries, potentially placing them at increased risk of clinical CV disease,” concludes the report published in the Journal of the American College of Cardiology, with lead author Josep Iglesies-Grau, MD, Montreal Heart Institute.
International program
The current analysis from the FAMILIA study is part of a large international project called Science, Health, and Education (SHE), which is designed to promote early intervention in the lives of children, their caretakers, and teachers so they can develop lifelong heart-healthy habits, senior author Valentin Fuster, MD, PhD, physician-in chief, Mount Sinai Hospital, New York, said in an interview.
The SHE program has been presented to more than 50,000 children worldwide, and FAMILIA has delivered successful interventions to more than 500 preschoolers, caretakers, and educators at Head Start schools in the Harlem neighborhood of New York, where the current study was conducted.
The analysis centered on the children’s adult caregivers, of whom one-third were non-Hispanic Black and two-thirds were Hispanic. “We wanted to know if this young population of parents and caregivers [would show] development or initiation of atherosclerotic disease,” Dr. Fuster said, “thinking that when we showed them that they had disease, it would further motivate them to change their lifestyle.”
Participants were assessed for seven basic CV risk factors – hypertension, smoking, body mass index, diabetes, dyslipidemia, low physical activity levels, and poor-quality diet – as well as socioeconomic descriptors. All participants also underwent 3D vascular ultrasound to evaluate the presence and extent of atherosclerosis in the carotid and femoral arteries.
‘Expected and unexpected’ findings
Black participants were considerably more likely than their Hispanic counterparts to be hypertensive, to be active smokers, and to have higher BMIs. The Black cohort reported higher consumption of fruits and vegetables (P < .001).
There were no between-group differences in the prevalence of diabetes or in mean fasting glucose or total cholesterol levels.
The mean 10-year Framingham CV risk score across the entire study population was 4.0%, with no significant differences between the two groups. In fact, 89% of participants were classified as low risk on the basis of the score.
The overall prevalence of subclinical atherosclerosis was 8.7%, with a mean global plaque burden of 5.0 mm3. But there were dramatic differences in atherosclerotic burden. Across all 10-year Framingham risk categories, Black participants had twice the odds of having subclinical atherosclerosis as Hispanic participants (odds ratio, 2.11; 95% confidence interval, 1.09-4.08; P = .026).
Black participants also had a greater atherosclerotic disease burden, with a higher mean total plaque volume (9.0 mm3 vs. 2.9 mm3; P = .028) and a higher prevalence of disease in both the carotid and femoral arteries (8.2% vs. 3.8%; P = .026).
“Our findings were both expected and completely unexpected,” Dr. Fuster commented. “It was expected that the non-Hispanic Black population would have more hypertension, obesity, and smoking, and might therefore have more [atherosclerotic] disease. But what was unexpected was when we adjusted for the seven risk factors and socioeconomic status, the Black population had three times the amount of disease,” he said.
“We need to take better care of the risk factors already known in the Black population, which is critical.” However, “our challenge today is to identify these new risk factors, which might be genetic or socioeconomic.” Dr. Fuster said his group is “already working with artificial intelligence to identify risk factors beyond the traditional risk factors that are already established.”
Socioeconomic differences?
“The fact that we’re uncovering and demonstrating that this is an issue – especially for African American women at a young age – and we could make a significant interdiction in terms of risk reduction if we have tools and invest the necessary time and effort, that is the important part of this paper,” Keith Churchwell, MD, Yale New Haven Hospital, and Yale School of Medicine, New Haven, Conn., said in an interview.
“If you’re going to evaluate African Americans in Harlem who are socially disadvantaged, I would want to know if there is a difference between them and other African Americans who have a different socioeconomic status, in terms of atherosclerotic disease,” added Dr. Churchwell, who was not involved with the study.
The Framingham 10-year risk score is “inadequate in assessing CV disease risk in all populations and is not generalizable to non-Whites,” contend Ramdas G. Pai, MD, and Vrinda Vyas, MBBS, of the University of California, Riverside, in an accompanying editorial.
“New data are emerging in favor of imaging-based classification of CV disease risk and has been shown to improve patient adherence to and compliance with risk-modifying interventions,” they write. “Subclinical atherosclerosis may help better stratify CV disease risk so that preventive measures can be instituted to reduce cardiovascular events at a population level.”
Dr. Fuster and coauthors, Dr. Pai and Dr. Vyas, and Dr. Churchwell report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY