Nearly Half of Patients Have HIV Under Control
More people living with HIV have the virus under control, according to the most recent national data. In 2014, CDC researchers say, of the estimated 1.1 million people with HIV in the U.S., 85% were diagnosed and 49% were controlling the virus through HIV treatment. By comparison, in 2010, 83% were diagnosed but only 28% were controlling the virus. The data were released recently in the CDC’s report, Monitoring Selected National HIV Prevention and Care Objectives by Using HIV Surveillance Data.
The CDC says wider availability of testing and treatment, along with updated treatment guidelines released in 2012 recommending treatment for all people with HIV infection, were “likely major contributors” to driving down annual infections by 18% between 2008 and 2014.
In 2014, 37,600 new infections were diagnosed. Of HIV infections diagnosed during 2015, 22% were classified as stage 3 (AIDS), although that percentage had declined since 2010. Nine out of 10 HIV infections are transmitted by people who are not diagnosed or are not in care. Young people are at highest risk. According to the CDC researchers’ estimates, only 56% of people aged 13-24 years with HIV were diagnosed and only 27% had the virus under control.
However, patients are getting appropriate care sooner and more often. Of 28,238 people who were diagnosed during 2015, 75% were linked to HIV medical care within 1 month of diagnosis, and 84% within 3 months.
“The Monitoring Report signals that we are making progress on most of our national HIV prevention, care and treatment goals,” said Richard Wolitski, PhD, director of the Office of HIV/AIDS and Infectious Disease Policy, in his blog on HIV.gov. “It also shows us where we need to do better and reassess our efforts, diagnose the problems and use this information to make the changes to our policies, programs, and services that are needed to turn the results around.”
Mutations impact outcomes in AML, MDS
Researchers say they have identified genetic mutations that can significantly affect treatment outcomes in patients with acute myeloid leukemia (AML) and myelodysplastic syndromes (MDS).
The findings come from a clinical trial in which the team examined whether combining vorinostat with azacitidine could improve survival in patients with AML and MDS.
The results showed no additional benefit with the combination, when compared to azacitidine alone.
However, researchers did find that patients had significantly shorter survival times if they had mutations in CDKN2A, IDH1, or TP53.
“This important trial . . . has rapidly answered the important question of whether combining azacitidine with vorinostat improves outcomes for people with AML and MDS and emphasizes the need for further studies with new drug partners for azacitidine,” said Charles Craddock, DPhil, of the Queen Elizabeth Hospital in Birmingham, UK.
“Importantly, the linked molecular studies have shed new light on which people will benefit most from azacitidine. Furthermore, discovering that the CDKN2A gene mutation affects treatment response may be hugely valuable in helping doctors to design new treatment combinations in the future.”
Dr Craddock and his colleagues reported their discoveries in Clinical Cancer Research.
Previous, smaller trials had suggested that adding vorinostat to treatment with azacitidine could improve outcomes for patients with AML and MDS.
To test this idea, Dr Craddock and his colleagues enrolled 259 patients in the current trial. Most of these patients (n=217) had AML—111 were newly diagnosed, 73 had relapsed AML, and 33 had refractory disease.
The remaining 42 patients had MDS—36 were newly diagnosed, 5 had relapsed MDS, and 1 had refractory disease.
Half of patients (n=130) received azacitidine and vorinostat, and the other half received azacitidine alone (n=129).
In both arms, azacitidine was given at 75 mg/m2 on a 5-2-2 schedule, beginning on day 1 of a 28-day cycle for up to 6 cycles. In the combination arm, patients also received vorinostat at 300 mg twice daily for 7 consecutive days, beginning on day 3 of each cycle.
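For readers unfamiliar with the shorthand, a minimal sketch of this dosing calendar appears below. The cycle length, start days, and durations come from the description above; reading “5-2-2” as 5 days on, 2 days off, 2 days on is an assumption rather than a detail taken from the trial protocol.

```python
# Sketch of the dosing calendar described above (illustrative, not a protocol).
# Assumption: "5-2-2" means azacitidine on days 1-5, off days 6-7, on days 8-9
# of each 28-day cycle, for up to 6 cycles.

def azacitidine_days():
    """Azacitidine dosing days within one 28-day cycle."""
    return list(range(1, 6)) + [8, 9]          # days 1-5 and 8-9

def vorinostat_days():
    """Vorinostat (combination arm): 7 consecutive days starting on day 3."""
    return list(range(3, 10))                  # days 3-9

print(azacitidine_days())   # [1, 2, 3, 4, 5, 8, 9]
print(vorinostat_days())    # [3, 4, 5, 6, 7, 8, 9]
```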
Results
The combination did not significantly improve response rates or survival times.
The overall response rate was 41% in the azacitidine arm and 42% in the combination arm (odds ratio [OR]=1.05, P=0.84).
The rate of complete response (CR)/CR with incomplete count recovery/marrow CR was 22% in the azacitidine arm and 26% in the combination arm (OR=0.82, P=0.49).
The median overall survival (OS) was 9.6 months in the azacitidine arm and 11.0 months in the combination arm (hazard ratio [HR]=1.15, P=0.32).
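The odds ratio for overall response can be roughly reconstructed from the arm sizes and response rates quoted above; the sketch below does that arithmetic. Responder counts are back-calculated from rounded percentages, so the result is illustrative only.

```python
# Rough consistency check of the reported overall-response odds ratio,
# using the arm sizes and rounded response rates quoted above.

aza_n, combo_n = 129, 130
aza_resp = round(0.41 * aza_n)       # ~53 responders on azacitidine alone
combo_resp = round(0.42 * combo_n)   # ~55 responders on the combination

odds_aza = aza_resp / (aza_n - aza_resp)
odds_combo = combo_resp / (combo_n - combo_resp)
print(round(odds_combo / odds_aza, 2))   # ~1.05, in line with the reported OR
```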
Impact of mutations
In a multivariable analysis adjusted for all clinical variables, mutations in NPM1 were associated with improved overall response (OR=8.6, P=0.012).
In another multivariable analysis, mutations in CDKN2A, IDH1, and TP53 were associated with decreased OS. The HRs were 10.0 (P<0.001), 3.6 (P=0.001), and 4.7 (P<0.001), respectively.
The median OS was 4.5 months in patients with CDKN2A mutations and 11.0 months in patients without them.
The median OS was 7.6 months in patients with TP53 mutations and 11.3 months in patients without them.
And the median OS was 5.6 months in patients with IDH1 mutations and 11.1 months in patients without them.
The researchers believe that testing patients newly diagnosed with AML and MDS for CDKN2A, IDH1, and TP53 mutations could help doctors tailor treatment for patients who are less likely to do well.
The team also said the information gleaned from this trial will guide the choice of new drug partners with the potential to increase azacitidine’s clinical activity.
Genotype-guided warfarin appears safer
Genotype-guided warfarin dosing is safer than clinically guided dosing for patients undergoing elective hip or knee arthroplasty, according to a study published in JAMA.
Investigators found that genotype-guided dosing reduced a patient’s combined risk of experiencing major bleeding, having an international normalized ratio (INR) of 4 or greater, and developing venous thromboembolism (VTE).
Death was also included in this combined endpoint, but there were no deaths in either dosing group.
“Physicians have been prescribing warfarin since the Eisenhower administration,” said study author Brian F. Gage, MD, of Washington University School of Medicine in St. Louis.
“It’s a widely used anticoagulant, but it causes more major adverse events than any other oral drug. Thousands of patients end up in the emergency department or hospital because of warfarin-induced bleeding, but we continue to prescribe it because it is highly effective, reversible, and inexpensive. So our goal is to make warfarin safer.”
With this in mind, Dr Gage and his colleagues set out to determine if genotype-guided dosing would be safer for patients starting warfarin because of elective hip or knee replacement.
The investigators noted that earlier studies of genotype-guided warfarin dosing had produced conflicting results. However, these studies were smaller and included fewer genetic variants than the current trial, known as GIFT.
The GIFT study included 1650 patients, age 65 and older. They were genotyped for the following polymorphisms: VKORC1-1639G>A, CYP2C9*2, CYP2C9*3, and CYP4F2 V433M.
Then, patients were randomized to clinically guided (n=789) or genotype-guided warfarin dosing (n=808) on days 1 through 11 of therapy and to a target INR of either 1.8 or 2.5. (Clinically guided dosing was based on standard factors such as age, height, and weight, while genotype-guided dosing was influenced by clinical factors plus the aforementioned genetic variants.)
Results
The primary endpoint was a combination of major bleeding, INR of 4 or greater, VTE, and death.
This endpoint was met by 10.8% (n=87) of patients in the genotype-guided group and 14.7% (n=116) of patients in the clinically guided warfarin group. The relative rate (RR) was 0.73 (P=0.02).
The incidence of major bleeding on days 1 to 30 was 0.2% (n=2) in the genotype-guided group and 1.0% (n=8) in the clinically guided group. The RR was 0.24 (P=0.06).
The proportion of patients with an INR of 4 or greater on days 1 to 30 was 6.9% (n=56) in the genotype-guided group and 9.8% (n=77) in the clinically guided group. The RR was 0.71 (P=0.04).
The incidence of VTE on days 1 to 60 was similar—4.1% (n=33) in the genotype-guided group and 4.8% in the clinically guided group. The RR was 0.85 (P=0.48).
There were no deaths in either group (on days 1 to 30).
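The relative rates above can be recomputed directly from the reported patient counts; a minimal sketch of that arithmetic follows. The clinically guided arm’s VTE count is not stated above, so it is back-calculated from the 4.8% rate; everything else uses the published counts.

```python
# Recompute relative rates from the counts reported above
# (genotype-guided n=808, clinically guided n=789). A simple consistency
# check on the published figures, not a reanalysis of the trial.

geno_n, clin_n = 808, 789

# (events in genotype-guided arm, events in clinically guided arm)
endpoints = {
    "composite endpoint": (87, 116),
    "major bleeding": (2, 8),
    "INR of 4 or greater": (56, 77),
    "VTE": (33, round(0.048 * clin_n)),  # clinical-arm count back-calculated from 4.8%
}

for name, (geno_events, clin_events) in endpoints.items():
    rr = (geno_events / geno_n) / (clin_events / clin_n)
    print(f"{name}: RR = {rr:.2f}")
# composite ~0.73, major bleeding ~0.24, INR of 4 or greater ~0.71, VTE ~0.85
```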
The investigators noted that this study has limitations. In particular, the benefits of genotype-guided dosing may differ when applied to patients of other ages or to general clinical practice.
The team also said additional research is needed to determine the cost-effectiveness of personalized warfarin dosing.
“Although genetic testing is more expensive than clinical dosing, the cost is falling,” Dr Gage said. “In our study, we estimated that genetic testing costs less than $200 per person, which is less than 1 month of a newer anticoagulant.”
Finally, the investigators said future studies should assess the impact of additional genetic variants.
“There are additional genetic variants that may help to guide warfarin dosing, especially among patients with African ancestry,” Dr Gage said. “In the future, we hope to quantify how these variants affect warfarin.”
Representation in cancer clinical trials
ATLANTA—New research suggests some racial/ethnic minority groups are underrepresented in clinical trials for cancer patients in the US.
African-American and Hispanic patients were underrepresented in the trials studied, while Asian and non-Hispanic white patients were not.
Patients belonging to other racial/ethnic groups were not studied in detail.
The research also showed that elderly patients were less likely than other age groups to enroll in a cancer trial.
However, the percentage of elderly patients in the trials studied (36%) was more than double the percentage of elderly individuals in the US population (15.2%).
This research was presented at the 10th AACR Conference on The Science of Cancer Health Disparities in Racial/Ethnic Minorities and the Medically Underserved (abstract A26).
“Clinical trials are crucial in studying the effectiveness of new drugs and ultimately bringing them to the market to benefit patients,” said Narjust Duma, MD, of the Mayo Clinic in Rochester, Minnesota.
“However, many clinical trials lack appropriate representation of certain patient populations. As a result, the findings of a clinical trial might not be generalizable to all patients.”
Dr Duma and her colleagues analyzed enrollment data from all cancer therapeutic trials reported as completed on clinicaltrials.gov from 2003 to 2016. These trials included 55,689 subjects, and the racial/ethnic breakdown of the group was as follows:
- Non-Hispanic white—83%
- African-American—6%
- Asian—5.3%
- Hispanic—2.6%
- “Other”—2.4%.
According to the US Census Bureau, as of July 1, 2016, the estimated total US population was 323,127,516. The racial/ethnic breakdown of that population is as follows:
- White alone (excluding Hispanics/Latinos)—61.3%
- Hispanic/Latino*—17.8%
- Black/African-American alone—13.3%
- Asian alone—5.7%
- American Indian/Alaska Native—1.3%
- Native Hawaiian/Other Pacific Islander—0.2%
- Two or more races—2.6%.
Dr Duma and her colleagues said their study suggests African-American and Hispanic representation in cancer trials has declined in recent years, when compared to historical data from 1996 to 2002.
In the 1996-2002 period, African-Americans represented 9.2% of patients in cancer trials (vs 6% in 2003-2016), and Hispanics represented 3.1% (vs 2.6% in 2003-2016).
On the other hand, the recruitment of Asians in cancer trials has more than doubled, from 2% in the historical data to 5.3% in the current data.
The current study also showed that elderly patients (age 65 and older) represented 36% of the subjects enrolled in cancer trials. In comparison, 15.2% of the total US population is 65 or older.
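As a rough way to quantify these comparisons, the sketch below computes a representation ratio (trial share divided by population share) from the figures reported above; a ratio below 1.0 indicates underrepresentation relative to the 2016 census estimates. This is an illustration of the arithmetic, not part of the study’s own analysis.

```python
# Representation ratios (trial share / US population share) computed from the
# percentages quoted above. Illustrative arithmetic only.

trial_share = {
    "Non-Hispanic white": 83.0,
    "African-American": 6.0,
    "Asian": 5.3,
    "Hispanic": 2.6,
    "Age 65 and older": 36.0,
}
population_share = {
    "Non-Hispanic white": 61.3,
    "African-American": 13.3,
    "Asian": 5.7,
    "Hispanic": 17.8,
    "Age 65 and older": 15.2,
}

for group, pct in trial_share.items():
    print(f"{group}: {pct / population_share[group]:.2f}")
# African-American ~0.45 and Hispanic ~0.15 (underrepresented);
# Asian ~0.93; non-Hispanic white ~1.35; age 65 and older ~2.37
```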
Previous research suggested the elderly are often underrepresented in clinical trials, despite the fact that most cancer cases are diagnosed in individuals age 65 and older, according to the National Cancer Institute’s Surveillance, Epidemiology and End Results database.
Dr Duma said the increasing use of genetic information in clinical trials may be decreasing the numbers of ethnic minorities and elderly patients. In recent years, researchers have sought to study drugs that treat cancers by targeting certain mutations. In order to identify the patients who are most likely to respond to the drugs, many trials now require molecular testing of tumors.
“This is leading to significant advances,” Dr Duma said. “However, it is vastly more expensive to run these trials, often leaving a limited budget to recruit patients or do outreach to the elderly or minorities.”
“Also, this type of testing can only be conducted at the major cancer centers. The mid-sized, regional hospitals are excluded because they don’t have the capacity, and, sadly, this leaves us farther away from these populations.”
Dr Duma added that cultural biases may also make minorities less likely to enroll in clinical trials. Previous research has indicated that members of certain minority groups may be less likely to trust healthcare providers.
Language barriers may also be a factor for minority patients, and the elderly may be dissuaded by difficulty in traveling to and from major cancer centers, Dr Duma noted.
She identified a few potential ways to narrow the participation gap in clinical trials:
- Increase clinical trial partnerships between major cancer centers and satellite hospitals. Dr Duma suggested that patients could be enrolled at their local hospital and undergo treatment there, while data could be sent to the partnering cancer center.
- Targeted interventions, such as Spanish interpreters, could be used to help enroll minority patients in clinical trials.
- Healthcare providers should be mindful of the need to enroll more patients from underrepresented populations and should be willing to discuss risks and benefits with patients.
Dr Duma said the main limitation of this study is that race and ethnicity are generally self-reported, which could lead to some inconsistencies in data.
*The US Census Bureau notes that Hispanics may be of any race, so they are also included in applicable race categories.
Dyslipidemia: Assessment and Treatment of Cardiovascular Risk: Applying 2016 ACC Recommendations
The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel.
ACS weighs in on surgeon workforce bill
The American College of Surgeons (ACS) submitted a statement for the record (available at facs.org/~/media/files/email/091417_workforce.ashx) September 14 to the U.S. House Committee on Energy and Commerce regarding its hearing on Supporting Tomorrow’s Health Providers: Examining Workforce Programs under the Public Health Service Act. The statement emphasizes that building a solid foundation of accurate and actionable workforce data is critical to making rational, informed decisions for building an optimal health care workforce. The ACS reiterates its support for the Ensuring Access to General Surgery Act of 2017 (H.R. 2906/ S.1351), sponsored by Reps. Larry Bucshon, MD, FACS (R-IN), and Ami Bera, MD (D-CA), and Sens. Charles Grassley (R-IA) and Brian Schatz (D-HI). This legislation would direct the Secretary of the U.S. Department of Health and Human Services (HHS), through the Health Resources and Services Administration, to conduct a study to define and identify general surgery workforce shortage areas. Additionally, it would grant the Secretary the authority to provide a general surgery shortage area designation.
The ACS maintains that a shortage of general surgeons is a critical component of the nation’s health care workforce crisis. Consequently, the ACS is urging policymakers to recognize, through the designation of a formal surgical shortage area, that surgeons are uniquely trained and qualified to provide certain necessary, lifesaving procedures. Surgeons play a pivotal role in the community-based health care system, but unlike other key community providers, surgery has no official shortage area designation.
The ACS encourages Fellows to contact their members of Congress through SurgeonsVoice (member login required) at www.surgeonsvoice.org to urge them to sign on in support of this legislation. For more information about surgical workforce shortage legislation, contact Carrie Zlatos, ACS Senior Congressional Lobbyist, at [email protected] or 202-672-1508.
Advantages of In-office Hysteroscopy in the Diagnosis of Abnormal Uterine Bleeding with Endosee
Click Here to Read the Supplement.
Topics include:
- The Endosee Office Procedure
- Patient Case Studies
- Advantages of Endosee Over Traditional Office Hysteroscopy
Ethan Goldstein, MD
Director of the Robotic
and Minimally Invasive Surgery Program
Detroit Medical Center’s Huron Valley-Sinai Hospital
Detroit, Michigan
Sleep Duration Affects Likelihood of Insomnia and Depression Remission
BOSTON—Objective sleep duration moderates the probability of remission among patients with comorbid depression and insomnia, according to research presented at the 31st Annual Meeting of the Associated Professional Sleep Societies. Sleep durations of greater than five to six hours increase the likelihood that these patients will achieve insomnia remission with cognitive behavioral therapy for insomnia (CBT-I), but do not affect the likelihood of depression remission. Sleep durations of seven or more hours optimize the likelihood of insomnia remission and depression remission in response to CBT-I.
In a 2015 joint consensus statement, the American Academy of Sleep Medicine and the Sleep Research Society recommended seven or more hours of sleep per night for adults younger than 60. Studies indicate that sleep durations of less than five hours and less than six hours are associated with increased morbidity and poor treatment response among patients with insomnia. “We wanted to know what [sleep-duration] cutoffs … might be better predictors of eventual insomnia and depression remission through treatment,” said Jack Edinger, PhD, Professor of Medicine at National Jewish Health in Denver.
An Analysis of the TRIAD Study
Dr. Edinger and colleagues conducted a secondary analysis of the TRIAD study, which examined whether combined treatment of depression and insomnia improves depression and sleep outcomes in participants with both disorders. Eligible participants met Diagnostic and Statistical Manual of Mental Disorders (4th ed.) criteria for major depression and primary insomnia, had a Hamilton Rating Scale for Depression (HAMD-17) score of 16 or greater, and had an Insomnia Severity Index (ISI) score of 11 or greater. People who had had psychotherapy in the previous four months, or had failed or could not tolerate previous adequate trials of the study medications, were excluded. Participants completed one night of baseline polysomnography before entering the treatment phase of the study.
The study population included 104 participants (75 women) with a mean age of 47. Mean baseline HAMD-17 score was 22, and mean baseline ISI score was 20.6. All participants received antidepressant medication (ie, citalopram, sertraline, or venlafaxine). Patients were randomized to CBT-I or sham (ie, a pseudodesensitization condition with sleep education). The investigators assessed participants biweekly with the HAMD-17 and the ISI. The treatment period lasted for 16 weeks.
CBT-I Provided Benefits
Participants with five or more hours of sleep were more likely to respond to CBT-I than participants with fewer than five hours of sleep. Among participants with sleep duration of five or more hours, insomnia remission was more likely with CBT-I than with the control condition. The five-hour cutoff had no association with depression remission.
Among participants with six or more hours of sleep, those who received CBT-I were more likely to achieve insomnia remission than controls. The six-hour cutoff did not affect the likelihood of depression remission, however.
Among participants with seven or more hours of sleep, those randomized to CBT-I were more likely to achieve insomnia remission and depression remission than controls.
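A minimal sketch of this cutoff-based stratification appears below: participants are grouped by whether their objective baseline sleep duration meets a given cutoff, and remission rates are compared between arms within each stratum. The patient records are hypothetical; only the 5-, 6-, and 7-hour cutoffs come from the analysis described above.

```python
# Sketch of cutoff-based stratification on objective sleep duration.
# Records below are hypothetical (hours of sleep, received CBT-I, remission);
# only the 5-, 6-, and 7-hour cutoffs reflect the analysis described above.

patients = [
    (4.5, True, False), (5.5, True, True), (6.2, False, False),
    (7.1, True, True), (7.4, False, True), (8.0, True, True),
]

def remission_rate(records, cutoff_hours, cbt_i):
    subset = [r for r in records if r[0] >= cutoff_hours and r[1] == cbt_i]
    return sum(r[2] for r in subset) / len(subset) if subset else float("nan")

for cutoff in (5, 6, 7):
    cbt = remission_rate(patients, cutoff, True)
    ctrl = remission_rate(patients, cutoff, False)
    print(f">= {cutoff} h: CBT-I {cbt:.2f} vs control {ctrl:.2f}")
```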
“More research is needed to determine how best to achieve depression remission in those patients with less than seven hours of objective sleep duration prior to starting treatment,” Dr. Edinger concluded.
—Erik Greb
Suggested Reading
Bathgate CJ, Edinger JD, Krystal AD. Insomnia patients with objective short sleep duration have a blunted response to cognitive behavioral therapy for insomnia. Sleep. 2017;40(1).
Vgontzas AN, Liao D, Bixler EO, et al. Insomnia with objective short sleep duration is associated with a high risk for hypertension. Sleep. 2009;32(4):491-497.
Watson NF, Badr MS, Belenky G, et al. Recommended amount of sleep for a healthy adult: A joint consensus statement of the American Academy of Sleep Medicine and Sleep Research Society. Sleep. 2015;38(6):843-844.
The many faces of dermoid
A 49-year-old woman with pelvic discomfort presents to her gynecologist. Physical exam suggests unilateral adnexal fullness; the gynecologist orders transvaginal pelvic ultrasonography.
A) Dermoid plug CORRECT
The most common appearance of an ovarian dermoid is a cystic lesion with a focal echogenic nodule protruding into the cyst (Rokitansky nodule).1
B) Tip-of-the-iceberg sign INCORRECT
The next most common appearance of an ovarian dermoid is a focal or diffuse hyperechoic mass with areas of sound attenuation from the sebaceous material and hair, often called the tip-of-the-iceberg sign.1
C) Dot-dash pattern INCORRECT
The third most common appearance of an ovarian dermoid is a cystic lesion with multiple thin echogenic bands (lines and dots) representing hair floating within the cyst.1
D) Fat-fluid level INCORRECT
The fourth most common appearance of an ovarian dermoid is a fat-fluid level, produced by the interface between echogenic sebum and hypoechoic serous fluid.1
1. Outwater EK, Siegelman ES, Hunt JL. Ovarian teratomas: tumor types and imaging characteristics. RadioGraphics. 2001;21(2):475–490.
How Does Cognitive Demand Affect Mobility in MS?
Patients with multiple sclerosis (MS) who have an Expanded Disability Status Scale (EDSS) score between 4 and 6 complete the Timed Up and Go (TUG) test significantly more slowly when a simple cognitive task is added, according to research published in the July–August issue of International Journal of MS Care. This reduction in performance “might have implications for a person’s more complex everyday activities,” the researchers said.
Patients with MS may develop cognitive impairment (eg, reduced processing speed or working memory), but standard cognitive assessments overlook how cognitive function affects mobility. To assess how the addition of a cognitive task affects mobility in patients with MS, George H. Kraft, MD, Emeritus Alvord Professor of MS Research at the University of Washington in Seattle, and colleagues conducted a study that included 52 adults with MS and 57 healthy controls. Participants had a mean age of about 47, and most were women.
The participants completed three versions of the TUG test: the standard test, the test plus reciting the alphabet, and the test plus subtracting from a number by threes. Times to complete the tests were compared between controls and three groups of participants with MS—those with an EDSS score of 0–3.5 (n = 26), those with an EDSS score of 4.0–5.5 (n = 11), and those with an EDSS score of 6 (n = 15).
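To make this comparison concrete, here is a minimal sketch of how completion times might be summarized by group and task condition. The column names and values are hypothetical and for demonstration only; they are not the published study's data or analysis code.

```python
# Illustrative sketch: summarize TUG completion time by EDSS group and
# task condition. All values below are made up for demonstration only.
import pandas as pd

records = pd.DataFrame({
    "group": ["control", "EDSS 0-3.5", "EDSS 4.0-5.5", "EDSS 6"] * 2,
    "condition": ["standard"] * 4 + ["alphabet"] * 4,
    "tug_seconds": [7.9, 8.1, 10.6, 11.0, 8.2, 8.4, 11.8, 12.3],
})

# Mean time per group under each condition; a larger jump from the
# standard to the dual-task condition suggests a greater effect of
# cognitive demand on mobility.
summary = records.pivot_table(index="group", columns="condition",
                              values="tug_seconds", aggfunc="mean")
print(summary)
```

The same layout extends naturally to the serial-subtraction condition reported in the study.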
Overall mean times for the four groups were 8.0, 8.2, 11.1, and 11.6 seconds, respectively. Controls did not differ from people with MS without mobility problems (ie, those with an EDSS score of 0–3.5), but did differ from the other two groups.
“Individuals with MS and no mobility problems have ... very little increase in time due to the addition of cognitive tasks to the TUG test. The two more severe groups perform similarly to each other, with a steeper increase in time to perform the test when the cognitive demand increases,” the researchers said. “Although we cannot automatically generalize the results to more complex everyday activities, such as walking or driving a car while talking on a cell phone, the reduction in performance is an important issue that should be discussed with the patient and his or her caregiver.”
—Jake Remaly
Suggested Reading
Ciol MA, Matsuda PN, Khurana SR, et al. Effect of cognitive demand on functional mobility in ambulatory individuals with multiple sclerosis. Int J MS Care. 2017;19(4):217-224.