SelG1 cut pain crises in sickle cell disease
The humanized antibody SelG1 decreased the frequency of acute pain episodes in people with sickle cell disease, based on results from the multinational, randomized, double-blind, placebo-controlled SUSTAIN study that will be presented at the annual meeting of the American Society of Hematology in San Diego.
In other sickle cell disease research at the meeting, investigators will present new findings from two studies conducted in Africa. One examines a team approach to reducing mortality in pregnant women with sickle cell disease in Ghana. The other, called SPIN, is a safety and feasibility study conducted in advance of a randomized trial in Nigerian children at risk for stroke.
After 1 year, the annual rate of sickle cell–related pain crises resulting in a visit to a medical facility was 1.6 in the group receiving the 5 mg/kg dose, compared with 3 in the placebo group. The 47% difference was statistically significant (P = .01).
Also, time to first pain crisis was a median of 4 months in those who received the 5 mg/kg dose and 1.4 months for those in the placebo group (P = .001).
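For readers who want to check the arithmetic behind the headline figure, the 47% difference is simply the relative reduction in the annualized crisis rate. The short Python sketch below reproduces it from the rounded rates quoted above; the trial’s exact, unrounded rates may differ slightly, so treat this as an illustration rather than a recalculation of the study result.

```python
# Back-of-envelope check of the reported relative reduction in pain-crisis rate.
# The rates are the rounded figures quoted above; the trial's exact rates may
# differ slightly, so this is illustrative only.

selg1_rate = 1.6      # annualized crisis rate, 5 mg/kg SelG1 group
placebo_rate = 3.0    # annualized crisis rate, placebo group

relative_reduction = (placebo_rate - selg1_rate) / placebo_rate
print(f"Relative reduction: {relative_reduction:.0%}")  # prints "Relative reduction: 47%"
```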
Infections were not increased in either of the groups randomized to SelG1, and no treatment-related deaths occurred during the study. The first-in-class agent “appears to be safe and well tolerated,” as well as effective in reducing pain episodes, lead investigator Kenneth I. Ataga, MD, of the University of North Carolina at Chapel Hill, and his colleagues wrote in their abstract.
In the Nigerian trial, led by Najibah Aliyu Galadanci, MD, MPH, of Bayero University in Kano, Nigeria, the goal was to determine whether families of children with sickle cell disease and transcranial Doppler measurements indicative of increased risk for stroke could be recruited and retained in a large clinical trial, and whether they could adhere to the medication regimen. The trial also obtained preliminary evidence for hydroxyurea’s safety in this clinical setting, where transfusion therapy is not an option for most children.
Dr. Galadanci and her colleagues approached 375 families for transcranial Doppler screening, and 90% accepted. Among families of children found to have elevated measures of risk on transcranial Doppler, 92% participated in the study and received a moderate dose of hydroxyurea (20 mg/kg) for 2 years. A comparison group included 210 children without elevated measures on transcranial Doppler. These children underwent regular monitoring but were not offered medication unless transcranial Doppler measures were found to be elevated.
Study adherence was exceptionally high: the families missed no monthly research visits, and no participants in the active treatment group dropped out voluntarily.
Also, at 2 years, the children treated with hydroxyurea did not have evidence of excessive toxicity, compared with the children who did not receive the drug. “Our results provide strong preliminary evidence supporting the current multicenter randomized controlled trial comparing hydroxyurea therapy (20 mg/kg per day vs. 10 mg/kg per day) for preventing primary strokes in children with sickle cell anemia living in Nigeria,” Dr. Galadanci and her colleagues wrote in their abstract.
In the third study, a multidisciplinary team decreased mortality in pregnant women who had sickle cell disease and lived in low- and middle-income settings, according to Eugenia Vicky Naa Kwarley Asare, MD, of the Ghana Institute of Clinical Genetics and the Korle-Bu Teaching Hospital in Accra.
The prospective trial was conducted in Ghana, where maternal mortality among women with sickle cell disease is estimated at 8,300 per 100,000 live births, compared with 690 per 100,000 for women without the disease. The multidisciplinary team assembled by Dr. Asare and her colleagues included obstetricians, hematologists, pulmonologists, and nurses, and the planned intervention protocols included a number of changes to make management more consistent and intensive. A total of 154 pregnancies were evaluated before the intervention, and 91 after. Median gestational age at enrollment was 24 weeks, and median maternal age was 29 years, in both the pre- and post-intervention cohorts.
Maternal mortality was 9.7% of total deliveries (15 of 154) before the intervention and 1.1% (1 of 91) after.
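The mortality percentages and the background disparity cited above can be verified with simple arithmetic; the sketch below uses only the counts and rate estimates reported here and introduces no new data.

```python
# Sanity check of the maternal mortality figures quoted above. Counts come from
# the abstract as reported here; the per-100,000 figures are the background
# estimates cited in the article, not new data.

pre_deaths, pre_deliveries = 15, 154
post_deaths, post_deliveries = 1, 91

print(f"Pre-intervention mortality:  {pre_deaths / pre_deliveries:.1%}")    # 9.7%
print(f"Post-intervention mortality: {post_deaths / post_deliveries:.1%}")  # 1.1%

# Background disparity in Ghana (deaths per 100,000 live births)
scd_rate, non_scd_rate = 8_300, 690
print(f"With vs. without sickle cell disease: ~{scd_rate / non_scd_rate:.0f}-fold")  # ~12-fold
```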
Dr. Ataga’s study was sponsored by Selexys Pharmaceuticals, the drug’s manufacturer, and included coinvestigators who are employees of Selexys Pharmaceuticals or who disclosed relationships with other drug manufacturers. Dr. Galadanci’s and Dr. Asare’s groups disclosed no conflicts of interest.
FROM ASH 2016
Toddler gaze patterns heritable, stable over time
NEW YORK – A team of autism researchers has found that patterns of social-visual engagement are markedly more similar among identical twin toddlers than among fraternal twins.
Social-visual engagement (SVE), which can be measured using eye-tracking technology, is the preferential attention humans give to social stimuli – in particular, to people’s eyes and mouths, which provide important information for communication.
Lower levels of SVE have been shown to be associated with the later development of autism, even in children just a few months old (Nature. 2013 Dec 19;504:427-31). “But what hasn’t been shown until now is that this measure relates to genetics,” said Natasha Marrus, MD, PhD, of the department of psychiatry at Washington University in St. Louis.
The identical twins, who share 100% of their genes, “showed much more similar levels of social-visual engagement than fraternal twins,” Dr. Marrus said, with an intraclass correlation coefficient (ICC) of 0.91 (95% confidence interval, 0.85-0.95) for time spent looking at eyes, compared with 0.35 (95% CI, 0.07-0.59) for fraternal twins. Similar results were obtained for the caregiver questionnaire, suggesting strong genetic influences on both early reciprocal social behavior and SVE, she said.
At 36 months, 69 of the twin pairs were reevaluated. The investigators again found significantly greater SVE concordance for the identical twins: ICC, 0.93 (95% CI, 0.75-0.98), compared with ICC, 0.25 (95% CI, 0.0-0.60) for fraternal twins. They also found SVE patterns strongly correlated between 21 and 36 months for individual twins, indicating traitlike stability of this behavior over time.
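One conventional way to read paired ICCs like these is the classic Falconer approximation for heritability, h² ≈ 2 × (r_identical − r_fraternal). The sketch below applies that formula to the rounded ICCs quoted above purely as a back-of-the-envelope illustration; it is not the investigators’ own analysis, and values above 1 are conventionally capped at 1.

```python
# Illustration only: the classic Falconer approximation for heritability from
# twin correlations, h2 ~= 2 * (r_identical - r_fraternal), applied to the
# rounded ICCs quoted above. The investigators' own modeling may differ.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer estimate, capped at 1.0 since heritability cannot exceed 100%."""
    return min(1.0, 2 * (r_mz - r_dz))

# Eye-looking ICCs at the initial assessment (identical vs. fraternal twins)
print(falconer_h2(0.91, 0.35))  # 1.0 -> consistent with very strong genetic influence

# ICCs at the 36-month reevaluation
print(falconer_h2(0.93, 0.25))  # 1.0
```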
“These two measures that are heritable, like autism, can be measured in a general population sample, which means they show good variability – potentially allowing the detection of subtle differences that may correspond to levels of risk for autism,” Dr. Marrus said. “By 18-21 months, the risk markers for later autism are already there – if you use a nuanced enough measure to detect them.”
While some practitioners have been able to reliably diagnose autism in children younger than 24 months, “it’s usually with the most severe cases,” Dr. Marrus said. “But 18 months is a big time for social as well as language development, which becomes easier to measure at that point.”
A future direction for study, she said, “would be to go earlier. If we’re seeing this at 18 months, maybe we’d see it at 12.”
With autism, “early intervention is key, and even 6 months could make a difference,” Dr. Marrus said. “These two measures stand a really good chance of telling us important things about autism – which at early ages means better diagnostic prediction, measurement of severity and risk, and the potential to monitor responses to interventions.”
The National Institutes of Health supported the study through a grant to coinvestigator John N. Constantino, MD, of Washington University, and Dr. Marrus’s work was supported with a postdoctoral fellowship from the Autism Science Foundation. The investigators declared no relevant financial conflicts.
AT AACAP 2016
Targeting HER1/2 falls flat in bladder cancer trial
Patients with metastatic urothelial bladder cancer (UBC) overexpressing HER1 or HER2 did not benefit from a course of lapatinib maintenance therapy following chemotherapy, a U.K.-based research group reported.
The phase III study, led by Thomas Powles, MD, of Queen Mary University of London, randomized 232 patients (mean age 71, about 75% male) with HER1- or HER2-positive metastatic UBC who had not progressed during platinum-based chemotherapy to placebo or lapatinib (Tykerb), an oral medication that targets HER1 and HER2 and is marketed for use in some breast cancers. The lapatinib-treated group saw no significant gains in either progression-free (PFS) or overall survival (OS), Dr. Powles and associates reported (J Clin Oncol. 2016 Oct 31. doi: 10.1200/JCO.2015.66.3468).
The median PFS for patients receiving lapatinib 1,500 mg daily was 4.5 months, compared with 5.1 months for the placebo group (hazard ratio, 1.07; 95% CI, 0.81-1.43; P = .63), while OS after chemotherapy was 12.6 and 12 months, respectively (HR 0.96; 95% CI, 0.70-1.31; P = .80). A subgroup of patients strongly positive for either or both receptors did not see significant OS or PFS benefit associated with lapatinib, a finding that the investigators said reinforced a lack of benefit.
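As a rough plausibility check on the reported P value, the hazard ratio and its confidence interval alone are enough to approximate it under the usual log-normal assumption for hazard ratios. The sketch below is an illustrative back-calculation, not the trial’s actual statistical model.

```python
# Illustrative back-calculation of the PFS P value from the hazard ratio and
# its 95% CI, assuming log(HR) is approximately normally distributed. This is
# a plausibility check, not the trial's actual analysis.
import math

hr, lo, hi = 1.07, 0.81, 1.43

se = (math.log(hi) - math.log(lo)) / (2 * 1.96)       # implied standard error of log(HR)
z = math.log(hr) / se                                 # Wald statistic
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # two-sided normal P value

print(f"z = {z:.2f}, p ~ {p:.2f}")  # about 0.64, in line with the reported P = .63
```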
While previous studies have indicated roles for both HER1 and HER2 in bladder cancer progression, targeting them “may not be of clinical benefit in UBC,” Dr. Powles and his colleagues wrote.
Patients with metastatic UBC have short overall survival following first-line chemotherapy, and few proven second-line treatment options exist besides additional chemotherapy, whose benefit is controversial, the researchers noted.
Despite this trial’s negative result for postchemotherapy maintenance treatment with lapatinib, Dr. Powles and his colleagues said their study, which screened 446 patients with metastatic UBC before randomizing slightly more than half, nonetheless shed some light on this difficult-to-treat patient group, including identifying three prognostic factors associated with poor outcome: radiologic progression during chemotherapy, visceral metastasis, and poor performance status. Also, they noted, 61% of the screened patients received cisplatin chemotherapy, and 48% had visceral metastasis, “which gives some insight into the current population of patients who receive chemotherapy.”
GlaxoSmithKline and Cancer Research U.K. sponsored the study. Dr. Powles and several coauthors disclosed financial support from GlaxoSmithKline and other pharmaceutical firms.
FROM JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: Treatment with lapatinib after chemotherapy does not improve survival in people with HER1- or HER2-positive metastatic urothelial bladder cancer.
Major finding: Median progression-free survival for lapatinib was 4.5 months (95% CI, 2.8-5.4), compared with 5.1 months (95% CI, 3.0-5.8) for placebo (HR, 1.07; 95% CI, 0.81-1.43; P = .63).
Data source: A randomized, placebo-controlled trial in which 232 patients with HER1- or HER2-positive disease were assigned treatment with lapatinib (n = 116) or placebo (n = 116) after platinum-based chemotherapy.
Disclosures: GlaxoSmithKline and Cancer Research U.K. sponsored the study. Dr. Powles and several coauthors disclosed financial support from GlaxoSmithKline and other pharmaceutical firms.
Young adults and anxiety: Marriage may not be protective
A new study of anxiety disorders among young adults aged 18-24 shows that the illnesses are less prevalent among African American and Hispanic young adults, compared with whites. Furthermore, anxiety disorders are 1.5 times as prevalent among married people in this age group, compared with their unmarried peers.
For their research, presented at the annual meeting of the American Academy of Child and Adolescent Psychiatry, Cristiane S. Duarte, PhD, MPH, of Columbia University, New York, and her colleagues looked at data from the 2012/2013 National Epidemiologic Survey on Alcohol and Related Conditions (NESARC), a nationally representative sample of U.S. households.
“We were trying to look specifically at young adulthood, which there’s emerging consensus to regard as a key developmental period,” said Dr. Duarte, whose research focuses on anxiety disorders in young adults. “It’s a period where several psychiatric disorders tend to become much more prevalent. Having untreated anxiety disorders at this age can put young adults at risk for worse outcomes down the line. If anxiety disorders can be resolved, a young adult’s trajectory can be quite different; it’s a time in life in which the right intervention can have a really big impact,” she said.
The NESARC-III survey used structured diagnostic interviews and DSM-5 criteria to assess anxiety disorders occurring in the past year, including specific phobia, generalized anxiety disorder, social anxiety disorder, panic disorder, and agoraphobia.
For the most part, Dr. Duarte said, her group’s findings on anxiety disorders reflected earlier prevalence studies that had used DSM-IV criteria. Women were more likely than men to report any past-year anxiety disorder (odds ratio, 2.26; 95% confidence interval, 1.80-2.84). Rates of anxiety disorders were also highest among people with the lowest personal and family incomes and among those neither employed nor in an educational program.
Dr. Duarte said in an interview that the latter findings were generally anticipated. However, the finding that African Americans and Hispanics in this age group had lower risk relative to whites (OR, 0.52; 95% CI, 0.40-0.67 and OR, 0.63; 95% CI, 0.49-0.83, respectively) was interesting, because it appeared to mirror the lower relative prevalence seen among adults in those two groups, rather than the higher prevalence seen among children in the same groups. More research will be needed, she said, to verify and, if correct, understand this reversal in relative prevalence between childhood and adulthood.
The study’s most unexpected finding, Dr. Duarte said, was that married individuals aged 18-24 had higher prevalence of anxiety (OR, 1.54; 95% CI, 1.05-2.26). “Across the board, marriage is protective for many health and mental health conditions,” Dr. Duarte said, but she acknowledged that many factors could be in play. Marriage might not, in fact, be protective in this age group; the institution might be reflective of cultural factors promoting early marriage; or the findings could reflect a selection into marriage possibly related to existing anxiety disorders.
“To better understand this finding, we will need to consider several complexities which are part of young adulthood as a unique developmental period,” she said.
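A note on the phrase “1.5 times as prevalent”: the reported figure is an odds ratio, which approximates a prevalence ratio only when the outcome is relatively uncommon. The sketch below shows the standard conversion; the baseline prevalence used is an assumed, illustrative value, not a figure from the NESARC analysis.

```python
# How an odds ratio relates to a prevalence (risk) ratio. The baseline
# prevalence below is an assumed, illustrative value, not a figure from the
# NESARC analysis; the conversion is the standard Zhang-Yu formula.

def or_to_prevalence_ratio(odds_ratio: float, baseline_prevalence: float) -> float:
    return odds_ratio / (1 - baseline_prevalence + baseline_prevalence * odds_ratio)

odds_ratio = 1.54        # married vs. unmarried young adults, as reported above
assumed_baseline = 0.15  # hypothetical past-year anxiety prevalence among the unmarried

ratio = or_to_prevalence_ratio(odds_ratio, assumed_baseline)
print(f"Approximate prevalence ratio: {ratio:.2f}")  # ~1.42, slightly below the odds ratio
```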
Dr. Duarte’s and her colleagues’ study was funded by the Youth Anxiety Center at New York–Presbyterian Hospital. Three coauthors reported research support from pharmaceutical manufacturers and royalties from commercial publishers.
FROM AACAP 2016
Key clinical point: Anxiety disorders are less prevalent among African American and Hispanic young adults than among their white peers, and more prevalent among married young adults than among their unmarried peers.
Major finding: African American and Hispanic young adults had a lower risk of anxiety disorders relative to their white peers (OR, 0.52; 95% confidence interval, 0.40-0.67 and OR, 0.63; 95% CI, 0.49-0.83, respectively). In addition, married individuals aged 18-24 had a higher prevalence of anxiety (OR, 1.54; 95% CI, 1.05-2.26) than did their unmarried peers.
Data source: Data from the National Epidemiologic Survey on Alcohol and Related Conditions, a nationally representative sample of U.S. households.
Disclosures: The Youth Anxiety Center at New York–Presbyterian Hospital funded the study. Three coauthors reported research support from pharmaceutical manufacturers and royalties from commercial publishers.
Homeless youth and risk: Untangling role of executive function
NEW YORK – Researchers studying the executive functioning ability of homeless youth have found that individuals with poor executive function (EF) report more alcohol abuse and dependence than do those with higher EF.
The results are from a study of 149 youth aged 18-22 years (53% female) living in shelters in Chicago. Subjects self-reported behaviors in a series of interviews that used three validated measures of executive function.
Scott J. Hunter, Ph.D., director of neuropsychology at the University of Chicago, presented the findings at the annual meeting of the American Academy of Child and Adolescent Psychiatry. Dr. Hunter said in an interview that the results help identify low executive functioning as both a likely contributor to risk-taking behavior and a potential target of interventions.
“We believe that the EF may be the primary concern, although the interaction [with drugs and alcohol] is something that we have to take into account,” he said. “One of the biggest issues here is how do you disentangle that executive piece with the use of substances?”
In this cohort, Dr. Hunter said, about 75% of subjects were African American and an additional 25% or so were mixed race or Latino. About half identified as a sexual minority (gay, lesbian, bisexual, or transgender). “Many had been kicked out of their homes,” he said.
Close to 80% of the youth in the study used cannabis regularly, and three-quarters used alcohol. The group with low EF reported the heaviest regular substance use. Reports of unprotected sexual intercourse were also highest among the heavier substance users, suggesting “a reliance on substances to reduce sensitivity to the risks they were taking,” said Dr. Hunter, also a professor in the departments of psychiatry and behavioral neuroscience, and pediatrics at the university.
He said the study “is providing some support for our hypothesis that the less successful these young people are in their development of EF, particularly around inhibition, the more likely it is they are going to be engaging in risk-taking behaviors that lead to cycles of more challenge” and development of psychopathology.
The researchers are considering an intervention for this population derived from EF interventions for use with adolescents with attention-deficit/hyperactivity disorder. In their current shelter environments, he said, the youth are “already undergoing programs to learn adaptive functioning to be more successful, and we’re thinking of adding an executive component where they tie the decision-making component to what they want as outcomes.”
The prefrontal cortex of the brain, which controls executive function, is not yet fully developed in adolescence, and studies have shown that youth growing up in impoverished environments have decreases or alterations in cortical development (Front Hum Neurosci. 2012 Aug 17;6:238). “What we have to think about is that we’re still at a [developmental] point where this enhancement and myelination is taking place – into the mid-20s, in fact. We may find that [an intervention] can help them better activate that,” Dr. Hunter said.
The lead author on this study was Joshua Piche, a medical student at the University of Chicago.
Dr. Hunter also is collaborating with epidemiologist John Schneider, MD, MPH, of the University of Chicago, in a study of 600 young black men who have sex with men. The researchers are looking at drug-related, alcohol-related, and sexual decision-making in that cohort, about a quarter of whom are homeless. The study includes functional magnetic resonance imaging in a subgroup of subjects.
Currently, as many as 2 million U.S. youth are estimated to be living on the streets, in shelters, or in other temporary housing environments.
Experts outline phenotype approach to rosacea
A phenotype approach should be used to diagnose and manage rosacea, according to an expert panel that included 17 dermatologists from North America, Europe, Asia, Africa, and South America.
“As individual treatments do not address multiple features simultaneously, consideration of specific phenotypical issues facilitates individualized optimization of rosacea,” the panel concluded. As individual presentations of rosacea can span more than one of the currently defined disease subtypes, and vary widely in severity, dermatologists have long expressed a need to move to a phenotype-based system for diagnosis and classification.
The goal of the panel was “to establish international consensus on diagnosis and severity determination to improve outcomes” for people with rosacea (Br J Dermatol. 2016 Oct 8. doi: 10.1111/bjd.15122).
Jerry L. Tan, MD, of the University of Western Ontario, Windsor, and coauthors explained why they considered a transition to the phenotype-based approach important: “Subtype classification may not fully cover the range of clinical presentations and is likely to confound severity assessment, whereas a phenotype-based approach could improve patient outcomes by addressing an individual patient’s clinical presentation and concerns.”
The panel identified two phenotypes as independently diagnostic of rosacea: persistent, centrofacial erythema associated with periodic intensification, and phymatous changes. Flushing or transient erythema, telangiectasia, inflammatory lesions, and ocular manifestations – the other phenotypes identified in the study – were not considered individually diagnostic.
Severity measurements for each phenotype were defined with a high degree of consensus, and the panel agreed that the severity of each feature should be rated independently and not grouped into subtype. For flushing or transient erythema, for example, the panel recommended that clinicians consider the intensity and frequency of episodes along with the area of involvement. For phymatous changes, inflammation, skin thickening, and deformation were identified as the key severity measures.
Although the investigators acknowledged that their expert consensus was the product of clinical opinion in the absence of extensive evidence, they cited as one of the study’s strengths its broad expert representation across geographical regions, where rosacea presentations may differ. Erythema and telangiectasia, Dr. Tan and colleagues wrote, “may not be visible in skin phototypes V and VI, an issue that may be overcome with experience and appropriate history taking.” They added that “other techniques, including skin biopsy, can also be considered for diagnostic support.” They recommended the development of new validated scales to be used in darker-skinned patients.
The panel also identified the psychosocial impact of rosacea as one severely understudied area of rosacea, and advocated the development of a new research tool that would assess psychological comorbidities. The proposed tool, they wrote, “should go beyond those currently available and assess the psychosocial impact for all major phenotypes.” The only rosacea-specific quality of life scoring measure, RosaQoL, contains notable deficiencies, they noted, including a lack of a measure for phymatous changes.
“Since clinicians and patients often have disparate views of disease,” the researchers wrote, “objective and practical tools based on individual presenting features are likely to be of value in setting treatment targets and monitoring treatment progress for patients with rosacea.”
The panel also included three ophthalmologists from Germany and the United States; their recommendations were considered exploratory.
The study, which consisted of both electronic surveys and in-person meetings, was funded by Galderma. Twelve of its coauthors, including Dr. Tan, disclosed financial relationships with manufacturers.
FROM THE BRITISH JOURNAL OF DERMATOLOGY
Key clinical point: Rosacea diagnosis, severity grading, and management should be based on disease phenotypes, which can span more than one of the currently recognized subtypes.
Major finding: Persistent centrofacial erythema with periodic intensification, and phymatous changes, are the two phenotypes independently diagnostic of rosacea.
Data source: An expert panel of 17 dermatologists from North America, Europe, Asia, Africa, and South America.
Disclosures: Galderma sponsored the study, for which all authors received honoraria; 12 disclosed additional funding from Galderma or other manufacturers.
Calcium channel blocker reduces cardiac iron loading in thalassemia major
The calcium channel blocker amlodipine, added to iron chelation therapy, significantly reduced excess myocardial iron concentration in patients with thalassemia major, compared with chelation alone, according to results from a randomized trial.
The findings (Blood. 2016;128[12]:1555-61) suggest that amlodipine, a cheap, widely available drug with a well-established safety profile, may serve as an adjunct to standard treatment for people with thalassemia major and cardiac siderosis. Cardiovascular disease caused by excess myocardial iron remains a major cause of morbidity and mortality in thalassemia major.
Juliano L. Fernandes, MD, PhD, of the Jose Michel Kalaf Research Institute in Campinas, Brazil, led the study, which randomized 62 patients already receiving chelation treatment for thalassemia major to 1 year of chelation plus placebo (n = 31) or chelation plus 5 mg daily amlodipine (n = 31).
Patients in each arm were subdivided into two subgroups: those whose baseline myocardial iron concentration was within normal thresholds, and those with excess myocardial iron concentration as measured by magnetic resonance imaging (above 0.59 mg/g dry weight or with a cardiac T2* below 35 milliseconds).
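Expressed as code, the subgrouping rule is a two-condition check. The helper below is a minimal sketch: the thresholds come from the description above, while the function name and the handling of missing measurements are assumptions for illustration.

```python
# Minimal sketch of the trial's subgrouping rule as described in the text: a patient falls
# into the excess-cardiac-iron subgroup if MRI-derived myocardial iron concentration (MIC)
# exceeds 0.59 mg/g dry weight or cardiac T2* is below 35 ms. Names and the treatment of
# missing values are illustrative assumptions.
from typing import Optional

def has_excess_cardiac_iron(mic_mg_per_g_dw: Optional[float] = None,
                            t2_star_ms: Optional[float] = None) -> bool:
    """Return True if either MRI measure crosses the reported threshold."""
    if mic_mg_per_g_dw is not None and mic_mg_per_g_dw > 0.59:
        return True
    if t2_star_ms is not None and t2_star_ms < 35.0:
        return True
    return False

print(has_excess_cardiac_iron(mic_mg_per_g_dw=0.85))  # True: iron concentration above 0.59 mg/g
print(has_excess_cardiac_iron(t2_star_ms=42.0))       # False: T2* above the 35 ms cutoff
```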
In the amlodipine arm, patients with excess cardiac iron at baseline (n = 15) saw significant reductions in myocardial iron concentrations at 1 year, compared with those randomized to placebo (n = 15). The former had a median reduction of –0.26 mg/g (95% confidence interval, –1.02 to –0.01) while the placebo group saw an increase of 0.01 mg/g (95% CI, –0.13 to 0.23; P = .02).
The investigators acknowledged that some of the findings were limited by the study’s short observation period.
Patients without excess myocardial iron concentration at baseline did not see significant changes associated with amlodipine. While Dr. Fernandes and his colleagues could not conclude that the drug prevented excess cardiac iron from accumulating, they noted that “our data cannot rule out the possibility that extended use of amlodipine might prevent myocardial iron accumulation with a longer observation period.”
Secondary endpoints of the study included measurements of iron storage in the liver and of serum ferritin, neither of which appeared to be affected by amlodipine treatment, which the investigators said was consistent with the drug’s known mechanism of action. No serious adverse effects were reported related to amlodipine treatment.
Dr. Fernandes and his colleagues also did not find improvements in left ventricular ejection fraction associated with amlodipine use at 12 months. This may be due, they wrote in their analysis, to a “relatively low prevalence of reduced ejection fraction or severe myocardial siderosis upon trial enrollment, limiting the power of the study to assess these outcomes.”
The government of Brazil and the Sultan Bin Khalifa Translational Research Scholarship sponsored the study. Dr. Fernandes reported receiving fees from Novartis and Sanofi. The remaining 12 authors disclosed no conflicts of interest.
Why is this small clinical trial of such pivotal importance in this day and age of massive multicenter prospective randomized studies? The answer is that it tells us that iron entry into the heart through L-type calcium channels, a mechanism that has been clearly demonstrated in vitro, seems to be actually occurring in humans. As an added bonus, we have a possible new adjunctive treatment of iron cardiomyopathy. More clinical studies are needed, and certainly biochemical studies need to continue because all calcium channel blockers do not have the same effect in vitro, but at least the “channels” for more progress on both clinical and biochemical fronts are now open.
Thomas D. Coates, MD, is with Children’s Hospital of Los Angeles and University of Southern California, Los Angeles. He made his remarks in an editorial that accompanied the published study.
FROM BLOOD
Key clinical point: Amlodipine added to standard chelation therapy significantly reduced cardiac iron in thalassemia major patients with cardiac siderosis.
Major finding: At 12 months, cardiac iron was a median 0.26 mg/g lower in subjects with myocardial iron overload treated with 5 mg daily amlodipine plus chelation, while patients treated with chelation alone saw a 0.01 mg/g increase (P = .02).
Data source: A randomized, double-blind, placebo-controlled trial enrolling 62 patients with thalassemia major from six centers in Brazil, about half with cardiac siderosis at baseline.
Disclosures: The Brazil government and the Sultan Bin Khalifa Translational Research Scholarship sponsored the investigation. Its lead author reported receiving fees from Novartis and Sanofi. Other study investigators and the author of a linked editorial declared no conflicts of interest.
Steroids could reduce death rate for TB patients with acute respiratory failure
Tuberculosis patients admitted to intensive care units with acute respiratory failure had significantly better survival at 90 days after treatment with corticosteroids and anti-TB drugs, compared with patients not treated with the steroids, according to a retrospective study.
An adjusted inverse probability of treatment weighted analysis using propensity scores revealed corticosteroid use to be independently associated with a significantly reduced 90-day mortality rate (OR = 0.47; 95% CI, 0.22-0.98). This statistical approach was used because it reduces selection bias and other potential confounding factors in a way that a multivariate analysis cannot, wrote Ji Young Yang, MD, of Busan (South Korea) Paik Hospital and Inje University College of Medicine in Busan.
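For readers unfamiliar with the technique, the sketch below shows generic inverse probability of treatment weighting with propensity scores in Python. The toy data, column names, and covariates are hypothetical placeholders, not the study’s actual variables, and scikit-learn’s default regularization makes the output illustrative only.

```python
# Generic sketch of inverse-probability-of-treatment weighting (IPTW) with propensity
# scores. The toy data and column names ("steroid", "age", "apache_ii", "died_90d") are
# hypothetical placeholders, not the variables used in the study.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "steroid":   [1, 0, 1, 0, 1, 0, 1, 0],
    "age":       [55, 70, 62, 58, 66, 73, 49, 81],
    "apache_ii": [18, 25, 20, 15, 22, 27, 12, 30],
    "died_90d":  [0, 1, 0, 0, 1, 1, 0, 1],
})

# 1. Propensity score: modeled probability of receiving steroids given baseline covariates.
ps_model = LogisticRegression().fit(df[["age", "apache_ii"]], df["steroid"])
ps = ps_model.predict_proba(df[["age", "apache_ii"]])[:, 1]

# 2. IPTW weight: inverse probability of the treatment each patient actually received.
weights = np.where(df["steroid"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# 3. Weighted outcome model: the steroid coefficient now reflects a pseudo-population in
#    which the measured confounders are balanced between treated and untreated patients.
outcome_model = LogisticRegression().fit(df[["steroid"]], df["died_90d"],
                                         sample_weight=weights)
print("weighted odds ratio for steroids:", float(np.exp(outcome_model.coef_[0, 0])))
```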
Mortality rates were similar between the steroid-treated and non–steroid-treated groups (48.6% and 50%, respectively), and unadjusted 90-day mortality risk was not affected by steroid administration (odds ratio, 0.94; 95% CI, 0.46-1.92; P = .875), reported Dr. Yang and colleagues (Clin Infect Dis. 2016 Sep 8. doi: 10.1093/cid/ciw616).
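The unadjusted estimate can be checked in a couple of lines from the reported group mortality rates:

```python
# Check of the unadjusted odds ratio from the reported 90-day mortality rates.
p_steroid, p_control = 0.486, 0.50
odds_ratio = (p_steroid / (1 - p_steroid)) / (p_control / (1 - p_control))
print(round(odds_ratio, 2))  # 0.95, consistent (after rounding) with the reported OR of 0.94
```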
The study involved the examination of records of 124 patients (mean age 62, 64% men) admitted to a single center over a 25-year period ending in 2014. Of these, 56.5% received corticosteroids, and 49.2% of the cohort died within 90 days.
The investigators acknowledged that their study was limited by various factors, including its small size, its use of data from a single center, and its lack of a standardized approach to steroid treatment.
“Further prospective randomized controlled trials will therefore be necessary to clarify the role of steroids in the management of these patients,” they wrote in their analysis. However, Dr. Yang and colleagues argued, in acute respiratory failure – a rare but dangerous complication in TB – “corticosteroids represent an attractive option because they can suppress cytokine expression and are effective in managing the inflammatory complications of extrapulmonary tuberculosis. Moreover, corticosteroids have recently been shown to reduce mortality or treatment failure in patients with tuberculosis or severe pneumonia.”
Robert C. Hyzy, MD, director of the critical care medicine unit at the University of Michigan, Ann Arbor, said the findings “should be considered hypothesis generating.
“Clinicians should wait for prospective validation of this observation before considering the use of corticosteroids in hospitalized patients with tuberculosis,” he added.
Dr. Yang and colleagues disclosed no conflicts of interest or outside funding for their study.
Key clinical point: Corticosteroids used in combination with anti-TB treatment appeared to lower 90-day mortality in TB patients with ARF.
Major finding: Reduced 90-day mortality was associated with corticosteroid use (odds ratio, 0.47; 95% CI, 0.22-0.98; P = .049).
Data source: A retrospective cohort study of 124 patients admitted to intensive care units with TB and ARF in a single Korean center from 1989 to 2014.
Disclosures: The investigators reported no outside funding or conflicts of interest.
The fast-changing world of lower-limb atherectomy
Decisions in popliteal or below-knee atherectomy can be complicated by a wide array of devices and lesion types.
Limited data on the long-term durability of interventions, or direct comparisons of approaches, can also complicate the decision-making process, as do cost concerns.
In his Sunday, September 18 talk at VIVA, titled “Popliteal and Below-the-Knee Atherectomy – Which Tool in Which Circumstance and When Not to Bother,” James F. McKinsey, MD, aims to help clinicians navigate this quickly changing field, with updates on emerging technologies.
Directional atherectomy and laser devices continue to undergo innovation, with new devices introduced almost annually. The changing device picture can be confusing, acknowledged Dr. McKinsey of Mt. Sinai Health System in New York. “I am well versed with many of them because I have a high volume. But people with just a few cases a month may not be,” he said.
Lower volume practitioners “need to find at least one, if not two, devices that they are going to be comfortable with,” Dr. McKinsey said, noting that each is associated with a special technique and may require additional support or set-up costs, such as a laser box or a generator. “And it becomes a question of how many different things can you have on the shelf?”
Dr. McKinsey said his talk is aimed at helping practitioners decide which lesions to treat, and with which device – with close attention to the morphological characteristics of lesions. “It’s almost like an algorithm,” he said.
Increasingly, he noted, lower-limb atherectomy is being approached with more than one technique. There is a strong practice trend toward combining atherectomy with drug-coated balloon therapy, he said. “I think the idea of leaving nothing behind [in the vessel] before you do a drug-coated balloon has gotten much more support. People are coming in now and saying they want to prepare the artery by debulking it, then come back and do DCB.” A new rotational laser device, he said, has particular promise in combination with DCBs.
But combining approaches means cost increases at a time when “reimbursement is going down and the product costs and associated expenses are going up,” he said.
Also on the horizon is another, potentially game-changing technology: bioabsorbable stents. While these fall outside the scope of his talk, Dr. McKinsey said he’s assisted a number of lower-limb procedures in Europe this year using them. The technology is especially promising for “more complicated, more calcified lesions,” he said. “In Europe it is being used fairly extensively,” he noted, and the technology is likely to come online in the United States within a year or so.
As with the combined approaches, the introduction of drug-eluting bioabsorbable stents into the treatment of lower-limb lesions is also likely to incur high costs, Dr. McKinsey noted. What’s needed are longer-term studies that follow patients up to 5 years, to understand whether high upfront costs are offset by later benefits.
“What we have to look at is not necessarily the cost of doing a case, but the cost of treating that patient. We may have a greater upfront cost, but if the intervention has greater durability, and the patient doesn’t have a repeat procedure, then society and healthcare providers do better,” he said.
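A toy calculation makes that tradeoff explicit. All figures below are hypothetical, chosen only to show how a costlier but more durable intervention can come out ahead on the total cost of treating a patient.

```python
# Hypothetical cost-of-care comparison: expected per-patient cost = upfront cost plus the
# probability of a repeat procedure times its cost. All numbers are invented for illustration.
def expected_cost(upfront: float, reintervention_rate: float, reintervention_cost: float) -> float:
    return upfront + reintervention_rate * reintervention_cost

cheaper_less_durable = expected_cost(upfront=8_000, reintervention_rate=0.60,
                                     reintervention_cost=15_000)   # 17,000
costlier_more_durable = expected_cost(upfront=14_000, reintervention_rate=0.10,
                                      reintervention_cost=15_000)  # 15,500
print(cheaper_less_durable, costlier_more_durable)
```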
Early days of IVF marked by competition, innovation
In 1978, when England’s Louise Brown became the world’s first baby born through in vitro fertilization, physicians at academic centers all over the United States scrambled to figure out how they, too, could provide IVF to the thousands of infertile couples for whom nothing else had worked.
Interest in IVF was strong even before British physiologist Robert Edwards and gynecologist Patrick Steptoe announced their success. “We knew that IVF was being developed, that it had been accomplished in animals, and ultimately we knew it was going to succeed in humans,” said reproductive endocrinologist Zev Rosenwaks, MD, of the Weill Cornell Center for Reproductive Medicine in New York.
In the late 1970s, “we were able to help only about two-thirds of couples with infertility, either with tubal surgery, insemination – often with donor sperm – or ovulation induction. A full third could not be helped. We predicted that IVF would allow us to treat virtually everyone,” Dr. Rosenwaks said.
But even after the first IVF birth, information on the revolutionary procedure remained frustratingly scarce.
“Edwards and Steptoe would talk to nobody,” said Richard Marrs, MD, a reproductive endocrinologist and infertility specialist in Los Angeles.
And federal research support for “test-tube babies,” as IVF was known in the media then, was nil thanks to a ban on government-funded human embryo research that persists to this day.
The U.S. physicians who took part in the rush to achieve an IVF birth – most of them young fellows at the time – recall a period of improvisation, collaboration, shoestring budgets, and surprise findings.
“People who just started 10 or even 20 years ago don’t realize what it took for us to learn how to go about doing IVF,” said Dr. Rosenwaks, who in the first years of IVF worked closely with Dr. Howard Jones and Dr. Georgeanna Jones, the first team in the U.S. to announce an IVF baby.
Labs in closets
In the late 1970s, Dr. Marrs, then a fellow at the University of Southern California, was focused on surgical methods to treat infertility – and demand was sky-high. Intrauterine devices used in the 1970s left many women with severe scarring and inflammation of the fallopian tubes.
“I was very surgically oriented,” Dr. Marrs said. “I thought I could fix any disaster in the pelvis that was put in front of me, especially with microsurgery.”
After the news of IVF success in England, Dr. Marrs threw himself into a side project at a nearby cancer center, working on single-cell cultures. “I thought if I could grow tumor cells, I could one day grow embryos,” he said.
A year later, Dr. Marrs set up the first IVF lab at USC – in a storage closet. “I sterilized the place and that was our first IVF lab, literally a closet with an incubator and a microscope.” Its budget was accordingly thin, as the director at the time felt certain that IVF was a dead end. To fund his work, Dr. Marrs asked IVF candidate patients for research donations in lieu of payment.
But before Dr. Marrs attempted to perform his first IVF, two centers in Australia announced their own IVF babies. “I decided I really needed to go see someone who had had a baby,” he said. He used his vacation time to fly to Melbourne, shuttling between two competing clinics that were “four blocks apart and wouldn’t even talk to each other,” he recalled.
Over 6 weeks, “I learned how to stimulate, how to time ovulation. I watched the PhDs in the lab – how they handled the eggs and the sperm, what the conditions were, the incubator settings,” he said.
The first IVF babies in the United States were born only months apart: The first, in December 1981, was at the Jones Institute for Reproductive Medicine in Norfolk, Va., where Dr. Rosenwaks served as the first director.
The second was born at USC. After that, “we had 4,000 women on a waiting list, all under age 35,” Dr. Marrs said. The Jones Institute reportedly had 5,000.
As demand soared and more IVF babies arrived, the cloak of secrecy surrounding the procedure started to lift. British, Australian, and U.S. clinicians started getting together regularly. “We would pick a spot in the world, present our data: what we’d done, how many cycles, what we used for stimulation, when we took the eggs out,” Dr. Marrs said. “I don’t know how many hundreds of thousands of miles I flew in the first years of IVF, because it was the only way I could get information. We would literally stay up all night talking.”
Answering safety questions
Alan H. DeCherney, MD, currently an infertility researcher at the National Institutes of Health, started Yale University’s IVF program at around the same time Dr. Marrs and the Joneses were starting theirs. Yale already had a large infertility practice, and only academic centers had the laboratory resources and skilled staff needed to attempt IVF in those years.
In 1983, when Yale announced the birth of its first IVF baby – the fifth in the United States – Dr. DeCherney was starting to think about measuring outcomes, as there was concern over the potential for congenital anomalies related to IVF. “This was such a change in the way conception occurred, people were afraid that all kinds of crazy things would happen,” he said.
One concern was about ovarian stimulation with fertility drugs or gonadotropins. The earliest efforts – including by Dr. Steptoe and Dr. Edwards – used no drugs, instead trying to pinpoint the moment of natural egg release by measuring a woman’s hormone levels constantly, but these proved disappointing. Use of clomiphene citrate and human menopausal gonadotropin allowed for more control over timing, and for multiple mature eggs to be harvested at once.
But there were still many unanswered questions related to these agents’ safety and dosing, both for women and for babies.
When the NIH refused to fund a study of IVF outcomes, Dr. DeCherney and Dr. Marrs collaborated on a registry funded by a gonadotropin maker. “The drug company didn’t want to be associated with some terrible abnormal outcomes,” Dr. DeCherney recalled, though by then, “there were 10, maybe even 20 babies around the world, and they seemed to be fine,” he said.
The first registry results affirmed no changes in the rate of congenital abnormalities. (Larger, more recent studies have shown a small but significant elevation in birth defect risk associated with IVF.) A few years later, ovarian stimulation was adjusted to correspond with ovarian reserve, reducing the risk of ovarian hyperstimulation syndrome.
But even by the late 1980s, success rates for IVF per attempted cycle were still low overall, leading many critics, even within the profession, to accuse practitioners of misleading couples. Charles E. Miller, MD, an infertility specialist in Chicago, recalled an early investigation by a major newspaper “that looked at all the IVF clinics in Chicago and found the chances of having a baby was under 3%.”
It was true, Dr. Miller acknowledged – “the rates were dismal. But remember that IVF at the time was still considered a procedure of last resort.” Complex diagnostic testing to determine the cause of infertility, surgery, and fertility drugs all came first.
Some important innovations would soon change that and turn IVF into a mainstay of infertility treatment that could help women not only with damaged tubes but also with ovarian failure, low ovarian reserve, or dense pelvic adhesions. Even some types of male factor infertility would find an answer in IVF, by way of intracytoplasmic sperm injection.
Eggs without surgery
Laparoscopic egg retrieval was the norm in the first decade of IVF. “We went through the belly button, allowing us to directly visualize the ovary and see whether ovulation had already occurred or we had to retrieve it by introducing a needle into the follicle,” Dr. Rosenwaks recalled.
“Some of us were doing 6 or even 10 laparoscopies a day, and it was physically quite challenging,” he said. “There were no video screens in those days. You had to bend over the scope.” And it was worse still for patients, who had to endure multiple surgeries.
Though egg and embryo cryopreservation were already being worked on, it would be years before these techniques were optimized, giving women more chances from a single retrieval of oocytes.
Finding a less invasive means of retrieving eggs was crucial.
Maria Bustillo, MD, an infertility specialist in Miami, recalled being criticized by peers when she and her then-colleagues at the Genetics & IVF Institute in Fairfax, Va., began retrieving eggs via a needle placed in the vagina, using abdominal ultrasound as a guide.
While the technique was far less invasive than laparoscopy, “we were doing it semi-blindly, and were told it was dangerous,” Dr. Bustillo said.
But these freehand ultrasound retrievals paved the way for what would become a revolutionary advance – the vaginal ultrasound probe, which by the end of the 1980s made nonsurgical extraction of eggs the norm.
Dr. Marrs recalled receiving a prototype of a vaginal ultrasound probe, in the mid-1980s, and finding patients unwilling to use it, except one who relented only because she had an empty bladder. Abdominal ultrasonography required a full bladder to work.
“It was as though somebody had removed the cloud cover,” he said. “I couldn’t believe it. I could see everything: her ovaries, tiny follicles, the uterus.”
Later probes were fitted with a needle and aspirator to retrieve eggs. Multiple IVF cycles no longer meant multiple surgeries, and the less-invasive procedure helped in recruiting egg donors, allowing women with ovarian disease or low ovarian reserves, including older women, to receive IVF.
“It didn’t make sense for a volunteer to go through a surgery, especially back in the early ’80s when the results were not all that great,” Dr. Bustillo said.
Improving ‘home brews’
The culture media in which embryos were grown was another strong factor limiting the success rates of early IVF. James Toner, MD, PhD, an IVF specialist in Atlanta, called the early media “home brews.”
“Everyone made them themselves,” said Dr. Toner, who spent 15 years at the Jones Institute. “You had to do a hamster or mouse embryo test on every batch to make sure embryos would grow.” And often they did not.
Poor success rates resulted in the emergence of alternative procedures: GIFT (gamete intrafallopian transfer) and ZIFT (zygote intrafallopian transfer). Both aimed to get embryos back into the patient as soon as possible, with the thought that the natural environment offered a better chance for success.
But advances in culture media allowed more time for embryos to be observed. With longer development, “you could do a better job selecting the ones that had a chance, and de-selecting those with no chance,” Dr. Toner said.
This also meant fewer embryos could be transferred back into patients, lowering the likelihood of multiples. Ultimately, for young women, single-embryo transfer would become the norm. “The problem of multiple pregnancy that we used to have no longer exists for IVF,” Dr. Toner said.
Allowing embryos to reach the blastocyst stage – day 5 or 6 – opened other, previously unthinkable possibilities: placing embryos directly into the uterus, without surgery, and preimplantation genetic screening for abnormalities.
“As the cell number went up, the idea that you could do a genetic test with minimal impact on the embryo eventually became true,” Dr. Toner said.
A genetic revolution?
While many important IVF innovations were achieved in countries with staunch government support, one of the remarkable things about IVF’s evolution in the United States is that so many of them occurred here with virtually none.
By the mid-1990s, most of the early practitioners had moved from academic settings into private practice, though they continued to publish. “After a while it didn’t help to be in academics. It just sort of slowed you down. Because you weren’t going to get any [government] money anyway, you might as well be in a place that’s a little more nimble,” Dr. Toner said.
At the same time, he said, IVF remains a costly, usually unreimbursed procedure – limiting patients’ willingness to take part in randomized trials. “IVF research is built more on cohort studies.”
Most of the current research focus in IVF is on possibilities for genetic screening. Dr. Miller said that rapid DNA sequencing is allowing specialists to “look at more, pick up more abnormalities. That will continue to improve so that we will be able to see virtually everything.”
But he cautioned there is still much to be done in IVF apart from the genetics – he’s concerned, he said, that the field has moved too far from its surgical origins, and is working with the academic societies to encourage more surgical training.
“We don’t do the same work we did before on fallopian tubes, which is good,” Dr. Miller said, noting that there have been many advances, particularly minimally invasive surgeries in the uterus or ovaries, that have occurred parallel to IVF and can improve success rates. “I think we have a better understanding of what kind of patients require surgical treatments and what kind of surgeries can help enhance fertility, and also what not to do.”
Dr. Bustillo said that “cytogenetics is wonderful, but not everything. You have embryos that are genetically normal and still don’t implant. There’s a lot of work to be done on the interaction between the mother and the embryo.”
Dr. Marrs said that even safety questions related to stimulation have yet to be fully answered. “I’ve always been a big believer that lower is better, but we need to know whether stimulation creates genetic abnormalities and whether less stimulation produces fewer – and we need more data to prove it,” he said. Dr. Marrs is an investigator on a national randomized trial comparing outcomes from IVF with standard-dose and ultra-low dose stimulation.
Access, income, and age
The IVF pioneers agree broadly that access to IVF is nowhere near what it should be in the United States, where only 15 states mandate any insurance coverage for infertility.
“Our limited access to care is a crime,” Dr. Toner said. “People who, through no fault of their own, find themselves infertile are asked to write a check for $15,000 to get pregnant. That’s not fair.”
Dr. DeCherney called access “an ethical issue, because who gets IVF? People with higher incomes. And if IVF allows you to select better embryos – whatever that means – it gives that group another advantage.”
Dr. Toner warned that the push toward genetic testing of embryos, especially in the absence of known hereditary disease, could create new problems for the profession – not unlike in the early days of IVF, when the Jones Institute and other clinics were picketed over the specter of “test tube babies.”
“It’s one thing to say this embryo does not have the right number of chromosomes and couldn’t possibly be a child, so let’s not use it, but what about looking for traits? Sex selection? We have this privileged position in which the government does not really interfere in what we do, but to retain this status we need to stay within the bounds that our society accepts,” Dr. Toner said.
In recent years, IVF uptake has been high among women of advanced reproductive age, which poses its own set of challenges. Outcomes in older women using their own eggs become progressively poorer with age, though donor eggs drastically improve their chances, and egg freezing offers the possibility of preserving quality eggs for later pregnancies.
“We could make this situation better by promoting social freezing, doing more work for women early in their lives to get out their own eggs and store them,” Dr. Miller said. “But again, you still face the issue of access.”
Regardless of what technologies are available or become available in assisted reproduction, doctors and women alike need to be better educated on their options and chances early, with a clearer understanding of what happens as they age, Dr. Bustillo said.
“This is not to pressure them, but just so they understand that when they get to be 42 and are just thinking about reproducing, it’s not a major surprise when I tell them this could be a problem,” she said.
Throughout 2016, Ob.Gyn. News is celebrating its 50th anniversary with exclusive articles looking at the evolution of the specialty, including the history of contraception, changes in gynecologic surgery, and the transformation of the well-woman visit.