Maintenance taxane therapy of no benefit in advanced ovarian cancer
NATIONAL HARBOR, MD. – Using either a polymerized formulation of paclitaxel or paclitaxel alone as maintenance therapy conferred no survival benefit for women with advanced ovarian, fallopian tube, or peritoneal cancer.
In a phase III randomized trial, patients survived a median 54.8 months with surveillance alone, 51.3 months with maintenance paclitaxel, and 60.0 months with maintenance paclitaxel poliglumex; these differences were not statistically significant.
In a presentation at the annual meeting of the Society of Gynecologic Oncology, Larry Copeland, MD, a professor of gynecologic oncology at Ohio State University, Columbus, said that treatment with surgery and chemotherapy yields a clinical complete response in many patients with these cancers. However, he said, recurrent progressive disease is still very common; the rationale behind maintenance chemotherapy is that it may “reduce the risk of recurrence and extend survival.”
There had been promising earlier data supporting this approach from a previous phase III trial that compared 3 and 12 cycles of maintenance paclitaxel in patients who had achieved a clinical complete response, said Dr. Copeland. The results of a predefined interim analysis prompted the data monitoring committee to close that study (J Clin Oncol. 2003;21[13]:2460-5) because the 12-cycle arm had better progression-free survival, an advantage of about 7 months. However, the 12-cycle protocol did not confer a benefit in overall survival, the investigators later reported (Gynecol Oncol. 2009;114[2]:195-8).
The current phase III randomized trial enrolled women with stage III or IV ovarian, fallopian tube, or peritoneal cancer who had completed five to eight cycles of chemotherapy and achieved a clinical complete response. Any neuropathy could not exceed grade 1, and performance status scores could not exceed 2, Dr. Copeland said.
Patients were randomized 1:1:1 to a surveillance arm, to receive paclitaxel as a 3-hour infusion, or to receive paclitaxel poliglumex as a 10- to 20-minute infusion. Both study drugs were dosed at 35 mg/m2 every 28 days for 12 cycles.
The study ran from March 2005 to January 2014, enrolling 1,157 patients who were followed for a median of 71 months. Over 80% of patients in each study arm had ovarian cancer, and a similar proportion had stage III disease and serous histology. Over 90% of patients in each arm had tumors of grade 2 or higher.
The study had a superiority design, and patients were not to receive other cancer treatments until disease progression. Primary clinical endpoints included overall survival, quality of life as measured by the Ovarian Specific Questionnaire Quality of Life–Cancer, and patient-reported neurotoxicity as assessed with the Functional Assessment of Cancer Therapy/Gynecologic Oncology Group/Neurotoxicity questionnaire.
The third scheduled interim analysis, triggered when at least 200 deaths had occurred in the surveillance group, took place in May 2016 and examined the primary endpoint of overall survival. The final analysis had been scheduled for the point at which at least 301 deaths had occurred in the surveillance group. In the abstract accompanying the presentation, Dr. Copeland wrote that the data at this interim analysis indicated that “the relative death hazards passed the futility boundaries for both taxane regimens.”
“The log-rank statistic for each taxane regimen was below the interval specified in the study design, making it unlikely either of the taxane regimens has superior overall survival compared to surveillance,” said Dr. Copeland.
The hazard ratio for overall survival of the paclitaxel group compared to surveillance was 1.104, with a 97.5% confidence interval (CI) of 0.884-1.38. For the paclitaxel poliglumex group, the hazard ratio compared to surveillance alone was 0.979 (97.5% CI, 0.781-1.23).
Dr. Copeland and his colleagues also looked at progression-free survival, not a primary endpoint of the study. For the paclitaxel patients compared to surveillance, the HR for progression-free survival was 0.783 (95% CI, 0.666-0.921). For paclitaxel poliglumex, the HR was 0.847 (95% CI, 0.666-0.921).
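For readers less familiar with these statistics: a hazard ratio and its confidence interval of this kind are typically estimated with a Cox proportional hazards model comparing a treatment arm with the control arm. The sketch below shows the mechanics using the Python lifelines library on synthetic data; the arm coding, survival times, and event rates are invented purely for illustration and are not the trial's data.

```python
# Illustrative Cox proportional hazards sketch on synthetic data (not GOG trial data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 400
arm = rng.integers(0, 2, n)                            # 1 = maintenance taxane, 0 = surveillance (toy coding)
months = rng.exponential(scale=55 - 5 * arm, size=n)   # toy survival times in months
died = (rng.random(n) < 0.6).astype(int)               # toy event indicator (1 = death observed)

df = pd.DataFrame({"arm": arm, "months": months, "died": died})

# alpha=0.025 requests 97.5% confidence intervals, matching the width reported in the trial.
cph = CoxPHFitter(alpha=0.025)
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()  # exp(coef) for "arm" is the hazard ratio versus surveillance
```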
Not unexpectedly, more patients who received taxane treatment than those in the surveillance arm experienced adverse events, said Dr. Copeland. The most common adverse events were hypersensitivity or allergic reactions, fatigue, alopecia, nausea, constipation, and sensory neuropathies. Grade 2 alopecia was experienced by about a quarter of the paclitaxel poliglumex cohort, and by a little less than half of the paclitaxel cohort. Neurologic adverse events were very common, reported by three quarters of the paclitaxel poliglumex cohort, four in five of the paclitaxel group, and by a little over half of the surveillance cohort.
Overall quality of life scores did not differ significantly among the treatment arms.
In an exploratory analysis, Dr. Copeland and his colleagues found that patients with no residual disease (R0 patients) after initial cytoreductive surgery fared better, with a median survival of 70 months compared with 43.6 months for patients with gross residual disease. When R0 patients were broken out by treatment arm, however, overall survival did not differ significantly between those who received either taxane regimen and those under surveillance alone.
“Overall survival was not improved with taxane maintenance, though progression-free survival is slightly delayed,” Dr. Copeland concluded.
In discussion after the presentation, he said he was not sure whether further investigations of paclitaxel poliglumex would be pursued in the treatment of ovarian cancers.
Paclitaxel poliglumex (CT-2103; Opaxio) is paclitaxel conjugated to a polyglutamate polymer, a formulation that may enhance tumor penetration and retention, and that allows shorter infusion at a peripheral site. Previous work had shown that CT-2103’s structure enhanced pharmacokinetics and potentially decreased toxicity (Expert Opin Investig Drugs. 2004 Nov;13[11]:1501-8).
Dr. Copeland reported receiving consulting or honoraria fees from Clovis, Advaxis, Janssen, and Tesaro, and is a stockholder or shareholder in Merck, Eli Lilly, and Cardinal Health. The study was sponsored by the Cancer Therapy Evaluation Program of the National Cancer Institute and by Cell Therapeutics, which plans to market paclitaxel poliglumex.
[email protected]
On Twitter @karioakes
AT THE ANNUAL MEETING ON WOMEN’S CANCER
Key clinical point: Maintenance taxane therapy did not improve overall survival compared with surveillance in women with advanced ovarian, fallopian tube, or peritoneal cancer.
Major finding: There was no statistically significant overall survival benefit of maintenance taxane therapy for advanced ovarian, fallopian tube, or peritoneal cancer, compared with surveillance.
Data source: Phase III randomized trial of 1,157 patients with advanced ovarian, fallopian tube, or peritoneal cancer.
Disclosures: Dr. Copeland reported receiving consulting or honoraria fees from Clovis, Advaxis, Janssen, and Tesaro, and is a stockholder or shareholder in Merck, Eli Lilly, and Cardinal Health. The study was sponsored by the Cancer Therapy Evaluation Program of the National Cancer Institute and by Cell Therapeutics, which plans to market paclitaxel poliglumex.
Are new medications on horizon for patients with depression, inflammation?
SCOTTSDALE, ARIZ. – Inflammation is inextricably linked to depression in a subset of patients who differ from other depressed patients in their responses to certain interventions, according to Charles L. Raison, MD.
“The brains of people who are depressed and who have inflammation look very different from those of people who are depressed without inflammation,” Dr. Raison said in an interview at the annual meeting of the American College of Psychiatrists. “They have different connectivity patterns, different glutamatergic patterns, different signaling. It seems that inflammatory processes change the way different parts of the brain talk to each other and seem to do so in consistent ways.”
Dr. Raison, the Mary Sue and Mike Shannon Chair for Healthy Minds, Children & Families at the University of Wisconsin–Madison, told a plenary audience at the meeting: “We [psychiatrists] are so brain centric, it’s easy to forget how much the immune system drives us. It’s either like a second brain, or it is at least part of the brain.”
Over the years, Dr. Raison and his colleagues have observed how inflammation can interfere with mood, leading to depression in people who previously did not report or describe depressive symptoms.
In the early 2000s, Dr. Raison and others such as Andrew H. Miller, MD, a psychiatric oncologist, investigated the inflammatory response and levels of depression in people treated with interferon-alpha for hepatitis C infection (J Clin Psychiatry. 2005 Jan;66[1]:41-8). They found that more than half of people who had not reported or described depressive symptoms at baseline subsequently reported depressive symptoms. “In a nutshell, we found that interferon-alpha induces every single brain-body function associated with regular old major depression,” said Dr. Raison, also a professor of psychiatry at the university.
In another study, this one led by neuropsychosomatic specialist Dominique L. Musselman, MD, a similar cohort of hepatitis C patients assessed for baseline depression was randomly assigned to either placebo or paroxetine during the course of interferon-alpha treatment. Patients treated with paroxetine had a relative risk of 0.24 (95% confidence interval, 0.08-0.93) of developing depression, compared with the placebo group (N Engl J Med. 2001;344:961-6).
The real “breakthrough” in understanding the role of inflammation in depression, Dr. Raison said, came from studies that made the association between early-life adversity, depression, and inflammation. In one particular study, Dr. Raison and colleagues found that stress-induced spikes in interleukin-6 and NF-kappaB DNA-binding were greater in patients with higher baseline levels of depression and higher levels of early life stress (Am J Psychiatry. 2006 Sep;163[9]:1630-3).
Spikes in the inflammatory response independently correlated with depression severity but not with early life stress, which Dr. Raison said suggests that adversity likely can cause inflammation – and thus predisposes people to depression, and not necessarily vice versa.
“Something about early adversity in life programs the brain-body complex to run inflammatory systems hot, probably because it’s an effective way to be ready for [a stream of] unpredictable miseries,” Dr. Raison said during the session. “Chronic, elevated inflammation [early on] seems to predict increased depression later.”
Now that the link has been established between some depression and inflammation, the next step for science is to tease out who is most likely to respond to anti-inflammatory interventions for depression, Dr. Raison said.
“Something that is just starting to emerge is that maybe the relationship between inflammation and depression is not a straight line but a U-shaped curve, such that if you have too much inflammation, you’re in trouble, and if you have too little, you’re also in trouble,” he said in the interview, citing a study he and others conducted into blocking the inflammatory response. In that study, people with major depression who were otherwise medically healthy received either three infusions of the anti-inflammatory tumor necrosis factor–alpha antagonist infliximab (5 mg/kg), or of salt water. The investigators found that placebo worked just as well as infliximab. But patients with lower levels of inflammation at baseline had the greatest improvements in their Hamilton Rating Scale for Depression scores with placebo when compared with treatment (JAMA Psychiatry. 2013 Jan;70[1]:31-41).
Data are not yet conclusive, but Dr. Raison said the field soon could use biomarkers such as levels of C-reactive protein to determine whether patients will respond to anti-inflammatories such as omega-3 essential fatty acids. “Everyone in psychiatry is desperate to find clear, unambiguous answers. We’re right on the edge, but we’re not there yet.”
Until then, Dr. Raison cautioned against the “indiscriminate” use of anti-inflammatories, lest they exacerbate patients’ depressive symptoms. “For instance, omega-3 fatty acids might actually be counterproductive in a lot of depressed people,” he said. Still, he believes that “developing and studying anti-inflammatory strategies is probably going to lead to a novel way of treating depression in some people. What is beautiful is that if these studies continue, we might actually be able – for the first time – to target a subgroup of patients for a specific treatment.”
Dr. Raison is on the scientific advisory board of the Usona Institute, a nonprofit medical research firm.
[email protected]
On Twitter @whitneymcknight
EXPERT OPINION FROM THE AMERICAN COLLEGE OF PSYCHIATRISTS MEETING
Portfolio of physician-led measures nets better quality of care
ORLANDO – A multifaceted portfolio of physician-led measures with feedback and financial incentives can dramatically improve the quality of care provided at cancer centers, suggests the experience of Stanford (Calif.) Health Care.
Physician leaders of 13 disease-specific cancer care programs (CCPs) identified measures of care that were meaningful to their team and patients, spanning the spectrum from new diagnosis through end of life and survivorship care. Quality and analytics teams developed 16 corresponding metrics and performance reports used for feedback. Programs were also given a financial incentive to meet jointly set targets.
After a year, the CCPs had improved on 12 of the metrics and maintained high baseline levels of performance on the other 4 metrics, investigators reported at a symposium on quality care sponsored by the American Society of Clinical Oncology. For example, they got better at entering staging information in a dedicated field in the electronic health record (+50% absolute increase), recording hand and foot pain (+34%), performing hepatitis B testing before rituximab use (+17%), and referring patients with ovarian cancer for genetic counseling (+43%).
“The main drivers, I would argue, besides the Hawthorne effect, were a high level of physician engagement in the selection, management, and improvement of the metrics, and these metrics excited the care teams, which also provided some motivation,” said Ms. Porter, who presented the findings. “We provided real-time, high-quality feedback of performance. And last but probably not least was a financial incentive for the CCP as a team, not part of any individual compensation.”
The investigators plan to continue measuring the metrics, to expand them to other sites in their network, and to add new metrics that are common across the programs to minimize measurement burden, according to Ms. Porter. “We also plan to build cohorts for value-based care and unplanned care like ED visits and unplanned admissions. Finally, we want to keep momentum going and capitalize upon provider engagement in value measurement and improvement,” she said.
“Based on this work and prior abstracts, … there are many validated metrics to be used. So, to choose those metrics and to choose them through local leadership support, most importantly, engaging frontline staff and having their buy-in of the measures that you are collecting are important,” commented invited discussant Jessica A. Zerillo, MD, MPH, of the Beth Israel Deaconess Medical Center in Boston. “And this can include using incentives that drive such stakeholders, whether they be financial or simply pride with public reporting.”
Study details
“In the summer of 2015, we were starting to feel a lot of pressure to prepare for evolving reimbursement models,” Ms. Porter said, explaining the initiative’s genesis. “Mainly, how do we define our value, and how can we measure and improve on that value of the care we deliver? One answer, of course, is to measure and reduce unnecessary variation. And we knew, to be successful, we had to increase our physician engagement and leadership in the selection and improvement of our metrics.”
Physician leaders of the CCPs were asked to choose quality measures that met three criteria: they were meaningful and important to both the care team and patients, they had pertinent data elements already available in existing databases (to reduce documentation burden), and they were multidisciplinary in nature, reflecting the care provided by the whole program. The measures ultimately selected included a variety of those put forth by the American Society of Clinical Oncology’s Quality Oncology Practice Initiative and the American Society for Radiation Oncology. CCPs were offered a financial incentive, ranging from $75,000 to $125,000, for meeting targets; the amount was based on the number of providers and patient volume rather than on the impact of the improved metrics. “This was really meant for reinvestment back into their quality programs,” Ms. Porter said. “I would argue this was really a culture-building year for us, and we hope that next year there might be a little bit more tangible value with the metrics.”
The quality team gave CCPs monthly or quarterly performance reports with unblinded physician- and patient-level details that were ultimately disseminated to all the other CCPs. They also investigated any missing data for individual metrics.
Study results showed that half of the 16 measures the physician leaders chose pertained to the diagnosis and treatment planning phase of care, according to Ms. Porter. “It was important to many of our CCPs to ensure that specific testing was done, which would then, in turn, drive treatment planning decisions,” she commented.
At the end of the year, each metric was assessed among 13 to 2,406 patients. “All CCPs met their predetermined target and earned their financial incentive award for the year,” Ms. Porter reported.
Improvement was most marked, with a 50% absolute increase, for the metric of completing a staging module, which required conversion of staging information (historically embedded in progress notes) into a structured format in a dedicated field in the electronic health record within 45 days of a patient’s first cancer treatment. This practice enables ready identification of stage cohorts in which value of care can be assessed, she noted.
There were also sizable absolute increases in relevant CCPs in the proportion of blood and marrow transplant recipients referred to survivorship care by day 100 (+20%) and visiting that service by day 180 (+13%), recording of hand and foot pain (+34%) and radiation dermatitis (+21%), mismatch repair testing in patients with newly diagnosed colorectal cancer (+10%), referral of patients with newly diagnosed ovarian cancer for genetic counseling (+43%), cytogenetic testing in patients with newly diagnosed hematologic malignancies (+17%), hepatitis B testing before rituximab administration (+17%), and allowance of at least 2 nights for treatment plan physics–quality assurance before the start of a nonemergent radiation oncology treatment (+14%).
Meanwhile, there were decreases, considered favorable changes, in chemotherapy use in the last 2 weeks of life among neuro-oncology patients (–9%) and in patients’ receipt of more than 10 fractions of radiation therapy for palliation of bone metastases (–9%).
Finally, there was no change in several metrics of quality that were already at very high or low levels, as appropriate, at baseline: molecular testing in patients with newly diagnosed acute myeloid leukemia (stable at 95%), hospice enrollment at the time of death for neuro-oncology patients (stable at 100%), chemotherapy in the last 2 weeks of life for patients with sarcoma (stable at 0%), and epidermal growth factor receptor testing in patients with newly diagnosed lung adenocarcinoma (stable at 98%).
AT THE QUALITY CARE SYMPOSIUM
Key clinical point: Physician-led selection of quality metrics, reinforced with financial incentives and regular unblinded performance reports, improved multiple measures of cancer care quality within 1 year.
Major finding: Over a 1-year period, the center saw improvements in practices such as completion of staging modules (+50%), recording of hand and foot pain (+34%), hepatitis B testing before rituximab use (+17%), and referral of patients with ovarian cancer for genetic counseling (+43%).
Data source: An initiative targeting 16 quality metrics undertaken by 13 cancer care programs at Stanford Health Care.
Disclosures: Ms. Porter disclosed that she had no relevant conflicts of interest.
Machine learning melanoma
What if an app could diagnose melanoma from a photo? That was my idea. In December 2009, Google introduced Google Goggles, an application that recognized images. At the time, I thought, “Wouldn’t it be neat if we could use this with telederm?” I even pitched it to a friend at the search giant. “Great idea!” he wrote back, placating me. For those uninitiated in innovation, “Great idea!” is a euphemism for “Yeah, we thought of that.”
Yes, it isn’t only mine; no doubt, many of you had this same idea: Let’s use amazing image interpretation capabilities from companies like Google or Apple to help us make diagnoses. Sounds simple. It isn’t. This is why most melanoma-finding apps are for entertainment purposes only – they don’t work.
So can melanoma be diagnosed from an app? A Stanford University team believes so. They trained a machine learning system to make dermatologic diagnoses from photos of skin lesions. To overcome previous barriers, they used open-source software from Google and awesome processors. For a start, they pretrained the program on over 1.28 million images. Then they fed it 128,450 images of known diagnoses.
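For the curious, that two-step recipe – pretrain on a huge general image set, then fine-tune on labeled lesion photos – is standard transfer learning. Here is a minimal Python sketch of the idea; the model choice, the three lesion classes, and the folder path are illustrative assumptions, not the Stanford team’s actual pipeline.

```python
# Minimal transfer-learning sketch (illustrative only).
# Assumes lesion photos arranged in class subfolders under "lesion_photos/".
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Start from a network pretrained on a large general-purpose image corpus,
# analogous to the million-plus-image pretraining step described above.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Swap the final layer so it predicts lesion classes instead of generic objects.
num_classes = 3  # hypothetical: benign, keratinocyte carcinoma, melanoma
model.fc = nn.Linear(model.fc.in_features, num_classes)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("lesion_photos/", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # fine-tune on the images of known diagnoses
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```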
Then, just as when Google’s AlphaGo algorithm challenged Lee Sedol, the world Go champion, the Stanford research team challenged 21 dermatologists. They had to choose if they would biopsy/treat or reassure patients based on photos of benign lesions, keratinocyte carcinomas, clinical melanomas, and dermoscopic melanomas. Guess who won?
In a stunning victory (or defeat, if you’re rooting for our team), the trained algorithm matched or outperformed all the dermatologists when scored on sensitivity-specificity curves. While we dermatologists, of course, use more than just a photo to diagnose skin cancer, many around the globe don’t have access to us. Based on these findings, they might need access only to a smartphone to get potentially life-saving advice.
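A note on the scoring: a sensitivity-specificity (ROC) curve traces the algorithm’s trade-off across every possible decision threshold, while each dermatologist’s single biopsy-or-reassure judgment lands as one point on that plot; the algorithm “wins” when its curve sits above the clinicians’ points. A toy sketch with invented labels and scores:

```python
# Toy illustration of a sensitivity-specificity comparison; the data are invented.
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # 1 = melanoma (hypothetical)
y_score = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]  # model's predicted probability

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("Sensitivity at each threshold:", tpr)
print("Specificity at each threshold:", 1 - fpr)
print("Area under the curve:", roc_auc_score(y_true, y_score))
# A clinician's biopsy/reassure decision is a single (sensitivity, specificity)
# point; plotting it against the curve shows who "wins" at that operating point.
```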
But, what does this mean? Will we someday be outsourced to AI? Will a future POTUS promise to “bring back the doctor industry?” Not if we adapt. The future is bright – if we learn to apply machine learning in ways that can have an impact. (Brain + Computer > Brain.) Consider the following: An optimized ophthalmologist who reads retinal scans prediagnosed by a computer. A teledermatologist who uses AI to perform perfectly in diagnosing melanoma.
Patients have always wanted high quality and high touch care. In the history of medicine, we’ve never been better at both than we are today. Until tomorrow, when we’ll be better still.
Jeff Benabio, MD, MBA, is director of Healthcare Transformation and chief of dermatology at Kaiser Permanente San Diego. Dr. Benabio is @Dermdoc on Twitter. Write to him at [email protected]. He has no disclosures related to this column.
Auto-HCT patients run high risks for myeloid neoplasms
ORLANDO – For post–autologous hematopoietic cell transplant (auto-HCT) patients, the 10-year risk of developing a myeloid neoplasm was as high as 6%, based on a recent review of two large cancer databases.
Older age at transplant, receiving total body irradiation, and receiving multiple lines of chemotherapy before transplant all upped the risk of later cancers, according to a study presented by Shahrukh Hashmi, MD, and his collaborators at the combined annual meetings of the Center for International Blood & Marrow Transplant Research (CIBMTR) and the American Society for Blood and Marrow Transplantation.
“The guidelines for autologous stem cell transplantation for surveillance for AML [acute myeloid leukemia] and MDS [myelodysplastic syndrome] need to be clearly formulated. We are doing 30,000 autologous transplants a year globally and these patients are at risk for the most feared cancer, which is leukemia and MDS, for which outcomes are very poor,” said Dr. Hashmi of the Mayo Clinic in Rochester, Minn.
The researchers examined data from auto-HCT patients with diagnoses of non-Hodgkin lymphoma (NHL), Hodgkin lymphoma, and multiple myeloma to determine the relative risks of developing AML and MDS. The study also explored which patient characteristics and aspects of the conditioning regimen might affect risk for later myeloid neoplasms.
In the dataset of 9,108 patients that Dr. Hashmi and his colleagues obtained from CIBMTR, 3,540 patients had NHL.
“As age progresses, the risk of acquiring myeloid neoplasms increases significantly,” he said, noting that the relative risk (RR) rose to 4.52 for patients aged 55 years and older at the time of transplant (95% confidence interval [CI], 2.63-7.77; P less than .0001).
Patients with NHL who received more than two lines of chemotherapy had approximately double the rate of myeloid cancers (RR, 1.93; 95% CI, 1.34-2.78; P = .0004).
The type of conditioning regimen made a difference for NHL patients as well. With total-body irradiation set as the reference at RR = 1, carmustine-etoposide-cytarabine-melphalan (BEAM) or similar therapies were relatively protective, with an RR of 0.59 (95% CI, 0.40-0.87; P = .0083). Also protective were cyclophosphamide-carmustine-etoposide (CBV) and similar therapies (RR, 0.57; 95% CI, 0.33-0.99; P = .0463).
Age at transplant was a factor among the 4,653 patients with multiple myeloma, with an RR of 2.47 for those transplanted at age 55 years or older (95% CI, 1.55-3.93; P = .0001). Multiple lines of chemotherapy also increased risk, with patients who received more than two lines having an RR of 1.77 for myeloid neoplasms (95% CI, 0.04-2.06; P = .0302). Women had less than half the risk of myeloid neoplasms compared with men (RR, 0.44; 95% CI, 0.28-0.69; P = .0003).
Among the 915 study patients with Hodgkin lymphoma, patients aged 45 years and older at the time of transplant carried an RR of 5.59 for new myeloid neoplasms (95% CI, 2.98-11.70; P less than .0001).
Total-body irradiation was received by 14% of patients with non-Hodgkin lymphoma and by 5% of patients with multiple myeloma and Hodgkin lymphoma. Total-body irradiation was associated with a fourfold increase in neoplasm risk (RR, 4.02; 95% CI, 1.40-11.55; P = .0096).
Dr. Hashmi and his colleagues then examined the incidence rates for myelodysplastic syndrome and acute myelogenous leukemia in the Surveillance, Epidemiology, and End Results (SEER) database, finding that, even at baseline, the rates of myeloid neoplasms were higher for patients with NHL, Hodgkin lymphoma, or MM than for the general population of cancer survivors. “Post NHL, Hodgkin lymphoma, and myeloma, the risks are significantly higher to begin with. … We saw a high risk of AML and MDS compared to the SEER controls – risks as high as 100 times greater for auto-transplant patients,” said Dr. Hashmi. “A risk of one hundred times more for MDS was astounding, surprising, unexpected,” he said. The risk of AML, he said, was elevated about 10-50 times in the CIBMTR data.
The cumulative incidence of MDS or AML for NHL was 6% at 10 years post transplant, 4% for Hodgkin lymphoma, and 3% for multiple myeloma.
A limitation of the study, said Dr. Hashmi, was that the investigators did not assess for post-transplant maintenance chemotherapy.
“We have to prospectively assess our transplant patients in a fashion to detect changes early. Or maybe they were present at the time of transplant and we never did sophisticated methods [like] next-generation sequencing” to detect them, he said.
Dr. Hashmi reported no conflicts of interest.
[email protected]
On Twitter @karioakes
AT THE BMT TANDEM MEETINGS
Key clinical point: Patients who undergo autologous hematopoietic cell transplant carry a substantially elevated long-term risk of MDS and AML, particularly with older age at transplant, total-body irradiation, or multiple prior lines of chemotherapy.
Major finding: The 10-year cumulative risk of myeloid neoplasms for auto-HCT patients with Hodgkin or non-Hodgkin lymphoma or multiple myeloma was as high as 6%.
Data source: Review of 9,108 patients from an international transplant database.
Disclosures: Dr. Hashmi reported no conflicts of interest.
Local Data on Cancer Mortality Reveal Valuable ‘Patterns’ in Changes
Cancer death rates in the U.S. declined by 20% between 1980 and 2014, but not everywhere: In 160 counties, mortality rose substantially during the same time, according to University of Washington researchers. And those weren’t the only striking variations they found.
The researchers analyzed data on deaths from 29 cancer types. Deaths dropped from about 240 per 100,000 people in 1980 to 192 per 100,000 in 2014. But the researchers say they found “stark” disparities. In 2014, the county with the highest overall cancer mortality had about 7 times as many cancer deaths per 100,000 residents as the county with the lowest overall cancer mortality. For many cancers there were distinct clusters of counties in different regions with especially high mortality, such as in Kentucky, West Virginia, and Alabama.
Related: Major Cancer Death Rates Are Down
The pattern of changes across counties also varied tremendously by cancer type, the researchers say. For breast, cervical, prostate, testicular, and several other cancers, for instance, mortality rates declined in nearly all counties, whereas mortality from liver cancer and mesothelioma increased in nearly all counties.
Previous reports on geographic differences in cancer mortality have focused on variation by state, the researchers say. But the local patterns they found would have been masked by a national or state number. Their innovative approach to aggregating and analyzing the data at the county level has value, they note, because “public health programs and policies are mainly designed and implemented at the local level.”
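As a rough illustration of what county-level aggregation involves, the sketch below computes crude death rates per 100,000 residents from invented counts; the published analysis used far more sophisticated small-area statistical models to stabilize estimates in sparsely populated counties.

```python
# Hypothetical county-level aggregation sketch; all numbers are invented.
import pandas as pd

deaths = pd.DataFrame({
    "county": ["A", "A", "B", "C"],   # e.g., records from two cancer types in county A
    "cancer_deaths": [120, 95, 14, 890],
})
population = pd.DataFrame({
    "county": ["A", "B", "C"],
    "population": [85_000, 9_500, 410_000],
})

# Sum deaths within each county, join to population, and compute a crude rate.
by_county = (deaths.groupby("county", as_index=False)["cancer_deaths"].sum()
                   .merge(population, on="county"))
by_county["deaths_per_100k"] = by_county["cancer_deaths"] / by_county["population"] * 100_000
print(by_county.sort_values("deaths_per_100k", ascending=False))
```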
The policy response from the public health and medical care communities, the researchers add, depends on “parsing these trends into component factors”: trends driven by known risk factors, unexplained trends in incidence, cancers for which screening and early detection can make a major difference, and cancers for which high-quality treatment can make a major difference. Local information, the researchers point out, can be useful for health care practitioners to understand community needs for care and aid in identifying “cancer hot spots” that need more investigation.
In an article for the National Cancer Institute’s newsletter, Eric Durbin, DPh, director of cancer informatics for the Kentucky Cancer Registry at the University of Kentucky Markey Cancer Center, cautioned against basing too many assumptions on local data, especially in rural, sparsely populated areas where small number changes can translate into giant percentages. “We really have no other way to guide cancer prevention and control activities other than using [that] data. Otherwise, you’re just throwing money or resources at a problem without any way to measure the impact,” added Durbin.
Sources:
- National Cancer Institute. U.S. cancer mortality rates falling, but some regions left behind, study finds. https://www.cancer.gov/news-events/cancer-currents-blog/2017/cancer-death-disparities. Published February 21, 2017. Accessed March 15, 2017.
- Mokdad AH, Dwyer-Lindgren L, Fitzmaurice C, et al. JAMA. 2017;317(4):388-406. doi: 10.1001/jama.2016.20324.
Computerized systems reduce risk of VTE, analysis suggests
The use of computerized clinical decision support systems can reduce the risk of venous thromboembolism (VTE) among surgical patients, according to new research.
Results of a review and meta-analysis showed that use of these computerized systems was associated with a significant increase in the proportion of surgical patients with adequate VTE prophylaxis and a significant decrease in the patients’ risk of developing VTE.
Zachary M. Borab, of the New York University School of Medicine in New York, New York, and his colleagues reported these findings in JAMA Surgery.
A computerized clinical decision support system is rule- or algorithm-based software that can be integrated into an electronic health record and uses data to present evidence-based knowledge at the individual patient level.
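As a purely hypothetical illustration of the kind of logic such a system encodes, the sketch below scores a few risk factors and returns a prophylaxis suggestion; the thresholds and recommendations are invented for this example and are not drawn from the studies in the meta-analysis or from any validated risk model.

```python
# Hypothetical rule-based decision support sketch; not a validated risk model.
def recommend_vte_prophylaxis(age, major_surgery, active_cancer, on_anticoagulant):
    """Return a simple VTE prophylaxis suggestion for a surgical patient."""
    if on_anticoagulant:
        return "Already anticoagulated - review dosing rather than add prophylaxis"

    risk_points = 0
    if age >= 60:
        risk_points += 2
    if major_surgery:
        risk_points += 2
    if active_cancer:
        risk_points += 2

    if risk_points >= 4:
        return "High risk: suggest pharmacologic plus mechanical prophylaxis"
    if risk_points >= 2:
        return "Moderate risk: suggest pharmacologic prophylaxis"
    return "Low risk: suggest early ambulation"

print(recommend_vte_prophylaxis(age=67, major_surgery=True,
                                active_cancer=False, on_anticoagulant=False))
```

In a real system, rules of this kind would run against structured fields in the electronic health record and surface the suggestion to the clinician at the point of ordering.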
Borab and his colleagues conducted a review and meta-analysis to assess the effect of such systems on increasing adherence to VTE prophylaxis guidelines and decreasing post-operative VTEs, when compared with routine care.
The researchers combed through several databases looking for studies of surgical patients in which investigators compared routine care to computerized clinical decision support systems with VTE risk stratification and assistance in ordering VTE prophylaxis.
The team found 11 studies that were eligible for meta-analysis—9 prospective and 2 retrospective trials. The trials included a total of 156,366 patients—104,241 in the computerized clinical decision support systems group and 52,125 in the control group.
Analysis of these data revealed that using the computerized systems was associated with a significant increase in the rate of appropriate ordering of VTE prophylaxis. The odds ratio was 2.35 (95% CI, 1.78-3.10; P<0.001).
Use of the computerized systems was also associated with a significant decrease in the risk of VTE. The risk ratio was 0.78 (95% CI, 0.72-0.85; P<0.001).
Based on these results, Borab and his colleagues concluded that computerized clinical decision support systems should be used to help clinicians assess the risk of VTE and provide the appropriate prophylaxis in surgical patients.
Team develops paper-based test for blood typing
Researchers say they have created a paper-based assay that provides “rapid and reliable” blood typing.
The team used this test to analyze 3550 blood samples and observed a more than 99.9% accuracy rate.
The test was able to classify samples into the common ABO and Rh blood groups in less than 30 seconds.
With slightly more time (but still in less than 2 minutes), the assay was able to identify multiple rare blood types.
Hong Zhang, of Southwest Hospital, Third Military Medical University in Chongqing, China, and colleagues described this test in Science Translational Medicine.
To create the test, the researchers took advantage of chemical reactions between blood serum proteins and the dye bromocresol green.
The team applied a small sample of whole blood onto a test strip containing antibodies that recognized different blood group antigens.
The results appeared as visual color changes—teal if a blood group antigen was present in a sample and brown if not.
The researchers also incorporated a separation membrane to isolate plasma from whole blood, which allowed them to simultaneously identify specific blood cell antigens and detect antibodies in plasma based on how the blood cells clumped together (also known as forward and reverse typing), without a centrifuge.
The team said the rapid turnaround time of this test could be ideal for resource-limited situations, such as war zones, remote areas, and during emergencies.
Death risks associated with long-term DAPT
A new analysis suggests that patients who receive dual antiplatelet therapy (DAPT) for at least 1 year after coronary stenting are more likely to experience ischemic events than bleeding events, but both types of events are associated with a high risk of death.
Researchers performed a secondary analysis of data from the DAPT study and found that 4% of patients had ischemic events and 2% had bleeding events between 12 and 33 months after stenting.
Both types of events incurred a serious mortality risk—an 18-fold increase after any bleeding event and a 13-fold increase after any ischemic event.
These findings were published in JAMA Cardiology.
“We know from previous trials that continuing dual antiplatelet therapy longer than 12 months after coronary stenting is associated with both decreased ischemia and increased bleeding risk, so these findings reinforce the need to identify individuals who are likely to experience more benefit than harm from continued dual antiplatelet therapy,” said study author Eric Secemsky, MD, of Massachusetts General Hospital in Boston.
For this study, Dr Secemsky and his colleagues analyzed data collected in the DAPT trial, which was designed to determine the benefits and risks of continuing DAPT for more than a year.
The trial enrolled 25,682 patients who were set to receive a drug-eluting or bare-metal stent. After stent placement, they received DAPT—aspirin plus thienopyridine (clopidogrel or prasugrel)—for at least 12 months.
After 12 months of therapy, patients who were treatment-compliant and event-free (no myocardial infarction, stroke, or moderate or severe bleeding) were randomized to continued DAPT or aspirin alone for an additional 18 months. At month 30, patients discontinued randomized treatment but remained on aspirin and were followed for 3 months.
For the present secondary analysis, Dr Secemsky and his colleagues examined data from all 11,648 randomized patients.
Ischemic events
During the study period, 478 patients (4.1%) had 502 ischemic events, including 306 myocardial infarctions, 113 cases of stent thrombosis, and 83 ischemic strokes.
The death rate among patients with ischemic events was 10.9% (n=52), and 78.8% of these deaths (n=41) were attributable to cardiovascular causes. The death rate was 0.7% among patients without a cardiovascular event (82/11,082, P<0.001).
The cumulative incidence of death after ischemic events was 0.5% (0.3% with myocardial infarction, 0.1% with stent thrombosis, and 0.1% with ischemic stroke) among the more than 11,600 randomized patients.
The unadjusted annualized mortality rate after an ischemic event was 27.2 per 100 person-years.
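For readers unfamiliar with the unit, a rate per 100 person-years divides the number of events by the total follow-up time contributed by the group at risk. A worked sketch, with the amount of follow-up time invented solely to reproduce the reported figure:

```python
# Illustrative arithmetic only; the person-time below is invented, not the study's.
deaths_after_ischemic_event = 52      # deaths among patients with ischemic events
follow_up_person_years = 191.0        # hypothetical total follow-up after those events

rate_per_100_person_years = deaths_after_ischemic_event / follow_up_person_years * 100
print(round(rate_per_100_person_years, 1))  # ~27.2 per 100 person-years
```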
When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, having an ischemic event was associated with a 12.6-fold increased risk of death (hazard ratio=14.6 for stent thrombosis, 13.1 for ischemic stroke, and 9.1 for myocardial infarction).
Deaths after ischemic stroke or stent thrombosis usually occurred soon after the event, but the increased risk of death from a myocardial infarction persisted throughout the study period.
Bleeding events
A total of 232 patients (2.0%) had 235 bleeding events—155 moderate and 80 severe bleeds.
The death rate among patients with bleeding events was 17.7% (n=41), compared to 1.6% among patients without a bleed (181/11,416, P<0.001). However, more than half of the deaths occurring after a bleeding event were attributable to cardiovascular causes (53.7%, n=22).
The cumulative incidence of death after a bleeding event was 0.3% (0.1% with moderate and 0.2% with severe bleeding) in the randomized study population.
The unadjusted annualized mortality rate after a bleeding event was 21.5 per 100 person-years.
When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, a bleeding event was associated with an 18.1-fold increased risk of death (hazard ratio=36.3 for a severe bleed and 8.0 for a moderate bleed).
Deaths following bleeding events primarily occurred within 30 days of the event.
“Since our analysis found that the development of both ischemic and bleeding events portend a particularly poor overall prognosis, we conclude that we must be thoughtful when prescribing any treatment, such as dual antiplatelet therapy, that may include bleeding risk,” Dr Secemsky said.
“In order to understand the implications of therapies that have potentially conflicting effects—such as decreasing ischemic risk while increasing bleeding risk—we must understand the prognostic factors related to these events. Our efforts now need to be focused on individualizing treatment and identifying those who are at the greatest risk of developing recurrent ischemia and at the lowest risk of developing a bleed.”
In a previous study, Dr Secemsky and his colleagues developed a risk score using DAPT data that can help determine whether or not DAPT should continue past the 1-year mark.
The tool has recently been included in American College of Cardiology (ACC)/American Heart Association guidelines on the duration of DAPT and is available on the ACC website.
A new analysis suggests that patients who receive dual antiplatelet therapy (DAPT) for at least 1 year after coronary stenting are more likely to experience ischemic events than bleeding events, but both types of events are associated with a high risk of death.
Researchers performed a secondary analysis of data from the DAPT study and found that 4% of patients had ischemic events and 2% had bleeding events between 12 and 33 months after stenting.
Both types of events incurred a serious mortality risk—an 18-fold increase after any bleeding event and a 13-fold increase after any ischemic event.
These findings were published in JAMA Cardiology.
“We know from previous trials that continuing dual antiplatelet therapy longer than 12 months after coronary stenting is associated with both decreased ischemia and increased bleeding risk, so these findings reinforce the need to identify individuals who are likely to experience more benefit than harm from continued dual antiplatelet therapy,” said study author Eric Secemsky, MD, of Massachusetts General Hospital in Boston.
For this study, Dr Secemsky and his colleagues analyzed data collected in the DAPT trial, which was designed to determine the benefits and risks of continuing DAPT for more than a year.
The trial enrolled 25,682 patients who were set to receive a drug-eluting or bare-metal stent. After stent placement, they received DAPT—aspirin plus thienopyridine (clopidogrel or prasugrel)—for at least 12 months.
After 12 months of therapy, patients who were treatment-compliant and event-free (no myocardial infarction, stroke, or moderate or severe bleeding) were randomized to continued DAPT or aspirin alone for an additional 18 months. At month 30, patients discontinued randomized treatment but remained on aspirin and were followed for 3 months.
For the present secondary analysis, Dr Secemsky and his colleagues examined data from all 11,648 randomized patients.
Ischemic events
During the study period, 478 patients (4.1%) had 502 ischemic events, including 306 myocardial infarctions, 113 cases of stent thrombosis, and 83 ischemic strokes.
The death rate among patients with ischemic events was 10.9% (n=52), and 78.8% of these deaths (n=41) were attributable to cardiovascular causes. The death rate was 0.7% among patients without a cardiovascular event (82/11,082, P<0.001).
The cumulative incidence of death after ischemic events was 0.5% (0.3% with myocardial infarction, 0.1% with stent thrombosis, and 0.1% with ischemic stroke) among the more than 11,600 randomized patients.
The unadjusted annualized mortality rate after an ischemic event was 27.2 per 100 person-years.
When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, having an ischemic event was associated with a 12.6-fold increased risk of death (hazard ratio=14.6 for stent thrombosis, 13.1 for ischemic stroke, and 9.1 for myocardial infarction).
Deaths after ischemic stroke or stent thrombosis usually occurred soon after the event, but the increased risk of death from a myocardial infarction persisted throughout the study period.
Bleeding events
A total of 232 patients (2.0%) had 235 bleeding events—155 moderate and 80 severe bleeds.
The death rate among patients with bleeding events was 17.7% (n=41), compared to 1.6% among patients without a bleed (181/11,416, P<0.001). However, more than half of the deaths occurring after a bleeding event were attributable to cardiovascular causes (53.7%, n=22).
The cumulative incidence of death after a bleeding event was 0.3% (0.1% with moderate and 0.2% with severe bleeding) in the randomized study population.
The unadjusted annualized mortality rate after a bleeding event was 21.5 per 100 person-years.
When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, a bleeding event was associated with an 18.1-fold increased risk of death (hazard ratio=36.3 for a severe bleed and 8.0 for a moderate bleed).
Deaths following bleeding events primarily occurred within 30 days of the event.
“Since our analysis found that the development of both ischemic and bleeding events portend a particularly poor overall prognosis, we conclude that we must be thoughtful when prescribing any treatment, such as dual antiplatelet therapy, that may include bleeding risk,” Dr Secemsky said.
“In order to understand the implications of therapies that have potentially conflicting effects—such as decreasing ischemic risk while increasing bleeding risk—we must understand the prognostic factors related to these events. Our efforts now need to be focused on individualizing treatment and identifying those who are at the greatest risk of developing recurrent ischemia and at the lowest risk of developing a bleed.”
In a previous study, Dr Secemsky and his colleagues developed a risk score using DAPT data that can help determine whether or not DAPT should continue past the 1-year mark.
The tool has recently been included in American College of Cardiology(ACC)/American Heart Association guidelines on the duration of DAPT and is available on the ACC website.
A new analysis suggests that patients who receive dual antiplatelet therapy (DAPT) for at least 1 year after coronary stenting are more likely to experience ischemic events than bleeding events, but both types of events are associated with a high risk of death.
Researchers performed a secondary analysis of data from the DAPT study and found that 4% of patients had ischemic events and 2% had bleeding events between 12 and 33 months after stenting.
Both types of events incurred a serious mortality risk—an 18-fold increase after any bleeding event and a 13-fold increase after any ischemic event.
These findings were published in JAMA Cardiology.
“We know from previous trials that continuing dual antiplatelet therapy longer than 12 months after coronary stenting is associated with both decreased ischemia and increased bleeding risk, so these findings reinforce the need to identify individuals who are likely to experience more benefit than harm from continued dual antiplatelet therapy,” said study author Eric Secemsky, MD, of Massachusetts General Hospital in Boston.
For this study, Dr Secemsky and his colleagues analyzed data collected in the DAPT trial, which was designed to determine the benefits and risks of continuing DAPT for more than a year.
The trial enrolled 25,682 patients who were set to receive a drug-eluting or bare-metal stent. After stent placement, they received DAPT—aspirin plus thienopyridine (clopidogrel or prasugrel)—for at least 12 months.
After 12 months of therapy, patients who were treatment-compliant and event-free (no myocardial infarction, stroke, or moderate or severe bleeding) were randomized to continued DAPT or aspirin alone for an additional 18 months. At month 30, patients discontinued randomized treatment but remained on aspirin and were followed for 3 months.
For the present secondary analysis, Dr Secemsky and his colleagues examined data from all 11,648 randomized patients.
Ischemic events
During the study period, 478 patients (4.1%) had 502 ischemic events, including 306 myocardial infarctions, 113 cases of stent thrombosis, and 83 ischemic strokes.
The death rate among patients with ischemic events was 10.9% (n=52), and 78.8% of these deaths (n=41) were attributable to cardiovascular causes. The death rate was 0.7% among patients without a cardiovascular event (82/11,082, P<0.001).
The cumulative incidence of death after ischemic events was 0.5% (0.3% with myocardial infarction, 0.1% with stent thrombosis, and 0.1% with ischemic stroke) among the more than 11,600 randomized patients.
The unadjusted annualized mortality rate after an ischemic event was 27.2 per 100 person-years.
When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, having an ischemic event was associated with a 12.6-fold increased risk of death (hazard ratio=14.6 for stent thrombosis, 13.1 for ischemic stroke, and 9.1 for myocardial infarction).
Deaths after ischemic stroke or stent thrombosis usually occurred soon after the event, but the increased risk of death from a myocardial infarction persisted throughout the study period.
Bleeding events
A total of 232 patients (2.0%) had 235 bleeding events—155 moderate and 80 severe bleeds.
The death rate among patients with bleeding events was 17.7% (n=41), compared to 1.6% among patients without a bleed (181/11,416, P<0.001). However, more than half of the deaths occurring after a bleeding event were attributable to cardiovascular causes (53.7%, n=22).
The cumulative incidence of death after a bleeding event was 0.3% (0.1% with moderate and 0.2% with severe bleeding) in the randomized study population.
The unadjusted annualized mortality rate after a bleeding event was 21.5 per 100 person-years.
When the researchers controlled for demographic characteristics, comorbid conditions, and procedural factors, a bleeding event was associated with an 18.1-fold increased risk of death (hazard ratio=36.3 for a severe bleed and 8.0 for a moderate bleed).
Deaths following bleeding events primarily occurred within 30 days of the event.
“Since our analysis found that the development of both ischemic and bleeding events portend a particularly poor overall prognosis, we conclude that we must be thoughtful when prescribing any treatment, such as dual antiplatelet therapy, that may include bleeding risk,” Dr Secemsky said.
“In order to understand the implications of therapies that have potentially conflicting effects—such as decreasing ischemic risk while increasing bleeding risk—we must understand the prognostic factors related to these events. Our efforts now need to be focused on individualizing treatment and identifying those who are at the greatest risk of developing recurrent ischemia and at the lowest risk of developing a bleed.”
In a previous study, Dr Secemsky and his colleagues developed a risk score, derived from DAPT trial data, that can help determine whether DAPT should be continued beyond the 1-year mark.
The tool has recently been incorporated into American College of Cardiology (ACC)/American Heart Association guidelines on the duration of DAPT and is available on the ACC website.
Rash in both axillae
The family physician (FP) suspected that the patient had contact dermatitis caused by his deodorant. On further questioning, the patient said he had changed deodorants about one month before the rash started. The FP explained that an ingredient in the new deodorant was likely causing the allergic reaction.
The FP prescribed 0.1% triamcinolone cream to be applied twice daily. He suggested that the patient either go back to his original deodorant or check the ingredient list on the new one and choose a product that does not contain the same ingredients.
At a follow-up visit one month later, the patient's skin had cleared and he was very happy with the results. He said he’d gone back to using his original deodorant, which didn’t have the same ingredients as the new one.
This is a typical case of contact dermatitis in which the history and physical exam were sufficient to make the diagnosis. No patch testing or referrals to Dermatology were required.
Photos and text for Photo Rounds Friday courtesy of Richard P. Usatine, MD. This case was adapted from: Usatine R. Contact dermatitis. In: Usatine R, Smith M, Mayeaux EJ, et al, eds. Color Atlas of Family Medicine. 2nd ed. New York, NY: McGraw-Hill; 2013:591-596.
To learn more about the Color Atlas of Family Medicine, see: www.amazon.com/Color-Family-Medicine-Richard-Usatine/dp/0071769641/
You can now get the second edition of the Color Atlas of Family Medicine as an app by clicking on this link: usatinemedia.com