New Initiative Expands Native Research Opportunities
IHS has announced that it is funding a new project: the National Native Health Research Training Initiative, a cooperative agreement aimed at building capacity, reducing health disparities, and sharing best practices in American Indian and Alaska Native (AI/AN) health research. The funding offers about $225,000 per year for up to 5 years.
The project is designed to promote tribally driven research through education and training opportunities. The initiative “will help expand the community of American Indian and Alaska Native researchers and enhance the ability of tribes to participate in and initiate research projects that address specific needs in their communities,” said IHS Principal Deputy Director Mary Smith.
The funding opportunity is open to a national membership organization of American Indian and Alaska Native scientists, researchers, and students. The organization selected will further the IHS research program objectives with expanded outreach and education efforts for AI/AN students, faculty, and health professionals by, for example, making it easier for tribes to use research findings to address AI/AN needs or by promoting health research methods to better understand the effects of traditional Indian medicine, indigenous knowledge, and traditional ecological knowledge on AI/AN health.
IHS says it also expects the award recipient to develop regular conference training for health professionals and tribal leaders about health research methods, findings, and best practices to meet the needs and advance the health and health care of AI/AN people.
Non-Invasive Testing for CAD
The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel
Secukinumab for psoriasis at 4 years: undiminished efficacy and safety
VIENNA – Four-year follow-up of patients on secukinumab for psoriasis shows sustained very high efficacy, with almost 100% of patients who had a Psoriasis Area and Severity Index (PASI) 90 or 100 response at 1 year maintaining it through 4 years, Robert Bissonnette, MD, reported at the annual congress of the European Academy of Dermatology and Venereology.
“I must warn you that my presentation will be very boring as compared to what I’ve seen earlier at this meeting, the very cutting edge phase II and phase III data being presented. My presentation doesn’t contain any surprises. However, as a clinician who is using interleukin-17A inhibition in my practice to treat psoriasis patients, that’s probably what I want,” said Dr. Bissonnette, president of Innovaderm Research in Montreal.
“This is the longest-term safety and efficacy data available to date for patients treated with an IL-17 antagonist using an approved dose,” he noted.
Dr. Bissonnette presented 4-year results in the 165 participants who took the approved regimen from the start of the study. These were patients at the serious end of the disease severity spectrum. Their mean baseline PASI score was 23.5, with 33% of their body surface area being affected. Their mean Dermatology Life Quality Index (DLQI) score was 13.1. The mean body mass index was 28.7 kg/m2. A total of 71% of subjects had previously been on systemic therapy. One-third of participants had been on other biologics.
At 1 year, 88.9% of subjects had a PASI 75 response; at 4 years, the PASI 75 rate was 88.5%. Similarly, the PASI 90 rate was 68.5% at 1 year and 66.4% after 4 years. The PASI 100 rate was 43.8% at 1 year and 43.5% at year 4.
After 1 year on secukinumab, patients showed a mean 91.1% improvement, compared with their baseline PASI score. At 4 years, the figure was 90.8%.
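For readers unfamiliar with the metric, PASI 75, 90, and 100 responses are defined as reductions of at least 75%, 90%, and 100% from the baseline PASI score. The short Python sketch below illustrates the arithmetic; the function and the follow-up score are illustrative, not data from the trial.

def pasi_response(baseline, current):
    """Return the PASI response categories met, given baseline and current scores."""
    improvement = 100 * (baseline - current) / baseline  # percent reduction
    return [f"PASI {cut}" for cut in (75, 90, 100) if improvement >= cut]

# The cohort's mean baseline PASI was 23.5; a hypothetical follow-up score
# of 2.1 is a ~91% improvement, i.e., a PASI 90 (but not PASI 100) response.
print(pasi_response(23.5, 2.1))  # ['PASI 75', 'PASI 90']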
Bearing in mind that the average DLQI score at baseline was 13.1, it’s noteworthy that after 1 year on secukinumab, 72.7% of patients had a DLQI of 0 or 1, indicating psoriasis had no impact on their lives. At year 4, the rate was 70.8%, Dr. Bissonnette continued.
As an audience member observed, however, the study population decreased from 165 patients to 131 over the course of 4 years. And since this was an “as observed” analysis, outcomes were counted only in those patients still in the study. It’s accepted as a legitimate statistical method, but it casts outcomes in a particularly favorable light.
“The main reason for dropouts was for personal reasons,” Dr. Bissonnette explained in response. “Number two was lack of or loss of efficacy. Loss of efficacy over time occurred at an absolute rate of 4%-8% per year.”
Overall, adverse event rates declined over the course of 4 years of follow-up.
“This is reassuring, but I don’t think it’s evidence that adverse events actually decrease over time because of longer use of secukinumab. I think it’s probably due to something we usually see in long-term clinical trials: a phenomenon of underreporting. When patients are treated with a new agent they tend to be very, very conscientious about what’s going on with their well-being. They will report a slight sore throat, a slight congestion. But once they’ve been on treatment for a longer time they’re less likely to report those very minor adverse events,” according to the dermatologist.
The Food and Drug Administration requires clinical trialists to keep careful track of selected adverse events in studies of biologic agents. In 4 years on secukinumab, there were no cases of tuberculosis, neutropenia, major adverse cardiovascular events, or Crohn’s disease. There were two cases of ulcerative colitis in year 2; however, one involved an exacerbation of preexisting disease. Also, two patients developed cancer other than nonmelanoma skin cancer in year 2. The incidence of vulvovaginal candidiasis was 1.8% during years 1 and 2, 0.6% in year 3, and zero in year 4.
Thus, the safety profile was favorable, with no pattern of increasing adverse events with longer medication use, Dr. Bissonnette concluded.
The study was sponsored by Novartis. Dr. Bissonnette reported serving as an investigator for and consultant to Novartis and 16 other pharmaceutical companies.
AT THE EADV CONGRESS
Key clinical point: Secukinumab’s very high efficacy and favorable safety profile in psoriasis were maintained through 4 years of continuous treatment.
Major finding: After 1 year on secukinumab, 43.8% of psoriasis patients had a PASI 100 response. After 3 additional years on the interleukin-17A inhibitor, the rate was virtually unchanged at 43.5%.
Data source: This was analysis of 165 psoriasis patients on secukinumab at the approved dose prospectively followed for 4 years in an extension of a phase III clinical trial.
Disclosures: Novartis sponsored the study. The presenter reported serving as an investigator for and consultant to Novartis and 16 other pharmaceutical companies.
Unvaccinated patients rack up billions in preventable costs
Adult patients who avoid vaccines cost the health care system an estimated $7 billion in preventable illness in 2015, according to a cost-of-illness analysis.
Sachiko Ozawa, PhD, of the University of North Carolina at Chapel Hill, and her colleagues estimated the annual economic burden of diseases associated with 10 adult vaccines recommended by the Centers for Disease Control and Prevention, which together protect against 14 pathogens. The researchers drew on studies with U.S. cost data for adult age groups and applied cost-of-illness modeling (Health Affairs 2016 Oct. doi:10.1377/hlthaff.2016.0462).
The cost of outpatient care ranged from $108 to $457 per patient, while the cost of medication ranged from $0 for diseases that lack curative drug treatments to $605 per patient treated for tetanus, the investigators found. Inpatient costs ranged from $5,770 per patient hospitalized for influenza to $15,600 per patient hospitalized for invasive meningococcal disease.
Outpatient productivity loss per patient ranged from $29 for patients requiring a single outpatient visit to $154 for patients diagnosed with HPV-related cancers. Inpatient productivity loss per person ranged from $122 for patients with mumps to $580 for patients with tetanus.
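To make the cost-of-illness approach concrete, the sketch below assembles a per-case estimate from the kinds of components the study tallies. The dollar figures are taken from the per-patient values quoted above, but the pairing of components into a single case is illustrative, not the study's actual model.

def cost_per_case(outpatient=0, medication=0, inpatient=0, productivity=0):
    """Cost-of-illness per case: direct medical costs plus productivity losses."""
    return outpatient + medication + inpatient + productivity

# A hospitalized influenza case: $5,770 in inpatient care plus, say, the
# $29 single-visit productivity loss quoted above (an illustrative pairing).
flu = cost_per_case(inpatient=5770, productivity=29)

# The total burden scales per-case costs by the number of preventable cases.
print(f"per case: ${flu:,}; 1,000 cases: ${1000 * flu:,}")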
The results underscore the need for improved uptake of vaccines among adults and the need for patients to better appreciate the value of vaccines, Dr. Ozawa said in an interview.
“If these individuals were to be vaccinated, then $7 billion in costs would be eliminated every year from the U.S. economy,” she said. “That’s pretty big. That’s the high-level takeaway.”
Dr. Ozawa said that she hopes the study will spur some creative policy solutions to increase vaccine usage, while preserving the autonomy of patients to make more informed choices.
[email protected]
On Twitter @legal_med
Pyuria is an important tool in diagnosing UTI in infants
Urinary tract infections in young infants can be diagnosed by measuring the white blood cell concentration of the patient’s urine, according to a new study.
“Previously recommended pyuria thresholds for the presumptive diagnosis of UTI in young infants were based on manual microscopy of centrifuged urine [but] test performance has not been studied in newer automated systems that analyze uncentrifuged urine,” wrote Pradip P. Chaudhari, MD, and his associates at Harvard University in Boston.
Of the 2,700 infants studied (median age, 1.7 months), 211 (7.8%) had a urine culture that came back positive for UTI. Positive and negative likelihood ratios (LRs) were calculated to determine the microscopic pyuria thresholds at which UTI became more likely in both dilute and concentrated urine. A count of 3 white blood cells per high-power field (WBC/HPF) yielded a positive LR of 9.9 and a negative LR of just 0.15, making it the cutoff for dilute urine samples. For concentrated urine samples, 6 WBC/HPF had a positive LR of 10.1 and a negative LR of 0.17, making it the cutoff for those samples. Leukocyte esterase (LE) thresholds also were determined for dipstick testing, with the investigators finding that any positive result on the dipstick was a strong indicator of UTI.
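A likelihood ratio updates a pretest probability through the odds form of Bayes’ theorem: post-test odds = pretest odds x LR. As a rough worked example using the study’s own figures (7.8% prevalence and the dilute-urine cutoff’s LRs), the sketch below shows how strongly those LRs shift the probability of UTI; this is an illustration of the arithmetic, not an analysis from the paper.

def post_test_probability(pretest_p, lr):
    """Convert a pretest probability to a post-test probability via odds x LR."""
    odds = pretest_p / (1 - pretest_p)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

pretest = 0.078  # 211 of 2,700 infants had culture-confirmed UTI
print(f"3+ WBC/HPF in dilute urine: {post_test_probability(pretest, 9.9):.0%}")   # ~46%
print(f"<3 WBC/HPF in dilute urine: {post_test_probability(pretest, 0.15):.1%}")  # ~1.3%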
“The optimal diagnostic threshold for microscopic pyuria varies by urine concentration,” the authors concluded. “For young infants, urine concentration should be incorporated into the interpretation of uncentrifuged urine analyzed by automated microscopic urinalysis systems.”
There was no external funding for this study. Dr. Chaudhari and his coauthors did not report any relevant financial disclosures.
In this issue of Pediatrics, Chaudhari et al. share the results of a study of the impact of urine concentration on the optimal threshold in the new era of automated urinalysis. Centrifugation of urine specimens has long been standard laboratory practice, presumably performed to concentrate sediment and facilitate the detection of cellular elements and bacteria.
However, for the many sites that do not have machines for automated urinalyses (virtually all office practices, for example), the most important finding in this study may well be how well LE [leukocyte esterase] performs regardless of urine concentration. The optimal threshold for LE is not clear, however. The authors use “small” as their threshold for LE. At any threshold, can a negative urinalysis be relied on to exclude the diagnosis of UTI? A “positive” culture without inflammation evident in the urine is likely due to contamination, very early infection (rare), or asymptomatic bacteriuria (positive urine cultures in febrile children can still represent asymptomatic bacteriuria, because the fever may be due to a source other than the urinary tract).
If there are, in fact, some true UTIs without evidence of inflammation from the urinalysis, are they as harmful as those with “pyuria”?
Animal data demonstrate it is the inflammatory response, not the presence of organisms, that causes renal damage in the form of scarring. So the role of using evidence of inflammation in the urine to screen for who needs a culture seems justified on the basis not only of practicality at point of care and likelihood of UTI, but also sparing individuals at low to no risk of scarring from invasive urine collection. Moreover, using the urinalysis as a screen permits selecting individuals for antimicrobial treatment 24 hours sooner than if clinicians were to wait for culture results before treating. The urinalysis provides a practical window for clinicians to render prompt treatment. And Chaudhari et al. provide valuable assistance for interpreting the results of automated urinalyses.
Kenneth B. Roberts, MD, is a professor of therapeutic radiology at Yale University, New Haven, Conn. He did not report any relevant financial disclosures. These comments are excerpted from a commentary that accompanied Dr. Chaudhari and his associates’ study (Pediatrics. 2016;138(5):e20162877).
FROM PEDIATRICS
Key clinical point: The optimal pyuria threshold for diagnosing UTI in young infants on automated urinalysis of uncentrifuged urine varies by urine concentration.
Major finding: UTI can be safely diagnosed at pyuria thresholds of at least 3 WBC/HPF in dilute urine and at least 6 WBC/HPF in concentrated urine.
Data source: Retrospective cross-sectional study of 2,700 infants younger than 3 months between May 2009 and December 2014.
Disclosures: No external funding for this study; authors did not report any relevant financial disclosures.
Treatment of depression – nonpharmacologic vs. pharmacologic
Major depressive disorder (MDD) affects 16% of adults in the United States at some point in their lives. It is one of the most important causes of disability, time off from work, and personal distress, accounting for more than 8 million office visits per year.
Recent data show that while 8% of the population screens positive for depression, only a quarter of those with depression receive treatment. Most patients with depression are cared for by primary care physicians, not psychiatrists.1 It is important that primary care physicians be familiar with the range of evidence-based treatments for depression and their relative efficacy. Most patients with depression receive antidepressant medication, and fewer than one-third receive some form of psychotherapy.1 The American College of Physicians guideline reviews the evidence on the relative efficacy and safety of second-generation antidepressants and nonpharmacologic treatments of depression.2
Outcomes evaluated in this guideline include response, remission, functional capacity, quality of life, reduction of suicidality or hospitalizations, and harms.
The pharmacotherapy for depression assessed in this guideline is the second-generation antidepressants (SGAs), a class that includes the selective serotonin reuptake inhibitors, the serotonin-norepinephrine reuptake inhibitors, and other newer agents such as bupropion. Previous reviews have shown that the SGAs have similar efficacy and safety, with side effects varying among the different medications; common side effects include constipation, diarrhea, nausea, decreased sexual ability, dizziness, headache, insomnia, and fatigue.
The strongest evidence, rated as moderate quality, comes from trials comparing SGAs with a form of psychotherapy called cognitive-behavioral therapy (CBT). CBT uses the technique of “collaborative empiricism” to question patients’ maladaptive beliefs and, by examining those beliefs, helps patients take on interpretations of reality that are less biased by their initial negative thoughts. Through these “cognitive” exercises, patients begin to adopt healthier, more adaptive approaches to the social, physical, and emotional challenges in their lives. These interpretations are then “tested” in the real world, the behavioral aspect of CBT. Studies ranging from 8 to 52 weeks in patients with MDD showed SGAs and CBT to have equal efficacy with regard to response and remission. Combining an SGA with CBT, compared with an SGA alone, did not improve response or remission, though patients who received both therapies showed somewhat better work function.
When SGAs were compared with interpersonal therapy, psychodynamic therapy, St. John’s wort, acupuncture, and exercise, there was low-quality evidence that these interventions performed with equal efficacy to SGAs. Two trials of exercise, compared with sertraline, had moderate-quality evidence showing similar efficacy between the two treatments.
When patients have an incomplete response to initial treatment with an SGA, there is no difference in response or remission between switching from one SGA to another and switching to cognitive therapy. Similarly, with regard to augmentation, adding CBT to the initial SGA appears to work as well as adding bupropion or buspirone.
With regard to adverse effects, the guideline notes that while the discontinuation rates of SGAs and CBT are similar, CBT likely has fewer side effects. It is also important to recognize that CBT is associated with a lower relapse rate than SGAs, presumably because the skill set developed when learning CBT can continue to be used long term.
The bottom line
Most patients who experience depression are cared for by their primary care physician. Treatments for depression include psychotherapy, complementary and alternative medicine (CAM), exercise, and pharmacotherapy. After discussion with the patient, the American College of Physicians recommends choosing either cognitive-behavioral therapy or second-generation antidepressants when treating depression.
References
1. Olfson M, Blanco C, Marcus SC. Treatment of Adult Depression in the United States. JAMA Intern Med. 2016 Oct;176(10):1482-91.
2. Qaseem A, et al. Nonpharmacologic Versus Pharmacologic Treatment of Adult Patients With Major Depressive Disorder: A Clinical Practice Guideline From the American College of Physicians. Ann Intern Med. 2016 Mar 1;164:350-59.
Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University in Philadelphia. Aaron Sutton is a behavioral therapy consultant in the family medicine residency program at Abington Memorial Hospital.
Myth of the Month: Does nitroglycerin response predict coronary artery disease?
A 55-year-old man presents to the emergency department with substernal chest pain. The pain has occurred off and on over the past 2 hours. He has no family history of coronary artery disease. He has no history of diabetes, hypertension, or cigarette smoking. His most recent total cholesterol was 220 mg/dL (HDL, 40; LDL, 155). Blood pressure is 130/70 mm Hg. An ECG obtained on arrival is unremarkable. When he reached the ED, he received a nitroglycerin tablet, with resolution of his pain within 4 minutes.
What is the most accurate statement?
A. The chance of CAD in this man over the next 10 years was 8% before his symptoms and is now greater than 20%.
B. The chance of CAD in this man over the next 10 years was 8% and is still 8%.
C. The chance of CAD in this man over the next 10 years was 15% before his symptoms and is now close to 100%.
D. The chance of CAD in this man over the next 10 years was 15% before his symptoms and is now close to 50%.
For years, giving nitroglycerin to patients who present with chest pain has been considered a good therapy, and the response to the medication has been considered a sign that the pain was likely due to cardiac ischemia. Is there evidence that this is true?
The first study to test this assumption was a retrospective review of 223 patients who presented to the ED over a 5-month period with ongoing chest pain.1 The investigators looked at patients who had ongoing chest pain in the ED, received nitroglycerin, and did not receive any therapy other than aspirin within 10 minutes of receiving nitroglycerin. Nitroglycerin response was compared with the final diagnosis of cardiac versus noncardiac chest pain.
Of the patients with a final determination of cardiac chest pain, 88% had a nitroglycerin response, whereas 92% of the patients with noncardiac chest pain had a nitroglycerin response (P = .50).
Deborah B. Diercks, MD, and her colleagues looked at improvement in chest pain scores in the ED in patients treated with nitroglycerin and whether it correlated with a cardiac etiology of chest pain.2 The study was a prospective, observational study of 664 patients in an urban tertiary care ED over a 16-month period. An 11-point numeric chest pain scale was assessed and recorded by research assistants before and 5 minutes after receiving nitroglycerin. The scale ranged from 0 (no pain) to 10 (worst pain imaginable).
A final diagnosis of a cardiac etiology for chest pain was found in 18% of the patients in the study. Of the patients who had cardiac-related chest pain, 20% had no reduction in pain with nitroglycerin, compared with 19% of the patients without cardiac-related chest pain. Complete or significant reduction in chest pain occurred with nitroglycerin in 31% of patients with cardiac chest pain and 27% of the patients without cardiac chest pain (P = .76).
Two other studies with similar designs showed similar results. Robert Steele, MD, and his colleagues studied 270 patients in a prospective observational cohort study of patients with chest pain presenting to an urban ED.3 Patients presenting to the ED with active chest pain who received nitroglycerin were enrolled.
In this study, the sensitivity of nitroglycerin relief for determining cardiac chest pain was 72% and the specificity was 37%, for a positive likelihood ratio for coronary artery disease, given a nitroglycerin response, of 1.1 (0.96-1.34).
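That likelihood ratio follows directly from the reported operating characteristics, since LR+ = sensitivity / (1 - specificity), and a ratio this close to 1 barely moves the post-test probability at any realistic pretest risk. A quick check of the arithmetic (the 15% pretest probability is only an example, not a figure from the study):

sensitivity, specificity = 0.72, 0.37  # as reported in the Steele study
lr_positive = sensitivity / (1 - specificity)
print(f"LR+ = {lr_positive:.2f}")  # ~1.14, reported as 1.1

# With an LR+ of ~1.1, a nitroglycerin response leaves the probability of
# coronary disease essentially unchanged, e.g., from a 15% pretest probability:
pretest = 0.15
post_odds = pretest / (1 - pretest) * lr_positive
print(f"post-test probability: {post_odds / (1 + post_odds):.1%}")  # ~16.8%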
In another prospective, observational cohort study, 459 patients who presented to an ED with chest pain were evaluated for response to nitroglycerin as a marker for ischemic cardiac disease.4 In this study, presence of ischemic cardiac disease was defined as diagnosis in the ED or during a 4-month follow-up period. Nitroglycerin relieved chest pain in 35% of patients who had coronary disease, whereas 41% of patients without coronary disease had a nitroglycerin response. This study had a much lower overall nitroglycerin response rate than any of the other studies.
Katherine Grailey, MD, and Paul Glasziou, MD, PhD, published a meta-analysis of nitroglycerin use in the diagnosis of chest pain that drew on the above-referenced studies. They concluded that, in the acute setting, the response to nitroglycerin is not a reliable diagnostic test for coronary artery disease.5
The high nitroglycerin response rate in the noncoronary artery groups in these studies may be due to a strong placebo effect and/or to the fact that nitroglycerin can relieve pain caused by esophageal spasm. That lack of specificity makes pain relief with nitroglycerin an unhelpful diagnostic test. Note that all of the studies were conducted in the acute ED setting. In the case presented at the beginning of this article, the patient’s response to nitroglycerin would not change the probability that he has coronary artery disease.
References
1. Am J Cardiol. 2002 Dec 1;90(11):1264-6.
2. Ann Emerg Med. 2005 Jun;45(6):581-5.
3. CJEM. 2006 May;8(3):164-9.
4. Ann Intern Med. 2003 Dec 16;139(12):979-86.
5. Emerg Med J. 2012 Mar;29(3):173-6.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, and he serves as third-year medical student clerkship director at the University of Washington. Contact Dr. Paauw at [email protected] .
A 55-year-old man presents to the emergency department with substernal chest pain. The pain has occurred off and on over the past 2 hours. He has no family history of coronary artery disease and no history of diabetes, hypertension, or cigarette smoking. His most recent total cholesterol was 220 mg/dL (HDL, 40; LDL, 155). Blood pressure is 130/70 mm Hg. An ECG obtained on arrival is unremarkable. In the ED, he received a nitroglycerin tablet, and his pain resolved within 4 minutes.
What is the most accurate statement?
A. The chance of CAD in this man over the next 10 years was 8% before his symptoms and is now greater than 20%.
B. The chance of CAD in this man over the next 10 years was 8% and is still 8%.
C. The chance of CAD in this man over the next 10 years was 15% before his symptoms and is now close to 100%.
D. The chance of CAD in this man over the next 10 years was 15% before his symptoms and is now close to 50%.
For years, nitroglycerin has been given to patients presenting with chest pain, both as therapy and because a response to the drug has been taken as a sign that the pain was likely due to cardiac ischemia. Is there evidence that this is true?
The first study was a retrospective review of 223 patients who presented to the ED with ongoing chest pain over a 5-month period.1 The investigators included patients who had ongoing chest pain in the ED, received nitroglycerin, and received no therapy other than aspirin within 10 minutes of the nitroglycerin. Nitroglycerin response was compared with the final diagnosis of cardiac versus noncardiac chest pain.
Of the patients with a final determination of cardiac chest pain, 88% had a nitroglycerin response, whereas 92% of the patients with noncardiac chest pain had a nitroglycerin response (P = .50).
Deborah B. Diercks, MD, and her colleagues examined whether improvement in chest pain scores after nitroglycerin in the ED correlated with a cardiac etiology of chest pain.2 Theirs was a prospective, observational study of 664 patients in an urban tertiary care ED over a 16-month period. Research assistants assessed and recorded an 11-point numeric chest pain scale, ranging from 0 (no pain) to 10 (worst pain imaginable), before and 5 minutes after nitroglycerin administration.
A final diagnosis of a cardiac etiology for chest pain was found in 18% of the patients in the study. Of the patients who had cardiac-related chest pain, 20% had no reduction in pain with nitroglycerin, compared with 19% of the patients without cardiac-related chest pain. Complete or significant reduction in chest pain occurred with nitroglycerin in 31% of patients with cardiac chest pain and 27% of the patients without cardiac chest pain (P = .76).
Two other studies with similar designs showed similar results. Robert Steele, MD, and his colleagues enrolled 270 patients with active chest pain who received nitroglycerin in a prospective observational cohort study at an urban ED.3
In this study, the sensitivity of nitroglycerin relief for identifying cardiac chest pain was 72% and the specificity was 37%, yielding a positive likelihood ratio for coronary artery disease, given a nitroglycerin response, of 1.1 (0.96-1.34).
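As a check on these numbers, the reported likelihood ratio follows directly from the sensitivity and specificity. For a positive test (relief of pain with nitroglycerin),

\[ \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.72}{1 - 0.37} \approx 1.1, \]

and a likelihood ratio this close to 1 barely shifts the odds of coronary artery disease in either direction.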
In another prospective, observational cohort study, 459 patients who presented to an ED with chest pain were evaluated for response to nitroglycerin as a marker for ischemic cardiac disease.4 In this study, presence of ischemic cardiac disease was defined as diagnosis in the ED or during a 4-month follow-up period. Nitroglycerin relieved chest pain in 35% of patients who had coronary disease, whereas 41% of patients without coronary disease had a nitroglycerin response. This study had a much lower overall nitroglycerin response rate than any of the other studies.
Katherine Grailey, MD, and Paul Glasziou, MD, PhD, published a meta-analysis of nitroglycerin use in the diagnosis of chest pain that drew on the studies referenced above.5 They concluded that, in the acute setting, response to nitroglycerin is not a reliable test for diagnosing coronary artery disease.
The high nitroglycerin response rates in the noncoronary groups may reflect a strong placebo effect, relief of pain from esophageal spasm, or both. This lack of specificity makes pain relief with nitroglycerin an unhelpful diagnostic test. Note that all of these studies were conducted in the acute ED setting. In the case presented at the beginning of this article, the patient's response to nitroglycerin would not change the probability that he has coronary artery disease.
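To see why, push a pretest probability through Bayes' rule using the likelihood ratio of roughly 1.1 reported above. A minimal sketch in Python, taking the 8% pretest figure from answer options A and B purely for illustration:

def posttest_probability(pretest_p, likelihood_ratio):
    # Convert the probability to odds, apply the likelihood ratio,
    # and convert the result back to a probability.
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

print(posttest_probability(0.08, 1.1))  # ~0.087, essentially still 8%

An 8% pretest probability becomes about 8.7% after a nitroglycerin response, which is why the response should not alter the diagnostic assessment.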
References
1. Am J Cardiol. 2002 Dec 1;90(11):1264-6.
2. Ann Emerg Med. 2005 Jun;45(6):581-5.
3. CJEM. 2006 May;8(3):164-9.
4. Ann Intern Med. 2003 Dec 16;139(12):979-86.
5. Emerg Med J. 2012 Mar;29(3):173-6.
Dr. Paauw is professor of medicine in the division of general internal medicine at the University of Washington, Seattle, where he also serves as third-year medical student clerkship director. Contact Dr. Paauw at [email protected].
Dengue vaccine beneficial only in moderate to high transmission settings
Pediatric patients with previous natural exposure to dengue virus benefit from the dengue virus vaccine, while vaccination of seronegative patients leads to an increased risk for hospitalization because of dengue, according to the results of a mathematical model simulation.
Because the first approved dengue vaccine's efficacy is highly variable among pediatric patients, it should be used only in moderate to high transmission settings, the investigators who designed the model concluded in a paper published in Science.
Dengvaxia, developed by Sanofi Pasteur, is a recombinant chimeric live attenuated dengue virus vaccine built on a yellow fever vaccine backbone. Its development was “considerably more challenging than for other Flavivirus infections because of the immunological interactions between the four dengue virus serotypes and the risk of immune-mediated enhancement of disease,” which causes secondary infections to be more severe, Neil Ferguson, PhD, of Imperial College London, and his associates wrote (Science. 2016 Sep 2;353:1033-6. doi: 10.1126/science.aaf9590).
Despite the complexity of the virus and vaccine, Dengvaxia was recently approved for use in six countries, and two large multicenter phase III clinical trials recently concluded. Investigators for the trials, which involved over 30,000 children in Southeast Asia and Latin America, reported an overall vaccine efficacy of about 60% against symptomatic dengue disease. However, the vaccine's efficacy varied with the severity of dengue infection and with the age and serostatus of the patient at the time of vaccination. Investigators for both trials reported higher efficacy in patients with severe infection and in patients who were seropositive for dengue virus (indicating previous exposure) at the time of vaccination. In addition, both trials found lower vaccine efficacy in younger patients, a pattern “consistent with reduced efficacy in individuals who have not lived long enough to experience a natural infection,” the authors noted.
In an effort to provide guidance for future clinical trials and to predict the impact of wide-scale use of Dengvaxia, investigators developed a mathematical model of dengue transmission based on data from the two trials.
The model confirmed that secondary infections were nearly twice as likely to cause symptomatic infection, compared with primary and postsecondary infections.
In a key result, the model simulation showed that seropositive recipients always gained a substantial benefit from vaccination – more than a 90% reduction in the risk of hospitalization for dengue. Among seronegative recipients, however, the vaccine initially induced near-perfect protection that rapidly decayed (mean duration, 7 months). Moreover, the model showed that seronegative recipients who received the vaccine were at increased risk for hospitalization with dengue.
“This is true both in the short term and in the long term and raises fundamental issues about individual versus population benefits of vaccination,” investigators wrote. “Individual serological testing, if feasible, might radically improve the benefit-risk trade-off.”
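The 7-month figure can be made concrete with a back-of-the-envelope calculation. Assuming, purely for illustration, that protection in seronegative recipients wanes exponentially with a 7-month mean (the summary here reports only the mean, not the functional form), the fraction still protected t months after vaccination is

\[ S(t) = e^{-t/7}, \qquad S(12) = e^{-12/7} \approx 0.18, \]

so under that assumption fewer than one in five seronegative recipients would remain protected 1 year after vaccination.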
The model also demonstrated that the optimal age for vaccination depends on the transmission intensity rate in a region where a child lives. In high-transmission settings, the optimal age to target for vaccination can be 9 years or younger, and as intensity of transmission decreases, optimal age of vaccination should increase, according to investigators.
The study was funded by the UK Medical Research Council, the UK National Institute for Health Research, the US National Institutes of Health, and the Bill & Melinda Gates Foundation. The authors did not report any relevant disclosures.
[email protected]
On Twitter @jessnicolecraig
FROM SCIENCE
Key clinical point: Dengue vaccination benefits children with previous dengue exposure but increases the risk of hospitalization for dengue in seronegative recipients.
Major finding: The vaccine should be used only in moderate to high transmission settings; in high-transmission settings, the optimal age to target for vaccination is 9 years or younger.
Data source: Mathematical model simulation based on two large, multicenter, phase III clinical trials.
Disclosures: This study was funded by the UK Medical Research Council, the UK National Institute for Health Research, the US National Institutes of Health, and the Bill & Melinda Gates Foundation. The authors did not report any relevant disclosures.
VIDEO: Open, robotic, laparoscopic approaches equally effective in pancreatectomy
WASHINGTON – Minimally invasive surgery – whether robotic or laparoscopic – is just as effective as open surgery in pancreatectomy.
Both minimally invasive approaches had perioperative and oncologic outcomes that were similar to open approaches, as well as to each other, Katelin Mirkin, MD, reported at the annual clinical congress of the American College of Surgeons. And while minimally invasive surgery (MIS) techniques were associated with a slightly faster move to adjuvant chemotherapy, survival outcomes with all three surgical approaches were similar.
Dr. Mirkin, a surgery resident at Penn State Milton S. Hershey Medical Center, Hershey, Pa., plumbed the National Cancer Database for patients with stage I-III pancreatic cancer who were treated by surgical resection from 2010 to 2012. Her cohort comprised 9,047 patients; of these, 7,924 were treated with open surgery, 992 with laparoscopic surgery, and 131 with robotic surgery. She examined a number of factors including lymph node harvest and surgical margins, length of stay and time to adjuvant chemotherapy, and survival.
Patients who had MIS were older (67 vs. 66 years) and more often treated at an academic center, but otherwise there were no significant baseline differences.
Dr. Mirkin first compared the open surgeries with MIS. There was no significant association between surgical approach and cancer stage. However, distal resections were significantly more likely to be performed with MIS, and Whipple procedures with open approaches. Total resections were also more often open than MIS.
MIS was more likely to achieve negative surgical margins (79% vs. 75%), and open surgery was more likely to end with positive margins (22% vs. 19%).
Perioperative outcomes favored MIS approaches for all types of surgery, with a mean overall stay of 9.5 days vs. 11.3 days for open surgery. The mean length of stay for a distal resection was 7 days for MIS vs. 8 for open. For a Whipple procedure, the mean stay was 10.7 vs. 11.9 days. For a total resection, it was 10 vs. 11.8 days.
MIS was also associated with a significantly shorter time to the initiation of adjuvant chemotherapy overall (56 vs. 59 days). For a Whipple, time to chemotherapy was 58 vs. 60 days, respectively. For a distal resection, it was 52 vs. 56 days, and for a total resection, 52 vs. 58 days.
Neither approach offered a survival benefit over the other, Dr. Mirkin noted. For stage I cancers, less than 50% of MIS patients and less than 25% of open patients were alive by 50 months. For those with stage II tumors, less than 25% of each group was alive by 40 months. For stage III tumors, the 40-month survival rates were about 10% for MIS patients and 15% for open patients.
Dr. Mirkin then examined perioperative, oncologic, and survival outcomes among those who underwent laparoscopic and robotic surgeries. There were no demographic differences between these groups.
Oncologic outcomes were almost identical with regard to the number of positive regional nodes harvested (six in each group) and surgical margins. Nodes were negative in 82% of robotic cases vs. 78% of laparoscopic cases, and positive in 17.6% of robotic cases vs. 19.4% of laparoscopic cases.
Length of stay was significantly shorter with the laparoscopic approach overall (9.4 vs. 10 days) and particularly for distal resection (7 vs. 10 days). There were no differences in length of stay for any other surgery type, nor any difference in time to adjuvant chemotherapy.
Survival outcomes were similar as well. For stage I cancers, 40-month survival was about 40% in the laparoscopic group and 25% in the robotic group. For stage II cancers, it was about 15% and 25%, respectively. For stage III tumors, 20-month survival was near 0 in the robotic group and about 25% in the laparoscopic group; by 40 months, almost all patients were deceased.
A multivariate survival analysis controlled for age, sex, race, comorbidities, facility type and location, surgery type, surgical margins, pathologic stage, and systemic therapy. It found only one significant association: Patients with 12 or more lymph nodes harvested were 19% more likely to die than those with fewer than 12 nodes harvested.
Time to chemotherapy (longer or shorter than 57 days) did not significantly impact survival, Dr. Mirkin said.
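For readers who want to see the mechanics, a 19% higher likelihood of death in this context corresponds to a hazard ratio of about 1.19. Below is a minimal sketch, not the authors' actual code, of how such a multivariable Cox proportional hazards model can be fit in Python with the lifelines package; the file name and column names are hypothetical placeholders:

import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: follow-up time in months, a death indicator,
# and the covariates listed above (age, sex, race, comorbidities,
# facility type and location, surgery type, margins, stage, systemic
# therapy), plus an indicator for >= 12 lymph nodes harvested.
# Categorical covariates are assumed to be one-hot encoded already.
df = pd.read_csv("pancreatectomy_cohort.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="died")
cph.print_summary()  # a hazard ratio near 1.19 on the node indicator
                     # would match the reported 19% higher mortality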
[email protected]
On Twitter @alz_gal
AT THE ACS CLINICAL CONGRESS
Key clinical point: Minimally invasive and open approaches to pancreatectomy yielded similar perioperative, oncologic, and survival outcomes.
Major finding: For stage I cancers, less than 50% of minimally invasive surgery patients and less than 25% of open surgery patients were alive by 50 months. For those with stage II tumors, less than 25% of each group was alive by 40 months.
Data source: The database review comprised 9,047 cases.
Disclosures: Dr. Mirkin had no financial disclosures.
Pregabalin reduces pain in IBS patients
LAS VEGAS – Pregabalin reduced abdominal pain in patients with irritable bowel syndrome (IBS) and moderate to severe abdominal pain, according to a study presented at the annual meeting of the American College of Gastroenterology.
Antispasmodics and neuromodulators are commonly used to treat such patients, but a significant number don’t respond to these agents, and opioids carry risks of addiction.
The drug makes sense for IBS patients experiencing significant pain, according to Yuri Saito, MD, of the department of medicine and a consultant in the division of gastroenterology at the Mayo Clinic, Rochester, Minn., who presented the research. She noted that pregabalin is approved by the Food and Drug Administration for fibromyalgia, which occurs in many IBS patients. IBS patients also frequently experience anxiety, which can exacerbate symptoms. Pregabalin is not approved for anxiety but is often prescribed off label. “We thought there were multiple reasons why pregabalin would potentially be effective in IBS,” Dr. Saito said in an interview.
Patients taking pregabalin (n = 41) had lower Bowel Symptom Scale (BSS) pain scores than did patients taking placebo (n = 44; 25 vs. 42; P = .008) and lower overall BSS severity scores at weeks 9-12 (26 vs. 42; P = .009). BSS diarrhea scores were lower in the pregabalin group (17 vs. 32; P = .049), as were BSS bloating scores (29 vs. 44; P = .016).
The study enrolled patients with moderate to severe pain who had experienced three or more pain attacks per month and at least one attack during a 2-week screening period. The pregabalin dosage began at 75 mg twice per day and was ramped up to 225 mg twice per day, a dosage maintained from day 7 through week 12.
Somewhat disappointingly, the researchers found no difference in quality-of-life measures, but coexisting fibromyalgia may have confounded those measures, Dr. Gerson said.
Thirty-two percent of subjects in the pregabalin arm experienced dizziness, compared with 5% in the placebo group (P = .01). Other side effects included blurred vision (15% vs. 2%; P = .05) and feeling high or tipsy (10% vs. 0%; P = .05).
The results are encouraging and provide an additional treatment option. “I think it’s probably useful, but mainly in patients with diarrhea-prominent IBS,” said Dr. Gerson.
Dr. Saito was more effusive: “The take-home message is that, for patients with moderate to severe pain who have not responded to antispasmodics or other neuromodulators, pregabalin may be useful as an alternate modality.”
Dr. Saito is an adviser or board member with Commonwealth Labs, Salix, and Synergy. Dr. Gerson is on Allergan’s speakers bureau.
AT ACG 2016
Key clinical point: Pregabalin may be an option for IBS patients with moderate to severe abdominal pain who have not responded to antispasmodics or other neuromodulators.
Major finding: In a pilot study, pregabalin reduced pain scores and diarrhea in patients with IBS and moderate to severe abdominal pain.
Data source: A randomized, placebo-controlled clinical trial of 85 patients.
Disclosures: The study was funded by Pfizer. Dr. Saito is an adviser or board member with Commonwealth Labs, Salix, and Synergy. Dr. Gerson is on Allergan’s speakers bureau.