CHEST issues guidelines on EBUS-TBNA


Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has been recommended for diagnosis of suspected sarcoidosis or suspected tuberculosis with adenopathy and may be used as an initial diagnostic test for suspected lymphoma, according to guidelines issued by CHEST (the American College of Chest Physicians).

The guidelines, which are primarily focused on technical aspects of EBUS-TBNA, also advise obtaining additional samples for the purpose of molecular analysis in patients who undergo the procedure for the diagnosis or staging of non–small cell lung cancer.

The guidelines are based on a systematic review and critical analysis of the literature by an expert panel chaired by Dr. Momen M. Wahidi of Duke University Medical Center, Durham, N.C. Of the 12 guideline statements by the panel, 7 were graded evidence-based recommendations and 5 were ungraded consensus-based statements.

Image: A biopsy window is identified and an FNA needle is advanced into the mass under EUS guidance. (Ktg usa/Wikimedia Commons/CC-ASA 3.0)

The guideline (Chest. 2016 Mar;149[3]:816-35) has been endorsed by the American Association of Bronchology and Interventional Pulmonology, American Association for Thoracic Surgery, Canadian Thoracic Society, European Association for Bronchology and Interventional Pulmonology, and Society of Thoracic Surgeons.

Use of EBUS-TBNA for diagnosis in patients with suspected sarcoidosis with mediastinal and/or hilar adenopathy is ranked Grade 1C (strong recommendation, low-quality evidence, benefits outweigh risks). The guideline writers concluded that EBUS-TBNA provides safe and minimally invasive access to the mediastinal and hilar lymph nodes, with a pooled diagnostic accuracy of 79.1%. They cautioned, however, that it may be difficult to obtain adequate tissue from fibrotic lymph nodes with EBUS-TBNA, and that conventional bronchoscopic techniques, such as transbronchial lung biopsy and endobronchial biopsy, may be needed in selected patients.

One systematic review and meta-analysis included 15 studies with a total of 553 patients with sarcoidosis. The diagnostic yield of EBUS-TBNA ranged from 54% to 93%, with a pooled diagnostic accuracy of 79% (95% confidence interval, 71%-86%). Ten additional studies including a combined 573 patients were identified through updated searches of the systematic review, yielding a pooled diagnostic accuracy of 78.2%.

Similarly, a Grade 1C recommendation was made for using EBUS-TBNA for diagnosis, when other modalities are not diagnostic, in patients with suspected tuberculosis with mediastinal and/or hilar adenopathy who require lymph node sampling. However, “it must be noted that no single study assessed the role of EBUS-TBNA for the diagnosis of TB [tuberculosis] as the primary outcome measure,” they wrote. Various techniques are available for the diagnosis of TB and should be incorporated during the diagnostic evaluation.

In patients with suspected lymphoma, EBUS-TBNA is an acceptable initial, minimally invasive diagnostic test, the guideline writers said in an Ungraded Consensus-Based Statement.

In some conditions, minimally invasive EBUS-TBNA may be preferred over surgical intervention. Repeat mediastinoscopy or surgical biopsy after treatment for relapsed lymphoma can be challenging, for example, with a lower diagnostic yield and higher complication rate.

Because treatment regimens for both non-Hodgkin and Hodgkin lymphoma depend on the specific subtype and histologic grade, a definitive diagnosis of lymphoma requires the evaluation of cell morphology, immunophenotype, and the overall architecture of the tissue. Reed-Sternberg cells, diagnostic of Hodgkin lymphoma, are usually scarce in cytologic aspirates, and it often is impossible to evaluate the overall background architecture. Currently available EBUS-TBNA needles provide only cytologic specimens, with reported high discordance between cytologic specimens and histologic specimens.

In five retrospective studies with a total of 212 patients undergoing EBUS-TBNA for suspected lymphoma, the pooled diagnostic accuracy was 68.7%, and there was heterogeneity across studies in the proportion of patients with de novo lymphoma and relapsed lymphoma. Higher diagnostic yield was noted for relapsed lymphoma, compared with de novo lymphoma. Also, the two studies with the highest yield included cases as diagnostic, even when additional tissue sampling was necessary to subclassify the lymphoma for clinical management.

Based on three studies, the panel gave a weak recommendation (Grade 2C, low-quality evidence) to the conclusion that either moderate or deep sedation is acceptable for EBUS-TBNA. Moderate sedation allows patients to respond purposefully to verbal commands while maintaining a functional airway, spontaneous ventilation, and cardiovascular function. In deep sedation, patients cannot be easily aroused but respond purposefully to repeated or painful stimulation and may have compromised airway function and spontaneous ventilation; cardiovascular function usually is maintained.

In one retrospective multivariable analysis of 309 patients at two centers, deep sedation had a statistically significant benefit on diagnostic yield. In a prospective randomized, controlled study of 149 patients at a single center with a single operator, there was no difference in diagnostic yield for moderate and deep sedation. However, fewer patients in the moderate sedation group were able to complete the procedure, compared with the deep-sedation group. Patient comfort and satisfaction were similar for the two sedation groups, and no patients had major complications or needed escalation of care.

In terms of diagnostic yield, there was insufficient evidence to recommend for or against using an artificial airway when inserting the EBUS bronchoscope, the authors said. Reported practice is scattered and is largely based on expert opinion, operator comfort, sedation type, and institutional standards.

The placement of the endotracheal tube may block the ultrasonographic view of the higher paratracheal lymph nodes (lymph node stations 1, 2R, 2L, and 3P) and should be avoided if one of these lymph nodes is the sampling target of the procedure, they advised.

If a transoral artificial airway is used, a bite block should be considered to protect the bronchoscope from bite damage; this approach is recommended regardless of the depth of sedation. If an endotracheal tube is placed for EBUS-TBNA, a minimum size of 8.0 (8.0-mm internal diameter) should be used to accommodate the scope diameter and leave room for gas exchange.

In an Ungraded Consensus-Based Statement, the guideline authors said that ultrasonographic features, such as size, shape, border, heterogeneity, central hilar structure, and necrosis, can be used to predict malignant and benign diagnoses, but tissue samples still should be obtained to confirm a diagnosis.

Nine studies provided analysis of the characteristics of lymph nodes that predict malignancy during EBUS; however, the ultrasonographic features assessed were not the same in each study or they had varying definitions of what constituted “abnormal.” As a result, the ultrasonographic predictors of malignancy in lymph nodes are not reliable enough to forgo biopsy to obtain a definitive tissue diagnosis. However, the ultrasound features can be useful to guide sampling from lymph nodes most likely to be malignant.

A round shape, distinct margins, heterogeneous echogenicity, and a central necrosis sign were independently predictive of malignancy in one multivariate analysis that included more than 1,000 lymph nodes in nearly 500 patients. Furthermore, when all four factors were absent, 96% of the lymph nodes were benign.

In three additional studies, size criteria yielded conflicting results: one found that size was not a reliable indicator, while two others found that larger lymph nodes were more likely to harbor metastases. These studies also confirmed that round-shaped lymph nodes were more likely to be malignant than were triangular or draping lymph nodes. The measures used to define size may have caused the inconsistencies.

In a study that examined vascular image patterns within lymph nodes as a predictor of malignancy, nodes were considered malignant if vascular involvement increased to rich flow with more than four vessels (grades 2 and 3); this criterion had a sensitivity of 87.7% and a specificity of 69.6%. The finding suggests that increased vascularity, assessed with power/color Doppler ultrasound, is useful in predicting malignancy.

Two studies have assessed ultrasound features of lymph nodes in patients with sarcoidosis. In the first, lymph nodes with homogeneous echogenicity and a germinal center were more likely to indicate sarcoidosis than lung cancer. In the second, coagulation necrosis and heterogeneous echogenicity within lymph nodes were more likely to be present in tuberculosis as opposed to sarcoidosis.

In another Ungraded Consensus-Based Statement, the guideline authors said tissue sampling may be performed either with or without suction. When EBUS-TBNA is being performed with suction and the samples obtained are bloody, operators should consider switching to sampling without suction at the same site. If intranodal blood vessels are visualized on EBUS imaging, with or without Doppler imaging, operators should also consider obtaining samples without suction.

Needle choice should be determined by the operator, and either a 21- or 22-gauge needle is an acceptable option based on five trials comparing needle sizes, the authors said in a Grade 1C recommendation. No data are available on the use of 25-gauge needles.

“Future studies should investigate if ... smaller or more flexible needles would improve sampling at station 4L (known for its slightly angulated location) or if smaller needles would result in less blood contamination when sampling more vascular nodes. Studies should also examine if a particular needle size should be used depending on how the specimens are being processed (histopathology vs. cytopathology) and if needle size can affect the diagnosis of diseases that are more difficult to diagnose by EBUS-TBNA, such as lymphoma,” the authors wrote.

In the absence of rapid on-site evaluation (ROSE), the authors advised a minimum of three separate needle passes per sampling site in patients suspected of having lung cancer. The recommendation is an Ungraded Consensus-Based Statement.

Just one study of 102 patients with potentially operable non–small cell lung cancer and mediastinal adenopathy has examined the number of needle passes per sampling site. The results indicated optimal diagnostic values are reached after three passes. Each pass typically includes 5-15 agitations of the needle within the target site.

Sample adequacy was 90.1% after the first pass, 98.1% after two passes, and reached 100% after three passes. The sensitivity for differentiating malignant from benign lymph node stations was 69.8%, 83.7%, 95.3%, and 95.3% for one, two, three, and four passes, respectively.

No data exist regarding the number of needle passes required to obtain a sufficient diagnostic yield for lymphoma or nonmalignant diseases of the mediastinum.

In a Grade 1C recommendation, the authors said that tissue sampling can be performed with or without rapid on-site evaluation. ROSE does not affect the diagnostic yield in EBUS-TBNA procedures, but it may decrease the number of punctures and reduce the need for additional staging and diagnostic procedures. ROSE may be beneficial in judging the quantity of available malignant cells when testing for molecular markers is planned in patients with advanced adenocarcinoma of the lung.

In another Grade 1C recommendation, the authors said that patients undergoing EBUS-TBNA for the diagnosis or staging of suspected or known non–small cell lung cancer should have additional samples obtained for molecular analysis.

Molecular marker testing is necessary for tailoring chemotherapy to the cancer characteristics of each individual patient. The current data are insufficient to identify the number of passes needed to obtain adequate tissue for molecular marker testing, but the panel strongly suggests obtaining additional samples beyond the proposed diagnostic threshold of three passes.

The guideline authors found insufficient evidence to support any one route of bronchoscope insertion for EBUS-TBNA over another. Translating the experience and literature from conventional flexible bronchoscopy is difficult, they noted, given the size and rigidity of the EBUS bronchoscope’s distal tip and the limited bronchoscopic view.

They noted that no studies were found that addressed the use of saline-filled balloons to overcome poor contact between the ultrasound probe and the bronchial wall. Although the saline-filled balloon can enhance image acquisition, it is unclear whether that translates into better diagnostic yield; thus, no recommendations or suggestions could be made.

In a Grade 2C recommendation, they advised that low-fidelity inanimate mechanical airway models and high-fidelity computer-based electronic simulation be incorporated into training. In the three studies that compared conventional EBUS-TBNA training with simulation-based training using either a low- or high-fidelity tool, trainees acquired the same level of skill with either approach; simulation-based training, however, minimizes novice operators’ practice on patients.

In an Ungraded Consensus-Based Statement, the guideline authors advised that validated EBUS skills assessment tests be used to objectively assess skill level, but added that “none of the included simulation studies examined whether the skills demonstrated on a simulation assessment are transferred to an improvement in clinical skills as performed in patients.”

[email protected]

On Twitter @maryjodales

Endovascular surges over surgery for patients hospitalized for CLI


Although the rate of hospital admissions for critical limb ischemia (CLI) remained steady from 2003 to 2011, surgical revascularization decreased and endovascular treatment increased significantly, with concomitant decreases in in-hospital mortality and major amputation, according to an analysis of 642,433 patients hospitalized with CLI in the Nationwide Inpatient Sample.

In addition, despite multiple adjustments, endovascular revascularization was associated with reduced in-hospital mortality, compared with surgical revascularization over the same period, according to a report online in the Journal of the American College of Cardiology.


The annual in-hospital mortality rate decreased from 5.4% in 2003 to 3.4% in 2011 (P less than .001), and the major amputation rate dropped from 16.7% to 10.8%. There also was a significant decrease in length of stay (LOS), from 10 days to 8.4 days, over the same period (P less than .001); however, this did not translate into a significant difference in the cost of hospitalization, according to Dr. Shikhar Agarwal and colleagues at the Cleveland Clinic (doi: 10.1016/j.jacc.2016.02.040).

Significant predictors of in-hospital mortality in multivariate regression analysis were female sex, older age, emergent admission, a primary indication of septicemia, heart failure, and respiratory disease, as well as any stump complications present during admission. In contrast, any form of revascularization was associated with significantly reduced in-hospital mortality.

A comparison of revascularization methods showed that surgical revascularization decreased significantly, from 13.9% in 2003 to 8.8% in 2011, while endovascular revascularization increased from 5.1% to 11%. Endovascular revascularization also was associated with a significant decrease in in-hospital mortality, compared with surgical revascularization, over the study period (2.34% vs. 2.73%, respectively; odds ratio, 0.69). Major amputation rates were not significantly different between the two treatments (6.5% vs. 5.7%; OR, 0.99).

Length of stay was significantly lower with endovascular treatment compared with surgical (8.7 vs. 10.7 days) as were costs ($31,679 vs. $32,485, respectively).

Women had a higher rate of in-hospital mortality, but a lower rate of major amputation. Although race was not seen as a factor in predicting in-hospital mortality, blacks and other nonwhite races had significantly higher rates of amputation and lower rates of revascularization, compared with whites.

Approximately half of the patients assessed were admitted for primary CLI-related diagnoses. The remainder were admitted for non–CLI-related conditions – such as acute MI, cerebrovascular events, respiratory disease, heart failure, and acute kidney disease – all of which have been independently associated with increased in-hospital mortality and may act as confounders, according to the authors. These admissions remain relevant because CLI patients have an elevated cardiovascular risk across multiple vascular beds.

In terms of limitations, the authors noted the possibility of selection bias in the database, the rise of standalone outpatient centers in more recent years, which might funnel off select patients, and the lack of anatomical information in the NIS database, which precludes a determination of the appropriateness of treatment choice. Also, the type and invasiveness of the endovascular therapy cannot be determined. “It is possible that simple lesions were preferentially treated with endovascular therapy, whereas more complex lesions were treated by surgical techniques, leading to obvious differences in outcomes. Alternatively, it may be likely that the findings underestimate the impact of endovascular therapy, as sicker patients with higher comorbidities and poor targets were more likely to undergo endovascular revascularization,” the researchers pointed out.

“Despite similar rates of major amputation, endovascular revascularization was associated with reduced in-hospital mortality, mean LOS, and mean cost of hospitalization. Although the results are encouraging, there remain significant disparities and gaps that must be addressed,” Dr. Agarwal and his colleagues concluded.

The authors reported that they had no relevant disclosures.

[email protected]

References

A promising future for CLI treatment?

Many of the unanswered questions regarding the optimal approach to CLI are being addressed by the National Heart, Lung, and Blood Institute–sponsored, multicenter, randomized BEST-CLI (Best Endovascular vs. Best Surgical Therapy in Patients with Critical Limb Ischemia) trial. The BEST-CLI trial will hopefully be completed in 2017. Until that time, clinicians will continue to rely on the best available data to guide revascularization strategies for the management of CLI.

Consistent with prior investigations, Dr. Agarwal et al. demonstrated a significant reduction in the proportion of patients undergoing surgical revascularization with a concomitant rise in endovascular revascularization during the same time period. This was accompanied by a steady decline in the incidence of in-hospital mortality and major amputation. Endovascular therapy was associated with a shorter mean length of stay and reduced hospital costs, despite a similar rate of in-hospital major amputation. As the authors correctly point out, the decreasing amputation and mortality rates cannot be directly attributable to a rise in endovascular therapy, as these studies cannot provide causal conclusions. Numerous other factors can influence mortality and amputation rates, including better medical care, aggressive risk factor modification, and appropriate wound care. Still, these associations are powerful and hypothesis generating, and they warrant further investigation.

Whether the improving CLI outcomes can be explained by the growth of these endovascular therapies is yet to be proved. We await the results of the landmark BEST-CLI trial to provide clarity regarding this issue and to further clarify the future role of surgical versus endovascular revascularization.

Dr. John R. Laird and Dr. Gagan D. Singh of the University of California, Davis Medical Center, Sacramento, and Dr. Ehrin J. Armstrong of the University of Colorado, Denver, made their comments in an invited editorial published online in the Journal of the American College of Cardiology (doi: 10.1016/j.jacc.2016.02.041). Dr. Laird has served as a consultant or advisory board member for Bard Peripheral Vascular, Boston Scientific, Cordis, Medtronic, and Abbott Vascular; and has received research support from WL Gore. Dr. Armstrong has served as a consultant or advisory board member for Abbott Vascular, Boston Scientific, Medtronic, Merck, and Spectranetics. Dr. Singh reported that he has no relevant disclosures.

Author and Disclosure Information

Publications
Topics
Legacy Keywords
endovascular, surgery, CLI, critical limb ischemia, length of stay, amputation, mortality
Author and Disclosure Information

Author and Disclosure Information

Body

Many of the unanswered questions regarding the optimal approach to CLI are being addressed by the National Heart, Lung, and Blood Institute–sponsored, multicenter, randomized BEST-CLI (Best Endovascular vs. Best Surgical Therapy in Patients with Critical Limb Ischemia) trial. The BEST-CLI trial will hopefully be completed in 2017. Until that time, clinicians will continue to rely on the best available data to guide revascularization strategies for the management of CLI.

Consistent with prior investigations, Dr. Agarwal et al. demonstrated a significant reduction in the proportion of patients undergoing surgical revascularization with a concomitant rise in endovascular revascularization during the same time period. This was accompanied by a steady decline in the incidence of in-hospital mortality and major amputation. Endovascular therapy was associated with a shorter mean length of stay and reduced hospital costs, despite a similar rate of in-hospital major amputation. As the authors correctly point out, the decreasing amputation and mortality rates cannot be directly attributable to a rise in endovascular therapy, as these studies cannot provide causal conclusions. Numerous other factors can influence mortality and amputation rates, including better medical care, aggressive risk factor modification, and appropriate wound care. Still, these associations are powerful and hypothesis generating, and they warrant further investigation.

Whether the improving CLI outcomes can be explained by the growth of these endovascular therapies is yet to be proved. We await the results of the landmark BEST-CLI trial to provide clarity regarding this issue and to further clarify the future role of surgical versus endovascular revascularization.

Dr. John R. Laird and Dr. Gagan D. Singh of the University of California, Davis Medical Center, Sacramento, and Dr. Ehrin J. Armstrong of the University of Colorado, Denver, made their comments in an invited editorial published online in the Journal of the American College of Cardiology (doi: 10.1016/j.jacc.2016.02.041). Dr. Laird has served as a consultant or advisory board member for Bard Peripheral Vascular, Boston Scientific, Cordis, Medtronic, and Abbott Vascular; and has received research support from WL Gore. Dr. Armstrong has served as a consultant or advisory board member for Abbott Vascular, Boston Scientific, Medtronic, Merck, and Spectranetics. Dr. Singh reported that he has no relevant disclosures.

Body

Many of the unanswered questions regarding the optimal approach to CLI are being addressed by the National Heart, Lung, and Blood Institute–sponsored, multicenter, randomized BEST-CLI (Best Endovascular vs. Best Surgical Therapy in Patients with Critical Limb Ischemia) trial. The BEST-CLI trial will hopefully be completed in 2017. Until that time, clinicians will continue to rely on the best available data to guide revascularization strategies for the management of CLI.

Consistent with prior investigations, Dr. Agarwal et al. demonstrated a significant reduction in the proportion of patients undergoing surgical revascularization with a concomitant rise in endovascular revascularization during the same time period. This was accompanied by a steady decline in the incidence of in-hospital mortality and major amputation. Endovascular therapy was associated with a shorter mean length of stay and reduced hospital costs, despite a similar rate of in-hospital major amputation. As the authors correctly point out, the decreasing amputation and mortality rates cannot be directly attributable to a rise in endovascular therapy, as these studies cannot provide causal conclusions. Numerous other factors can influence mortality and amputation rates, including better medical care, aggressive risk factor modification, and appropriate wound care. Still, these associations are powerful and hypothesis generating, and they warrant further investigation.

Whether the improving CLI outcomes can be explained by the growth of these endovascular therapies is yet to be proved. We await the results of the landmark BEST-CLI trial to provide clarity regarding this issue and to further clarify the future role of surgical versus endovascular revascularization.

Dr. John R. Laird and Dr. Gagan D. Singh of the University of California, Davis Medical Center, Sacramento, and Dr. Ehrin J. Armstrong of the University of Colorado, Denver, made their comments in an invited editorial published online in the Journal of the American College of Cardiology (doi: 10.1016/j.jacc.2016.02.041). Dr. Laird has served as a consultant or advisory board member for Bard Peripheral Vascular, Boston Scientific, Cordis, Medtronic, and Abbott Vascular; and has received research support from WL Gore. Dr. Armstrong has served as a consultant or advisory board member for Abbott Vascular, Boston Scientific, Medtronic, Merck, and Spectranetics. Dr. Singh reported that he has no relevant disclosures.

Title
A promising future for CLI treatment?
A promising future for CLI treatment?

Even though there was a steady rate of patients with critical limb ischemia (CLI) admitted to hospitals from 2003 to 2011, surgical revascularization decreased and endovascular treatment increased significantly, with concomitant decreases in in-hospital mortality and major amputation, according to the results of an analysis of the Nationwide Inpatient Sample of 642,433 patients hospitalized with CLI.

In addition, despite multiple adjustments, endovascular revascularization was associated with reduced in-hospital mortality, compared with surgical revascularization over the same period, according to a report online in the Journal of the American College of Cardiology.

©Ingram Publishing/Thinkstock

The annual in-hospital mortality rate decreased from 5.4% in 2003 to 3.4% in 2011 (P less than .001), and the major amputation rate dropped from 16.7% to 10.8%. There also was a significant decrease in length-of-stay (LOS) from 10 days to 8.4 days over the same period (P less than .001); however this did not translate to a significant difference in the cost of hospitalization, according to Dr. Shikhar Agarwal and colleagues at the Cleveland Clinic [doi:10.1016/j.jacc.2016.02.040].

Significant predictors of in-hospital mortality on multivariate regression analysis were female sex, older age, emergent admission, a primary indication of septicemia, heart failure, and respiratory disease, as well as any stump complications present during admission. In contrast, any form of revascularization was associated with significantly reduced in-hospital mortality.

A comparison of revascularization methods showed that surgical revascularization significantly decreased from 13.9% in 2003 to 8.8% in 2011, while endovascular revascularization increased from 5.1% to 11%. Also, endovascular revascularization was associated with a significant decrease in in-hospital mortality compared with surgical revascularization over the study period (2.34% vs. 2.73%, respectively; odds ratio = .69). Major amputation rates were not significantly different between the two treatments (6.5% vs. 5.7%; OR = .99).

Length of stay was significantly lower with endovascular treatment than with surgical treatment (8.7 vs. 10.7 days), as were hospitalization costs ($31,679 vs. $32,485, respectively).

Women had a higher rate of in-hospital mortality, but a lower rate of major amputation. Although race was not seen as a factor in predicting in-hospital mortality, blacks and other nonwhite races had significantly higher rates of amputation and lower rates of revascularization, compared with whites.

Approximately half of the patients assessed were admitted for primary CLI-related diagnoses. The remaining, non–CLI-related admitting conditions – such as acute MI, cerebrovascular events, respiratory disease, heart failure, and acute kidney disease – have all been independently associated with increased in-hospital mortality and may be confounding, according to the authors. They remain relevant, however, because CLI patients carry an elevated cardiovascular risk across multiple vascular beds.

In terms of limitations, the authors noted the possibility of selection bias in the database, the rise of standalone outpatient centers in more recent years, which might funnel off select patients, and the lack of anatomical information in the NIS database, which precludes a determination of the appropriateness of treatment choice. Also, the type and invasiveness of the endovascular therapy cannot be determined. “It is possible that simple lesions were preferentially treated with endovascular therapy, whereas more complex lesions were treated by surgical techniques, leading to obvious differences in outcomes. Alternatively, it may be likely that the findings underestimate the impact of endovascular therapy, as sicker patients with higher comorbidities and poor targets were more likely to undergo endovascular revascularization,” the researchers pointed out.

“Despite similar rates of major amputation, endovascular revascularization was associated with reduced in-hospital mortality, mean LOS, and mean cost of hospitalization. Although the results are encouraging, there remain significant disparities and gaps that must be addressed,” Dr. Agarwal and his colleagues concluded.

The authors reported that they had no relevant disclosures.


Display Headline
Endovascular surges over surgery for patients hospitalized for CLI

Article Source

FROM THE JOURNAL OF THE AMERICAN COLLEGE OF CARDIOLOGY

Vitals

Key clinical point: Surgery in hospitalized CLI patients decreased and endovascular treatment increased from 2003 to 2011 with a concomitant decrease in in-hospital mortality and major amputation.

Major finding: Surgical revascularization significantly decreased from 13.9% in 2003 to 8.8% in 2011, while endovascular revascularization increased from 5.1% to 11%.

Data source: A retrospective database analysis of 642,433 patients hospitalized with CLI from 2003 to 2011 who were included in the Nationwide Inpatient Sample.

Disclosures: The authors reported that they had no relevant disclosures.

Skip lymphadenectomy if SLN mapping finds low-grade endometrial cancer

Article Type
Changed
Display Headline
Skip lymphadenectomy if SLN mapping finds low-grade endometrial cancer

SAN DIEGO – Lymphadenectomy is unnecessary if sentinel lymph node mapping successfully stages low-grade endometrial cancer, according to researchers from Johns Hopkins University in Baltimore.

Lymphadenectomy guided by frozen section remains common in the United States. But the Johns Hopkins research team found that using sentinel lymph node (SLN) mapping and biopsy instead cuts the rate of lymphadenectomy by 76%, without reducing the detection of lymphatic metastases.


It’s an important finding for cancer patients likely to survive their diagnosis. “We see low-grade patients in the clinic” who’ve had unnecessary lymphadenectomies, “and they are in terrible shape,” said investigator Dr. Abdulrahman Sinno, a gynecologic oncology fellow at Johns Hopkins. Up to half “have horrible side effects,” including crippling lymphedema and pain.

SLN mapping is “an alternative that gives us the information we need for nodal assessment without putting patients at risk. You’ll know if patients have metastases or not. If they fail to map, you do a frozen section, and if you have high-risk features, a lymphadenectomy only on [the side] that didn’t map,” Dr. Sinno said at the annual meeting of the Society of Gynecologic Oncology.

For the past several years, physicians at Johns Hopkins have been performing both SLN mapping for low-grade endometrial cancer and frozen sections to decide the need for lymphadenectomy. Using both approaches allowed the investigators to review how patients would have fared if they had gotten only one.

“[We could] safely study the utility of SLN mapping while maintaining the historical standard of using frozen sections to direct the need for lymphadenectomy,” Dr. Sinno said.

SLN mapping outperformed frozen section. Among 114 women, most with grade 1 disease but some with grade 2 or complex atypical hyperplasia, 8 had lymph node metastases. Mapping identified every one, five by standard hematoxylin-eosin staining, and three by ultrastaging. Frozen-section guided lymphadenectomy missed three.

Eighty-four (37%) of the 224 hemi-pelvises in the study had lymphadenectomies based on worrisome frozen-section findings. If SLN mapping had been relied on to make the call, lymphadenectomies would have been performed in 20 (9%), a statistically significant difference (P = .004).
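
The 76% reduction quoted earlier follows from these hemi-pelvis counts. A minimal arithmetic sketch, under the assumption that the reduction was computed from the figures reported here (variable names are illustrative, not from the study):

# Illustrative check: relative reduction in lymphadenectomies if SLN mapping,
# rather than frozen section, had driven the decision.
frozen_section_lymphadenectomies = 84   # hemi-pelvises operated on under frozen-section guidance
sln_mapping_lymphadenectomies = 20      # hemi-pelvises that would have been operated on under SLN mapping

relative_reduction = (frozen_section_lymphadenectomies - sln_mapping_lymphadenectomies) / frozen_section_lymphadenectomies
print(f"{relative_reduction:.0%}")      # prints 76%, consistent with the figure cited above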

“Strategies that rely exclusively on uterine frozen section result in significant overtreatment. In the absence of a therapeutic benefit to lymphadenectomy, we believe” this is “unjustifiable when an alternative exists.” At Johns Hopkins these days, “if you map, you’re done,” Dr. Sinno said.

Almost two-thirds of the women had grade 1 endometrial cancer on preoperative histopathology, and about the same number on final pathology. Bilateral SLN mapping was successful in 71 cases (62%) and unilateral mapping in 27 cases (24%). At least one SLN was detected in 98 women (86%).

There were six recurrences after a median follow-up of 15 months. Four were in women who had full pelvic and periaortic lymphadenectomies that were negative. There was also a port site recurrence and a recurrence in an outlying patient with advanced disease. Overall, “recurrence was independent of whether sentinel nodes were applied,” Dr. Sinno said.

Women in the study were a median of 60 years old, with a median body mass index of 33.3 kg/m2.

Dr. Sinno reported having no relevant financial disclosures.



Article Source

AT THE ANNUAL MEETING ON WOMEN’S CANCER

Vitals

Key clinical point: Successful sentinel lymph node mapping gives all the information needed for nodal assessment.

Major finding: Sentinel lymph node mapping identified all eight nodal metastases; frozen-section guided lymphadenectomy missed three.

Data source: A review of 114 cases at Johns Hopkins University.

Disclosures: Dr. Sinno reported having no relevant financial disclosures.

VIDEO: Determining your practice’s fair market value in a quality-based world

Article Type
Changed
Display Headline
VIDEO: Determining your practice’s fair market value in a quality-based world

AUSTIN, TEX. – The shift from fee-for-service to value-based health care raises important questions about determining a physician practice’s fair market value, according to financial analyst Albert “Chip” D. Hutzler.

How will the new systems impact valuation? What about commercial reasonableness of arrangements? In a video interview at an American Health Lawyers Association meeting, Mr. Hutzler of HealthCare Appraisers, Delray, Fla., discussed the intersection of fair market value and value-based care, and he offered guidance on how to prepare for the changes.

The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel.


On Twitter @legal_med


Article Source

AT THE PHYSICIANS AND HOSPITALS LAW INSTITUTE

10 ways EHRs lead to burnout

Article Type
Changed
Display Headline
10 ways EHRs lead to burnout

LAS VEGAS – Doctors are dreading what some have started to call EHR “pajama time.”

“That’s the hour or two that physicians are spending – every night after their kids go to bed – finishing up their documentation, clearing out their in-box,” according to Dr. Christine Sinsky, vice president of professional satisfaction at the American Medical Association.

At a session held in conjunction with the annual meeting of the Healthcare Information and Management Systems Society, Dr. Sinsky spoke about how electronic health records have not lived up to their promise of helping streamline patient care and instead have added hours and headaches to most physicians’ days.


Data on the impact of EHR systems on physicians’ workflows and satisfaction is beginning to accumulate, she said. University of Wisconsin researchers looked at how often and when doctors were accessing their patients’ medical records. They found that many doctors don’t have enough time during the day to finish their documentation, so they spend their evenings and weekends catching up. The preliminary findings were presented in 2015 at a primary care research meeting.

Dr. Sinsky said the researchers see “a bump” of time spent on Saturday nights.

“I call that ‘date night’. That Saturday night belongs to Epic, Cerner, or McKesson,” she said sarcastically. “Well, I don’t want my doctor on her electronic health record on a Saturday night. I want my doctor having fun on Saturday night, because I want her to love her job.”

That same study “found that primary care physicians were spending 38 hours a month after hours doing data entry work,” in other words “working a full extra week every month doing documentation after hours, between 7 p.m. and 7 a.m.,” said Dr. Sinsky, who is also an internist in Dubuque, Iowa.

Here are 10 ways EHRs contribute to more work, Dr. Sinsky said:

1. Too many clicks. “It takes 33 clicks to order and record a flu shot. And in the emergency room, it takes 4,000 clicks to get through the day for a 10-hour shift,” Dr. Sinsky said. “Studies have shown that physicians are spending 44% of their day doing data entry work, [but] 28% of the day with their patient.”

In her own EHR, she said, “it took 21 clicks, eight scrolls, and five screens just to compose the billing invoice, and within that EHR, the responsibility, which used to be a clerical responsibility, has transferred many things to the physician. All of those clicks, all those screens, and all those minutes add up.”

2. Note bloat. With her current EHR, Dr. Sinsky said, “I have six pages of notes for an upper respiratory infection.” This is not efficient. She offered another example: “I had a patient recently who I sent to a local university,” Dr. Sinsky said. “I got back an enormous note, about 12 pages long. But I still didn’t know, at the end of it. Did she have cancer, or not?”

3. Poor workflow. Today’s EHRs have a workflow that doesn’t match how clinicians work, she said. “Right now, many clinicians are encountering these very rigid workflows that don’t meet the patient’s need and don’t meet the provider’s need.” For example, “in some EHRs, the physician can’t look at any clinical data while dictating the note. This means that the physician has to rely on memory or print lab results, x-ray reports, medication lists, etc., in order to reference these data points in their clinic note.”

4. A lack of focus on the patient. Most EHRs lack a place for a photo of the patient and his or her family, and a place for the patient’s story, a deficiency that detracts from the value of the encounter.

5. No support for team care. Often, both a physician and a nurse or medical assistant need to add documentation to the EHR. Yet many systems are set up such that each party must log in, then log out, before another can contribute. “The nurse has to sign in and sign out; the doctor has to sign in and sign out. That’s about a 2-minute process, so it’s completely unworkable,” Dr. Sinsky said.

6. Distracted hikes to the printer. While most health care settings have installed the computer in the exam rooms, few have also installed a printer. “The doctor types up the exit summary, hits print, runs around the corner, down the hall, around the corner to the one printer, picks up the visit summary, goes back down the corner down the hall. Meanwhile, they’ve broken their bond with the patient and been interrupted several times on that journey.”


7. Single-use workstations. Doctors who can sit side by side with their nurses and talk about the patient as they’re working on the EHR can save 30 minutes per day. But most office practice setups don’t accommodate that interaction.

8. Small monitors. Being able to see a large display of information rather than a tiny swatch can save 20 minutes of physician time a day, Dr. Sinsky said.

9. A long sign-in process. Streamlining the way a doctor signs into a computer, perhaps with the use of technologies like the tap of one’s badge, “can save 14 minutes of physician time a day,” Dr. Sinsky said.

10. Underuse of medical and nursing students. Practices are beginning to hire premed and prenursing students as assistants who shadow the physician with each patient. While the physician is “giving undivided attention to the patient, the practice partner is cuing up the orders, doing the billing invoice, and recording much of the encounter.” At the University of California, Los Angeles, researchers found that the use of these assistants saves 3 hours of physician time each day (JAMA Intern Med. 2014;174[7]:1190-3).


Article Source

EXPERT ANALYSIS FROM HIMSS16

Lies, damn lies, and research: Improving reproducibility in biomedical science

Guidelines should not seem threatening
Article Type
Changed
Display Headline
Lies, damn lies, and research: Improving reproducibility in biomedical science

The issue of scientific reproducibility has come to the fore in the past several years, driven by noteworthy failures to replicate critical findings in several much-publicized reports coupled to a series of scandals calling into question the role of journals and granting agencies in maintaining quality and oversight.

In a special Nature online collection, the journal assembled articles and perspectives from 2011 to the present dealing with this issue of research reproducibility in science and medicine. These articles were supplemented with current editorial comment.

Seeing these broad-spectrum concerns pulled together in one place makes it difficult not to be pessimistic about the current state of research across the board. The saving grace, however, is that these same reports show that many people realize there is a problem – people who are trying to make changes and who are in a position to be effective.

According to the reports presented in the collection, problems with research accountability and reproducibility have grown to an alarming extent. By one estimate, irreproducibility costs biomedical research some $28 billion in wasted spending per year (Nature. 2015 Jun 9. doi: 10.1038/nature.2015.17711).

A litany of concerns

In 2012, scientists at Amgen (Thousand Oaks, Calif.) reported that, even when cooperating closely with the original investigators, they were able to reproduce only 6 of 53 studies considered to be benchmarks of cancer research (Nature. 2016 Feb 4. doi: 10.1038/nature.2016.19269).

Scientists at Bayer HealthCare reported in Nature Reviews Drug Discovery that they could successfully reproduce results in only a quarter of 67 so-called seminal studies (2011 Sep. doi: 10.1038/nrd3439-c1).

According to a 2013 report in The Economist, Dr. John Ioannidis, an expert in the field of scientific reproducibility, argued that in his field, “epidemiology, you might expect one in ten hypotheses to be true. In exploratory disciplines like genomics, which rely on combing through vast troves of data about genes and proteins for interesting relationships, you might expect just one in a thousand to prove correct.”

This increasing litany of irreproducibility has raised alarm in the scientific community and has led to a search for answers, as so many preclinical studies form the precursor data for eventual human trials.

Despite the concerns raised, human clinical trials seem to be less at risk for irreproducibility, according to an editorial by Dr. Francis S. Collins, director, and Dr. Lawrence A. Tabak, principal deputy director of the U.S. National Institutes of Health, “because they are already governed by various regulations that stipulate rigorous design and independent oversight – including randomization, blinding, power estimates, pre-registration of outcome measures in standardized, public databases such as ClinicalTrials.gov and oversight by institutional review boards and data safety monitoring boards. Furthermore, the clinical trials community has taken important steps toward adopting standard reporting elements,” (Nature. 2014 Jan. doi: 10.1038/505612a).

The paucity of P

Today, a P value of .05 or less is all too often considered the sine qua non of scientific proof. “Most statisticians consider this appalling, as the P value was never intended to be used as a strong indicator of certainty as it too often is today. Most scientists would look at [a] P value of .01 and say that there was just a 1% chance of [the] result being a false alarm. But they would be wrong.” The 2014 report goes on to explain that, according to one widely used statistical calculation, a P value of .01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of .05 raises that chance of a false alarm to at least 29% (Nature. 2014 Feb. doi: 10.1038/506150a).
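
One way to see where figures like these come from is to convert a P value into a lower bound on the Bayes factor and then into a posterior probability that the null hypothesis is still true. The sketch below uses the widely cited Sellke–Berger–Bayarri bound together with 50:50 prior odds that a real effect exists; it is an illustration under those assumptions, not necessarily the exact calculation the Nature report relied on, and the function name is ours:

import math

def min_false_alarm_probability(p_value, prior_prob_true=0.5):
    # Illustrative helper: lower bound on the chance that a "significant" finding
    # is a false alarm, using the bound BF(null vs. alternative) >= -e * p * ln(p)
    # (valid for p < 1/e) and converting posterior odds to a probability.
    bayes_factor_null = -math.e * p_value * math.log(p_value)
    prior_odds_null = (1 - prior_prob_true) / prior_prob_true
    posterior_odds_null = bayes_factor_null * prior_odds_null
    return posterior_odds_null / (1 + posterior_odds_null)

for p in (0.01, 0.05):
    print(f"P = {p:.2f}: false-alarm probability of at least {min_false_alarm_probability(p):.0%}")
# Prints roughly 11% for P = .01 and 29% for P = .05 with a 50:50 prior.

With a long-shot hypothesis – say, a 1-in-10 prior probability of a true effect – the same sketch puts the false-alarm floor for P = .05 at roughly 79%.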

Beyond this assessment problem, P values may allow for considerable researcher bias, conscious and unconscious, even to the extent of encouraging “P-hacking”: one of the few statistical terms to ever make it into the Urban Dictionary. “P-hacking is trying multiple things until you get the desired result” – even unconsciously, according to one researcher quoted.

In addition, “unless statistical power is very high (and much higher than in most experiments), the P value should be interpreted tentatively at best” (Nat Methods. 2015 Feb 26. doi: 10.1038/nmeth.3288).

So bad is the problem that “misuse of the P value – a common test for judging the strength of scientific evidence – is contributing to the number of research findings that cannot be reproduced,” the American Statistical Association warns in a statement released in March, adding that the P value cannot be used to determine whether a hypothesis is true or even whether results are important (Nature. 2016 Mar 7. doi: 10.1038/nature.2016.19503).


And none of this even remotely addresses those instances where researchers report findings that “trend towards significance” when they can’t even meet the magical P threshold.

A muddling of mice (and more)

Fundamental to biological research is the vast array of preliminary animal studies that must be performed before clinical testing can begin.

Animal-based research has been under intense scrutiny due to a variety of perceived flaws and omissions that have been found to be all too common. For example, in a report in PLoS Biology, Dr. Ulrich Dirnagl of the Charité Medical University in Berlin reviewed 100 reports published between 2000 and 2013, which included 522 experiments using rodents to test cancer and stroke treatments. Around two-thirds of the experiments did not report whether any animals had been dropped from the final analysis, and of the 30% that did report rodents dropped from analysis, only 14 explained why (2016 Jan 4. doi: 10.1371/journal.pbio.1002331). Similarly, Dr. John Ioannidis and his colleagues assessed a random sample of 268 biomedical papers listed in PubMed published between 2000 and 2014 and found that only one contained sufficient details to replicate the work (Nature. 2016 Jan 5. doi: 10.1038/nature.2015.19101).

A multitude of genetic and environmental factors have also been found to influence animal research. For example, the gut microbiome (which influences many aspects of mouse health and metabolism) varies widely in the same species of mice fed different diets or obtained from different vendors. Differences in physiology and behavior can also arise from circadian rhythms and even from variations in cage design (Nature. 2016 Feb 16. doi: 10.1038/530254a).

But things are looking brighter. By the beginning of 2016, more than 600 journals had signed up for the voluntary ARRIVE (Animals in Research: Reporting of In Vivo Experiments) guidelines designed to improve the reporting of animal experiments. The guidelines include a checklist of elements to be included in any reporting of animal research, including animal strain, sex, and adverse events (Nature. 2016 Feb 1. doi: 10.1038/nature.2016.19274).

Problems have also been reported in the use of cell lines and antibodies in biomedical research. For example, a report in Nature indicated that too many biomedical researchers are lax in checking for impostor cell lines when they perform their research (Nature. 2015 Oct 12. doi: 10.1038/nature.2015.18544). And recent studies have shown that improper or misused antibodies are a significant source of false findings and irreproducibility in the modern literature (Nature. 2015 May 19. doi: 10.1038/521274a).

Reviewer, view thyself

The Economist report also discussed failures of the peer-reviewed scientific literature, usually considered the final gateway of quality control, to provide appropriate review and correction of research errors. The report cites a damning test of lower-tier research publications by Dr. John Bohannon, a biologist at Harvard, who submitted a pseudonymous paper on the effects of a chemical derived from lichen cells to 304 journals describing themselves as using peer review. The paper was concocted wholesale, with manifold and obvious errors in study design, analysis, and interpretation of results, according to Dr. Bohannon. This fictitious paper from a fictitious researcher based at a fictitious university was accepted for publication by an alarming 147 of the journals.

The problem is not new. In 1998, Dr. Fiona Godlee, editor of the British Medical Journal, sent an article with eight deliberate mistakes in study design, analysis, and interpretation to more than 200 of the journal’s regular reviewers. None of the reviewers found all the mistakes, and on average they spotted fewer than two. Another BMJ study showed that experience did not improve the quality of reviewers – quite the opposite: over the 14-year period assessed, 1,500 referees, as rated by editors at leading journals, showed a slow but steady drop in their scores.

Such studies prompted a profound reassessment by the journals, in part pushed by some major granting agencies, including the National Institutes of Health.

Not taking grants for granted

The National Institutes of Health is advancing efforts to expand scientific rigor and reproducibility in the projects it funds.

“As part of an increasing drive to boost the reliability of research, the NIH will require applicants to explain the scientific premise behind their proposals and defend the quality of their experimental designs. They must also account for biological variables (for example, by including both male and female mice in planned studies) and describe how they will authenticate experimental materials such as cell lines and antibodies.”

Whether current efforts by scientists, societies, granting organizations, and journals can lead to authentic reform and a vast and relatively quick improvement in reproducibility of scientific results is still an open question. In discussing a 2015 report on the subject by the biomedical research community in the United Kingdom, neurophysiologist Dr. Dorothy Bishop had this to say: “I feel quite upbeat about it. ... Now that we’re aware of it, we have all sorts of ideas about how to deal with it. These are doable things. I feel that the mood is one of making science a much better thing. It might lead to slightly slower science. That could be better” (Nature. 2015 Oct 29. doi: 10.1038/nature.2015.18684).


Guidelines should not seem threatening

In the recent Nature editorial, “Repetitive flaws,” comments are offered regarding the new NIH guidelines that require grant proposals to account for biological variables and describe how experimental materials may be authenticated (2016 Jan 21. doi: 10.1038/529256a). It is proposed that these requirements will attempt to improve the quality and reproducibility of research. Many concerns regarding scientific reproducibility have been raised in the past few years. As the editorial states, the NIH guidelines “can help to make researchers aspire to the values that produced them” and they can “inspire researchers to uphold their identity and integrity.”

To those investigators who strive to report only their best results following exhaustive and sincere confirmation, these guidelines will not seem threatening. Providing experimental details of one’s work is helpful in many ways (you can personally reproduce the work with new and different lab personnel or after a lapse of time, you will have excellent experimental records, you will have excellent documentation when it comes time to write another grant, and so on), and I have personally been frustrated when my laboratory cannot duplicate the published work of others. However, the questions raised include who will pay for reproducing the work of others and how the sacrifice of additional animals or subjects will be justified. Many laboratories are already financially strapped by current funding challenges, and time is also extremely valuable. In addition, junior researchers are on tenure and promotion timelines that create stress and a need for publications to establish independence and credibility, and established investigators must document continued productivity to obtain continued funding.

The quality of peer review of research publications has also been challenged recently, adding to the concern over the veracity of published research. Many journals now have mandatory statistical review prior to acceptance. This also delays time to publication. In addition, the generous reviewers who perform peer review often do so at the cost of their valuable, uncompensated time.

Despite these hurdles and questions, those who perform valuable and needed research to improve the lives and care of our patients must continue to strive to produce the highest level of evidence.

Dr. Jennifer S. Lawton is a professor of surgery at the division of cardiothoracic surgery, Washington University, St. Louis. She is also an associate medical editor for Thoracic Surgery News.


Lies, damn lies, and research: Improving reproducibility in biomedical science

The issue of scientific reproducibility has come to the fore in the past several years, driven by noteworthy failures to replicate critical findings in several much-publicized reports, coupled with a series of scandals that have called into question the role of journals and granting agencies in maintaining quality and oversight.

In a special Nature online collection, the journal assembled articles and perspectives from 2011 to the present dealing with this issue of research reproducibility in science and medicine. These articles were supplemented with current editorial comment.

Seeing these broad spectrum concerns pulled together in one place makes it difficult not to be pessimistic about the current state of research investigations across the board. The saving grace, however, is that these same reports show that a lot of people realize that there is a problem – people who are trying to make changes and who are in a position to be effective.

According to the reports presented in the collection, the problems in research accountability and reproducibility have grown to an alarming extent. By one estimate, irreproducible research wastes some $28 billion in biomedical research spending per year (Nature. 2015 Jun 9. doi: 10.1038/nature.2015.17711).

A litany of concerns

In 2012, scientists at Amgen (Thousand Oaks, Calif.) reported that, even when cooperating closely with the original investigators, they were able to reproduce only 6 of 53 studies considered benchmarks of cancer research (Nature. 2016 Feb 4. doi: 10.1038/nature.2016.19269).

Scientists at Bayer HealthCare reported in Nature Reviews Drug Discovery that they could successfully reproduce results in only a quarter of 67 so-called seminal studies (2011 Sep. doi: 10.1038/nrd3439-c1).

According to a 2013 report in The Economist, Dr. John Ioannidis, an expert in the field of scientific reproducibility, argued that in his field, “epidemiology, you might expect one in ten hypotheses to be true. In exploratory disciplines like genomics, which rely on combing through vast troves of data about genes and proteins for interesting relationships, you might expect just one in a thousand to prove correct.”

This increasing litany of irreproducibility has raised alarm in the scientific community and has led to a search for answers, as so many preclinical studies form the precursor data for eventual human trials.

Despite the concerns raised, human clinical trials seem to be less at risk for irreproducibility, according to an editorial by Dr. Francis S. Collins, director, and Dr. Lawrence A. Tabak, principal deputy director of the U.S. National Institutes of Health, “because they are already governed by various regulations that stipulate rigorous design and independent oversight – including randomization, blinding, power estimates, pre-registration of outcome measures in standardized, public databases such as ClinicalTrials.gov and oversight by institutional review boards and data safety monitoring boards. Furthermore, the clinical trials community has taken important steps toward adopting standard reporting elements,” (Nature. 2014 Jan. doi: 10.1038/505612a).

The paucity of P

Today, a P value of .05 or less is all too often considered the sine qua non of scientific proof. “Most statisticians consider this appalling, as the P value was never intended to be used as a strong indicator of certainty as it too often is today. Most scientists would look at [a] P value of .01 and say that there was just a 1% chance of [the] result being a false alarm. But they would be wrong.” The 2014 report goes on to state that, according to one widely used statistical calculation, a P value of .01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of .05 raises that chance of a false alarm to at least 29% (Nature. 2014 Feb. doi: 10.1038/506150a).
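
Those minimum false-alarm figures can be reproduced with a short calculation. The sketch below assumes they follow from the Sellke-Bayarri-Berger lower bound on the Bayes factor combined with even (50/50) prior odds that a real effect exists, which happens to yield exactly the 11% and 29% minimums quoted; the function name, the prior, and the choice of bound are illustrative assumptions, not details taken from the Nature report.

```python
# Illustrative sketch only: one common way to turn a P value into a minimum
# false-alarm probability. The 50/50 prior and the Sellke-Bayarri-Berger
# bound are assumptions for illustration, not the report's stated method.
import math

def min_false_alarm_probability(p, prior_true=0.5):
    """Lower bound on P(no real effect | data) for a given P value.

    Uses the bound B >= -e * p * ln(p) (valid for p < 1/e) on the Bayes
    factor favoring the null, then converts it to a posterior probability
    given the prior probability that the tested effect is real.
    """
    if not 0 < p < 1 / math.e:
        raise ValueError("bound applies only for 0 < p < 1/e")
    bayes_factor_null = -math.e * p * math.log(p)    # minimum Bayes factor for the null
    prior_odds_null = (1 - prior_true) / prior_true  # 1.0 for even odds
    posterior_odds_null = bayes_factor_null * prior_odds_null
    return posterior_odds_null / (1 + posterior_odds_null)

for p in (0.05, 0.01):
    print(f"P = {p}: false-alarm probability of at least {min_false_alarm_probability(p):.0%}")
# P = 0.05: false-alarm probability of at least 29%
# P = 0.01: false-alarm probability of at least 11%
```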

Beyond this assessment problem, P values may allow for considerable researcher bias, conscious and unconscious, even to the extent of encouraging “P-hacking”: one of the few statistical terms to ever make it into the Urban Dictionary. “P-hacking is trying multiple things until you get the desired result” – even unconsciously, according to one researcher quoted.

In addition, “unless statistical power is very high (and much higher than in most experiments), the P value should be interpreted tentatively at best” (Nat Methods. 2015 Feb 26. doi: 10.1038/nmeth.3288).

So bad is the problem that “misuse of the P value – a common test for judging the strength of scientific evidence – is contributing to the number of research findings that cannot be reproduced,” the American Statistical Association warned in a statement released in March 2016, adding that the P value cannot be used to determine whether a hypothesis is true or even whether results are important (Nature. 2016 Mar 7. doi: 10.1038/nature.2016.19503).

And none of this even remotely addresses those instances where researchers report findings that “trend towards significance” when they can’t even meet the magical P threshold.

A muddling of mice (and more)

Fundamental to biological research is the vast array of preliminary animal studies that must be performed before clinical testing can begin.

Animal-based research has been under intense scrutiny because a variety of flaws and omissions have been found to be all too common. For example, in a report in PLoS Biology, Dr. Ulrich Dirnagl of the Charité Medical University in Berlin reviewed 100 reports published between 2000 and 2013, covering 522 experiments that used rodents to test cancer and stroke treatments. Around two-thirds of the experiments did not report whether any animals had been dropped from the final analysis, and among the roughly 30% that did report dropping rodents from the analysis, only 14 explained why (2016 Jan 4. doi: 10.1371/journal.pbio.1002331). Similarly, Dr. John Ioannidis and his colleagues assessed a random sample of 268 biomedical papers listed in PubMed and published between 2000 and 2014, and found that only one contained sufficient detail to replicate the work (Nature. 2016 Jan 5. doi: 10.1038/nature.2015.19101).

A multitude of genetic and environmental factors have also been found influential in animal research. For example, the gut microbiome (which has been found to influence many aspects of mouse health and metabolism) varies widely in the same species of mice fed on different diets or obtained from different vendors. And there can be differences in physiology and behavior based on circadian rhythms, and even variations in cage design (Nature. 2016 Feb 16. doi: 10.1038/530254a).

But things are looking brighter. By the beginning of 2016, more than 600 journals had signed up for the voluntary ARRIVE (Animals in Research: Reporting of In Vivo Experiments) guidelines designed to improve the reporting of animal experiments. The guidelines include a checklist of elements to be included in any reporting of animal research, including animal strain, sex, and adverse events (Nature. 2016 Feb 1. doi: 10.1038/nature.2016.19274).

Problems have also been reported in the use of cell lines and antibodies in biomedical research. For example, a report in Nature indicated that too many biomedical researchers are lax in checking for impostor cell lines when they perform their research (Nature. 2015 Oct 12. doi: 10.1038/nature.2015.18544). And recent studies have shown that improper or misused antibodies are a significant source of false findings and irreproducibility in the modern literature (Nature. 2015 May 19. doi: 10.1038/521274a).

Reviewer, view thyself

The Economist report also discussed the failure of the peer-reviewed scientific literature, usually considered the final gateway of quality control, to provide appropriate review and correction of research errors. It cited a damning test of lower-tier research publications by Dr. John Bohannon, a biologist at Harvard, who submitted a pseudonymous paper on the effects of a chemical derived from lichen cells to 304 journals describing themselves as using peer review. The paper was concocted wholesale, with manifold and obvious errors in study design, analysis, and interpretation of results, according to Dr. Bohannon. This fictitious paper from a fictitious researcher based at a fictitious university was accepted for publication by an alarming 147 of the journals.

The problem is not new. In 1998, Dr. Fiona Godlee, editor of the British Medical Journal, sent an article with eight deliberate mistakes in study design, analysis, and interpretation to more than 200 of the journal’s regular reviewers. None of the reviewers found all the mistakes, and on average they spotted fewer than two. Another BMJ study showed that experience did not improve the quality of reviewers, but quite the opposite: over the 14-year period assessed, the ratings that editors at leading journals gave 1,500 referees showed a slow but steady decline.

Such studies prompted a profound reassessment by the journals, in part pushed by some major granting agencies, including the National Institutes of Health.

Not taking grants for granted

The National Institutes of Health is advancing efforts to strengthen scientific rigor and reproducibility in the research it funds.

“As part of an increasing drive to boost the reliability of research, the NIH will require applicants to explain the scientific premise behind their proposals and defend the quality of their experimental designs. They must also account for biological variables (for example, by including both male and female mice in planned studies) and describe how they will authenticate experimental materials such as cell lines and antibodies.”

Whether current efforts by scientists, societies, granting organizations, and journals can lead to authentic reform and a vast and relatively quick improvement in reproducibility of scientific results is still an open question. In discussing a 2015 report on the subject by the biomedical research community in the United Kingdom, neurophysiologist Dr. Dorothy Bishop had this to say: “I feel quite upbeat about it. ... Now that we’re aware of it, we have all sorts of ideas about how to deal with it. These are doable things. I feel that the mood is one of making science a much better thing. It might lead to slightly slower science. That could be better” (Nature. 2015 Oct 29. doi: 10.1038/nature.2015.18684).

Inking bests suturing to mark breast tumor margins

BOSTON – With art supplies and “mystery” sutures, a team of investigators has determined that immediate intraoperative inking of lumpectomy specimens appears to be a better method than suture placement for orienting specimens for pathology, a finding that could reduce re-excisions.

“Intraoperative specimen suturing appears to be inaccurate for margin identification, and additional means by the surgeon to improve the accuracy are needed. This could be either immediate inking or routine excision of shave margins, either at the original surgery or when you go back for re-excision, to take all the margins instead of one specific margin,” Dr. Angel Arnaout, a breast surgeon at the University of Ottawa, Ont., said at the annual Society of Surgical Oncology Cancer Symposium.

She presented results of the randomized, blinded SMART (Specimen Margin Assessment Technique) trial comparing suture placement with intraoperative inking within the same surgical specimen.

Positive resection margins during breast-conserving surgery are associated with a twofold increase in the risk of ipsilateral tumor recurrence and require re-excision or conversion to mastectomy. Re-excision rates range from 25% to 45%, but in the majority of cases – 53% to 65% – no additional disease is found, Dr. Arnaout said.

Dr. Angel Arnaout

In addition to adding to patient distress, re-excisions diminish cosmetic results because they take additional tissue, and delay the start of adjuvant breast cancer therapy, she noted.

Surgeons typically orient specimens for pathologists by placing two or three sutures in the center of the margin surfaces or, less frequently, by intraoperative marking of the margins with ink. But in transit from the OR to the pathology lab, specimens often flatten or “pancake” under their own weight, making it challenging for the pathologist to identify the originally marked margins.

To test whether suturing or inking is better at identifying the extent of tumors during breast-conserving surgery, the investigators devised a strategy using benign breast tissue removed during prophylactic mastectomy or reduction mammoplasty.

The surgeons first performed a sham lumpectomy within the tissue. They then painted the specimen margins in the OR using a phospholuminescent paint obtained from an art supply store. The paint dries clear and colorless, but glows in different colors under black light. By using the paint, carefully selected so as not to interfere with pathology staining, the investigators were able to blind the pathologists to the actual orientation of the margins.

While the tissue was still in the OR, the surgeons placed two or three orienting sutures in customary locations, plus an additional “mystery” suture at a location known only to the surgeon.

In the pathology lab, the pathology assistant was asked to examine the sutures and outline the edges of the margins in black ink, using the sutures for guidance.

The specimen was then examined under black light to determine the degree of discordance between surgeons and pathologists in margin identification with the suture and painting methods (primary outcome), and to evaluate the discrepancy in the extent of margin surface areas as identified by the surgeons and the pathologists (secondary outcome).

“We asked breast surgeons ‘what would be a clinically significant discordance rate for you?’ and most of them said between 10% and 20%, so we looked for a 15% discordance rate as being clinically significant between the two margin assessment techniques,” Dr. Arnaout said.

They found that the overall discordance in the location of the mystery suture between surgeons and pathologists was similar whether the samples contained two or three location sutures, with discordance rates of 46% (34 of 75 samples) for two sutures, and 47% (41 of 88 samples) for three sutures.

For the secondary outcome of discordance in the extent of margins, they found that pathologists tended to estimate the mean surface of the anterior and posterior margins to be significantly larger than the surgeons did (P less than .01), while significantly underestimating the superior, inferior, and lateral margins (P less than .01 for superior and inferior, and P = .04 for lateral margins).

Examination under black light showed that often two or three additional margins would be included by the surgeon within what the pathologist had identified as the anterior margin.

“This has implications for the surgeon and for the patient,” Dr. Arnaout said. “For most of us that do lumpectomies that go from skin down to chest wall, if an anterior margin or posterior margin is positive, a lot of times we would fight not to go back because we would say there is no further room for re-excision.”

But if the tumor is within what is actually the superior margin but labeled by the pathologist as an anterior margin, the patient may miss an opportunity for successful re-excision, she explained.

The study had several limitations, Dr. Arnaout said: the sham lumpectomy specimens did not contain skin or muscle, which in an actual operative setting would allow more accurate margin orientation; the specimens were not subjected to the compression in containers that further distorts real specimens with small, nonpalpable lesions; and the lumpectomy specimens were smaller than those normally obtained during cancer surgery, which could have led to overestimation of the discordance that might occur when larger specimens are taken.

“The conclusion of our study is that specimen margin orientation really should be defined by the surgeon who knows the original shape and orientation of the tissue during surgery, and not to rely on the pathology to reorient based on some sutures placed in the centers of the specimens without defining the extent of the surface area of each of the margins,” she said.

The Canadian Cancer Society Research Institute and the Canadian Surgical Research Fund supported the study. Dr. Arnaout and coauthors reported no conflicts of interest.

FROM SSO 2016

Vitals

Key clinical point: Intraoperative suture placement is an inaccurate method for orienting breast tumor specimens for pathology.

Major finding: Discordance in identifying tumor margin surface between surgeons and pathologists was similar whether the samples contained two or three location sutures, with discordance rates of 46% (34 of 75 samples) for two sutures, and 47% (41 of 88 samples) for three sutures.

Data source: Randomized clinical trial of 163 specimens obtained from 49 patients.

Disclosures: The Canadian Cancer Society Research Institute and the Canadian Surgical Research Fund supported the study. Dr. Arnaout and coauthors reported no conflicts of interest.

Voluntary self-disclosure: Pros and cons of using the protocol

AUSTIN, TEX. – Using the federal government’s voluntary self-disclosure protocol to report potential program violations offers advantages and disadvantages.

On one hand, the protocol allows health providers to get in front of possible offenses and retain some control, according to Miami health law attorney Stephen H. Siegel. On the other hand, launching the process could draw increased government scrutiny to a practice.

Stephen Siegel

However, it can pay to be safe, rather than sorry later, Mr. Siegel said at an American Health Lawyers Association meeting.

“You are far better [off] and in a much better position, being proactive than reactive,” Mr. Siegel said in an interview. “Being proactive is an indication that your intention is to do the right thing. Whereas reactive, certainly the government doesn’t view [your intention] that way.”

Several federal agencies offer voluntary disclosure protocols. The HHS Office of Inspector General (OIG) self-disclosure protocol was created in 1998 to enable the self-disclosure of potential health care fraud. The Centers for Medicare & Medicaid Services provides a voluntary self-disclosure protocol that is limited to potential violations of the physician self-referral statute, also called the Stark Law. Self-disclosures can also be made to the Department of Justice, although that agency has no formal protocol.

Voluntary self-disclosure can limit the possibility of an external investigation and reduce criminal and civil liability, according to Mr. Siegel. In a self-disclosure case, doctors can typically expect to pay back 1.5 times the amount that was improperly paid by the government, whereas in a False Claims Act case, for example, physicians can wind up paying treble damages plus a fine of between $5,500 and $11,000. Other advantages of voluntary self-disclosure include an expedited resolution, better control over adverse publicity, and the neutralizing of whistle-blower threats and lawsuits.
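
As a rough, back-of-the-envelope illustration of how those multipliers compare, here is a minimal sketch. The overpayment amount, the number of claims, and the assumption that the $5,500 to $11,000 fine applies per claim are hypothetical inputs chosen for illustration, not figures from Mr. Siegel’s presentation.

```python
# Hypothetical illustration only: compares the repayment exposures described
# above for a made-up overpayment. The dollar amount, claim count, and
# per-claim treatment of the fine are assumptions, not facts from the talk.
overpayment = 100_000   # hypothetical amount improperly paid ($)
claims = 50             # hypothetical number of claims involved

self_disclosure_exposure = 1.5 * overpayment
fca_exposure_low = 3 * overpayment + 5_500 * claims
fca_exposure_high = 3 * overpayment + 11_000 * claims

print(f"Self-disclosure:  ${self_disclosure_exposure:,.0f}")
print(f"False Claims Act: ${fca_exposure_low:,.0f} to ${fca_exposure_high:,.0f}")
# Self-disclosure:  $150,000
# False Claims Act: $575,000 to $850,000
```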

Disadvantages include financial loss, potential reputation harm, no immunity from liability or prior commitments by government, and possible penalties for conduct that may have remained undiscovered and thus undisclosed.

“For the most part, doctors are not aware of [voluntary self-disclosure],” Mr. Siegel said in an interview. “They’re not using it. [However], I think voluntary disclosure is going to become more widely used as people realize the ability to control the risk associated with the process.”

A number of issues warrant voluntary self-disclosure in a practice setting: possible government overpayments, potential improper arrangements with service providers, demonstrable patient harm, falsification of medical records, medical directorship issues, inadequate staffing, or the practice of medicine without a license, among others.

Regardless of which agency handles the self-disclosure, the admission will likely make its way to other agencies.

“Be assured that the agencies are going to talk to each other,” Mr. Siegel said. “If you submit it to DOJ [Department of Justice], chances are, it’s going to go to OIG.”

After choosing the agency to which to direct the self-disclosure, submit a timely, complete, and transparent disclosure, he advised. Each disclosure protocol is specific. For example, the OIG requires the disclosing party to acknowledge that the conduct is a potential violation and to explicitly identify the laws that were potentially violated. The disclosing party also must agree to ensure that corrective actions are implemented and that the potential misconduct has stopped by the time of disclosure or, for improper kickback arrangements, within 90 days of submission.

The process of voluntary self-disclosure can be slow, usually taking more than a year. The OIG and CMS also reserve the right to reject a voluntary disclosure, Mr. Siegel said. If the government has already initiated an investigation, for instance, an agency may reject the self-disclosure.

The government considers a host of factors when choosing how to resolve a self-disclosure case, including the effectiveness of preexisting compliance programs; the nature of the conduct and its financial impact; the doctor’s ability to repay; whether the discloser is a first-time offender; whether the incident is isolated; efforts to correct the problem; the period of alleged conduct; how the matter was discovered; and the party’s level of cooperation.

Mr. Siegel stressed there are no guarantees about how a voluntary self-disclosure case may be settled and that the matter will depend on the circumstances.

“There is no one-size-fits-all approach to voluntary self‐disclosure,” he said. “These decisions should be made with the assistance of competent and experienced counsel.”

On Twitter @legal_med

References

Author and Disclosure Information

Publications
Topics
Sections
Author and Disclosure Information

Author and Disclosure Information

AUSTIN, TEX. – Using the federal government’s voluntary self-disclosure protocol to report potential program violations offers advantages and disadvantages.

On one hand, the protocol allows health providers to get in front of possible offenses and retain some control, according to Miami health law attorney Stephen H. Siegel. On the other hand, launching the process could draw increased government scrutiny to a practice.

Stephen Siegel

However, it can pay to be safe, rather than sorry later, Mr. Siegel said at an American Health Lawyers Association meeting.

“You are far better [off] and in a much better position, being proactive than reactive,” Mr. Siegel said in an interview. “Being proactive is an indication that your intention is to do the right thing. Whereas reactive, certainly the government doesn’t view [your intention] that way.”

Several federal agencies offer voluntary disclosure protocols. The HHS Office of Inspector General (OIG) self-disclosure protocol was created in 1998 to enable the self-disclosure of potential health care fraud. The Centers for Medicare & Medicaid Services provides the voluntary self-disclosure protocol, which is limited to potential violations of the physician-self referral statute, also called the Stark Law. Self-disclosures can also be made to the Department of Justice, although the agency has no formal protocol.

Voluntary self-disclosure can limit the possibility of an external investigation and reduce criminal and civil liability, according to Mr. Siegel. In a self-disclosure case, doctors can typically expect to pay back 1.5 times the amount that was improperly paid by the government. Whereas, in a false claims act case, for example, physicians can wind up paying back treble damages, plus a fine of between $5,500 and $11,000. Other advantages to voluntary self-disclosure include an expedited resolution, better control over adverse publicity, and the neutralizing of whistle-blower threats and lawsuits.

Disadvantages include financial loss, potential reputation harm, no immunity from liability or prior commitments by government, and possible penalties for conduct that may have remained undiscovered and thus undisclosed.

“For the most part, doctors are not aware of [voluntary self-disclosure],” Mr. Siegel said in an interview. “They’re not using it. [However], I think voluntary disclosure is going to become more widely used as people realize the ability to control the risk associated with the process.”

A number of issues warrant voluntary self-disclosure in a practice setting: possible government overpayments, potential improper arrangements with service providers, demonstrable patient harm, falsification of medical records, medical directorship issues, inadequate staffing, or the practice of medicine without a license, among others.

Regardless of which agency handles the self-disclosure, the admission will likely make its way to other agencies.

“Be assured that the agencies are going to talk to each other,” Mr. Siegel said. “If you submit it to DOJ [Department of Justice], chances are, it’s going to go to OIG.”

After choosing which agency to direct the self-disclosure, submit a timely, complete, and transparent disclosure, he advised. Each disclosure protocol is specific. For example, the OIG requires the disclosing party to acknowledge that the conduct is a potential violation and explicitly identify the laws that were potentially violated. The disclosing party also must agree ensure that corrective actions are implemented and that potential misconduct has stopped by the time of disclosure or, for improper kickback arrangements, within 90 days of submission.

The process of voluntary self-disclosure can be slow, usually taking more than a year. The OIG and CMS also reserve the right to reject a voluntary disclosure, Mr. Siegel said. If the government has already initiated an investigation for instance, an agency may reject the self-disclosure.

The government considers a host of factors when choosing how to resolve a self-disclosure case including the effectiveness of preexisting compliance programs; the nature of the conduct and its financial impact; the doctor’s ability to repay; whether the discloser is a first‐time offender; whether the incident is isolated; efforts to correct the problem; the period of alleged conduct; how the matter was discovered; and the party’s level of cooperation.

Mr. Siegel stressed there are no guarantees about how a voluntary self-disclosure case may be settled and that the matter will depend on the circumstances.

“There is no one-size-fits-all approach to voluntary self‐disclosure,” he said. “These decisions should be made with the assistance of competent and experienced counsel.”

[email protected]

On Twitter @legal_med

AUSTIN, TEX. – Using the federal government’s voluntary self-disclosure protocol to report potential program violations offers advantages and disadvantages.

On one hand, the protocol allows health providers to get in front of possible offenses and retain some control, according to Miami health law attorney Stephen H. Siegel. On the other hand, launching the process could draw increased government scrutiny to a practice.

Stephen Siegel

However, it can pay to be safe, rather than sorry later, Mr. Siegel said at an American Health Lawyers Association meeting.

“You are far better [off] and in a much better position, being proactive than reactive,” Mr. Siegel said in an interview. “Being proactive is an indication that your intention is to do the right thing. Whereas reactive, certainly the government doesn’t view [your intention] that way.”

Several federal agencies offer voluntary disclosure protocols. The HHS Office of Inspector General (OIG) self-disclosure protocol was created in 1998 to enable the self-disclosure of potential health care fraud. The Centers for Medicare & Medicaid Services provides the voluntary self-disclosure protocol, which is limited to potential violations of the physician-self referral statute, also called the Stark Law. Self-disclosures can also be made to the Department of Justice, although the agency has no formal protocol.

Voluntary self-disclosure can limit the possibility of an external investigation and reduce criminal and civil liability, according to Mr. Siegel. In a self-disclosure case, doctors can typically expect to pay back 1.5 times the amount that was improperly paid by the government. Whereas, in a false claims act case, for example, physicians can wind up paying back treble damages, plus a fine of between $5,500 and $11,000. Other advantages to voluntary self-disclosure include an expedited resolution, better control over adverse publicity, and the neutralizing of whistle-blower threats and lawsuits.

Disadvantages include financial loss, potential reputation harm, no immunity from liability or prior commitments by government, and possible penalties for conduct that may have remained undiscovered and thus undisclosed.

“For the most part, doctors are not aware of [voluntary self-disclosure],” Mr. Siegel said in an interview. “They’re not using it. [However], I think voluntary disclosure is going to become more widely used as people realize the ability to control the risk associated with the process.”

A number of issues warrant voluntary self-disclosure in a practice setting: possible government overpayments, potential improper arrangements with service providers, demonstrable patient harm, falsification of medical records, medical directorship issues, inadequate staffing, or the practice of medicine without a license, among others.

Regardless of which agency handles the self-disclosure, the admission will likely make its way to other agencies.

“Be assured that the agencies are going to talk to each other,” Mr. Siegel said. “If you submit it to DOJ [Department of Justice], chances are, it’s going to go to OIG.”

After choosing which agency should receive the self-disclosure, submit a timely, complete, and transparent disclosure, he advised. Each disclosure protocol is specific. For example, the OIG requires the disclosing party to acknowledge that the conduct is a potential violation and to explicitly identify the laws that were potentially violated. The disclosing party also must agree to ensure that corrective actions are implemented and that potential misconduct has stopped by the time of disclosure or, for improper kickback arrangements, within 90 days of submission.

The process of voluntary self-disclosure can be slow, usually taking more than a year. The OIG and CMS also reserve the right to reject a voluntary disclosure, Mr. Siegel said. If the government has already initiated an investigation, for instance, an agency may reject the self-disclosure.

The government considers a host of factors when choosing how to resolve a self-disclosure case including the effectiveness of preexisting compliance programs; the nature of the conduct and its financial impact; the doctor’s ability to repay; whether the discloser is a first‐time offender; whether the incident is isolated; efforts to correct the problem; the period of alleged conduct; how the matter was discovered; and the party’s level of cooperation.

Mr. Siegel stressed there are no guarantees about how a voluntary self-disclosure case may be settled and that the matter will depend on the circumstances.

“There is no one-size-fits-all approach to voluntary self‐disclosure,” he said. “These decisions should be made with the assistance of competent and experienced counsel.”

[email protected]

On Twitter @legal_med

Display Headline
Voluntary self-disclosure: Pros and cons of using the protocol

Article Source

EXPERT ANALYSIS FROM THE PHYSICIANS AND HOSPITALS LAW INSTITUTE
Better sarcoma outcomes at high-volume centers

Article Type
Changed
Display Headline
Better sarcoma outcomes at high-volume centers

BOSTON – In sarcoma as in other cancers, experience counts.

That’s the conclusion of investigators who found that patients with extra-abdominal sarcomas who were treated in high-volume hospitals had half the 30-day mortality rate, higher likelihood of negative surgical margins, and better overall survival, compared with patients treated in low-volume hospitals.

“It appears there is a direct association between hospital volume and short-term as well as long-term outcomes for soft-tissue sarcomas outside the abdomen,” said Dr. Sanjay P. Bagaria of the Mayo Clinic in Jacksonville, Fla.

The findings support centralization of services at the national level in centers specializing in the management of sarcomas, he said at the annual Society of Surgical Oncology Cancer Symposium.

Previous studies have shown that patients treated in high-volume centers for cancers of the esophagus, pancreas, and lung have better outcomes than patients treated in low-volume centers, he noted.

Given the rarity of sarcomas (approximately 12,000 cases annually in the U.S.), their complexity (more than 60 histologic subtypes), and the multimodality approach their treatment requires, it seemed likely that a positive association between volume and outcomes could be found, Dr. Bagaria said.

He and colleagues queried the U.S. National Cancer Database, a hospital registry of data from more than 1,500 facilities accredited by the Commission on Cancer.

They drew records on all patients diagnosed with extra-abdominal sarcomas from 2003 through 2007 who underwent surgery at the reporting hospitals, and divided the cases into terciles by hospital surgical volume: low volume (3 or fewer cases per year), medium volume (3.2-11.6 per year), and high volume (12 or more cases per year).
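For readers curious how such a tercile split works in practice, the sketch below (in Python, with invented hospital volumes) groups hospitals so that each tier accounts for roughly one-third of all cases. The cutoffs reported in the study come from the real registry data, not from this toy example.

# Toy illustration of splitting hospitals into volume terciles so that each
# tier accounts for roughly one-third of all surgical cases. The hospital
# volumes below are invented; only the general approach mirrors the study design.

def volume_terciles(hospital_volumes):
    """Label each hospital low/medium/high by cumulative share of total cases."""
    total_cases = sum(hospital_volumes.values())
    labels = {}
    cumulative = 0
    # Rank hospitals from lowest to highest annual case volume.
    for hospital, volume in sorted(hospital_volumes.items(), key=lambda kv: kv[1]):
        cumulative += volume
        share = cumulative / total_cases
        if share <= 1 / 3:
            labels[hospital] = "low"
        elif share <= 2 / 3:
            labels[hospital] = "medium"
        else:
            labels[hospital] = "high"
    return labels

if __name__ == "__main__":
    toy_volumes = {"A": 1, "B": 2, "C": 4, "D": 6, "E": 9, "F": 18, "G": 20}
    print(volume_terciles(toy_volumes))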

One third (33%) of all cases were concentrated in just 44 high-volume hospitals, which comprised just 4% of the total hospital sample of 1,163. An additional third (34%) of cases were managed among 196 medium-volume hospitals (17% of the hospital sample), and the remaining third (33%) were spread among 923 low-volume hospitals (79%).

The 30-day mortality rates for low-, medium-, and high-volume hospitals, respectively, were 1.7%, 1.1%, and 0.6% (P less than .0001).

Similarly, the rates of negative margins (R0 resections) were 73%, 78.2%, and 84.2% (P less than .0001).

Five-year overall survival was identical for low- and medium-volume centers (65% each), but was significantly better for patients treated at high-volume centers (69%, P less than .001).

Compared with low-volume centers, patients treated at high-volume centers had an adjusted odds ratio (OR) for 30-day mortality of 0.46 (P = .01), an adjusted OR for R0 margins of 1.87 (P less than .001), and an adjusted OR for overall mortality of 0.92 (P = .04).

Dr. Bagaria noted that the study was limited by missing data about disease-specific survival and by possible selection bias associated with the choice of Commission on Cancer-accredited institutions, which account for 70% of cancer cases nationwide but comprise one-third of all hospitals.



Vitals

Key clinical point: Surgical volume, a surrogate for experience, has been shown to have a direct correlation with patient outcomes for cancers of the esophagus, lung, and pancreas, and this appears to be true for sarcomas as well.

Major finding: Patients treated for sarcoma at high-volume centers had lower 30-day and overall mortality and a higher probability of negative margins than those treated at low-volume centers.

Data source: Retrospective review of data on 14,634 patients treated at 1,163 U.S. hospitals.

Disclosures: The authors reported no relevant disclosures.

CT of chest, extremity effective for sarcoma follow-up

Article Type
Changed
Display Headline
CT of chest, extremity effective for sarcoma follow-up

BOSTON – CT scans appear to be effective for detecting local recurrences and pulmonary metastases in patients treated for soft-tissue sarcomas of the extremities, for about a third less than the cost of follow-up with MRI.

In a retrospective study by Dr. Allison Maciver and her colleagues, among 91 patients with soft-tissue sarcomas of the extremity followed with CT, 11 patients had a total of 14 local recurrences detected on CT, and 11 of the recurrences were in patients who were clinically asymptomatic.

Surveillance CT also identified 15 cases of pulmonary metastases and 4 incidental second primary malignancies, reported Dr. Maciver of the Roswell Park Cancer Institute in Buffalo, N.Y., and her coinvestigators; there was only one false-positive local recurrence.

The benefits of CT over extremity MRI in this population include decreased imaging time, lower cost, and a larger field of view, allowing for detection of second primary malignancies, she noted in a poster session at the annual Society of Surgical Oncology Cancer Symposium.

Many sarcomas of the extremities are highly aggressive, and timely detection of local recurrences could improve chances for limb-sparing salvage therapies. Although MRI has typically been used to follow patients with sarcomas, it is expensive and has a limited field of view, Dr. Maciver said.

In addition, the risk of pulmonary metastases with some soft-tissue sarcomas is high, necessitating the use of chest CT as a surveillance tool.

To see whether CT scans of the chest and extremities could be a cost-effective surveillance strategy for both local recurrences and pulmonary metastases, the investigators did a retrospective study of a prospective database of patients who underwent surgical resection for soft-tissue sarcomas of the extremities from 2001 through 2014 and who had CT as the primary follow-up imaging modality.

They identified a total of 91 high-risk patients followed for a median of 50.5 months. The patients had an estimated 5-year freedom from local recurrence of 82%, and from distant recurrence of 80%. Five-year overall survival was 76%.

Among the 15 patients found on CT to have pulmonary metastases, there were 4 incidentally discovered second primary cancers: 1 each of non–small cell lung cancer, pancreatic adenocarcinoma, Merkel cell carcinoma, and myxofibrosarcoma. There were no false-positive pulmonary metastases.

The estimated cost of 10 years of surveillance, based on 2014 gross technical costs, was $64,969 per patient for chest CT and extremity MRI, compared with $41,595 per patient for chest and extremity CT surveillance, a potential cost savings with the CT-only strategy of $23,374 per patient.
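As a quick check on the reported savings, here is a minimal sketch in Python. The two per-patient totals are the 2014 gross technical cost figures quoted above; applying them to the study's 91 patients is purely illustrative.

# Minimal sketch of the 10-year surveillance cost comparison quoted above.
# The per-patient totals are the article's 2014 gross technical cost figures;
# extrapolating them to the study's 91 patients is illustrative only.

MRI_STRATEGY_COST = 64_969  # chest CT + extremity MRI, per patient over 10 years
CT_STRATEGY_COST = 41_595   # chest + extremity CT, per patient over 10 years

def savings_per_patient():
    return MRI_STRATEGY_COST - CT_STRATEGY_COST

def cohort_savings(n_patients):
    return n_patients * savings_per_patient()

if __name__ == "__main__":
    print(f"Savings per patient: ${savings_per_patient():,}")          # $23,374
    print(f"Savings for a 91-patient cohort: ${cohort_savings(91):,}")  # $2,127,034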

The investigators said that the overall benefits of CT, including the cost savings in an accountable care organization model, “appear to outweigh the slightly increased radiation exposure.”

The study was internally funded. The authors reported having no relevant financial disclosures.



Vitals

Key clinical point: Lower-cost CT scans of the extremity and chest appear to be effective for surveillance of patients following resection of soft-tissue sarcomas.

Major finding: Of 91 patients with soft-tissue sarcomas of the extremity followed with CT, 11 had a total of 14 local recurrences detected. Of the recurrences, 11 were clinically asymptomatic.

Data source: A retrospective study of a prospectively maintained surgical database.

Disclosures: The study was internally funded. The authors reported having no relevant financial disclosures.