Multiple Myeloma: Updates on Diagnosis and Management

Two- and 3-drug treatment regimens and autologous stem cell transplants provide opportunities for longer term disease remission, though most patients will still develop relapsed multiple myeloma.

Multiple myeloma (MM) is a disease that is primarily treated by hematologists; however, it is important for primary care providers (PCPs) to be aware of the presentation and diagnosis of this disease. Multiple myeloma often is seen in the veteran population, and VA providers should be familiar with its diagnosis and treatment so that an appropriate referral can be made. Often, the initial signs and symptoms of the disease are subtle and require an astute eye by the PCP to diagnose and initiate a workup.

Once a veteran has an established diagnosis of MM or one of its precursor syndromes, the PCP will invariably be alerted to an adverse event (AE) of treatment or complication of the disease and should be aware of such complications to assist in management or referral. Patients with MM may achieve long-term remission; therefore, it is likely that the PCP will see an evolution in their treatment and care. Last, PCPs and patients often have a close relationship, and patients expect the PCP to understand their diagnosis and treatment plan.

Presentation

Multiple myeloma is a disease in which a neoplastic proliferation of plasma cells produces a monoclonal immunoglobulin. It is almost invariably preceded by premalignant stages of monoclonal gammopathy of undetermined significance (MGUS) and smoldering MM (SMM), although not all cases of MGUS will eventually progress to MM.1 Common signs and symptoms include anemia, bone pain or lytic lesions on X-ray, kidney injury, fatigue, hypercalcemia, and weight loss.2 Anemia is usually a normocytic, normochromic anemia and can be due to involvement of the bone marrow, secondary to renal disease, or it may be dilutional, related to a high monoclonal protein (M protein) level. There are several identifiable causes for renal disease in patients with MM, including light chain cast nephropathy,
hypercalcemia, light chain amyloidosis, and light chain deposition disease. Without intervention, progressive renal damage may occur.3

Diagnosis

All patients with a suspected diagnosis of MM should undergo a basic workup, including complete blood count; peripheral blood smear; complete chemistry panel, including calcium and albumin; serum free light chain (FLC) analysis; serum protein electrophoresis (SPEP) and immunofixation; urinalysis; 24-hour urine collection for electrophoresis (UPEP) and immunofixation; serum B2-microglobulin; and lactate dehydrogenase.4 An FLC analysis is particularly useful for diagnosing and monitoring MM when only small amounts of M protein are secreted into the serum or urine, as well as for nonsecretory and light-chain-only myeloma.5

A bone marrow biopsy and aspirate should be performed in the diagnosis of MM to evaluate bone marrow involvement and the genetic abnormalities of myeloma cells with fluorescence in situ hybridization (FISH) and cytogenetics, both of which are important in risk stratification and treatment planning. A skeletal survey is also typically performed to look for bone lesions.4 Magnetic resonance imaging (MRI) can be useful to evaluate for possible soft tissue lesions when a bone survey is negative, or to evaluate for spinal cord compression.5 Additionally, an MRI should be performed in patients with SMM at the initial assessment, because focal lesions in the setting of SMM are associated with an increased risk of progression.6 Because plain radiographs are usually abnormal only after ≥ 30% of the bone is destroyed, MRI offers a more sensitive image.

Two MM precursor syndromes are worth noting: MGUS and SMM. In evaluating a patient for possible MM, it is important to differentiate between MGUS, asymptomatic SMM, and MM that requires treatment.4 Monoclonal gammopathy of undetermined significance is diagnosed when a patient has a serum M protein < 3 g/dL, clonal bone marrow plasma cells < 10%, and no identifiable end organ damage.5 Smoldering MM is diagnosed when either the serum M protein is ≥ 3 g/dL or bone marrow clonal plasma cells are ≥ 10%, in the absence of end organ damage.

Symptomatic MM is characterized by ≥ 10% clonal bone marrow involvement with end organ damage that includes hypercalcemia, renal failure, anemia, or bone lesions. The diagnostic criteria are summarized in Table 1. The International Myeloma Working Group produced updated guidelines in 2014, which now classify patients with ≥ 60% bone marrow involvement of plasma cells, a serum involved:uninvolved FLC ratio ≥ 100, or > 1 focal lesion on an MRI study as having symptomatic MM.5,6
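The diagnostic triage described above is essentially a set of threshold rules. As an illustrative sketch only (hypothetical function name, simplified relative to the full IMWG criteria, and emphatically not clinical software), the logic can be written as:

```python
def classify_plasma_cell_disorder(m_protein_g_dl, marrow_plasma_pct,
                                  end_organ_damage, flc_ratio=None,
                                  focal_mri_lesions=0):
    """Triage MGUS vs smoldering MM vs symptomatic MM (illustrative only)."""
    # 2014 IMWG biomarkers that define MM even without end organ damage:
    # >= 60% marrow plasma cells, involved:uninvolved FLC ratio >= 100,
    # or more than 1 focal lesion on MRI
    biomarker_defined = (marrow_plasma_pct >= 60
                         or (flc_ratio is not None and flc_ratio >= 100)
                         or focal_mri_lesions > 1)
    if end_organ_damage or biomarker_defined:
        return "multiple myeloma"
    # SMM: M protein >= 3 g/dL or clonal marrow plasma cells >= 10%,
    # without end organ damage
    if m_protein_g_dl >= 3 or marrow_plasma_pct >= 10:
        return "smoldering multiple myeloma"
    # MGUS: M protein < 3 g/dL, < 10% plasma cells, no end organ damage
    return "MGUS"
```

For example, a patient with an M protein of 2 g/dL and 65% marrow plasma cells would be classified as MM despite the absence of end organ damage, reflecting the 2014 biomarker criteria.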

Most patients with MM will have an M protein produced by the malignant plasma cells detected on SPEP or UPEP. The majority of these immunoglobulins are IgG or IgA, whereas IgD and IgM are much less common.2 A minority of patients will not have a detectable M protein on SPEP or UPEP. Some patients produce only light chains and are designated as having light-chain-only myeloma. For these patients, the FLC assay is useful for diagnosis and disease monitoring. Patients who have no M protein on SPEP/UPEP and a normal FLC assay ratio are considered to have nonsecretory myeloma.7

Staging and Risk Stratification

Two staging systems are used to estimate a patient’s prognosis: the Durie-Salmon staging system, which is based on tumor burden (Table 2), and the International Staging System (ISS), which combines serum beta-2 microglobulin (B2M) and serum albumin levels into a powerful and reproducible 3-stage classification (Table 3). The ISS is more commonly used by hematologists because of its simplicity and reliable reproducibility.

In the Durie-Salmon staging system, patients with stage I disease have a lower tumor burden, defined as hemoglobin > 10 g/dL, normal calcium level, no evidence of
lytic bone lesions, and low amounts of protein produced (IgG < 5 g/dL; IgA < 3 g/dL; urine protein < 4 g/d). Patients are classified as stage III if they have any of the following: hemoglobin < 8.5 g/dL, hypercalcemia with level > 12 mg/dL, bony lytic lesions, or high amounts of protein produced (IgG > 7 g/dL; IgA > 5 g/dL; or urine protein > 12 g/d). Patients with stage II disease do not fall into either of these categories. Stage III disease can be further differentiated into stage IIIA or stage IIIB disease if renal involvement is present.8
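The Durie-Salmon rules above reduce to threshold checks: any one high-burden feature yields stage III, all low-burden features together yield stage I, and everything else is stage II. A hedged sketch (hypothetical function name; the upper limit of normal for calcium is assumed here to be 10.5 mg/dL, which the text leaves unspecified):

```python
def durie_salmon_stage(hgb_g_dl, calcium_mg_dl, lytic_lesions,
                       igg_g_dl=0.0, iga_g_dl=0.0, urine_m_protein_g_d=0.0,
                       renal_involvement=False):
    """Illustrative Durie-Salmon staging per the rules above (not clinical software)."""
    # Any one high-burden feature places the patient in stage III
    if (hgb_g_dl < 8.5 or calcium_mg_dl > 12 or lytic_lesions
            or igg_g_dl > 7 or iga_g_dl > 5 or urine_m_protein_g_d > 12):
        stage = "III"
    # Stage I requires ALL low-burden features
    elif (hgb_g_dl > 10
            and calcium_mg_dl <= 10.5  # "normal" calcium; 10.5 is an assumed upper limit
            and not lytic_lesions
            and igg_g_dl < 5 and iga_g_dl < 3 and urine_m_protein_g_d < 4):
        stage = "I"
    else:
        stage = "II"
    # Substage A/B reflects renal involvement; the text highlights it for
    # stage III, though classically it applies to every stage
    return stage + ("B" if renal_involvement else "A")
```

A patient with hemoglobin 11 g/dL, normal calcium, no lytic lesions, and IgG of 4 g/dL would map to stage IA, while a calcium of 13 mg/dL alone is enough for stage III.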

In the ISS, patients with stage I disease have a B2M level < 3.5 mg/L and an albumin level > 3.5 g/dL and have a median overall survival (OS) of 62 months. Stage III patients have a B2M level > 5.5 mg/L and a median OS of 29 months. Stage II patients meet neither set of criteria and have a median OS of 44 months.9 In a Mayo Clinic study, OS has improved over the past decade, with OS for ISS stage III patients increasing to 4.2 years. Overall survival for ISS stage I and stage II disease seems to have increased as well, although the median has not been reached.10
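The ISS computation is even simpler. A minimal sketch (hypothetical function name; boundary handling follows the published ISS, with B2M expressed in mg/L):

```python
def iss_stage(b2m_mg_l, albumin_g_dl):
    """Illustrative ISS staging from the thresholds above (not clinical software)."""
    if b2m_mg_l < 3.5 and albumin_g_dl >= 3.5:
        return "I"    # median OS 62 months in the original ISS cohort
    if b2m_mg_l >= 5.5:
        return "III"  # median OS 29 months
    return "II"       # median OS 44 months
```

Note that stage II is a residual category: a low B2M with a low albumin, or an intermediate B2M regardless of albumin, both land there.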

All myeloma patients are risk stratified at initial diagnosis based on cytogenetic abnormalities, identified mainly by FISH studies; conventional cytogenetics can serve as an alternative if FISH is unavailable. The genetic abnormalities of MM are a major predictor of outcome and affect treatment choice. Three risk groups have been identified: high-risk, intermediate-risk, and standard-risk MM (Table 4).11

Management of MGUS and SMM

Patients with MGUS progress to malignant conditions at a rate of 1% per year.12 Individuals diagnosed with MGUS or SMM typically do not require therapy. According to the International Myeloma Working Group guidelines, patients should be monitored based on risk stratification. Those with low-risk MGUS (IgG M protein < 1.5 g/dL and no abnormal FLC ratio) can be monitored every 6 months for 2 to 3 years. Those at intermediate to high risk need a baseline bone marrow biopsy in addition to a skeletal survey and should have serum and urine protein studies every 6 months for the first year and annually thereafter.

Patients with SMM are at an increased risk of progression to symptomatic MM compared with patients with MGUS (10% per year for the first 5 years, 3% per year for the next 5 years).13 Therefore, experts recommend physician visits and laboratory testing for M proteins every 2 to 3 months for the first year and then an evaluation every 6 to 12 months if the patient remains clinically stable.14 Additionally, there are new data to suggest that early therapy with lenalidomide plus dexamethasone for SMM can prolong time to disease progression as well as increase OS in individuals with SMM at high risk for progression.15

Patients With MM

All patients with a diagnosis of MM require immediate treatment. Initial choice of therapy is driven by whether a patient is eligible for an autologous stem cell transplant (ASCT), because certain agents, such as alkylating agents, should typically be avoided in those who are transplant eligible. Initial therapy for patients
with MM is also based on genetic risk stratification of the disease. Patients with high-risk disease require a complete response (CR) for long-term OS and thus benefit from an aggressive treatment strategy. Standard-risk patients have similar OS whether or not a CR is achieved and can be treated with either an aggressive or a sequential therapy approach.16

Transplant-Eligible Patients

All patients should be evaluated for transplant eligibility, because it results in superior progression-free survival (PFS) and OS in patients with MM compared
with standard chemotherapy. Transplant eligibility requirements differ, depending on the transplant center. There is no strict age limit in the U.S. for determining transplant eligibility. Physiological age and factors such as functional status and liver function are often considered before making a transplant decision.

For VA patients, transplants are generally considered in those aged < 65 years, and patients are referred to 1 of 3 transplant centers: VA Puget Sound Healthcare System in Seattle, Washington; Tennessee Valley Healthcare System in Nashville; or South Texas Veterans Healthcare System in San Antonio.17 All patients who are transplant eligible should receive induction therapy for 2 to 4 months before stem cell collection. This is to reduce tumor burden, for symptomatic management, as well as to lessen end organ damage. After stem cell collection, patients undergo either upfront ASCT or resume induction therapy and undergo a transplant after first relapse.

Bortezomib Regimens

Bortezomib is a proteasome inhibitor (PI) that has been used as upfront chemotherapy for transplant-eligible patients, traditionally to avoid alkylating agents that could affect stem cell harvest. It is highly efficacious in the treatment of patients with MM. Two- or 3-drug regimens have been used. Common regimens include bortezomib, cyclophosphamide, dexamethasone (VCD); bortezomib, thalidomide, dexamethasone (VTD); bortezomib, lenalidomide, dexamethasone (VRD); bortezomib, doxorubicin, dexamethasone; and bortezomib, dexamethasone.18 VCD is less expensive than VTD or VRD, well tolerated, and efficacious; it is often used upfront for newly diagnosed MM.19 Three-drug regimens have been shown to be more efficacious than 2-drug regimens in clinical trials (Table 5).20

Of note, bortezomib is not cleared through the kidney, which makes it an ideal choice for patients with renal impairment. A significant potential AE of bortezomib is peripheral neuropathy. Bortezomib can be administered once or twice weekly; twice-weekly administration is preferred when rapid disease control is needed, such as in light chain cast nephropathy causing acute renal failure.21

Lenalidomide Plus Dexamethasone

Lenalidomide is a second-generation immunomodulatory agent that is increasingly used as initial therapy for MM. There are currently no data showing superiority of bortezomib-based regimens over lenalidomide plus dexamethasone with regard to OS. However, bortezomib-based regimens seem to overcome the poor prognosis associated with the t(4;14) translocation and thus should be considered when choosing initial chemotherapy.22

Lenalidomide can affect stem cell collection; therefore, it is important to collect stem cells in transplant-eligible patients who are aged < 65 years or who have received more than 4 cycles of treatment with this regimen.23,24 A major AE of lenalidomide-containing regimens is an increased risk of thrombosis. All patients on lenalidomide require treatment with aspirin at a minimum; those at higher risk for thrombosis may require low-molecular-weight heparin or warfarin.25

Carfilzomib Plus Lenalidomide Plus Dexamethasone

Carfilzomib is a recently approved PI that has shown promise in combination with lenalidomide and dexamethasone as initial therapy for MM. Several phase 2 trials
have reported favorable results with carfilzomib in combination with lenalidomide and dexamethasone in MM.26,27 More studies are needed to establish efficacy and
safety before this regimen is routinely used as upfront therapy.11

Thalidomide Plus Dexamethasone

Although there are no randomized controlled trials comparing lenalidomide plus dexamethasone with thalidomide plus dexamethasone, these regimens have been compared in retrospective studies. In these studies, lenalidomide plus dexamethasone showed a higher response rate as well as longer PFS and OS compared with thalidomide plus dexamethasone. Additionally, lenalidomide’s AE profile was more favorable than that of thalidomide. In light of this, lenalidomide plus dexamethasone is preferred to thalidomide plus dexamethasone in the management of MM, although the latter can be considered when lenalidomide is not available or not tolerated.28

VDT-PACE

A multidrug combination that should be considered in select populations is the VDT-PACE regimen, which includes bortezomib, dexamethasone, thalidomide, cisplatin, doxorubicin, cyclophosphamide, and etoposide. This regimen can be considered in those patients who have aggressive disease, such as those with plasma cell leukemia or with multiple extramedullary plasmacytomas.11

Autologous Stem Cell Transplant

Previous data suggest that ASCT improves OS in MM by 12 months.29 A more recent open-label, randomized trial comparing high-dose melphalan plus ASCT with melphalan-prednisone-lenalidomide showed significantly prolonged PFS and OS among patients with MM.30 Although the role of ASCT may change as new drugs are integrated into initial therapy, ASCT remains the preferred approach in transplant-eligible patients, and all eligible patients should be considered for transplant.

There remains debate about whether ASCT should be performed early, after 2 to 4 cycles of induction therapy, or late after first relapse. Several randomized trials failed to show a difference in survival for early vs delayed ASCT approach.31 Generally, transplant can be delayed for patients with standard-risk MM who have responded well to therapy.11 Those patients who do not achieve a CR with their first ASCT may benefit from a second (tandem) ASCT.32 An allogeneic transplant is occasionally used in select populations and is the only potentially curative therapy for these patients. However, its high mortality rate precludes its everyday use.

Transplant-Ineligible Patients

For patients with newly diagnosed MM who are ineligible for ASCT because of age or comorbidities, chemotherapy is the only option. Many patients will benefit
not only in survival, but also in quality of life. Immunomodulatory agents, such as lenalidomide and thalidomide, and PIs, such as bortezomib, are highly effective
and well tolerated. There has been a general shift to using these agents upfront in transplant-ineligible patients.

All previously mentioned regimens can also be used in transplant-ineligible patients. Although no longer the preferred treatment, melphalan can be considered
in resource-poor settings.11 Patients who are not transplant eligible are treated for a fixed period of 9 to 18 months, although lenalidomide plus dexamethasone is often continued until relapse.11,33

Melphalan Plus Prednisone Plus Bortezomib

The addition of bortezomib to melphalan and prednisone results in improved OS compared with melphalan and prednisone alone.34 Peripheral neuropathy is a significant AE and can be minimized by giving bortezomib once weekly.

Melphalan Plus Prednisone Plus Thalidomide

Melphalan plus prednisone plus thalidomide has shown an OS benefit compared with melphalan and prednisone alone. The regimen has a high toxicity rate (> 50%) and a deep vein thrombosis rate of 20%, so patients undergoing treatment with this regimen require thromboprophylaxis.35,36

Melphalan Plus Prednisone

Although melphalan plus prednisone has fallen out of favor due to the existence of more efficacious regimens, it may be useful in an elderly patient population who lack access to newer agents, such as lenalidomide, thalidomide, and bortezomib.

Assessing Treatment Response

The International Myeloma Working Group has established criteria for assessing disease response. Response to therapy should be assessed before each cycle with SPEP and UPEP, and with an FLC assay in those without measurable M protein levels. A bone marrow biopsy can be helpful in patients with unmeasurable M protein and low FLC levels, as well as to confirm that a CR is present.

A CR is defined as negative SPEP/UPEP, disappearance of soft tissue plasmacytomas, and < 5% plasma cells in the bone marrow. A very good partial response (VGPR) is defined as serum/urine M protein detectable by immunofixation but not electrophoresis, or a ≥ 90% reduction in serum M protein with urine M protein < 100 mg/d. For those without measurable M protein, a ≥ 90% reduction in the difference between involved and uninvolved FLC levels is required. A partial response is defined as a ≥ 50% reduction in serum M protein and a reduction in 24-hour urinary M protein by ≥ 90% or to < 200 mg/24 h. For those without measurable M protein, a ≥ 50% decrease in the difference between involved and uninvolved FLC levels is required.5
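For patients with measurable serum and urine M protein, the response tiers above can be sketched as ordered threshold checks. This is illustrative and deliberately simplified (hypothetical function name; the full IMWG criteria include additional FLC-based clauses for patients without measurable M protein):

```python
def response_category(serum_reduction_pct, urine_m_mg_24h,
                      immunofixation_negative, electrophoresis_negative,
                      marrow_plasma_pct=None, plasmacytomas=False):
    """Simplified sketch of the response tiers for measurable-M-protein
    disease (illustrative only, not the complete IMWG criteria)."""
    # CR: negative SPEP/UPEP, no plasmacytomas, < 5% marrow plasma cells
    if (immunofixation_negative and electrophoresis_negative
            and not plasmacytomas
            and marrow_plasma_pct is not None and marrow_plasma_pct < 5):
        return "CR"
    # VGPR: M protein on immunofixation but not electrophoresis, or
    # >= 90% serum reduction with urine M protein < 100 mg/d
    if ((electrophoresis_negative and not immunofixation_negative)
            or (serum_reduction_pct >= 90 and urine_m_mg_24h < 100)):
        return "VGPR"
    # PR: >= 50% serum reduction with urine M protein < 200 mg/24 h
    if serum_reduction_pct >= 50 and urine_m_mg_24h < 200:
        return "PR"
    return "less than PR"
```

Checking the deeper categories first matters: a patient meeting CR criteria also meets VGPR and PR criteria, so the ordering assigns the best response achieved.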

Maintenance Therapy

There is currently considerable debate about whether patients should be treated with maintenance therapy following induction chemotherapy or transplant. In patients treated with transplant, several studies have investigated maintenance therapy. Lenalidomide has been evaluated as post-ASCT maintenance and has shown superior PFS compared with placebo; however, this comes at the cost of increased secondary cancers.37

Thalidomide has also been studied as maintenance therapy and seems to offer a modest improvement in PFS and OS but at the cost of increased toxicities, such as neuropathy and thromboembolism.38,39 Other studies compared bortezomib maintenance with thalidomide maintenance in posttransplant patients and showed improved OS with bortezomib.40 As a result, certain patients with intermediate- or high-risk disease may be eligible for bortezomib maintenance following transplant.11 For transplant-ineligible patients, there is no clear role for maintenance therapy.

Refractory/Relapsed Disease Treatments


References

1. Landgren O, Kyle RA, Pfeiffer RM, et al. Monoclonal gammopathy of undetermined significance (MGUS) consistently precedes multiple myeloma: a prospective study. Blood. 2009;113(22):5412-5417.
2. Kyle RA, Gertz MA, Witzig TE, et al. Review of 1027 patients with newly diagnosed multiple myeloma. Mayo Clin Proc. 2003;78(1):21-33.
3. Hutchison CA, Batuman V, Behrens J, et al; International Kidney and Monoclonal Gammopathy Research Group. The pathogenesis and diagnosis of acute kidney injury in multiple myeloma. Nat Rev Nephrol. 2011;8(1):43-51.
4. Dimopoulos M, Kyle R, Fermand JP, et al; International Myeloma Workshop Consensus Panel 3. Consensus recommendations for standard investigative workup: report of the International Myeloma Workshop Consensus Panel 3. Blood. 2011;117(18):4701-4705.
5. Palumbo A, Rajkumar SV, San Miguel JF, et al. International Myeloma Working Group consensus statement for the management, treatment, and supportive care of patients with multiple myeloma not eligible for standard autologous stem-cell transplantation. J Clin Oncol. 2014;32(6):587-600.
6. Rajkumar SV, Dimopoulos MA, Palumbo A, et al. International Myeloma Working Group updated criteria for the diagnosis of multiple myeloma. Lancet Oncol. 2014;15(12):e538-e548.
7. Dimopoulos MA, Kastritis E, Terpos E. Non-secretory myeloma: one, two, or more entities? Oncology (Williston Park). 2013;27(9):930-932.
8. Durie BG, Salmon SE. A clinical staging system for multiple myeloma. Correlation of measured myeloma cell mass with presenting clinical features, response to treatment, and survival. Cancer. 1975;36(3):842-854.
9. Greipp PR, San Miguel J, Durie BG, et al. International staging system for multiple myeloma. J Clin Oncol. 2005;23(15):3412-3420.
10. Kumar SK, Dispenzieri A, Lacy MQ, et al. Continued improvement in survival in multiple myeloma: changes in early mortality and outcomes in older patients. Leukemia. 2014;28(5):1122-1128.
11. Rajkumar SV. Multiple myeloma: 2014 update on diagnosis, risk-stratification, and management. Am J Hematol. 2014;89(10):999-1009.
12. Kyle RA, Therneau TM, Rajkumar SV, et al. A long-term study of prognosis in monoclonal gammopathy of undetermined significance. N Engl J Med. 2002;346(8):564-569.
13. Kyle RA, Remstein ED, Therneau TM, et al. Clinical course and prognosis of smoldering (asymptomatic) multiple myeloma. N Engl J Med. 2007;356(25):2582-2590.
14. Landgren O. Monoclonal gammopathy of undetermined significance and smoldering multiple myeloma: biological insights and early treatment strategies. Hematology Am Soc Hematol Educ Program. 2013;2013(1):478-487.
15. Mateos MV, Hernández MT, Giraldo P, et al. Lenalidomide plus dexamethasone for high-risk smoldering multiple myeloma. N Engl J Med. 2013;369(5):438-447.
16. Haessler J, Shaughnessy JD Jr, Zhan F, et al. Benefit of complete response in multiple myeloma limited to high-risk subgroup identified by gene expression profiling. Clin Cancer Res. 2007;13(23):7073-7079.
17. Xiang Z, Mehta P. Management of multiple myeloma and its precursor syndromes. Fed Pract. 2014;31(suppl 3):6S-13S.
18. National Comprehensive Cancer Network. NCCN clinical practice guidelines in oncology: multiple myeloma. http://www.nccn.org/professionals/physician_gls/PDF/myeloma.pdf. Updated March 10, 2015. Accessed July 8, 2015.
19. Kumar S, Flinn I, Richardson P, et al. Randomized, multicenter, phase 2 study (EVOLUTION) of combinations of bortezomib, dexamethasone, cyclophosphamide, and lenalidomide in previously untreated multiple myeloma. Blood. 2012;119(19):4375-4382.
20. Moreau P, Avet-Loiseau H, Facon T, et al. Bortezomib plus dexamethasone versus reduced-dose bortezomib, thalidomide plus dexamethasone as induction treatment before autologous stem cell transplantation in newly diagnosed multiple myeloma. Blood. 2011;118(22):5752-5758.
21. Moreau P, Pylypenko H, Grosicki S, et al. Subcutaneous versus intravenous administration of bortezomib in patients with relapsed multiple myeloma: a randomized, phase 3, noninferiority study. Lancet Oncol. 2011;12(5):431-440.
22. Pineda-Roman M, Zangari M, Haessler J, et al. Sustained complete remissions in multiple myeloma linked to bortezomib in total therapy 3: comparison with total therapy 2. Br J Haematol. 2008;140(6):624-634.
23. Kumar S, Dispenzieri A, Lacy MQ, et al. Impact of lenalidomide therapy on stem cell mobilization and engraftment post-peripheral blood stem cell transplantation in patients with newly diagnosed myeloma. Leukemia. 2007;21(9):2035-2042.
24. Kumar S, Giralt S, Stadtmauer EA, et al; International Myeloma Working Group. Mobilization in myeloma revisited: IMWG consensus perspectives on stem cell collection following initial therapy with thalidomide-, lenalidomide-, or bortezomib-containing regimens. Blood. 2009;114(9):1729-1735.
25. Larocca A, Cavallo F, Bringhen S, et al. Aspirin or enoxaparin thromboprophylaxis for patients with newly diagnosed multiple myeloma treated with lenalidomide. Blood. 2012;119(4):933-939.
26. Jakubowiak AJ, Dytfeld D, Griffith KA, et al. A phase 1/2 study of carfilzomib in combination with lenalidomide and low-dose dexamethasone as a frontline treatment for multiple myeloma. Blood. 2012;120(9):1801-1809.
27. Korde N, Zingone A, Kwok M, et al. Phase II clinical and correlative study of carfilzomib, lenalidomide, and dexamethasone followed by lenalidomide extended dosing (CRD-R) induces high rates of MRD negativity in newly diagnosed multiple myeloma patients [abstract]. Blood. 2013;122(21):538.
28. Gay F, Hayman SR, Lacy MQ, et al. Lenalidomide plus dexamethasone versus thalidomide plus dexamethasone in newly diagnosed multiple myeloma: a comparative analysis of 411 patients. Blood. 2010;115(7):1343-1350.
29. Attal M, Harousseau JL, Stoppa AM, et al. A prospective, randomized trial of autologous bone marrow transplantation and chemotherapy in multiple myeloma. Intergroupe Français du Myélome. N Engl J Med. 1996;335(2):91-97.
30. Palumbo A, Cavallo F, Gay F, et al. Autologous transplantation and maintenance therapy in multiple myeloma. N Engl J Med. 2014;371(10):895-905.
31. Fermand JP, Ravaud P, Chevret S, et al. High-dose therapy and autologous stem cell transplantation in multiple myeloma: up-front or rescue treatment? Results of a multicenter sequential randomized clinical trial. Blood. 1998;92(9):3131-3136.
32. Elice F, Raimondi R, Tosetto A, et al. Prolonged overall survival with second on-demand autologous stem cell transplant in multiple myeloma. Am J Hematol. 2006;81(6):426-431.
33. Facon T, Dimopoulos MA, Dispenzieri A, et al. Initial phase 3 results of the FIRST (frontline investigation of lenalidomide + dexamethasone versus standard thalidomide) trial (MM-020/IFM 07 01) in newly diagnosed multiple myeloma (NDMM) patients (pts) ineligible for stem cell transplantation (SCT). Blood. 2013;122(21):2.
34. San Miguel JF, Schlag R, Khuageva NK, et al. Bortezomib plus melphalan and prednisone for initial treatment of multiple myeloma. N Engl J Med. 2008;359(9):906-917.
35. Facon T, Mary JY, Hulin C, et al; Intergroupe Français du Myélome. Melphalan and prednisone plus thalidomide versus melphalan and prednisone alone or reduced-intensity autologous stem cell transplantation in elderly patients with multiple myeloma (IFM 99-06): a randomised trial. Lancet. 2007;370(9594):1209-1218.
36. Hulin C, Facon T, Rodon P, et al. Efficacy of melphalan and prednisone plus thalidomide in patients older than 75 years with newly diagnosed multiple myeloma: IFM 01/01 trial. J Clin Oncol. 2009;27(22):3664-3670.
37. Attal M, Lauwers-Cances V, Marit G, et al. Lenalidomide maintenance after stem-cell transplantation for multiple myeloma. N Engl J Med. 2012;366(19):1782-1791.
38. Attal M, Harousseau JL, Leyvraz S, et al; Inter-Groupe Francophone du Myélome (IFM). Maintenance therapy with thalidomide improves survival in patients with multiple myeloma. Blood. 2006;108(10):3289-3294.
39. Spencer A, Prince HM, Roberts AW, et al. Consolidation therapy with low-dose thalidomide and prednisolone prolongs the survival of multiple myeloma patients undergoing a single autologous stem-cell transplantation procedure. J Clin Oncol. 2009;27(11):1788-1793.
40. Sonneveld P, Schmidt-Wolf IG, van der Holt B, et al. Bortezomib induction and maintenance treatment in patients with newly diagnosed multiple myeloma: results of the randomized phase III HOVON-65/GMMG-HD4 trial. J Clin Oncol. 2012;30(24):2946-2955.
41. Richardson PG, Sonneveld P, Schuster MW, et al; Assessment of Proteasome Inhibition for Extending Remissions (APEX) Investigators. Bortezomib or high-dose dexamethasone for relapsed multiple myeloma. N Engl J Med. 2005;352(24):2487-2498.
42. Orlowski RZ, Nagler A, Sonneveld P, et al. Randomized phase III study of pegylated liposomal doxorubicin plus bortezomib compared with bortezomib alone in relapsed or refractory multiple myeloma: combination therapy improves time to progression. J Clin Oncol. 2007;25(25):3892-3901.
43. Kumar SK, Lee JH, Lahuerta JJ, et al; International Myeloma Working Group. Risk of progression and survival in multiple myeloma relapsing after therapy with IMiDs and bortezomib: a multicenter international myeloma working group study. Leukemia. 2012;26(1):149-157.
44. Lacy MQ, Hayman SR, Gertz MA, et al. Pomalidomide (CC4047) plus low-dose dexamethasone as therapy for relapsed multiple myeloma. J Clin Oncol. 2009;27(30):5008-5014.
45. Siegel DS, Martin T, Wang M, et al. A phase 2 study of single-agent carfilzomib (PX-171-003-A1) in patients with relapsed and refractory multiple myeloma. Blood. 2012;120(14):2817-2825.
46. San-Miguel JF, Hungria VT, Yoon SS, et al. Panobinostat plus bortezomib and dexamethasone versus placebo plus bortezomib and dexamethasone in patients with relapsed or relapsed and refractory multiple myeloma: a multicentre, randomised, double-blind phase 3 trial. Lancet Oncol. 2014;15(11):1195-1206.

Author and Disclosure Information

Dr. Jewell is a hematology/oncology fellow, Dr. Xiang, Dr. Kunthur, and Dr. Mehta are staff hematologist/oncologists, all in the Division of Hematology/Oncology, Department of Internal Medicine, at the John L. McClellan Memorial Veterans Hospital in Little Rock, Arkansas. Dr. Xiang and Dr. Mehta are also faculty members at the University of Arkansas for Medical Sciences in Little Rock.

Publications
Topics
Page Number
49S-56S
Legacy Keywords
multiple myeloma, hematology, plasma cells, monoclonal immunoglobulin, monoclonal gammopathy of undetermined significance, smoldering multiple myeloma, anemia, bone pain, hypercalcemia, monoclonal protein, M protein, Sarah Jewell, Zhifu Xiang, Anuradha Kunthur, Paulette Mehta
Sections
Author and Disclosure Information

Dr. Jewell is a hematology/oncology fellow, Dr. Xiang, Dr. Kunthur, and Dr. Mehta are staff hematologist/oncologists, all in the Division of Hematology/Oncology, Department of Internal Medicine, at the John L. McClellan Memorial Veterans Hospital in Little Rock, Arkansas. Dr. Xiang and Dr. Mehta are also faculty members at the University of Arkansas for Medical Sciences in Little Rock.

Author and Disclosure Information

Dr. Jewell is a hematology/oncology fellow, Dr. Xiang, Dr. Kunthur, and Dr. Mehta are staff hematologist/oncologists, all in the Division of Hematology/Oncology, Department of Internal Medicine, at the John L. McClellan Memorial Veterans Hospital in Little Rock, Arkansas. Dr. Xiang and Dr. Mehta are also faculty members at the University of Arkansas for Medical Sciences in Little Rock.

Two- and 3-drug treatment regimens and autologous stem cell transplants provide opportunities for longer term disease remission, though most patients will still develop relapsed multiple myeloma.
Two- and 3-drug treatment regimens and autologous stem cell transplants provide opportunities for longer term disease remission, though most patients will still develop relapsed multiple myeloma.

Multiple myeloma (MM) is a disease that is primarily treated by hematologists; however, it is important for primary care providers (PCPs) to be aware of the presentation and diagnosis of this disease. Multiple myeloma often is seen in the veteran population, and VA providers should be familiar with its diagnosis and treatment so that an appropriate referral can be made. Often, the initial signs and symptoms of the disease are subtle and require an astute eye by the PCP to diagnose and initiate a workup.

Once a veteran has an established diagnosis of MM or one of its precursor syndromes, the PCP will invariably be alerted to an adverse event (AE) of treatment or complication of the disease and should be aware of such complications to assist in management or referral. Patients with MM may achieve long-term remission; therefore, it is likely that the PCP will see an evolution in their treatment and care. Last, PCPs and patients often have a close relationship, and patients expect the PCP to understand their diagnosis and treatment plan.

Presentation

Multiple myeloma is a disease in which a neoplastic proliferation of plasma cells produces a monoclonal immunoglobulin. It is almost invariably preceded by premalignant stages of monoclonal gammopathy of undetermined significance (MGUS) and smoldering MM (SMM), although not all cases of MGUS will eventually progress to MM.1 Common signs and symptoms include anemia, bone pain or lytic lesions on X-ray, kidney injury, fatigue, hypercalcemia, and weight loss.2 Anemia is usually a normocytic, normochromic anemia and can be due to involvement of the bone marrow, secondary to renal disease, or it may be dilutional, related to a high monoclonal protein (M protein) level. There are several identifiable causes for renal disease in patients with MM, including light chain cast nephropathy,
hypercalcemia, light chain amyloidosis, and light chain deposition disease. Without intervention, progressive renal damage may occur.3

Diagnosis

All patients with a suspected diagnosis of MM should undergo a basic workup, including complete blood count; peripheral blood smear; complete chemistry panel, including calcium and albumin; serum free light chain (FLC) analysis; serum protein electrophoresis (SPEP) and immunofixation; urinalysis; 24-hour urine collection for electrophoresis (UPEP) and immunofixation; serum B2-microglobulin; and lactate dehydrogenase.4 An FLC analysis is particularly useful for the diagnosis and monitoring of MM when only small amounts of M protein are secreted into the serum/urine, for nonsecretory myeloma, and for light-chain-only myeloma.5

A bone marrow biopsy and aspirate should be performed in the diagnosis of MM to evaluate the bone marrow involvement and genetic abnormalities of myeloma cells with fluorescence in situ hybridization (FISH) and cytogenetics, both of which are very important in risk stratification and for treatment planning. A skeletal survey is also typically performed to look for bone lesions.4 Magnetic resonance imaging (MRI) can also be useful to evaluate for possible soft tissue lesions when a bone survey is negative, or to evaluate for spinal cord compression.5 Additionally, an MRI should be performed in patients with SMM at the initial assessment, because focal lesions in the setting of SMM are associated with an increased risk of progression.6 Since plain radiographs are usually abnormal only after ≥ 30% of the bone is destroyed, MRI offers a more sensitive image.

Two MM precursor syndromes are worth noting: MGUS and SMM. In evaluating a patient for possible MM, it is important to differentiate between MGUS, asymptomatic
SMM, and MM that requires treatment.4 Monoclonal gammopathy of undetermined significance is diagnosed when a patient has a serum M protein < 3 g/dL, clonal bone marrow plasma cells < 10%, and no identifiable end organ damage.5 Smoldering MM is diagnosed when either the serum M protein is ≥ 3 g/dL or bone marrow clonal plasma cells are ≥ 10% in the absence of end organ damage.

Symptomatic MM is characterized by > 10% clonal bone marrow involvement with end organ damage that includes hypercalcemia, renal failure, anemia, or bone lesions. The diagnostic criteria are summarized in Table 1. The International Myeloma Working Group produced updated guidelines in 2014, which now classify patients with > 60% bone marrow involvement of plasma cells, a serum FLC ratio > 100, or more than 1 focal lesion on MRI as having symptomatic MM.5,6
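The MGUS/SMM/MM distinctions above are essentially threshold rules on three findings, so they can be sketched as a small decision function. This is an illustration only: the function name and simplified inputs are mine, the thresholds come from the text, and actual classification requires the full clinical criteria.

```python
def classify_plasma_cell_disorder(m_protein_g_dl: float,
                                  marrow_plasma_pct: float,
                                  end_organ_damage: bool) -> str:
    """Rough triage by serum M protein (g/dL), clonal bone marrow
    plasma cells (%), and myeloma-defining end organ damage
    (hypercalcemia, renal failure, anemia, or bone lesions)."""
    if end_organ_damage and marrow_plasma_pct > 10:
        return "symptomatic multiple myeloma"
    if m_protein_g_dl >= 3.0 or marrow_plasma_pct >= 10:
        return "smoldering multiple myeloma"
    return "MGUS"

print(classify_plasma_cell_disorder(1.2, 5, False))   # MGUS
print(classify_plasma_cell_disorder(3.5, 15, False))  # smoldering multiple myeloma
```

Note the ordering: end organ damage is checked first, because its presence moves a patient out of the precursor categories regardless of M protein level.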

Most patients with MM will have an M protein produced by the malignant plasma cells detected on SPEP or UPEP. The majority of these monoclonal immunoglobulins are IgG and IgA, whereas IgD and IgM are much less common.2 A minority of patients will not have a detectable M protein on SPEP or UPEP. Some patients will produce only light chains and are designated as having light-chain-only myeloma. For these patients, the FLC assay is useful for diagnosis and disease monitoring. Patients who have an absence of M protein on SPEP/UPEP and normal FLC assay ratios are considered to have nonsecretory myeloma.7

Staging and Risk Stratification

Two staging systems are used to evaluate a patient’s prognosis: the Durie-Salmon staging system, which is based on tumor burden (Table 2); and the International Staging System (ISS), which uses a combination of serum beta-2 microglobulin (B2M) and serum albumin levels to produce a powerful and reproducible 3-stage classification. The ISS is more commonly used by hematologists because of its simplicity and reliable reproducibility (Table 3).

In the Durie-Salmon staging system, patients with stage I disease have a lower tumor burden, defined as hemoglobin > 10 g/dL, normal calcium level, no evidence of
lytic bone lesions, and low amounts of protein produced (IgG < 5 g/dL; IgA < 3 g/dL; urine protein < 4 g/d). Patients are classified as stage III if they have any of the following: hemoglobin < 8.5 g/dL, hypercalcemia with a level > 12 mg/dL, lytic bone lesions, or high amounts of protein produced (IgG > 7 g/dL; IgA > 5 g/dL; or urine protein > 12 g/d). Patients with stage II disease do not fall into either of these categories. Stage III disease is further differentiated into stage IIIB when renal involvement is present and stage IIIA when it is not.8
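These Durie-Salmon rules reduce to threshold logic: any single high-burden criterion forces stage III, all low-burden criteria together define stage I, and everything else is stage II. The sketch below is illustrative only; the calcium upper limit of 10.5 mg/dL standing in for "normal calcium" is my assumption, and the full system has nuances not modeled here.

```python
def durie_salmon_stage(hgb_g_dl: float, calcium_mg_dl: float, lytic_lesions: bool,
                       igg_g_dl: float = 0.0, iga_g_dl: float = 0.0,
                       urine_protein_g_day: float = 0.0,
                       renal_involvement: bool = False) -> str:
    """Assign a Durie-Salmon stage from the cutoffs described in the text."""
    # Stage III: any single high-burden criterion suffices
    if (hgb_g_dl < 8.5 or calcium_mg_dl > 12 or lytic_lesions
            or igg_g_dl > 7 or iga_g_dl > 5 or urine_protein_g_day > 12):
        stage = "III"
    # Stage I: every low-burden criterion must hold
    elif (hgb_g_dl > 10 and calcium_mg_dl <= 10.5 and not lytic_lesions
            and igg_g_dl < 5 and iga_g_dl < 3 and urine_protein_g_day < 4):
        stage = "I"
    else:
        stage = "II"
    if stage == "III":  # the text subdivides stage III by renal involvement
        stage += "B" if renal_involvement else "A"
    return stage

print(durie_salmon_stage(11.5, 9.8, False, igg_g_dl=3.0))           # I
print(durie_salmon_stage(8.0, 9.8, False, renal_involvement=True))  # IIIB
```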

In the ISS system, patients with stage I disease have B2M levels < 3.5 mg/L and albumin levels > 3.5 g/dL and have a median overall survival (OS) of 62 months. Stage III patients have B2M levels > 5.5 mg/L and a median OS of 29 months. Stage II patients do not meet either of these criteria and have a median OS of 44 months.9 In a Mayo Clinic study, OS has improved over the past decade, with OS for ISS stage III patients increasing to 4.2 years. Overall survival for both ISS stage I and stage II disease seems to have increased as well, although the median has not yet been reached.10
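Because the ISS depends on only two laboratory values, its assignment can be sketched in a few lines. This is illustrative only; B2M is expressed in mg/L (the units used in the original ISS publication), and the survival figures in the comments are the medians quoted in the text.

```python
def iss_stage(b2m_mg_l: float, albumin_g_dl: float) -> int:
    """International Staging System stage from serum beta-2
    microglobulin and serum albumin, per the cutoffs in the text."""
    if b2m_mg_l < 3.5 and albumin_g_dl >= 3.5:
        return 1  # median OS ~62 months
    if b2m_mg_l > 5.5:
        return 3  # median OS ~29 months
    return 2      # median OS ~44 months

print(iss_stage(2.0, 4.1))  # 1
print(iss_stage(6.2, 3.0))  # 3
print(iss_stage(4.0, 3.8))  # 2
```

Note that stage II is defined by exclusion, which is why it is the fall-through case rather than an explicit test.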

All myeloma patients are risk stratified at initial diagnosis based on their cytogenetic abnormalities identified mainly by FISH studies and conventional cytogenetics,
which can serve as an alternative if FISH is unavailable. Genetic abnormalities of MM are the major predictor for the outcome and will affect treatment choice. Three risk groups have been identified: high-risk, intermediate-risk, and standard-risk MM (Table 4).11

Management of MGUS and SMM

Patients with MGUS progress to malignant conditions at a rate of 1% per year.12 Those individuals who are diagnosed with MGUS or SMM typically do not require
therapy. According to the International Myeloma Working Group guidelines, patients should be monitored based on risk stratification. Those with low-risk MGUS (IgG M protein < 1.5 g/dL and no abnormal FLC ratio) can be monitored every 6 months for 2 to 3 years. Those at intermediate to high risk need a baseline bone marrow biopsy in addition to a skeletal survey and should have serum and urine protein levels checked every 6 months for the first year and then annually thereafter.

Patients with SMM are at an increased risk of progression to symptomatic MM compared with patients with MGUS (10% per year for the first 5 years, 3% per year for the next 5 years).13 Therefore, experts recommend physician visits and laboratory testing for M proteins every 2 to 3 months for the first year and then an evaluation every 6 to 12 months if the patient remains clinically stable.14 Additionally, there are new data to suggest that early therapy with lenalidomide plus dexamethasone for SMM can prolong time to disease progression as well as increase OS in individuals with SMM at high risk for progression.15

Patients With MM

All patients with a diagnosis of MM require immediate treatment. Initial choice of therapy is driven by whether a patient is eligible for an autologous stem cell transplant (ASCT), because certain agents, such as alkylating agents, should typically be avoided in those who are transplant eligible. Initial therapy for patients
with MM is also based on genetic risk stratification of the disease. Patients with high-risk disease require a complete response (CR) to treatment for long-term OS
and thus benefit from an aggressive treatment strategy. Standard-risk patients have similar OS regardless of whether or not CR is achieved and thus can either
be treated with an aggressive approach or a sequential therapy approach.16

Transplant-Eligible Patients

All patients should be evaluated for transplant eligibility, because it results in superior progression-free survival (PFS) and OS in patients with MM compared
with standard chemotherapy. Transplant eligibility requirements differ, depending on the transplant center. There is no strict age limit in the U.S. for determining transplant eligibility. Physiological age and factors such as functional status and liver function are often considered before making a transplant decision.

For VA patients, transplants are generally considered in those aged < 65 years, and patients are referred to 1 of 3 transplant centers: VA Puget Sound Healthcare System in Seattle, Washington; Tennessee Valley Healthcare System in Nashville; or South Texas Veterans Healthcare System in San Antonio.17 All patients who are transplant eligible should receive induction therapy for 2 to 4 months before stem cell collection. This is to reduce tumor burden, for symptomatic management, as well as to lessen end organ damage. After stem cell collection, patients undergo either upfront ASCT or resume induction therapy and undergo a transplant after first relapse.

Bortezomib Regimens

Bortezomib is a proteasome inhibitor (PI) and has been used as upfront chemotherapy for transplant-eligible patients, traditionally to avoid alkylating agents that
could affect stem cell harvest. It is highly efficacious in the treatment of patients with MM. Two- or 3-drug regimens have been used. Common regimens include bortezomib, cyclophosphamide, dexamethasone; bortezomib, thalidomide, dexamethasone (VTD); bortezomib, lenalidomide, dexamethasone (VRD); bortezomib, doxorubicin, dexamethasone; and bortezomib plus dexamethasone.18 Bortezomib, cyclophosphamide, and dexamethasone is less expensive than VTD or VRD, well tolerated, and efficacious; it is often used upfront for newly diagnosed MM.19 Three-drug regimens have been shown to be more efficacious than 2-drug regimens in clinical trials (Table 5).20

Of note, bortezomib is not cleared through the kidney, which makes it an ideal choice for patients with renal function impairment. A significant potential AE with bortezomib is the onset of peripheral neuropathy. Bortezomib can be administered once or twice weekly. Twice-weekly administration of bortezomib is preferred when rapid results are needed, such as light chain cast nephropathy causing acute renal failure.21

Lenalidomide Plus Dexamethasone

Lenalidomide is a second-generation immunomodulating agent that is increasingly used as initial therapy for MM. There are currently no data showing superiority of bortezomib-based regimens over lenalidomide plus dexamethasone with respect to OS. However, bortezomib-based regimens seem to overcome the poor prognosis associated with the t(4;14) translocation and thus should be considered when choosing initial chemotherapy treatment.22

Lenalidomide can affect stem cell collection; therefore, it is important to collect stem cells in transplant-eligible patients who are aged < 65 years or who have received more than 4 cycles of treatment with this regimen.23,24 A major AE of lenalidomide-containing regimens is an increased risk of thrombosis. All patients on lenalidomide require treatment with aspirin at a minimum; however, those at higher risk for thrombosis may require low-molecular-weight heparin or warfarin.25

Carfilzomib Plus Lenalidomide Plus Dexamethasone

Carfilzomib is a recently approved PI that has shown promise in combination with lenalidomide and dexamethasone as initial therapy for MM. Several phase 2 trials
have reported favorable results with carfilzomib in combination with lenalidomide and dexamethasone in MM.26,27 More studies are needed to establish efficacy and
safety before this regimen is routinely used as upfront therapy.11

Thalidomide Plus Dexamethasone

Although there are no randomized controlled trials comparing lenalidomide plus dexamethasone with thalidomide plus dexamethasone, these regimens have been compared in retrospective studies. In these studies, lenalidomide plus dexamethasone showed both a higher response rate and longer PFS and OS compared with thalidomide plus dexamethasone. Additionally, lenalidomide’s AE profile was more favorable than that of thalidomide. In light of this, lenalidomide
plus dexamethasone is preferred to thalidomide plus dexamethasone in the management of MM, although the latter can be considered when lenalidomide is not available or when a patient does not tolerate lenalidomide.28

VDT-PACE

A multidrug combination that should be considered in select populations is the VDT-PACE regimen, which includes bortezomib, dexamethasone, thalidomide, cisplatin, doxorubicin, cyclophosphamide, and etoposide. This regimen can be considered in those patients who have aggressive disease, such as those with plasma cell leukemia or with multiple extramedullary plasmacytomas.11

Autologous Stem Cell Transplant

Previous data suggest that ASCT improves OS in MM by 12 months.29 A more recent open-label, randomized trial comparing melphalan and ASCT with melphalan-prednisone-lenalidomide showed significantly prolonged PFS and OS among patients with MM.30 Although the role of ASCT may change as new drugs are
integrated into initial therapy of MM, ASCT is still the preferred approach in transplant-eligible patients. As such, all patients who are eligible should be considered
to receive a transplant.

There remains debate about whether ASCT should be performed early, after 2 to 4 cycles of induction therapy, or late after first relapse. Several randomized trials failed to show a difference in survival for early vs delayed ASCT approach.31 Generally, transplant can be delayed for patients with standard-risk MM who have responded well to therapy.11 Those patients who do not achieve a CR with their first ASCT may benefit from a second (tandem) ASCT.32 An allogeneic transplant is occasionally used in select populations and is the only potentially curative therapy for these patients. However, its high mortality rate precludes its everyday use.

Transplant-Ineligible Patients

For patients with newly diagnosed MM who are ineligible for ASCT due to age or other comorbidities, chemotherapy is the only option. Many patients will benefit
not only in survival, but also in quality of life. Immunomodulatory agents, such as lenalidomide and thalidomide, and PIs, such as bortezomib, are highly effective
and well tolerated. There has been a general shift to using these agents upfront in transplant-ineligible patients.

All previously mentioned regimens can also be used in transplant-ineligible patients. Although no longer the preferred treatment, melphalan can be considered
in resource-poor settings.11 Patients who are not transplant eligible are treated for a fixed period of 9 to 18 months, although lenalidomide plus dexamethasone is often continued until relapse.11,33

Melphalan Plus Prednisone Plus Bortezomib

The addition of bortezomib to melphalan and prednisone results in improved OS compared with melphalan and prednisone alone.34 Peripheral neuropathy is a significant AE and can be minimized by giving bortezomib once weekly.

Melphalan Plus Prednisone Plus Thalidomide

Melphalan plus prednisone plus thalidomide has shown an OS benefit compared with that of melphalan and prednisone alone. The regimen has a high toxicity rate (> 50%) and a deep vein thrombosis rate of 20%, so patients undergoing treatment with this regimen require thromboprophylaxis.35,36

Melphalan Plus Prednisone

Although melphalan plus prednisone has fallen out of favor due to the existence of more efficacious regimens, it may be useful in an elderly patient population who lack access to newer agents, such as lenalidomide, thalidomide, and bortezomib.

Assessing Treatment Response

The International Myeloma Working Group has established criteria for assessing disease response. Patients’ response to therapy should be assessed before each cycle with SPEP and UPEP, and with an FLC assay in those without measurable M protein levels. A bone marrow biopsy can be helpful in patients with unmeasurable M protein levels and low FLC levels, as well as to establish that a CR is present.

A CR is defined as negative SPEP/UPEP, disappearance of soft tissue plasmacytomas, and < 5% plasma cells in bone marrow. A very good partial response is defined as serum/urine M protein being present on immunofixation but not on electrophoresis, or a ≥ 90% reduction in serum M protein with urine M protein < 100 mg/d. For those without measurable M protein, a ≥ 90% reduction in the FLC ratio is required. A partial response is defined as a ≥ 50% reduction of the serum M protein and either < 200 mg urinary M protein per 24 hours or a ≥ 90% reduction in urinary M protein. For those without measurable M protein, a ≥ 50% decrease in the FLC ratio is required.5
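Because these response tiers are evaluated from deepest response downward, they can be sketched as a cascading classifier. This is a simplified illustration with hypothetical parameter names; the full response criteria, including the FLC-based rules for non-measurable disease, are more detailed than what is modeled here.

```python
def response_category(spep_upep_negative: bool,
                      immunofixation_negative: bool,
                      marrow_plasma_pct: float,
                      plasmacytomas_resolved: bool,
                      serum_m_reduction_pct: float,
                      urine_m_mg_day: float) -> str:
    """Map monitoring results onto CR / VGPR / PR as described above."""
    # CR: negative electrophoresis and immunofixation, resolved
    # plasmacytomas, and < 5% marrow plasma cells
    if (spep_upep_negative and immunofixation_negative
            and plasmacytomas_resolved and marrow_plasma_pct < 5):
        return "CR"
    # VGPR: M protein on immunofixation only, or >= 90% serum reduction
    # with urine M protein < 100 mg/d
    if (spep_upep_negative and not immunofixation_negative) or (
            serum_m_reduction_pct >= 90 and urine_m_mg_day < 100):
        return "VGPR"
    # PR: >= 50% serum reduction with low urinary M protein
    if serum_m_reduction_pct >= 50 and urine_m_mg_day < 200:
        return "PR"
    return "less than PR"

print(response_category(True, True, 2, True, 100, 0))     # CR
print(response_category(False, False, 8, False, 60, 150)) # PR
```

Checking the tiers in order matters: a patient who meets the CR definition also meets the VGPR and PR thresholds, so the function must return the deepest category first.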

Maintenance Therapy

There is currently considerable debate about whether patients should be treated with maintenance therapy following induction chemotherapy or transplant. In patients treated with transplant, several studies have investigated the use of maintenance therapy. Lenalidomide has been evaluated as maintenance therapy following stem cell transplant and has shown superior PFS as post-ASCT maintenance; however, this comes at the cost of an increased rate of secondary cancers.37

Thalidomide has also been studied as maintenance therapy and seems to offer a modest improvement in PFS and OS, but at the cost of increased toxicities, such as neuropathy and thromboembolism.38,39 Other studies compared bortezomib maintenance with thalidomide maintenance in posttransplant patients and showed improved OS with bortezomib. As a result, certain patients with intermediate- or high-risk disease may be eligible for bortezomib maintenance following transplant.11 For transplant-ineligible patients, there is no clear role for maintenance therapy.

Refractory/Relapsed Disease Treatments

Multiple myeloma (MM) is a disease that is primarily treated by hematologists; however, it is important for primary care providers (PCPs) to be aware of the presentation and diagnosis of this disease. Multiple myeloma often is seen in the veteran population, and VA providers should be familiar with its diagnosis and treatment so that an appropriate referral can be made. Often, the initial signs and symptoms of the disease are subtle and require an astute eye by the PCP to diagnose and initiate a workup.

Once a veteran has an established diagnosis of MM or one of its precursor syndromes, the PCP will invariably be alerted to an adverse event (AE) of treatment or complication of the disease and should be aware of such complications to assist in management or referral. Patients with MM may achieve long-term remission; therefore, it is likely that the PCP will see an evolution in their treatment and care. Last, PCPs and patients often have a close relationship, and patients expect the PCP to understand their diagnosis and treatment plan.

Presentation

Multiple myeloma is a disease in which a neoplastic proliferation of plasma cells produces a monoclonal immunoglobulin. It is almost invariably preceded by premalignant stages of monoclonal gammopathy of undetermined significance (MGUS) and smoldering MM (SMM), although not all cases of MGUS will eventually progress to MM.1 Common signs and symptoms include anemia, bone pain or lytic lesions on X-ray, kidney injury, fatigue, hypercalcemia, and weight loss.2 Anemia is usually a normocytic, normochromic anemia and can be due to involvement of the bone marrow, secondary to renal disease, or it may be dilutional, related to a high monoclonal protein (M protein) level. There are several identifiable causes for renal disease in patients with MM, including light chain cast nephropathy,
hypercalcemia, light chain amyloidosis, and light chain deposition disease. Without intervention, progressive renal damage may occur.3

Diagnosis

All patients with a suspected diagnosis of MM should undergo a basic workup, including complete blood count; peripheral blood smear; complete chemistry panel, including calcium and albumin; serum free light chain analysis (FLC); serum protein electrophoresis (SPEP) and immunofixation; urinalysis; 24-hour urine collection for electrophoresis (UPEP) and immunofixation; serum B2-microglobulin; and lactate dehydrogenase.4 A FLC analysis is particularly useful for the diagnosis and monitoring of MM, when only small amounts of M protein are secreted into the serum/urine or for nonsecretory myeloma, as well as for light-chainonly
myeloma.5

A bone marrow biopsy and aspirate should be performed in the diagnosis of MM to evaluate the bone marrow involvement and genetic abnormality of myeloma cells with fluorescence in situ hybridization (FISH) and cytogenetics, both of which are very important in risk stratification and for treatment planning. A skeletal survey is also typically performed to look for bone lesions.4 Magnetic resonance imaging (MRI) can also be useful to evaluate for possible soft tissue lesions when a bone survey is negative, or to evaluate for spinal cord compression.5 Additionally, an MRI should be performed in patients with SMM at the initial assessment, because focal lesions in the setting of SMM are associated with an increased risk to progression.6 Since plain radiographs are usually abnormal only after ≥ 30% of the
bone is destroyed, an MRI offers a more sensitive image.

Two MM precursor syndromes are worth noting: MGUS and SMM. In evaluating a patient for possible MM, it is important to differentiate between MGUS, asymptomatic
SMM, and MM that requires treatment.4 Monoclonal gammopathy of undetermined significance is diagnosed when a patient has a serum M protein that is < 3 g/dL, clonal bone marrow plasma cells < 10%, and no identifiable end organ damage.5 Smoldering MM is diagnosed when either the serum M protein is > 3 g/dL or bone marrow clonal plasma cells are > 10% in the absence of end organ damage.

Symptomatic MM is characterized by > 10% clonal bone marrow involvement with end organ damage that includes hypercalcemia, renal failure, anemia, or bone lesions. The diagnostic criteria are summarized in Table 1. The International Myeloma Working Group produced updated guidelines in 2014, which now include patients with > 60% bone marrow involvement of plasma cells, serum FLC ratio of > 100, and > 1 focal lesions on an MRI study as symptomatic MM.5,6

Most patients with MM will have a M protein produced by the malignant plasma cells detected on an SPEP or UPEP. The majority of immunoglobulins were IgG and IgA, whereas IgD and IgM were much less common.2 A minority of patients will not have a detectable M protein on SPEP or UPEP. Some patients will produce only light chains and are designated as light-chain-only myeloma.For these patients, the FLC assay is useful for diagnosis and disease monitoring. Patients who have an absence of M protein on SPEP/UPEP and normal FLC assay ratios are considered to have nonsecretory myeloma.7

Staging and Risk Stratification

Two staging systems are used to evaluate a patient’s prognosis: the Durie-Salmon staging system, which is based on tumor burden (Table 2); and the International Staging System (ISS), which uses a combination of serum beta 2 microglobulin (B2M) and serum albumin levels to produce a powerful and reproducible 3-stage classification and is more commonly used by hematologists due to its simplicity to use and reliable reproducibility (Table 3).

In the Durie-Salmon staging system, patients with stage I disease have a lower tumor burden, defined as hemoglobin > 10 g/dL, normal calcium level, no evidence of
lytic bone lesions, and low amounts of protein produced (IgG < 5 g/dL; IgA < 3 g/dL; urine protein < 4 g/d). Patients are classified as stage III if they have any of the following: hemoglobin < 8.5 g/dL, hypercalcemia with level > 12 mg/dL, bony lytic lesions, or high amounts of protein produced (IgG > 7 g/dL; IgA > 5 g/dL; or urine protein > 12 g/d). Patients with stage II disease do not fall into either of these categories. Stage III disease can be further differentiated into stage IIIA or stage IIIB disease if renal involvement is present.8

In the ISS system, patients with stage I disease have B2M levels that are < 3.5 mg/dL and albumin levels > 3.5 g/dL and have a median overall survival (OS) of 62 months. In this classification, stage III patients have B2M levels that are > 5.5 mg/dL and median OS was 29 months. Stage II patients do not meet either of these
criteria and OS was 44 months.9 In a study by Mayo Clinic, OS has improved over the past decade, with OS for ISS stage III patients increasing to 4.2 years. Overall
survival for both ISS stage I and stage III disease seems to have increased as well, although the end point has not been reached.10

All myeloma patients are risk stratified at initial diagnosis based on their cytogenetic abnormalities identified mainly by FISH studies and conventional cytogenetics,
which can serve as an alternative if FISH is unavailable. Genetic abnormalities of MM are the major predictor for the outcome and will affect treatment choice. Three risk groups have been identified: high-risk, intermediate-risk, and standard-risk MM (Table 4).11

Management of MGUS and SMM

Patients with MGUS progress to malignant conditions at a rate of 1% per year.12 Those individuals who are diagnosed with MGUS or SMM typically do not require
therapy. According to the International Myeloma Working Group guidelines, patients should be monitored based on risk stratification. Those with low-risk MGUS (IgG M protein < 1.5 g/dL and no abnormal FLC ratio) can be monitored every 6 months for 2 to 3 years. Those who are intermediate to high risk need a baseline bone marrow biopsy in addition to skeletal survey and should check urine and serum levels for protein every 6 months for the first year and then annually thereafter.

Patients with SMM are at an increased risk of progression to symptomatic MM compared with patients with MGUS (10% per year for the first 5 years, 3% per year for the next 5 years).13 Therefore, experts recommend physician visits and laboratory testing for M proteins every 2 to 3 months for the first year and then an evaluation every 6 to 12 months if the patient remains clinically stable.14 Additionally, there are new data to suggest that early therapy with lenalidomide plus dexamethasone for SMM can prolong time to disease progression as well as increase OS in individuals with SMM at high risk for progression.15

Patients With MM

All patients with a diagnosis of MM require immediate treatment. Initial choice of therapy is driven by whether a patient is eligible for an autologous stem cell transplant (ASCT), because certain agents, such as alkylating agents, should typically be avoided in those who are transplant eligible. Initial therapy for patients
with MM is also based on genetic risk stratification of the disease. Patients with high-risk disease require a complete response (CR) treatment for long-term OS
and thus benefit from an aggressive treatment strategy. Standard-risk patients have similar OS regardless of whether or not CR is achieved and thus can either
be treated with an aggressive approach or a sequential therapy approach.16

Transplant-Eligible Patients

All patients should be evaluated for transplant eligibility, because it results in superior progression-free survival (PFS) and OS in patients with MM compared
with standard chemotherapy. Transplant eligibility requirements differ, depending on the transplant center. There is no strict age limit in the U.S. for determining transplant eligibility. Physiological age and factors such as functional status and liver function are often considered before making a transplant decision.

For VA patients, transplants are generally considered in those aged < 65 years, and patients are referred to 1 of 3 transplant centers: VA Puget Sound Healthcare System in Seattle, Washington; Tennessee Valley Healthcare System in Nashville; or South Texas Veterans Healthcare System in San Antonio.17 All patients who are transplant eligible should receive induction therapy for 2 to 4 months before stem cell collection. This is to reduce tumor burden, for symptomatic management, as well as to lessen end organ damage. After stem cell collection, patients undergo either upfront ASCT or resume induction therapy and undergo a transplant after first relapse.

Bortezomib Regimens

Bortezomib is a proteasome inhibitor (PI) and has been used as upfront chemotherapy for transplant-eligible patients, traditionally to avoid alkylating agents that
could affect stem cell harvest. It is highly efficacious in the treatment of patients with MM. Two- or 3-drug regimens have been used. Common regimens include bortezomib, cyclophosphamide, dexamethasone; bortezomib, thalidomide, dexamethasone (VTD); bortezomib, lenalidomide, dexamethasone (VRD); bortezomib,
doxorubicin, dexamethasone; as well as bortezomib, dexamethasone.18 Dexamethasone is less expensive than VTD or VRD, well tolerated, and efficacious. It is
often used upfront for newly diagnosed MM.19 Threedrug regimens have shown to be more efficacious than 2-drug regimens in clinical trials (Table 5).20

Of note, bortezomib is not cleared through the kidney, which makes it an ideal choice for patients with renal function impairment. A significant potential AE with bortezomib is the onset of peripheral neuropathy. Bortezomib can be administered once or twice weekly. Twice-weekly administration of bortezomib is preferred when rapid results are needed, such as light chain cast nephropathy causing acute renal failure.21

Lenalidomide Plus Dexamethasone

Lenalidomide is a second-generation immunomodulating agent that is being increasingly used as initial therapy for MM. There is currently no data showing superiority of bortezomib-based regimens to lenalidomide plus dexamethasone in reference to OS. Bortezomib-based regimens seem to overcome the poor prognosis associated with t(4;14) translocation and thus should be considered in choosing initial chemotherapy treatment.22

Lenalidomide can affect stem cell collection; therefore, it is important to collect stem cells in transplanteligible patients who are aged < 65 years or for those who have received more than 4 cycles of treatment with this regimen.23,24 A major AE to lenalidomidecontaining regimens is the increased risk of thrombosis. All patients on lenalidomide require treatment with aspirin at a minimum; however, those at higher risk for thrombosis may require low-molecular weight heparin or warfarin.25

Carfilzomib Plus Lenalidomide Plus Dexamethasone

Carfilzomib is a recently approved PI that has shown promise in combination with lenalidomide and dexamethasone as initial therapy for MM. Several phase 2 trials
have reported favorable results with carfilzomib in combination with lenalidomide and dexamethasone in MM.26,27 More studies are needed to establish efficacy and
safety before this regimen is routinely used as upfront therapy.11

Thalidomide Plus Dexamethasone

Although there are no randomized controlled trials comparing lenalidomide plus dexamethasone with thalidomide plus dexamethasone, these regimens have been compared in retrospective studies. In these studies, lenalidomide plus dexamethasone showed a higher response rate as well as longer PFS and OS compared with thalidomide plus dexamethasone. Additionally, lenalidomide’s AE profile was more favorable than that of thalidomide. In light of this, lenalidomide plus dexamethasone is preferred to thalidomide plus dexamethasone in the management of MM, although the latter can be considered when lenalidomide is not available or is not tolerated.28

VDT-PACE

A multidrug combination that should be considered in select populations is the VDT-PACE regimen, which includes bortezomib, dexamethasone, thalidomide, cisplatin, doxorubicin, cyclophosphamide, and etoposide. This regimen can be considered in those patients who have aggressive disease, such as those with plasma cell leukemia or with multiple extramedullary plasmacytomas.11

Autologous Stem Cell Transplant

Previous data suggest that ASCT improves OS in MM by 12 months.29 A more recent open-label, randomized trial comparing melphalan plus ASCT with melphalan-prednisone-lenalidomide showed significantly prolonged PFS and OS among patients with MM.30 Although the role of ASCT may change as new drugs are integrated into the initial therapy of MM, ASCT remains the preferred approach in transplant-eligible patients. As such, all eligible patients should be considered for transplant.

There remains debate about whether ASCT should be performed early, after 2 to 4 cycles of induction therapy, or late, after first relapse. Several randomized trials failed to show a survival difference between early and delayed ASCT.31 Generally, transplant can be delayed for patients with standard-risk MM who have responded well to therapy.11 Patients who do not achieve a CR with their first ASCT may benefit from a second (tandem) ASCT.32 An allogeneic transplant is occasionally used in select populations and is the only potentially curative therapy for these patients; however, its high mortality rate precludes its routine use.

Transplant-Ineligible Patients

For patients with newly diagnosed MM who are ineligible for ASCT due to age or other comorbidities, chemotherapy is the only option. Many patients will benefit
not only in survival, but also in quality of life. Immunomodulatory agents, such as lenalidomide and thalidomide, and PIs, such as bortezomib, are highly effective
and well tolerated. There has been a general shift to using these agents upfront in transplant-ineligible patients.

All previously mentioned regimens can also be used in transplant-ineligible patients. Although no longer the preferred treatment, melphalan can be considered
in resource-poor settings.11 Patients who are not transplant eligible are treated for a fixed period of 9 to 18 months, although lenalidomide plus dexamethasone is often continued until relapse.11,33

Melphalan Plus Prednisone Plus Bortezomib

The addition of bortezomib to melphalan and prednisone results in improved OS compared with melphalan and prednisone alone.34 Peripheral neuropathy is a significant AE and can be minimized by giving bortezomib once weekly.

Melphalan Plus Prednisone Plus Thalidomide

Melphalan plus prednisone plus thalidomide has shown an OS benefit compared with melphalan and prednisone alone. The regimen has a high toxicity rate (> 50%) and a deep vein thrombosis rate of 20%, so patients receiving it require thromboprophylaxis.35,36

Melphalan Plus Prednisone

Although melphalan plus prednisone has fallen out of favor given the existence of more efficacious regimens, it may be useful in elderly patients who lack access to newer agents, such as lenalidomide, thalidomide, and bortezomib.

Assessing Treatment Response

The International Myeloma Working Group has established criteria for assessing disease response. Response to therapy should be assessed before each cycle with SPEP and UPEP, and with a FLC assay in those without measurable M protein levels. A bone marrow biopsy can be helpful in patients with unmeasurable M protein and low FLC levels, as well as to confirm that a CR is present.

A CR is defined as negative SPEP/UPEP, disappearance of soft tissue plasmacytomas, and < 5% plasma cells in bone marrow. A very good partial response is defined as serum/urine M protein detectable on immunofixation but not on electrophoresis, or a > 90% reduction in serum M protein with urine M protein < 100 mg/d. For those without measurable M protein, a > 90% reduction in the FLC ratio is required. A partial response is defined as a > 50% reduction of the serum M protein and a reduction in 24-hour urinary M protein to < 200 mg or by > 90%. For those without measurable M protein, a > 50% decrease in the FLC ratio is required.5
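The response categories above amount to a threshold-based decision rule. The sketch below encodes them as summarized in this section; the function and field names are illustrative assumptions, and the full IMWG criteria include additional requirements not modeled here.

```python
# A simplified sketch of the response categories summarized above.
# This is illustrative only; the complete IMWG criteria are more detailed.

def classify_response(spep_negative, upep_negative, immunofixation_negative,
                      plasmacytomas_resolved, marrow_plasma_cells_pct,
                      serum_m_reduction_pct, urine_m_mg_per_day,
                      flc_ratio_reduction_pct, m_protein_measurable):
    """Return the best response category supported by the inputs."""
    # Complete response: negative SPEP/UPEP, no soft tissue plasmacytomas,
    # and < 5% plasma cells in bone marrow.
    if (spep_negative and upep_negative and plasmacytomas_resolved
            and marrow_plasma_cells_pct < 5):
        return "CR"
    if m_protein_measurable:
        # Very good partial response: M protein on immunofixation only,
        # or > 90% serum M protein reduction with urine M protein < 100 mg/d.
        if spep_negative and upep_negative and not immunofixation_negative:
            return "VGPR"
        if serum_m_reduction_pct > 90 and urine_m_mg_per_day < 100:
            return "VGPR"
        # Partial response: > 50% serum M protein reduction and urinary
        # M protein < 200 mg/24 h.
        if serum_m_reduction_pct > 50 and urine_m_mg_per_day < 200:
            return "PR"
    else:
        # Without measurable M protein, the FLC ratio substitutes.
        if flc_ratio_reduction_pct > 90:
            return "VGPR"
        if flc_ratio_reduction_pct > 50:
            return "PR"
    return "less than PR"
```

In practice these categories are assigned by the treating hematologist using the full published criteria; the sketch only shows how the thresholds in this section relate to one another.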

Maintenance Therapy

There is currently considerable debate about whether patients should be treated with maintenance therapy following induction chemotherapy or transplant. In patients treated with transplant, several studies have investigated maintenance therapy. Lenalidomide has been evaluated as post-ASCT maintenance and has shown superior PFS; however, this comes at the cost of an increased rate of second cancers.37

Thalidomide has also been studied as maintenance therapy and seems to offer a modest improvement in PFS and OS, but at the cost of increased toxicities, such as neuropathy and thromboembolism.38,39 Other studies compared bortezomib maintenance with thalidomide maintenance in posttransplant patients and showed improved OS with bortezomib. As a result, certain patients with intermediate- or high-risk disease may be eligible for bortezomib maintenance following transplant.11 For transplant-ineligible patients, there is no clear role for maintenance therapy.

Refractory/Relapsed Disease Treatments

References

1. Landgren O, Kyle R, Pfeiffer RM, et al. Monoclonal gammopathy of undetermined significance (MGUS) consistently precedes multiple myeloma: a prospective study. Blood. 2009;113(22):5412-5417.
2. Kyle RA, Gertz MA, Witzig TE, et al. Review of 1027 patients with newly diagnosed multiple myeloma. Mayo Clin Proc. 2003;78(1):21-33.
3. Hutchison CA, Batuman V, Behrens J, et al; International Kidney and Monoclonal Gammopathy Research Group. The pathogenesis and diagnosis of acute kidney injury in multiple myeloma. Nat Rev Nephrol. 2011;8(1):43-51.
4. Dimopoulos M, Kyle R, Fermand JP, et al; International Myeloma Workshop Consensus Panel 3. Consensus recommendations for standard investigative workup: report of the International Myeloma Workshop Consensus Panel 3. Blood. 2011;117(18):4701-4705.
5. Palumbo A, Rajkumar S, San Miguel JF, et al. International Myeloma Working Group consensus statement for the management, treatment, and supportive care of patients with myeloma not eligible for standard autologous stem-cell transplantation. J Clin Oncol. 2014;32(6):587-600.
6. Rajkumar SV, Dimopoulos MA, Palumbo A, et al. International Myeloma Working Group updated criteria for the diagnosis of multiple myeloma. Lancet Oncol. 2014;15(12):e538-e548.
7. Dimopoulos MA, Kastritis E, Terpos E. Non-secretory myeloma: one, two, or more entities? Oncology (Williston Park). 2013;27(9):930-932.
8. Durie BG, Salmon SE. A clinical staging system for multiple myeloma. Correlation of measured myeloma cell mass with presenting clinical features, response to treatment, and survival. Cancer. 1975;36(3):842-854.
9. Greipp PR, San Miguel J, Durie BG, et al. International staging system for multiple myeloma. J Clin Oncol. 2005;23(15):3412-3420.
10. Kumar SK, Dispenzieri A, Lacy MQ, et al. Continued improvement in survival in multiple myeloma: changes in early mortality and outcomes in older patients. Leukemia. 2014;28(5):1122-1128.
11. Rajkumar SV. Multiple myeloma: 2014 update on diagnosis, risk-stratification, and management. Am J Hematol. 2014;89(10):999-1009.
12. Kyle RA, Therneau TM, Rajkumar SV, et al. A long-term study of prognosis in monoclonal gammopathy of undetermined significance. N Engl J Med. 2002;346(8):564-569.
13. Kyle RA, Remstein ED, Therneau TM, et al. Clinical course and prognosis of smoldering (asymptomatic) multiple myeloma. N Engl J Med. 2007;356(25):2582-2590.
14. Landgren O. Monoclonal gammopathy of undetermined significance and smoldering multiple myeloma: biological insights and early treatment strategies. Hematology Am Soc Hematol Educ Program. 2013;2013(1):478-487.
15. Mateos MV, Hernández MT, Giraldo P, et al. Lenalidomide plus dexamethasone for high-risk smoldering multiple myeloma. N Engl J Med. 2013;369(5):438-447.
16. Haessler J, Shaughnessy JD Jr, Zhan F, et al. Benefit of complete response in multiple myeloma limited to high-risk subgroup identified by gene expression profiling. Clin Cancer Res. 2007;13(23):7073-7079.
17. Xiang Z, Mehta P. Management of multiple myeloma and its precursor syndromes. Fed Pract. 2014;31(suppl 3):6S-13S.
18. National Comprehensive Cancer Network. NCCN clinical practice guidelines in oncology: multiple myeloma. National Comprehensive Cancer Network Website. http://www.nccn.org/professionals/physician_gls/PDF/myeloma.pdf. Updated March 10, 2015. Accessed July 8, 2015.
19. Kumar S, Flinn I, Richardson P, et al. Randomized, multicenter, phase 2 study (EVOLUTION) of combinations of bortezomib, dexamethasone, cyclophosphamide, and lenalidomide in previously untreated multiple myeloma. Blood. 2012;119(19):4375-4382.
20. Moreau P, Avet-Loiseau H, Facon T, et al. Bortezomib plus dexamethasone versus reduced-dose bortezomib, thalidomide plus dexamethasone as induction treatment before autologous stem cell transplantation in newly diagnosed multiple myeloma. Blood. 2011;118(22):5752-5758.
21. Moreau P, Pylypenko H, Grosicki S, et al. Subcutaneous versus intravenous administration of bortezomib in patients with relapsed multiple myeloma: a randomized, phase 3, noninferiority study. Lancet Oncol. 2011;12(5):431-440.
22. Pineda-Roman M, Zangari M, Haessler J, et al. Sustained complete remissions in multiple myeloma linked to bortezomib in total therapy 3: comparison with total therapy 2. Br J Haematol. 2008;140(6):624-634.
23. Kumar S, Dispenzieri A, Lacy MQ, et al. Impact of lenalidomide therapy on stem cell mobilization and engraftment post-peripheral blood stem cell transplantation in patients with newly diagnosed myeloma. Leukemia. 2007;21(9):2035-2042.
24. Kumar S, Giralt S, Stadtmauer EA, et al; International Myeloma Working Group. Mobilization in myeloma revisited: IMWG consensus perspectives on stem cell collection following initial therapy with thalidomide-, lenalidomide-, or bortezomib-containing regimens. Blood. 2009;114(9):1729-1735.
25. Larocca A, Cavallo F, Bringhen S, et al. Aspirin or enoxaparin thromboprophylaxis for patients with newly diagnosed multiple myeloma treated with lenalidomide. Blood. 2012;119(4):933-939.
26. Jakubowiak AJ, Dytfeld D, Griffith KA, et al. A phase 1/2 study of carfilzomib in combination with lenalidomide and low-dose dexamethasone as a frontline treatment for multiple myeloma. Blood. 2012;120(9):1801-1809.
27. Korde N, Zingone A, Kwok M, et al. Phase II clinical and correlative study of carfilzomib, lenalidomide, and dexamethasone followed by lenalidomide extended dosing (CRD-R) induces high rates of MRD negativity in newly diagnosed multiple myeloma patients [Abstract]. Blood. 2013;122(21):538.
28. Gay F, Hayman SR, Lacy MQ, et al. Lenalidomide plus dexamethasone versus thalidomide plus dexamethasone in newly diagnosed multiple myeloma: a comparative analysis of 411 patients. Blood. 2010;115(7):1343-1350.
29. Attal M, Harousseau JL, Stoppa AM, et al. A prospective, randomized trial of autologous bone marrow transplantation and chemotherapy in multiple myeloma. Intergroupe Français du Myélome. N Engl J Med. 1996;335(2):91-97.
30. Palumbo A, Cavallo F, Gay F, et al. Autologous transplantation and maintenance therapy in multiple myeloma. N Engl J Med. 2014;371(10):895-905.
31. Fermand JP, Ravaud P, Chevret S, et al. High-dose therapy and autologous stem cell transplantation in multiple myeloma: up-front or rescue treatment? Results of a multicenter sequential randomized clinical trial. Blood. 1998;92(9):3131-3136.
32. Elice F, Raimondi R, Tosetto A, et al. Prolonged overall survival with second on-demand autologous stem cell transplant in multiple myeloma. Am J Hematol. 2006;81(6):426-431.
33. Facon T, Dimopoulos MA, Dispenzieri A, et al. Initial phase 3 results of the FIRST (frontline investigation of lenalidomide + dexamethasone versus standard thalidomide) trial (MM-020/IFM 07 01) in newly diagnosed multiple myeloma (NDMM) patients (pts) ineligible for stem cell transplantation (SCT). Blood. 2013;122(21):2.
34. San Miguel JF, Schlag R, Khuageva NK, et al. Bortezomib plus melphalan and prednisone for initial treatment of multiple myeloma. N Engl J Med. 2008;359(9):906-917.
35. Facon T, Mary JY, Hulin C, et al; Intergroupe Français du Myélome. Melphalan and prednisone plus thalidomide versus melphalan and prednisone alone or reduced-intensity autologous stem cell transplantation in elderly patients with multiple myeloma (IFM 99-06): a randomised trial. Lancet. 2007;370(9594):1209-1218.
36. Hulin C, Facon T, Rodon P, et al. Efficacy of melphalan and prednisone plus thalidomide in patients older than 75 years with newly diagnosed multiple myeloma: IFM 01/01 trial. J Clin Oncol. 2009;27(22):3664-3670.
37. Attal M, Lauwers-Cances V, Marit G, et al. Lenalidomide maintenance after stem-cell transplantation for multiple myeloma. N Engl J Med. 2012;366(19):1782-1791.
38. Attal M, Harousseau JL, Leyvraz S, et al; Inter-Groupe Francophone du Myélome (IFM). Maintenance therapy with thalidomide improves survival in patients with multiple myeloma. Blood. 2006;108(10):3289-3294.
39. Spencer A, Prince HM, Roberts AW, et al. Consolidation therapy with low-dose thalidomide and prednisolone prolongs the survival of multiple myeloma patients undergoing a single autologous stem-cell transplantation procedure. J Clin Oncol. 2009;27(11):1788-1793.
40. Sonneveld P, Schmidt-Wolf IG, van der Holt B, et al. Bortezomib induction and maintenance treatment in patients with newly diagnosed multiple myeloma: results of the randomized phase III HOVON-65/GMMG-HD4 trial. J Clin Oncol. 2012;30(24):2946-2955.
41. Richardson PG, Sonneveld P, Schuster MW, et al; Assessment of Proteasome Inhibition for Extending Remissions (APEX) Investigators. Bortezomib or high-dose dexamethasone for relapsed multiple myeloma. N Engl J Med. 2005;352(24):2487-2498.
42. Orlowski RZ, Nagler A, Sonneveld P, et al. Randomized phase III study of pegylated liposomal doxorubicin plus bortezomib compared with bortezomib alone in relapsed or refractory multiple myeloma: combination therapy improves time to progression. J Clin Oncol. 2007;25(25):3892-3901.
43. Kumar SK, Lee JH, Lahuerta JJ, et al; International Myeloma Working Group. Risk of progression and survival in multiple myeloma relapsing after therapy with IMiDs and bortezomib: a multicenter international myeloma working group study. Leukemia. 2012;26(1):149-157.
44. Lacy MQ, Hayman SR, Gertz MA, et al. Pomalidomide (CC4047) plus low-dose dexamethasone as therapy for relapsed multiple myeloma. J Clin Oncol. 2009;27(30):5008-5014.
45. Siegel DS, Martin T, Wang M, et al. A phase 2 study of single-agent carfilzomib (PX-171-003-A1) in patients with relapsed and refractory multiple myeloma. Blood. 2012;120(14):2817-2825.
46. San-Miguel JF, Hungria VT, Yoon SS, et al. Panobinostat plus bortezomib and dexamethasone versus placebo plus bortezomib and dexamethasone in patients with relapsed or relapsed and refractory multiple myeloma: a multicentre, randomised, double-blind phase 3 trial. Lancet Oncol. 2014;15(11):1195-1206.


Fed Pract. 2015 August;32(suppl 7):49S-56S

Keywords: multiple myeloma, hematology, plasma cells, monoclonal immunoglobulin, monoclonal gammopathy of undetermined significance, smoldering multiple myeloma, anemia, bone pain, hypercalcemia, monoclonal protein, M protein, Sarah Jewell, Zhifu Xiang, Anuradha Kunthur, Paulette Mehta

Implementation of a Communication Training Program Is Associated with Reduction of Antipsychotic Medication Use in Nursing Homes


Study Overview

Objective. To evaluate the effectiveness of OASIS, a large-scale, statewide communication training program, on the reduction of antipsychotic use in nursing homes (NHs).

Design. Quasi-experimental longitudinal study with external controls.

Setting and participants. The participants were residents living in NHs between 1 March 2011 and 31 August 2013. The intervention group consisted of NHs in Massachusetts that were enrolled in the OASIS intervention, and the control group consisted of NHs in Massachusetts and New York. The Centers for Medicare & Medicaid Services Minimum Data Set (MDS) 3.0 was analyzed to determine medication use and behavior of NH residents. Residents were excluded if they had a US Food and Drug Administration (FDA)-approved indication for antipsychotic use (eg, schizophrenia), were short-term residents (length of stay < 90 days), or had missing data on psychopharmacological medication use or behavior.

Intervention. OASIS is an educational program that targeted both direct care and non-direct care staff in NHs to assist them in meeting the needs and challenges of caring for long-term care residents. Utilizing a train-the-trainer model, OASIS program coordinators and champions from each intervention NH participated in an 8-hour in-person training session that focused on enhancing communication skills between NH staff and residents with cognitive impairment. These trainers subsequently delivered the OASIS program to staff at their respective NHs using a team-based care approach. Additional support for the OASIS educational program, such as telephone support, 12 webinars, 2 regional seminars, and 2 booster sessions, was provided to participating NHs.

Main outcome measures. The main outcome measure was facility-level prevalence of antipsychotic use in long-term NH residents captured by MDS in the 7 days preceding the MDS assessment. The secondary outcome measures were facility-level quarterly prevalence of psychotropic medications that may have been substituted for antipsychotic medications (ie, anxiolytics, antidepressants, and hypnotics) and behavioral disturbances (ie, physically abusive behavior, verbally abusive behavior, and rejecting care). All secondary outcomes were dichotomized in the 7 days preceding the MDS assessment and aggregated at the facility level for each quarter.

The analysis utilized an interrupted time series model of facility-level prevalence of antipsychotic medication use, other psychotropic medication use, and behavioral disturbances to evaluate the OASIS intervention’s effectiveness in participating facilities compared with control NHs. This methodology allowed the assessment of changes in the trend of antipsychotic use after the OASIS intervention controlling for historical trends. Data from the 18-month pre-intervention (baseline) period was compared with that of a 3-month training phase, a 6-month implementation phase, and a 3-month maintenance phase.
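The interrupted time series design described above is, in essence, a segmented regression: a baseline trend plus a level change and a slope change at the intervention point. The sketch below illustrates the idea with made-up quarterly prevalence values; the study's actual analysis modeled facility-level data with external controls and covariates.

```python
import numpy as np

# Hypothetical quarterly facility-level antipsychotic prevalence (%),
# invented for illustration: 6 baseline quarters, then 5 post-intervention.
prevalence = np.array([34.1, 33.8, 33.6, 33.5, 33.3, 33.0,   # baseline
                       32.0, 30.5, 29.0, 27.8, 26.5])        # post-intervention
quarter = np.arange(len(prevalence))
post = (quarter >= 6).astype(float)     # indicator: after the intervention
time_since = post * (quarter - 6)       # quarters elapsed since the intervention

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones(len(quarter)), quarter, post, time_since])
beta, *_ = np.linalg.lstsq(X, prevalence, rcond=None)
intercept, trend, level_change, slope_change = beta

# slope_change estimates how much steeper the decline became post-intervention,
# controlling for the pre-existing (historical) trend.
print(f"baseline trend: {trend:.2f}%/quarter, "
      f"additional post-intervention slope: {slope_change:.2f}%/quarter")
```

With these toy numbers, the fit recovers a gentle baseline decline and a markedly steeper post-intervention slope, which is the kind of trend-change contrast the study evaluated against control NHs.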

Main results. 93 NHs received the OASIS intervention (27 with high prevalence of antipsychotic use) while 831 NHs did not (non-intervention control). The intervention NHs had a higher prevalence of antipsychotic use before OASIS training (baseline period) than the control NHs (34.1% vs. 22.7%, P < 0.001). The intervention NHs compared with controls were smaller in size (122 beds [interquartile range {IQR}, 88–152 beds] vs. 140 beds [IQR, 104–200 beds]; P < 0.001), were more likely to be for profit (77.4% vs. 62.0%, P = 0.009), to have corporate ownership (93.5% vs. 74.6%, P < 0.001), and to provide resident-only councils (78.5% vs. 52.9%, P < 0.001). The intervention NHs had higher registered nurse (RN) staffing hours per resident (0.8 vs. 0.7; P = 0.01) but lower certified nursing assistant (CNA) hours per resident (2.3 vs. 2.4; P = 0.04) than control NHs. There was no difference in licensed practical nurse hours per resident between groups.

All 93 intervention NHs completed the 8-hour in-person training session and attended an average of 6.5 (range, 0–12) subsequent support webinars. Thirteen NHs (14.0%) attended no regional seminars, 32 (34.4%) attended one, and 48 (51.6%) attended both. Four NHs (4.3%) attended one booster session, and 13 (14.0%) attended both. The NH staff most often trained in the OASIS training program were the directors of nursing, RNs, CNAs, and activities personnel. Support staff including housekeeping and dietary were trained in about half of the reporting intervention NHs, while physicians and nurse practitioners participated infrequently. Concurrent training programs in dementia care (Hand-in-Hand, Alzheimer Association training, MassPRO dementia care training) were implemented in 67.2% of intervention NHs.

In the intervention NHs, the prevalence of antipsych-otic prescribing decreased from 34.1% at baseline to 26.5% at the study end (7.6% absolute reduction, 22.3% relative reduction). In comparison, the prevalence of antipsychotic prescribing in control NHs decreased from 22.7% to 18.8% over the same period (3.9% absolute reduction, 17.2% relative reduction). During the OASIS implementation phase, the intervention NHs had a reduc-tion in prevalence of antipsychotic use (–1.20% [95% confidence interval {CI}, –1.85% to –0.09% per quarter]) greater than that of the control NHs (–0.23% [95% CI, –0.47% to 0.01% per quarter]), resulting in a net OASIS influence of –0.97% (95% CI, –1.85% to –0.09% per quarter; P = 0.03). The antipsychotic use reduction observed in the implementation phase was not sustained in the maintenance phase (difference of 0.93%; 95% CI, –0.66% to 2.54%; P = 0.48). No increases in other psychotropic medication use (anxiolytics, antidepressants, hypnotics) or behavioral disturbances (physically abusive behavior, verbally abusive behavior, and rejecting care) were observed during the OASIS training and implementation phases.

Conclusion. The OASIS communication training program reduced the prevalence of antipsychotic use in NHs during its implementation phase, but its effect was not sustained in the subsequent maintenance phase. The use of other psychotropic medications and behavior disturbances did not increase during the implementation of OASIS program. The findings from this study provided further support for utilizing nonpharmacologic programs to treat behavioral and psychological symptoms of dementia in older adults who reside in NHs.

Commentary

The use of both conventional and atypical antipsychotic medications is associated with a dose-related, approximately 2-fold increased risk of sudden cardiac death in older adults [1,2]. In 2006, the FDA issued a public health advisory stating that both conventional and atypical anti-psychotic medications are associated with an increased risk of mortality in elderly patients treated for dementia-related psychosis. Despite this black box warning and growing recognition that antipsychotic medications are not indicated for the treatment of dementia-related psychosis, the off-label use of antipsychotic medications to treat behavioral and psychological symptoms of dementia in older adults remains a common practice in nursing homes [3]. Thus, there is an urgent need to assess and develop effective interventions that reduce the practice of antipsychotic medication prescribing in long-term care. To that effect, the study reported by Tjia et al appropriately investigated the impact of the OASIS communication training program, a nonpharmacologic intervention, on the reduction of antipsychotic use in NHs.

This study was well designed and had a number of strengths. It utilized an interrupted time series model, one of the strongest quasi-experimental approaches due to its robustness to threats of internal validity, for evaluating longitudinal effects of an intervention intended to improve the quality of medication use. Moreover, this study included a large sample size and comparison facilities from the same geographical areas (NHs in Massachusetts and New York State) that served as external controls. Several potential weaknesses of the study were identified. Because facility-level aggregate data from NHs were used for analysis, individual level (long-term care resident) characteristics were not accounted for in the analysis. In addition, while the post-OASIS intervention questionnaire response rate was 65.6% (61 of 93 intervention NHs), a higher response rate would provide better characterization of NH staff that participated in OASIS program training, program completion rate, and a more complete representation of competing dementia care training programs concurrently implemented in these NHs.

Several studies, most utilizing various provider education methods, had explored whether these interventions could curb antipsychotic use in NHs with limited success. The largest successful intervention was reported by Meador et al [4], where a focused provider education program facilitated a relative reduction in antipsychotic medication use of 23% compared to control NHs. However, the implementation of this specific program was time- and resource-intensive, requiring geropsychiatry evaluation to all physicians (45 to 60 min), nurse-educator in-service programs for NH staff (5 to 6 one-hr sessions), management specialist consultation to NH administrators (4 hr), and evening meeting for the families of NH residents. The current study by Tjia et al, the largest study to date conducted in the context of competing dementia care training programs and increased awareness of the danger of antipsychotic use in the elderly, similarly showed a meaningful reduction in antipsychotic medication use in NHs that received the OASIS communication training program. The OASIS program appears to be less resource-intensive than the provider education program modeled by Meador et al, and its train-the-trainer model is likely more adaptable to meet the limitations (eg, low staffing and staff turnover) inherent in NHs. The beneficial effect of the OASIS program on reduction of antipsychotic medication prescribing was observed despite low participation by prescribers (11.5% of physicians and 11.5% of nurse practitioners). Although it is unclear why this was observed, this finding is intriguing in that a communication training program that reframes challenging behavior of NH residents with cognitive impairment as (1) communication of unmet needs, (2) train staff to anticipate resident needs, and (3) integrate resident strengths into daily care plans can alter provider prescription behavior. 
The implication of this is that provider practice in managing behavioral and psychological symptoms of dementia can be improved by optimizing communication training in NH staff. Taken together, this study adds to evidence in favor of utilizing nonpharmacologic interventions to reduce antipsychotic use in long-term care.

Applications for Clinical Practice

OASIS, a communication training program for NH staff, reduces antipsychotic medication use in NHs during its implementation phase. Future studies need to investigate pragmatic methods to sustain the beneficial effect of OASIS after its implementation phase.

 

—Fred Ko, MD, MS, Icahn School of Medicine at Mount Sinai, New York, NY

References

1. Ray WA, Chung CP, Murray KT, et al. Atypical antipsychotic drugs and the risk of sudden cardiac death. N Engl J Med 2009;360:225–35.

2. Wang PS, Schneeweiss S, Avorn J, et al. Risk of death in elderly users of conventional vs. atypical antipsychotic medications. N Engl J Med 2005;353:2335–41.

3. Chen Y, Briesacher BA, Field TS, et al. Unexplained variation across US nursing homes in antipsychotic prescribing rates. Arch Intern Med 2010;170:89–95.

4. Meador KG, Taylor JA, Thapa PB, et al. Predictors of anti-
psychotic withdrawal or dose reduction in a randomized controlled trial of provider education. J Am Geriatr Soc 1997;45:207–10.

Issue
Journal of Clinical Outcomes Management - August 2017, Vol. 24, No 8

Study Overview

Objective. To evaluate the effectiveness of OASIS, a large-scale, statewide communication training program, on the reduction of antipsychotic use in nursing homes (NHs).

Design. Quasi-experimental longitudinal study with external controls.

Setting and participants. The participants were residents living in NHs between 1 March 2011 and 31 August 2013. The intervention group consisted of NHs in Massachusetts that were enrolled in the OASIS intervention, and the control group consisted of NHs in Massachusetts and New York. Centers for Medicare & Medicaid Services Minimum Data Set (MDS) 3.0 data were analyzed to determine medication use and behavior of NH residents. Residents were excluded if they had a US Food and Drug Administration (FDA)-approved indication for antipsychotic use (eg, schizophrenia); were short-term residents (length of stay < 90 days); or had missing data on psychopharmacological medication use or behavior.

Intervention. OASIS is an educational program that targets both direct care and non-direct care staff in NHs to assist them in meeting the needs and challenges of caring for long-term care residents. Utilizing a train-the-trainer model, OASIS program coordinators and champions from each intervention NH participated in an 8-hour in-person training session that focused on enhancing communication skills between NH staff and residents with cognitive impairment. These trainers subsequently delivered the OASIS program to staff at their respective NHs using a team-based care approach. Additional support for the OASIS educational program, such as telephone support, 12 webinars, 2 regional seminars, and 2 booster sessions, was provided to participating NHs.

Main outcome measures. The main outcome measure was facility-level prevalence of antipsychotic use in long-term NH residents captured by MDS in the 7 days preceding the MDS assessment. The secondary outcome measures were facility-level quarterly prevalence of psychotropic medications that may have been substituted for antipsychotic medications (ie, anxiolytics, antidepressants, and hypnotics) and behavioral disturbances (ie, physically abusive behavior, verbally abusive behavior, and rejecting care). All secondary outcomes were dichotomized in the 7 days preceding the MDS assessment and aggregated at the facility level for each quarter.
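The facility-level aggregation described above can be sketched as follows. This is a hypothetical illustration: each long-stay resident contributes a 0/1 indicator (any antipsychotic use in the 7 days before the MDS assessment), and the facility's quarterly prevalence is the share of residents with an indicator of 1. The record layout and field names are illustrative, not taken from the MDS specification.

```python
from collections import defaultdict

def facility_prevalence(records):
    """records: iterable of (facility_id, quarter, used_antipsychotic) tuples,
    one per resident assessment; returns prevalence per (facility, quarter)."""
    counts = defaultdict(lambda: [0, 0])   # (facility, quarter) -> [users, residents]
    for facility, quarter, used in records:
        counts[(facility, quarter)][0] += int(used)
        counts[(facility, quarter)][1] += 1
    return {key: users / total for key, (users, total) in counts.items()}

# Four residents in one facility-quarter, two with antipsychotic use:
mds = [("NH1", "2011Q2", 1), ("NH1", "2011Q2", 0),
       ("NH1", "2011Q2", 0), ("NH1", "2011Q2", 1)]
print(facility_prevalence(mds))   # {('NH1', '2011Q2'): 0.5}
```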

The analysis utilized an interrupted time series model of facility-level prevalence of antipsychotic medication use, other psychotropic medication use, and behavioral disturbances to evaluate the OASIS intervention’s effectiveness in participating facilities compared with control NHs. This methodology allowed assessment of changes in the trend of antipsychotic use after the OASIS intervention while controlling for historical trends. Data from the 18-month pre-intervention (baseline) period were compared with data from a 3-month training phase, a 6-month implementation phase, and a 3-month maintenance phase.
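As a rough illustration of how a segmented (interrupted time series) regression separates a historical trend from a post-intervention slope change, the sketch below fits such a model to synthetic quarterly prevalence data. It is a simplified example, not the authors' actual model, and the numbers are invented.

```python
import numpy as np

# Quarterly time series: 6 baseline quarters, then 4 post-intervention quarters.
quarters = np.arange(10, dtype=float)
post = (quarters >= 6).astype(float)                 # level-change indicator
time_since = np.where(quarters >= 6, quarters - 6, 0.0)  # slope-change term

# Synthetic prevalence (%): gentle secular decline, steeper after intervention.
prevalence = 34.0 - 0.2 * quarters - 1.0 * time_since

# Design matrix: intercept, secular trend, level change, slope change.
X = np.column_stack([np.ones(10), quarters, post, time_since])
coef, *_ = np.linalg.lstsq(X, prevalence, rcond=None)

baseline_slope, slope_change = coef[1], coef[3]
# slope_change estimates the *additional* per-quarter decline attributable
# to the post-intervention period, net of the historical trend.
```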

Main results. A total of 93 NHs received the OASIS intervention (27 with a high prevalence of antipsychotic use), while 831 NHs served as nonintervention controls. The intervention NHs had a higher prevalence of antipsychotic use before OASIS training (baseline period) than the control NHs (34.1% vs. 22.7%, P < 0.001). Compared with controls, the intervention NHs were smaller (122 beds [interquartile range {IQR}, 88–152 beds] vs. 140 beds [IQR, 104–200 beds]; P < 0.001) and were more likely to be for profit (77.4% vs. 62.0%, P = 0.009), to have corporate ownership (93.5% vs. 74.6%, P < 0.001), and to provide resident-only councils (78.5% vs. 52.9%, P < 0.001). The intervention NHs had higher registered nurse (RN) staffing hours per resident (0.8 vs. 0.7; P = 0.01) but lower certified nursing assistant (CNA) hours per resident (2.3 vs. 2.4; P = 0.04) than control NHs. There was no difference in licensed practical nurse hours per resident between groups.

All 93 intervention NHs completed the 8-hour in-person training session and attended an average of 6.5 (range, 0–12) subsequent support webinars. Thirteen NHs (14.0%) attended no regional seminars, 32 (34.4%) attended one, and 48 (51.6%) attended both. Four NHs (4.3%) attended one booster session, and 13 (14.0%) attended both. The NH staff most often trained in the OASIS program were the directors of nursing, RNs, CNAs, and activities personnel. Support staff, including housekeeping and dietary staff, were trained in about half of the reporting intervention NHs, while physicians and nurse practitioners participated infrequently. Concurrent training programs in dementia care (Hand-in-Hand, Alzheimer's Association training, MassPRO dementia care training) were implemented in 67.2% of intervention NHs.

In the intervention NHs, the prevalence of antipsychotic prescribing decreased from 34.1% at baseline to 26.5% at the study end (7.6% absolute reduction, 22.3% relative reduction). In comparison, the prevalence of antipsychotic prescribing in control NHs decreased from 22.7% to 18.8% over the same period (3.9% absolute reduction, 17.2% relative reduction). During the OASIS implementation phase, the intervention NHs had a reduction in prevalence of antipsychotic use (–1.20% [95% confidence interval {CI}, –1.85% to –0.09% per quarter]) greater than that of the control NHs (–0.23% [95% CI, –0.47% to 0.01% per quarter]), resulting in a net OASIS influence of –0.97% (95% CI, –1.85% to –0.09% per quarter; P = 0.03). The antipsychotic use reduction observed in the implementation phase was not sustained in the maintenance phase (difference of 0.93%; 95% CI, –0.66% to 2.54%; P = 0.48). No increases in other psychotropic medication use (anxiolytics, antidepressants, hypnotics) or behavioral disturbances (physically abusive behavior, verbally abusive behavior, and rejecting care) were observed during the OASIS training and implementation phases.
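The absolute and relative reductions quoted above follow from simple arithmetic on the baseline and end-of-study prevalences; a quick check:

```python
# Absolute reduction = baseline prevalence - end prevalence (percentage points);
# relative reduction = absolute reduction as a fraction of baseline prevalence.
def reductions(baseline_pct, end_pct):
    absolute = baseline_pct - end_pct
    relative = 100 * absolute / baseline_pct
    return round(absolute, 1), round(relative, 1)

print(reductions(34.1, 26.5))  # intervention NHs -> (7.6, 22.3)
print(reductions(22.7, 18.8))  # control NHs      -> (3.9, 17.2)
```

These match the reported 7.6%/22.3% (intervention) and 3.9%/17.2% (control) reductions.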

Conclusion. The OASIS communication training program reduced the prevalence of antipsychotic use in NHs during its implementation phase, but its effect was not sustained in the subsequent maintenance phase. The use of other psychotropic medications and behavioral disturbances did not increase during implementation of the OASIS program. The findings from this study provide further support for utilizing nonpharmacologic programs to treat behavioral and psychological symptoms of dementia in older adults who reside in NHs.

Commentary

The use of both conventional and atypical antipsychotic medications is associated with a dose-related, approximately 2-fold increased risk of sudden cardiac death in older adults [1,2]. In 2006, the FDA issued a public health advisory stating that both conventional and atypical antipsychotic medications are associated with an increased risk of mortality in elderly patients treated for dementia-related psychosis. Despite this black box warning and growing recognition that antipsychotic medications are not indicated for the treatment of dementia-related psychosis, the off-label use of antipsychotic medications to treat behavioral and psychological symptoms of dementia in older adults remains a common practice in nursing homes [3]. Thus, there is an urgent need to develop and assess effective interventions that reduce antipsychotic medication prescribing in long-term care. To that end, the study reported by Tjia et al appropriately investigated the impact of the OASIS communication training program, a nonpharmacologic intervention, on the reduction of antipsychotic use in NHs.

This study was well designed and had a number of strengths. It utilized an interrupted time series model, one of the strongest quasi-experimental approaches owing to its robustness to threats to internal validity, for evaluating longitudinal effects of an intervention intended to improve the quality of medication use. Moreover, the study included a large sample size and comparison facilities from the same geographical areas (NHs in Massachusetts and New York State) that served as external controls. Several potential weaknesses were also identified. Because facility-level aggregate data from NHs were used for analysis, individual-level (long-term care resident) characteristics were not accounted for. In addition, while the post-intervention questionnaire response rate was 65.6% (61 of 93 intervention NHs), a higher response rate would have provided better characterization of the NH staff who participated in OASIS training, of the program completion rate, and of the competing dementia care training programs concurrently implemented in these NHs.

Several studies, most utilizing various provider education methods, have explored whether such interventions could curb antipsychotic use in NHs, with limited success. The largest successful intervention was reported by Meador et al [4], in which a focused provider education program facilitated a relative reduction in antipsychotic medication use of 23% compared with control NHs. However, implementation of that program was time- and resource-intensive, requiring geropsychiatry evaluation for all physicians (45 to 60 min), nurse-educator in-service programs for NH staff (5 to 6 one-hr sessions), management specialist consultation for NH administrators (4 hr), and an evening meeting for the families of NH residents. The current study by Tjia et al, the largest study to date conducted in the context of competing dementia care training programs and increased awareness of the danger of antipsychotic use in the elderly, similarly showed a meaningful reduction in antipsychotic medication use in NHs that received the OASIS communication training program. The OASIS program appears to be less resource-intensive than the provider education program modeled by Meador et al, and its train-the-trainer model is likely more adaptable to the limitations (eg, low staffing and staff turnover) inherent in NHs. The beneficial effect of the OASIS program on reduction of antipsychotic medication prescribing was observed despite low participation by prescribers (11.5% of physicians and 11.5% of nurse practitioners). Although it is unclear why this was observed, the finding is intriguing in that a communication training program that (1) reframes challenging behavior of NH residents with cognitive impairment as communication of unmet needs, (2) trains staff to anticipate resident needs, and (3) integrates resident strengths into daily care plans can nonetheless alter provider prescribing behavior.
The implication is that provider practice in managing behavioral and psychological symptoms of dementia can be improved by optimizing communication training for NH staff. Taken together, this study adds to the evidence in favor of utilizing nonpharmacologic interventions to reduce antipsychotic use in long-term care.

Applications for Clinical Practice

OASIS, a communication training program for NH staff, reduced antipsychotic medication use in NHs during its implementation phase. Future studies should investigate pragmatic methods to sustain the beneficial effect of OASIS beyond its implementation phase.

 

—Fred Ko, MD, MS, Icahn School of Medicine at Mount Sinai, New York, NY

References

1. Ray WA, Chung CP, Murray KT, et al. Atypical antipsychotic drugs and the risk of sudden cardiac death. N Engl J Med 2009;360:225–35.

2. Wang PS, Schneeweiss S, Avorn J, et al. Risk of death in elderly users of conventional vs. atypical antipsychotic medications. N Engl J Med 2005;353:2335–41.

3. Chen Y, Briesacher BA, Field TS, et al. Unexplained variation across US nursing homes in antipsychotic prescribing rates. Arch Intern Med 2010;170:89–95.

4. Meador KG, Taylor JA, Thapa PB, et al. Predictors of antipsychotic withdrawal or dose reduction in a randomized controlled trial of provider education. J Am Geriatr Soc 1997;45:207–10.


Display Headline
Implementation of a Communication Training Program Is Associated with Reduction of Antipsychotic Medication Use in Nursing Homes

Fixed-Dose Combination Pills Enhance Adherence and Persistence to Antihypertensive Medications

Article Type
Changed
Wed, 02/28/2018 - 14:38
Display Headline
Fixed-Dose Combination Pills Enhance Adherence and Persistence to Antihypertensive Medications

Study Overview

Objective. To evaluate long-term adherence to antihypertensive therapy among patients on fixed-dose combination medication as well as antihypertensive monotherapy; and to identify demographic and clinical risk factors associated with selection of and adherence and persistence to antihypertensive medication therapy.

Design. Retrospective cohort study using claims data from a large nationwide insurer.

Setting and participants. The study population included patients older than age 18 who initiated antihypertensive medication between 1 January 2009 and 31 December 2012 and who were continuously enrolled at least 180 days before and 365 days after the index date, defined as the date of initiation of antihypertensive therapy. Patients were excluded from the study if they had previously filled any antihypertensive medication at any time prior to the index date. Patients were categorized based on the number and type of antihypertensive medications (fixed-dose combination, defined as a single pill containing multiple medications; multi-pill combination, defined as 2 or more distinct antihypertensive tablets or capsules; or single therapy, defined as only 1 medication) using National Drug Codes (NDC). Study authors also measured patient baseline characteristics, such as age, region, gender, diagnoses as defined by ICD-9 codes, patient utilization characteristics (both outpatient visits and hospitalizations), and characteristics of the initiated medication, including patient copayment and number of days of medication supplied.

Main outcome measures. The primary outcome of interest was persistence, defined as having supply for any antihypertensive medication that overlapped with the 365th day after initiation (index date), whether the initiated medication or other antihypertensive. Additional outcomes included adherence to at least 1 antihypertensive in the 12 months after initiation and refilling at least 1 antihypertensive medication. To determine adherence, the study authors calculated the proportion of days the patient had any antihypertensive available to them (proportion of days covered; PDC). PDC > 80% to at least 1 antihypertensive in the 12 months after initiation was defined as “fully adherent.”
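The PDC and persistence definitions above are mechanical enough to sketch in code. The following is a minimal illustration, not the study's actual claims-processing algorithm: fill records are hypothetical `(fill_date, days_supplied)` tuples, and overlapping supplies are simply merged rather than shifted forward (stockpiling rules vary across studies).

```python
from datetime import date, timedelta

def covered_days(fills, start, end):
    """Return the set of days in [start, end] covered by any fill.
    Each fill is a (fill_date, days_supplied) tuple; overlapping
    supplies are merged, with no stockpiling adjustment."""
    covered = set()
    for fill_date, days_supplied in fills:
        for i in range(days_supplied):
            d = fill_date + timedelta(days=i)
            if start <= d <= end:
                covered.add(d)
    return covered

def pdc(fills, index_date, window=365):
    """Proportion of days covered in the window starting at initiation."""
    end = index_date + timedelta(days=window - 1)
    return len(covered_days(fills, index_date, end)) / window

def persistent(fills, index_date, day=365):
    """Persistence: any supply overlapping the 365th day after initiation."""
    target = index_date + timedelta(days=day)
    return any(f <= target <= f + timedelta(days=s - 1) for f, s in fills)

# Example: monthly 30-day fills for 6 months, then discontinuation.
index = date(2009, 1, 1)
fills = [(index + timedelta(days=30 * k), 30) for k in range(6)]
print(round(pdc(fills, index), 2))  # 180/365, well below the 0.8 "fully adherent" cutoff
print(persistent(fills, index))     # no supply overlaps day 365
```

Note how the two measures can disagree: a patient who fills sporadically but happens to hold supply on day 365 is "persistent" yet may have a low PDC, and vice versa.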

Statistical analysis utilized modified multivariable Poisson regression models and sensitivity analyses were performed. The main study comparisons focused on patients initiating fixed-dose combination therapy and monotherapy because these groups were more comparable in terms of baseline characteristics and medications initiated than the multi-pill combination group.

Main results. The study sample consisted of 484,493 patients who initiated an oral antihypertensive: 78,958 initiated fixed-dose combinations, 380,269 initiated monotherapy, and 22,266 initiated multi-pill combinations. The most frequently initiated fixed-dose combination was lisinopril-hydrochlorothiazide; lisinopril, hydrochlorothiazide, and amlodipine were the most frequently initiated monotherapies. The mean age of the study population was 47.2 years, and 51.8% were women. Patients initiating multi-pill combinations were older (mean age 52.5 years) and tended to be sicker, with more comorbidities, than those initiating fixed-dose combinations or monotherapy. Patients initiating fixed-dose combinations had higher prescription copayments than patients initiating monotherapy ($14.40 vs $9.60). Patients initiating fixed-dose combinations were 9% more likely to be persistent (relative risk [RR] 1.09, 95% CI 1.08–1.10) and 13% more likely to be adherent (RR 1.13, 95% CI 1.11–1.14) than those who started on monotherapy. Refill rates were also slightly higher among fixed-dose combination initiators (RR 1.06, 95% CI 1.05–1.07).
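The relative risks above are adjusted estimates from the study's regression models, but the underlying quantity is simply a ratio of outcome proportions. As an illustration with made-up counts (not the study's data), here is how an unadjusted RR and its Wald 95% confidence interval are computed from a 2×2 table:

```python
import math

def relative_risk(a, n1, c, n0):
    """RR of exposed vs unexposed with a Wald 95% CI on the log scale.
    a/n1 = events/total in the exposed group; c/n0 = events/total in
    the unexposed group."""
    p1, p0 = a / n1, c / n0
    rr = p1 / p0
    # Standard error of log(RR): sqrt(1/a - 1/n1 + 1/c - 1/n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 600/1000 persistent among fixed-dose initiators
# vs 550/1000 among monotherapy initiators.
rr, lo, hi = relative_risk(600, 1000, 550, 1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The study's adjusted RRs came from modified Poisson regression with covariates; this unadjusted calculation is just the building block those models generalize.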

Conclusion. Compared with monotherapy, fixed-dose combination therapy appears to improve adherence and persistence to antihypertensive medications.

Commentary

Approximately half of US individuals with diagnosed hypertension achieve control of their condition based on currently defined targets [1]. The most effective approach to blood pressure management has been controversial. The JNC 8 guidelines [2] liberalized blood pressure targets, while recent results from SPRINT (the Systolic Blood Pressure Intervention Trial) [3] indicate that lower blood pressure targets can prevent hypertension-related complications without significant additional risk. Given these conflicting findings, there is clearly ambiguity about the most effective approach to initiating antihypertensive treatment. Prior studies have shown that fewer than 50% of patients continue to take their medications just 12 months after initiation [4,5].

Fixed-dose combination therapy for blood pressure management has been cited as better for adherence and is now making its way into clinical guidelines [6–8]. However, it should be noted that fixed-dose combination therapy for blood pressure management limits dosing flexibility. Dose titration may be needed, potentially leading to additional prescriptions, thus potentially complicating the drug regimen and adding additional cost. Complicating matters further, quality metrics and reporting requirements for hypertension require primary care providers to achieve blood pressure control while also ensuring patient adherence and concomitantly avoiding side effects related to medication therapy.

This study was conducted using claims data for commercially insured patients or those with Medicare Advantage and is unlikely to be representative of the entire population. Additionally, the study authors did not have detailed clinical information about patients, limiting the ability to understand the true clinical implications. Further, patients may have initiated medications for indications other than hypertension. In addition, causality cannot be established given the retrospective observational cohort nature of this study.

Applications for Clinical Practice

Primary care physicians face substantial challenges in the treatment of hypertension, including the selection of initial medication therapy. Results from this study add to the evidence base that fixed-dose combination therapy improves adherence and persistence relative to monotherapy. Medication adherence in primary care practice is challenging. Strategies such as fixed-dose combination therapy are reasonable to employ to improve medication adherence; however, costs must be considered.

 

—Ajay Dharod, MD, Wake Forest School of Medicine, Winston-Salem, NC

References

1. Gu Q, Burt VL, Dillon CF, Yoon S. Trends in antihypertensive medication use and blood pressure control among United States adults with hypertension. Circulation 2012;126:2105–14.

2. James PA, Oparil S, Carter BL, et al. 2014 Evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 2014;311:507–20.

3. The SPRINT Research Group. A randomized trial of intensive versus standard blood-pressure control. N Engl J Med 2015;373:2103–16.

4. Yeaw J, Benner JS, Walt JG, et al. Comparing adherence and persistence across 6 chronic medication classes. J Manag Care Pharm 2009;15:728–40.

5. Baroletti S, Dell’Orfano H. Medication adherence in cardiovascular disease. Circulation 2010;121:1455–8.

6. Bangalore S, Kamalakkannan G, Parkar S, Messerli FH. Fixed-dose combinations improve medication compliance: a meta-analysis. Am J Med 2007;120:713–9.

7. Gupta AK, Arshad S, Poulter NR. Compliance, safety, and effectiveness of fixed-dose combinations of antihypertensive agents. Hypertension 2010;55:399–407.

8. Pan F, Chernew ME, Fendrick AM. Impact of fixed-dose combination drugs on adherence to prescription medications. J Gen Intern Med 2008;23:611–4.



Leonard Wood: Advocate of Military Preparedness

Article Type
Changed
Wed, 01/31/2018 - 14:14

Unless you have been assigned to the post or the hospital, you have probably never heard of Leonard Wood. Wood arguably had the most distinguished military-government career of any American who did not become president. He was a Harvard-educated physician, pursued the Apache chief Geronimo, received the Medal of Honor, was physician to 2 U.S. presidents, served as U.S. Army chief of staff, was a successful military governor, ran for president, was a colleague of Walter Reed, and was a comrade-in-arms of President Theodore Roosevelt.

Wood was born in 1860 to an established New England family; his father was a Union Army physician during the Civil War and was practicing on Cape Cod when he died unexpectedly in 1880. The family was left destitute, but Wood was able to continue his education when a wealthy family friend agreed to pay for him to attend Harvard Medical School, which at the time did not require any prior college. He graduated in 1883 and was selected for a prized internship at Boston City Hospital; however, he was dismissed for a rule violation that the program director later admitted was a mistake.

Unable to support himself in practice in Boston, Wood turned to the U.S. Army, a decision that would change his life. Assigned to Fort Huachuca in Arizona, Wood participated in the yearlong pursuit and final surrender of Geronimo; for his role he was awarded the Medal of Honor in 1898. His experiences in the wild and rugged terrain of the west triggered a legendary and lifelong pursuit of hard and stressful physical activity. Transferred to California, Wood met Louise Condit-Smith, ward of an associate justice of the U.S. Supreme Court. When they married in November 1890 in Washington, DC, the ceremony was attended by all of the Supreme Court justices.

In 1893 while assigned to Fort McPherson outside Atlanta, Wood, whose duties were not demanding, needed a physical outlet for his unbounded energies. He enrolled at Georgia Tech at age 33 to play football. He was eligible to play because he had not previously attended college. He scored 5 touchdowns, winning the game against rival University of Georgia.

Later, Wood was assigned to Washington, where he quickly became known and sought after as a physician. He served many of the political and military elite, including presidents Grover Cleveland and William McKinley. In 1897, he met Theodore Roosevelt, the 38-year-old assistant secretary of the U.S. Navy, who shared his love of outdoor adventure and the military. They became fast friends, companions, and competitors; Roosevelt wrote to a friend that he had found a “playmate.”

When the U.S.S. Maine was sunk in Havana Harbor in 1898 and war was declared on Spain, Wood and Roosevelt schemed on how to go to war together. Wood the career soldier and Roosevelt the career politician had excellent connections and became commander and deputy commander of the First Volunteer Cavalry, later famously known as the Rough Riders. When a more senior general became ill, Wood was promoted to brigadier general, and Roosevelt became the regiment colonel.

After the war, Wood became military governor of Cuba and major general of volunteers. During the U.S. occupation, Walter Reed was sent to investigate infectious diseases, including yellow fever. Wood provided $10,000 to fund the second phase of Reed’s research and approved the use of human volunteers. When the U.S. occupation ended in 1902, Wood was to revert to captain, medical corps.

Wood’s success in Cuba was obvious and well known; President McKinley promoted him to U.S. Army brigadier general. At that time, as a brigadier general, Wood was essentially guaranteed a second star and a rotation through the chief of staff position. He served as chief of staff from 1910 to 1914, the only physician ever to do so. As chief of staff he eliminated the antiquated bureau system, developed the maneuver unit concept, and laid the groundwork for the Reserve Officers’ Training Corps.

Wood stayed on active duty and rotated through other senior-level positions. Because of Wood’s political activity promoting universal service and improving readiness, President Woodrow Wilson passed over him, instead selecting John J. Pershing to command the American Expeditionary Force in World War I. Wood stayed politically active and ran for the Republican presidential nomination in 1920, losing to Warren G. Harding at the convention. Wood was appointed governor general of the Philippines, a position he held until his death in 1927.

While in Cuba, Wood was severely injured by striking his head on a chandelier, most likely resulting in an undiagnosed skull fracture. Over time he developed neurologic symptoms and was seen by neurosurgeon Harvey Cushing, MD, at Johns Hopkins, who removed a meningioma in February 1910. Wood made a dramatic recovery. Over a decade later while in the Philippines, his symptoms returned, and after significant delay he went home to see Cushing who was then at Harvard Medical School. When Wood died after surgery, Cushing admitted that he should not have tackled such a difficult case so quickly after returning from a trip to Europe.

Fort Leonard Wood in Missouri and the on-base General Leonard Wood U.S. Army Community Hospital are named in Wood’s honor.

About this column
This column provides biographical sketches of the namesakes of military and VA health care facilities. To learn more about the individual
your facility was named for or to offer a topic suggestion, contact us at [email protected] or on Facebook.

Author and Disclosure Information

COL Pierce is a retired U.S. Army pediatrician who served as chief of pediatrics, director of medical education, and chief of the medical staff at Walter Reed Army Medical Center. COL Pierce also was a consultant in pediatrics to the U.S. Army Surgeon General for 7 years. He coauthored a book on Dr. Walter Reed and his research on yellow fever and edited a book on the Walter Reed Army Medical Center.

Issue
Federal Practitioner - 34(7)

Unless you have been assigned to the post or the hospital, you have probably never heard of Leonard Wood. Leonard Wood arguably had the most distinguished military-government career of someone who did not become president. Wood was a Harvard-educated physician, pursued the Apache Chief Geronimo, received the Medal of Honor, was physician to 2 U.S. presidents, served as U.S. army chief of staff, was a successful military governor, ran for president, was a colleague of Walter Reed, and was commander-inarms for President Theodore Roosevelt.

Wood was born in 1860 to an established New England family; his father was a Union Army physician during the Civil War and was practicing on Cape Cod when he died unexpectedly in 1880. The family was left destitute, but Wood was able to continue his education when a wealthy family friend agreed to pay for him to attend Harvard Medical School, which at the time did not require any prior college. He graduated in 1883 and was selected for a prized internship at Boston City Hospital; however, he was dismissed for a rule violation that the program director later admitted was a mistake.

Unable to support himself in practice in Boston, Wood turned to the U.S. Army, a decision that would change his life. Assigned to Fort Huachuca in Arizona, Wood participated in the yearlong pursuit and final surrender of Geronimo; for his role he was awarded the Medal of Honor in 1898. His experiences in the wild and rugged terrain of the west triggered a legendary and lifelong pursuit of hard and stressful physical activity. Transferred to California, Wood met Louise Condit-Smith, ward of an associate justice of the U.S. Supreme Court. When they married in November 1890 in Washington, DC, the ceremony was attended by all of the Supreme Court justices.

In 1893 while assigned to Fort McPherson outside Atlanta, Wood, whose duties were not demanding, needed a physical outlet for his unbounded energies. He enrolled at Georgia Tech at age 33 to play football. He was eligible to play because he had not previously attended college. He scored 5 touchdowns, winning the game against rival University of Georgia.

Later, Wood was assigned to Washington, where he quickly became known and sought after as a physician. He served many of the political and military elite, including presidents Grover Cleveland and William McKinley. In 1897, he met Theodore Roosevelt, the 38-year-old assistant secretary of the U.S. Navy who shared his love of outdoor adventure and the military. They became fast friends/companions/competitors; Roosevelt wrote to a friend that he had found a “playmate.”

When the U.S.S. Maine was sunk in Havana Harbor in 1898 and war was declared on Spain, Wood and Roosevelt schemed on how to go to war together. Wood the career soldier and Roosevelt the career politician had excellent connections and became commander and deputy commander of the First Volunteer Cavalry, later famously known as the Rough Riders. When a more senior general became ill, Wood was promoted to brigadier general, and Roosevelt became the regiment colonel.

After the war, Wood became military governor of Cuba and major general of volunteers. During the U.S. occupation, Walter Reed was sent to investigate infectious diseases, including yellow fever. Wood provided $10,000 to fund the second phase of Reed’s research and approved the use of human volunteers. When the U.S. occupation ended in 1902, Wood was to revert to captain, medical corps.

Wood’s success in Cuba was obvious and well known; President McKinley promoted him to U.S. Army brigadier general. At that time, as a brigadier general, Wood was essentially guaranteed a second star and a rotation through the chief of staff position. He served as chief of staff from 1910 to 1914, the only physician ever to do so. As chief of staff he eliminated the antiquated bureau system, developed the maneuver unit concept, and laid the groundwork for the Reserve Officers’ Training Corps.

Wood stayed on active duty and rotated through other senior-level positions. Because of Wood’s political activity promoting universal service and improving readiness, President Woodrow Wilson passed over him, instead selecting John J. Pershing to command the American Expeditionary Force in World War I. Wood stayed politically active and ran for the Republican presidential nomination in 1920, losing to Warren G. Harding at the convention. Wood was appointed governor general of the Philippines, a position he held until his death in 1927.

While in Cuba, Wood was severely injured by striking his head on a chandelier, most likely resulting in an undiagnosed skull fracture. Over time he developed neurologic symptoms and was seen by neurosurgeon Harvey Cushing, MD, at Johns Hopkins, who removed a meningioma in February 1910. Wood made a dramatic recovery. Over a decade later while in the Philippines, his symptoms returned, and after significant delay he went home to see Cushing who was then at Harvard Medical School. When Wood died after surgery, Cushing admitted that he should not have tackled such a difficult case so quickly after returning from a trip to Europe.

Fort Leonard Wood in Missouri and the on-base General Leonard Wood U.S. Army Community Hospital are named in Wood’s honor.

About this column
This column provides biographical sketches of the namesakes of military and VA health care facilities. To learn more about the individual
your facility was named for or to offer a topic suggestion, contact us at [email protected] or on Facebook.


Issue
Federal Practitioner - 34(7)
Page Number
48-49

The VA Is in Critical Condition, but What Is the Prognosis?


In his first ever—and perhaps the first ever state of the VA—speech delivered on May 30, 2017, VA Secretary David J. Shulkin, MD, reported to the nation and Congress that “the VA is still in critical condition.” This medical metaphor reflects Dr. Shulkin’s distinction of being the only physician ever to hold this cabinet-level post.

For anyone in health care, such a reference immediately calls forth a variety of associations—most of them serious concerns for the status of the VA and whether it will survive. In this editorial, I will expand on this metaphor and explore its meaning for the future of the VA.

Dr. Shulkin extended the metaphor when he said that the “VA requires intensive care.” For clinicians, this remark tells us that the VA is either seriously ill or injured. Yet there is hope because the chief doctor of the VA reassures us that the patient—the largest health care system in the country—is improving. This improvement from critical care to intensive care status informs us that the VA was very sick, maybe even dying, during the previous administration in which Dr. Shulkin served as VA’s Under Secretary for Health.

Dr. Shulkin, a general internist who still sees primary care patients at the VA, gave us a diagnosis of the VA’s most serious symptoms: a lack of access to timely care, a high rate of veteran suicides, an inability to enforce employee accountability, multiple obstacles to hiring and retaining qualified staff, an unacceptable quality of care at some VAMCs, and a backlog of disability claims due to inefficient processing.

Dr. Shulkin also gave us a broad idea of his goal for care, “We are taking immediate and decisive steps stabilizing the organization.” But the more I thought about this impressive speech, the more I wondered, What is the VA’s actual diagnosis?

Several of the many news commentaries analyzing Shulkin’s State of the VA speech suggested possible etiologies. According to the Public Broadcasting Service (PBS), “In a ‘State of the VA’ report, Shulkin, a physician, issued a blunt diagnosis: ‘There is a lot of work to do.’” Astute clinicians will immediately recognize that PBS is right about the secretary’s honesty regarding the magnitude of the task facing him.

He was not providing a diagnosis as much as offering an indirect assessment of the patient’s condition. “A lot of work,” although not a diagnosis, is a colloquial description of the treatment plan that the secretary further outlined in his report. Like any good treatment plan, there is a direct correlation between the major symptoms of the disorder and the therapies that Dr. Shulkin prescribed.

The Secretary recommended and the President signed the Department of Veterans Affairs Accountability and Whistleblower Protection Act of 2017 on June 23, 2017, to make it easier to discipline and terminate VA employees who may be keeping the VA organization ill or at least preventing it from getting better. He also prescribed continued and even higher dose infusions of community care to treat the central access problem. In addition, Dr. Shulkin ordered that the most effective available interventions be used for suicide prevention, enhancement of the overall quality of care, and to improve accountability.

Even with the most efficacious treatments, a high-functioning intensive care unit needs state-of-the-art technology and equipment. In a long-awaited announcement, Dr. Shulkin reported on June 5 that of 2 competing modalities to revive the VA’s ailing electronic health record system—the brain of our critical care patient—rather than repair the moribund CPRS, the VA will receive a transplant of the DoD MHS Genesis. Critical care, especially when delivered in a combat zone, requires difficult triage decisions. The secretary has made similar tough resource allocation decisions, determining that some of the VA’s oldest and most debilitated facilities will not be sustained in their present form.

I am near the end of this editorial and still do not have a diagnosis. Pundits, politicians, and policy specialists all have their differential diagnoses, as do veterans groups and VA employees. “Bloated bureaucracy” is the diagnosis from many of these VA critics. Dr. Shulkin proposed a remedy for this disease: He plans to consolidate the VA headquarters.

Even more important, for those who believe the VA should not have a DNR but be allowed to recover, what does the physician who holds the VA’s life in his hands believe is the prognosis for this 86-year-old institution? Dr. Shulkin expressed the hope that the VA can recover its health, saying he is “confident that we will be able to turn VA into the organization veterans and their families deserve, and one that America can take pride in.” The most vehement of VA’s opponents would say that pouring additional millions of dollars into such a moribund entity is futile care. Yet the secretary and thousands of VA patients, staff, and supporters believe that the agency that President Lincoln created at the end of the bloodiest war in U.S. history still has value and can be restored to meaningful service for those who have, who are, and who will place their lives on the line for their country.

Author and Disclosure Information

Author disclosures
The author reports no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the author and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies.

Issue
Federal Practitioner - 34(7)
Page Number
7-8



The surgical sky may not be falling


Unlike Dr. Elsey (“Surgery can be demanding work: Grit needed,” Letter to the Editor, May 2017, p. 6) and many others in various surgical publications, I have NOT enjoyed recent discussions about my generation’s perceived lack of readiness for independent practice following general surgery residency. Having been subjected to another round this month of “Why The Surgical Sky Is Falling,” I would like to take a moment to offer a different viewpoint.

I graduated from Tufts Medical Center’s general surgery residency in June 2014. After taking the written board exams, I started practice in a hospital-based group in Maine that same summer. My partners, both with 20+ years of experience, instituted a probationary period for observation of skill (ostensibly, and with good-natured teasing, to ensure I would not harm their patients, though I suspect such a thing is fairly universal for a new grad to receive institutional privileges), and, after convincing them I was not a reckless maniac, within a few months I was “on my own” in the operating room. I relied heavily on colleagues those first 18 months in practice, and, if they ever grew weary of my asking advice about hemorrhoids, biliary colic, and diverticular disease, they never displayed perceptible annoyance. They were, and are, the best mentors I could have had.

Dr. Thomas E. Crosslin III
I learned quickly that residency cannot teach you everything. In fact, residency doesn’t begin to teach you half of what you learn in the first year of independent practice. What my residency did – and what I humbly believe should be the focus for all surgical education – is provide a repetition of fundamentals that allowed me to make myself ready for independence when the time came. Anyone can do a Whipple as a chief resident when they’re scrubbed with a hepatobiliary surgical oncologist. What isn’t so easy is trying to keep from shaking your way through the first solo laparoscopic cholecystectomy. No amount of training can prepare you fully for the first independent moment in the operating room, and let’s please not pretend otherwise.

Metrics and studies that rely on resident self-evaluation – and conversely, ones that rely on “objective” identification of resident strengths and weaknesses by faculty – are subject to the very bias that has dominated this argument for years. If you tell us we are not good enough or lacking in some capacity, often enough, we inevitably will start to believe it. Then, you will reinforce that same belief in your perception of us, which drives the wedge further into an increasingly irreconcilable situation.

Unlike Dr. Elsey (“Surgery can be demanding work: Grit needed,” Letter to the Editor, May 2017, p. 6) and many others in various surgical publications, I have NOT enjoyed recent discussions about my generation’s perceived lack of readiness for independent practice following general surgery residency. Having been subjected to another round this month of “Why The Surgical Sky Is Falling,” I would like to take a moment to offer a different viewpoint.

I graduated from Tufts Medical Center’s general surgery residency in June 2014. After taking the written board exams, I started practice in a hospital-based group in Maine that same summer. My partners, both with 20+ years of experience, instituted a probationary period for observation of skill (ostensibly, and with good-natured teasing, to ensure I would not harm their patients, though I suspect such a thing is fairly universal for a new grad to receive institutional privileges), and, after convincing them I was not a reckless maniac, within a few months I was “on my own” in the operating room. I relied heavily on colleagues those first 18 months in practice, and, if they ever grew weary of my asking advice about hemorrhoids, biliary colic, and diverticular disease, they never displayed perceptible annoyance. They were, and are, the best mentors I could have had.

I learned quickly that residency cannot teach you everything. In fact, residency doesn’t begin to teach you half of what you learn in the first year of independent practice. What my residency did – and what I humbly believe should be the focus for all surgical education – is provide a repetition of fundamentals that allowed me to make myself ready for independence when the time came. Anyone can do a Whipple as a chief resident when they’re scrubbed with a hepatobiliary surgical oncologist. What isn’t so easy is trying to keep from shaking your way through the first solo laparoscopic cholecystectomy. No amount of training can prepare you fully for the first independent moment in the operating room, and let’s please not pretend otherwise.

Metrics and studies that rely on resident self-evaluation – and conversely, ones that rely on “objective” identification of resident strengths and weaknesses by faculty – are subject to the very bias that has dominated this argument for years. If you tell us often enough that we are not good enough or lacking in some capacity, we inevitably will start to believe it. Then, you will reinforce that same belief in your perception of us, which drives the wedge further into an increasingly irreconcilable situation.

I had a decent self-opinion of my surgical skill as a chief resident, but, on any given day, the number I would have assigned to my own “readiness” for independence would have varied greatly for any number of reasons. I did not contend with much in the way of spirited discouragement or admonishment regarding my skill progression over 5 years, but, in keeping with the “gritty” surgical personality espoused by Dr. Elsey in his letter, I’m not sure I would have let that stop me. Honestly though, it’s impossible to say how it would have affected my confidence to leave residency straight for attendinghood had I been subjected to daily thrashings over 5 years regarding my lack of attending-level skill.

It seems to me that some of the current teaching generation have displayed an inability to connect with their pupils. The majority of surgical residents in 2017 are millennials, and the “good old ways” of teaching through guilt, embarrassment, and punitive action will not work. Browbeaters need not apply, for you already have lost this war. For better or worse, educators must find a way to engage these residents on a positive emotional level at the same time as they engage on a higher intellectual plane.

Before the coffee spurts across your OR lounge and the surgical hats start flying fast and furious, let me clarify: In no way do I support the notion that general surgery residents should be coddled, pampered, or emotionally shielded from the gut-wrenching difficulty of practicing surgery. It was imperative in my education that I learned how to be wrong, how to admit it, and how to take ownership of my actions, whether right or wrong. Thankfully, I had a few good examples in Boston, and I’ll never forget the impact they made on my education. But, those lessons were reinforced in a way that made me WANT to weave them into the fabric of my surgical life. Never a heavy-handed dictum; without ego or audience; lacking the morose condescension associated with “those giants” of classical surgical training – what I received in my training was a whole-person engagement that fulfilled my desire to succeed and allowed me the room to grow up as an adult learner without feeling too akin to a 16-year-old, grounded and without car keys, when I had the audacity to make a mistake. Some tried this tack, but my grit won. Somewhere in Lawrenceville, Ga., I hope Dr. Elsey is smiling.

Those who taught best in my residency did so by example. They did it by letting me drive the ship, by giving credit when I did well, by educating when I did not. They did it by making me understand a patient is not a statistic, that you can be honest and kind and a giver of hope all at the same time and that a true surgeon does not need to brag and boast about her accomplishments, nor does he imperiously tear down those lower than himself on the “hierarchy.” The best of the best at Tufts Medical Center showed me what it means when a good person sits in an exam room with a hurting human being and starts the healing process with a kind smile, a gentle touch, words of reassurance, and confidence in his ability to change that patient’s life for the better.

Could it be that we need more of that – and less devotion to metrics – in surgical education? What might training become if we focus entirely on the patient and stop worrying about how the statistics make us all look? What would happen if educators traded nostalgia for engagement with their pupils? It may just be me, but all that sounds suspiciously ... old school, no?

So, before I have to choke down another article explaining how my contemporaries and I represent a kind of global warming to the long-established surgical polar ice caps, let me assure you that at least one young whippersnapper made it out of modern (read: postduty hours) surgical training and actually found a little success – and more than a bit of professional satisfaction – in the unforgiving world of independent general surgery by adhering to the same principles that guided Zollinger and DeBakey, Graham and Fisher: Do what is right for the patient, every single time, to the very best of your God-given and man-made ability. Those are some time-tested lessons I am very proud to have learned.

And, if you want the real story about my 3 years in practice, talk to my partners here in Maine. There is no critique quite like daily proximity. For what it’s worth, they have tolerated me splendidly.

Dr. Crosslin is a general surgeon practicing in Rockport, Maine, and an FACS Initiate, October 2017.

 


Nonpathologic Postdeployment Transition Symptoms in Combat National Guard Members and Reservists


The rigid dichotomy between combat deployment and postdeployment environments necessitates a multitude of cognitive, behavioral, and emotional adjustments for National Guard members and reservists to resume postdeployment civilian lifestyles successfully. Reacclimating to the postdeployment world is not a quick process for these veterans; adjusting from a deeply ingrained military combat mentality to civilian life takes time. The process of this reintegration into the civilian world is known as postdeployment transition.

More than half of post-9/11 combat veterans report at least some difficulty with postdeployment transition.1,2 Frequently encountered symptoms of this period include impaired sleep, low frustration tolerance, decreased attention, poor concentration, short-term memory deficits, and difficulty with emotional regulation.1,3,4 Veterans will have difficulty reintegrating into the family unit and society without successful coping strategies to address these symptoms. If transition symptoms are prolonged, veterans are at risk for developing chronic adjustment difficulty or mental health issues.

Although there is significant attention paid to postdeployment adjustment by military family advocacy groups, there is little information in the medical literature on what constitutes common, nonpathologic postdeployment reactions among combat veterans. Frequently, when postdeployment transition symptoms are discussed, the medical literature tends to explain these in the context of a mental health disorder, such as posttraumatic stress disorder (PTSD), or a cognitive injury, such as traumatic brain injury.5-8 Without a balanced understanding of normal postdeployment transitions, a health care provider (HCP) inappropriately may equate transition symptoms with the presence of mental health disorders or cognitive injury and medicalize the coping strategies needed to promote healthy adjustment.

The purpose of this article is to promote HCP awareness of common, nonpathologic postdeployment transition symptoms in combat veterans who are National Guard members or reservists. Such knowledge will enable HCPs to evaluate transition symptoms among these combat veterans reentering the civilian world, normalize common transition reactions, and recognize when further intervention is needed. This article reflects the author’s experience as a medical director working in a VA postdeployment clinic combined with data available in the medical literature and lay press.

Postdeployment Transition Symptoms

Dysregulation of emotional expression in returning combat veterans potentially can be present throughout the postdeployment period of adjustment. Although individual experiences vary widely in intensity and frequency, during postdeployment transition veterans often note difficulty in adjusting emotional expression to match that of nonmilitary counterparts.1,9-11 These difficulties usually fall into 2 broad categories: (1) relative emotional neutrality to major life events that cause nonmilitary civilians great joy or sadness; and (2) overreaction to trivial events, causing significant irritation, anger, or sadness that normally would not produce such emotional reactions in nonmilitary civilians. The former is largely overlooked in medical literature to date except in relation to the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) categories, and the latter is often described in limited terms as increased irritability, restlessness, and low frustration tolerance. This emotional dysregulation creates confusing paradoxes for veterans. For example, a veteran might feel no strong emotion when notified of the death of a close relative and yet cry uncontrollably while watching a sad scene in a fictional movie.

Sleep difficulties are intrinsic to the postdeployment period.9-12 Sleep-wake cycles often are altered, reflecting residual effects of the rigid schedules required by military duties and poor sleep hygiene in the combat theater. Inadequate, nonrestful sleep is frequently reported on return to the civilian world. Difficulty falling asleep or difficulty staying asleep also commonly occurs. Nightmares may be present.

Transient difficulty with concentration and attention is often prominent within the postdeployment transition period.9-11,13 Manifestations are variable, but problems with focusing on minor tasks are commonly reported. A more intense effort to master new concepts may be required. Learning styles developed during predeployment phases may be altered so that more conscious effort is required to comprehend and retain new information.

Short-term memory frequently may be affected during postdeployment transition.9-11,13 Veterans often report postdeployment difficulty in recalling appointments or tasks that must be completed even if they had a keen sense of memory during predeployment or deployment. Veterans also may have difficulty recalling the details of specific routines that were done without hesitation during deployment. Compared with predeployment time frames, veterans may exert greater effort to recall newly learned material.

Automatic behaviors necessary for survival in a combat theater still may be prominent in the postdeployment period.10,11,14 Aggressive driving required to avoid deployment ambush may be problematic during the postdeployment transition. Steering clear of any roadside trash may be a residual instinctive drive postdeployment because of the risk of improvised explosive devices concealed by debris in the combat theater. Veterans may avoid sitting with their back to the exit as the result of military safety training. Carrying weapons to ensure safety may be a compelling urge, because being armed and ready at all times was necessary for survival during deployment. Avoiding large crowds may be another strong tendency, because throngs of people were associated with potential danger in the combat theater.

Decision making may be challenging to resume in the postdeployment phase.9-11,15 In the deployment theater, time is relatively structured with rules in place, whereas at home veterans face a myriad of choices and decisions that must be made to complete the responsibilities of everyday living. As a result, making decisions about what item to buy, which clothes to wear, or what activities to prioritize, though relatively minor, can be a source of significant frustration. It may be difficult to independently navigate a realm of options available for new employment, schooling, or benefits, especially when there is little or no prior experience with these issues.

Relationship of Symptoms to Mental Health Diagnoses

Postdeployment transition symptoms do not automatically indicate the presence of an underlying mental health diagnosis. However, persistent and/or severe symptoms of postdeployment transition can overlap with or contribute to the development of mental health concerns (Table 1).14 The effects of the emotional disconnect also can exacerbate underlying mental health diagnoses.

While postdeployment emotional numbness to major life events, irritability, sleep disturbances, and impaired concentration can be associated with acute stress disorder (ASD) or PTSD, there is a constellation of other symptoms that must be present to diagnose these psychiatric conditions.16 Diagnostic criteria include persistent intrusive symptoms associated with the trauma, persistent avoidance of triggers/reminders associated with the trauma, significant changes in physiologic and cognitive arousal states, and negative changes in mood or cognition related to the trauma.16 The symptoms must cause significant impairment in some aspect of functioning on an individual, social, or occupational level. Acute stress disorder occurs when the symptoms last 30 days or less, whereas PTSD is diagnosed if the symptoms persist longer than a month.

Impaired emotional regulation, sleep disturbances, and decreased concentration also can be associated with depression or anxiety but are insufficient in themselves to make the diagnosis of those disorders.16 At least a 2-week history of depressed mood or inability to experience interest or pleasure in activities must be present as one of the criteria for depression as well as 4 or more other symptoms affecting sleep, appetite, energy, movement, self-esteem, or suicidal thoughts. Anxiety disorders have varying specific diagnostic criteria, but recurrent excessive worrying is a hallmark. Just like ASD or PTSD, the diagnostic symptoms of either depression or anxiety disorders must be causing significant impairment in functioning on an individual, social, or occupational level.

Irritability, sleep disturbances, agitation, memory impairment, and difficulty with concentration and attention can mimic the symptoms associated with mild-to-moderate traumatic brain injury (TBI).17,18 However, symptom onset must have a temporal relationship with a TBI. The presence of other TBI symptoms not associated with normal postdeployment transition usually can be used to differentiate between the diagnoses. Those TBI symptoms include recurrent headaches, poor balance, dizziness, tinnitus, and/or light sensitivity. In the majority of mild TBI cases, the symptoms resolve spontaneously within 3 months of TBI symptom manifestation.16,19 For those with persistent postconcussive syndrome, symptoms usually stabilize or improve over time.18,19 If symptoms worsen, there is often a confounding diagnosis such as PTSD or depression.17,20,21

Some returning combat veterans mistakenly believe postdeployment emotional transition symptoms are always a sign of a mental health disorder. Because there is a significant stigma associated with mental health disorders as well as potential repercussions on their service record if they use mental health resources, many reservists and National Guard members avoid accessing health care services if they are experiencing postdeployment adjustment issues, especially if those symptoms are related to emotional transitions.22-24 Unfortunately, such avoidance carries the risk that stress-inducing symptoms will persist and potentiate adjustment problems.

Course of Symptoms

The range for the postdeployment adjustment period generally falls within 3 to 12 months but can extend longer, depending on individual factors.10,11,25 Factors include presence of significant physical injury or illness, co-occurrence of mental health issues, underlying communication styles, and efficacy of coping strategies chosen. Although there is no clear-cut time frame for transition, ideally transition is complete when the returning veteran successfully enters his or her civilian lifestyle roles and feels a sense of purpose and belonging in society.

Postdeployment transition symptoms occur on a continuum in terms of duration and intensity for reservists and National Guard members. It is difficult to predict how specific transition symptoms will affect a particular veteran. The degree to which those symptoms will complicate reintegration depends on the individual veteran’s ability to adapt within the psychosocial context in which the symptoms occur. For example, minor irritation may be short-lived if a veteran can employ techniques to diffuse that feeling. Alternatively, minor irritation also suddenly may explode into a powerful wave of anger if the veteran has significant underlying emotional tension. Similarly, impaired short-term memory may be limited to forgetting a few appointments or may be so common that the veteran is at risk of losing track of his or her day. The level of memory impairment depends on emotional functioning, co-occurring stressors, and use of adaptive strategies.

In general, as these veterans successfully take on civilian routines, postdeployment transition symptoms will improve. Although such symptom improvement may be a passive process for some veterans, others will need to actively employ strategies to help change the military combat mind-set. The goal is to initiate useful interventions early in transition before symptoms become problematic.14

There are numerous self-help techniques and mobile apps that can be applied to a wide range of symptoms. Viable strategies include exercise, yoga, meditation, mindfulness training, and cognitive reframing. Reaching out for early assistance from various military assistance organizations that are well versed in dealing with postdeployment transition challenges often is helpful for reducing stress and navigating postdeployment obstacles (Table 2).

Symptom Strain and Exacerbation

Whenever stumbling blocks are encountered during the postdeployment period, any transition symptom can persist and/or worsen.10,11,14 Emotional disconnect and other transition symptoms can be exacerbated by physical, psychological, and social stressors common in the postdeployment period. Insomnia, poor quality sleep, or other sleep impairments that frequently occur as part of postdeployment transition can negatively impact the veteran’s ability to psychologically cope with daytime stressors. Poor concentration and short-term memory impairment noted by many reservists and National Guard members in the postdeployment phase can cause increased difficulty in attention to the moment and complicate completion of routine tasks. These difficulties can compound frustration and irritation to minor events and make it hard to emotionally connect with more serious issues.

Concentration and attention to mundane activities may be further reduced if the veteran feels no connection to the civilian world and/or experiences the surreal sensation that he or she should be attending to more serious life and death matters, such as those experienced in the combat theater. Ongoing psychological adjustment to physical injuries sustained during deployment can limit emotional flexibility when adapting to either minor or major stressors. Insufficient financial resources, work issues, or school problems can potentiate irritation, anger, and sadness and create an overwhelming emotional overload, leading to helplessness and hopelessness.

Perceived irregularities in emotional connection to the civilian world can significantly strain interpersonal relationships and be powerful impediments to successful reintegration.9,11,14 Failure to express emotions to major life events in the civilian world can result in combat veterans being viewed as not empathetic to others’ feelings. Overreaction to trivial events during postdeployment can lead to the veteran being labeled as unreasonable, controlling, and/or unpredictable. Persistent emotional disconnect with civilians engenders a growing sense of emotional isolation from family and friends when there is either incorrect interpretation of emotional transitions or failure to adapt healthy coping strategies. This isolation further enlarges the emotional chasm and may greatly diminish the veteran’s ability to seek assistance and appropriately address stressors in the civilian world.

Transition and the Family

Emotional disconnection may be more acutely felt within the immediate family unit.26 Redistribution of family unit responsibilities during deployment may mean that roles the veteran played during predeployment now may be handled by a partner. On the veteran’s return to the civilian world, such circumstances require active renegotiation of duties. Interactions with loved ones, especially children, may be colored by the family members’ individual perspectives on deployment as well as by the veteran’s transition symptoms. When there is disagreement about role responsibilities and/or underlying family resentment about deployment, conditions are ripe for significant discord between the veteran and family members, vital loss of partner intimacy, and notable loss of psychological safety to express feelings within the family unit. If there are concerns about infidelity by the veteran or significant other during the period of deployment, postdeployment tensions can further escalate. If unaddressed in the presence of emotional disconnect, any of these situations can raise the risk of domestic violence and destruction of relationships.

Without adequate knowledge of common postdeployment transitions and coping strategies, the postdeployment transition period is often bewildering to returning veterans and their families. They are taken aback by postdeployment behaviors that do not conform to the veteran’s predeployment personality or mannerisms. Families may feel they have “lost” the veteran and view the emotionally distant postdeployment veteran as a stranger. Veterans mistakenly may view the postdeployment emotional disconnect as evidence that they were permanently altered by deployment and no longer can assimilate into the civilian world. Unless veterans and families develop an awareness of the postdeployment transition symptoms and healthy coping strategies, these perspectives can contribute to a veteran’s persistent feelings of alienation, significant sense of personal failure, and loss of vital social supports.

When transition symptoms are, or have the potential to become, significant stressors, veterans would benefit from mental health counseling, either individually or with family members. Overcoming the stigma of seeking mental health services can prove challenging. Explaining that these postdeployment symptoms are common, stem from military combat training, and are reversible, and that reversing them empowers the individual to regain control of his or her life, may help veterans overcome the stigma and seek care.

The fear of future career impairment within the military reserve or National Guard is another real concern for members of this cohort considering behavioral health care, especially because VA mental health records can be accessed by DoD officials through links with the VHA. Fortunately, this concern can be alleviated through the use of Vet Centers: free-standing counseling centers nationwide that offer no-cost individual and family counseling to veterans with combat exposure. Vet Center counseling records are completely confidential, are never shared, and are not linked to the VHA electronic health record, the DoD, or any other entity. Although Vet Center providers do not prescribe medications, the counselors can actively address many issues for veterans and their families. For individuals who do not live near a Vet Center or who require psychiatric medications, a frank discussion of the benefits of treatment vs the risk of treatment avoidance must be held.

Assessing Symptoms and Coping Mechanisms

Postdeployment transition symptoms vary, depending on the nature and context of the symptom. Not only must the returning reservist and National Guard member be screened for symptoms, but HCPs also should assess the impact of those symptoms on the veteran and his or her interpersonal relationships. Some veterans will feel that the symptoms have relatively minor impact on their lives, because the veteran can easily compensate for the transient effects. Others may feel that the symptoms are somewhat burdensome because the issues are complicating the smooth transition to civilian roles. Still others will judge the symptoms to be devastating because of the negative effects on personal control, self-esteem, and emotional connection with family and friends.

In addition to screening for symptoms, HCPs should assess these veterans’ current coping adaptations to various transition symptoms. Whereas some activities may be functional and promote reintegration, other short-term coping solutions may cripple the veteran’s ability to successfully resume civilian life. Global avoidance of communication with others and/or retreating from all social situations is a destructive coping pattern that can further alienate veterans from their families and the civilian world. Reacting with anger to all stressful issues is another maladaptive pattern of coping with life’s frustrations. Because veterans may self-medicate when dealing with social difficulties, depression, anxiety, or other mental health diagnoses, they may develop an inappropriate reliance on drugs or alcohol to handle postdeployment stressors.27 Therefore, HCP screening for substance use disorders (SUDs) is important so that interventions can be initiated early.

Because of the overlap of postdeployment transition symptoms with mental health disorders and the relative frequency of those mental health disorders among combat veterans, HCPs should have a heightened awareness of the potential for co-occurring mental health difficulties in the postdeployment reservist and National Guard cohort. Health care providers should screen for depression, anxiety, and PTSD. Even if initial screening is done early within the transition period, repeat screening would be of benefit 6 months into the postdeployment period because of the tendency of mental health issues to develop during that time.28,29

By evaluating the impact of the transition symptom and coping strategies on these veterans’ lives, HCPs can better determine which strategies might adequately compensate for symptom effects. In general, informal counseling, even if just to help veterans normalize postdeployment transition symptoms and develop a plan to address such symptoms, can significantly minimize the negative impact of transition symptoms.14,26 Specific symptoms should be targeted by interventions that match the degree of symptom impact.

Symptoms to be aggressively addressed are those that significantly interfere with successful reintegration into the civilian world. For example, persistent sleep difficulties should be dealt with because they can worsen all other transition symptoms. However, the majority of strategies to address sleep do not require medication unless there are confounding factors such as severe nightmares. Minor memory issues attributed to the transition phase can be mitigated by several strategies to improve recall, including use of task lists, digital calendars, or other memory-prodding techniques. However, severe memory issues related to depression or anxiety likely would require pharmaceutical assistance and formal counseling in addition to other nonpharmacologic approaches.

Intermittent irritation or restlessness may be amenable to selfhelp strategies, but significant anger outbursts or aggression will require additional support, such as formal behavioral interventions to help identify the triggers and develop strategic plans to reduce emotional tension. A mild sense of not belonging may resolve without intervention, but a stronger sense of alienation will require further evaluation.

Conclusion

Civilian reintegration after combat deployment is a gradual process rather than a discrete event for reservists and National Guard members. There are common, nonpathologic postdeployment transition symptoms that, if misunderstood or inappropriately addressed, can complicate civilian reintegration. Health care providers are in the unique position to promote a healthy postdeployment transition by assisting veterans to recognize nonpathologic transition symptoms, select appropriate coping strategies, and seek further assistance for more complex problems.

References

1. Pew Research Center. War and sacrifice in the post 9/11 era: executive summary. http://www.pewsocialtrends.org/2011/10/05/war-and-sacrifice-in-the-post-911-era. Published October 5, 2011. Accessed June 12, 2017.

2. Interian A, Kline A, Callahan L, Losonczy M. Readjustment stressors and early mental health treatment seeking by returning National Guard soldiers with PTSD. Psychiatr Serv. 2012;63(9):855-861.

3. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209.

4. Vasterling JJ, Daily ES, Friedman MJ. Posttraumatic stress reactions over time: the battlefield, homecoming, and long-term course. In: Ruzek JI, Schnurr PP, Vasterling JJ, Friedman MJ, eds. Caring for Veterans With Deployment-Related Stress Disorders: Iraq, Afghanistan, and Beyond. Washington, DC: American Psychological Association; 2011:chap 2.

5. Wilcox SL, Oh H, Redmon SA, et al. A scope of the problem: postdeployment reintegration challenges in a National Guard unit. Work. 2015;50(1):73-83.

6. Griffith J. Homecoming of citizen soldiers: Postdeployment problems and service use among Army National Guard soldiers. Community Ment Health J. 2017. doi:10.1007/s10597-017-0132-9. (Epub ahead of print)

7. Schultz M, Glickman ME, Eisen SV. Predictors of decline in overall mental health, PTSD and alcohol use in OEF/OIF veterans. Compr Psychiatry. 2014;55(7):1654-1664.

8. Polusny MA, Kehle SM, Nelson NW, Erbes CR, Arbisi PA, Thuras P. Longitudinal effects of mild traumatic brain injury and posttraumatic stress disorder comorbidity on postdeployment outcomes in National Guard soldiers deployed to Iraq. Arch Gen Psychiatry. 2011;68(1):79-89.

9. U.S. Department of Veterans Affairs, National Center for PTSD. Returning from the war zone: a guide for military personnel. http://www.ptsd.va.gov/public/reintegration/guide-pdf/SMGuide.pdf. Updated January 2014. Accessed June 12, 2017.

10. Slone LB, Friedman MJ. After the War Zone: A Practical Guide for Returning Troops and their Families. Philadelphia, PA: Da Capo Press; 2008.

11. Ainspan ND, Penk WE, eds. When the Warrior Returns: Making the Transition at Home. Annapolis, MD: Naval Institute Press; 2012.

12. Yosick T, Bates M, Moore M, Crowe C, Phillips J, Davison J. A review of post-deployment reintegration: evidence, challenges, and strategies for program development. http://www.dcoe.mil/files/Review_of_Post-Deployment_Reintegration.pdf. Published February 10, 2012. Accessed June 12, 2017.

13. Vasterling JJ, Proctor SP, Amoroso P, Kane R, Heeren T, White RF. Neuropsychological outcomes of army personnel following deployment to the Iraq war. JAMA. 2006;296(5):519-529.

14. Castro CA, Kintzle S, Hassan AM. The combat veteran paradox: paradoxes and dilemmas encountered with reintegrating combat veterans and the agencies that support them. Traumatology. 2015;21(4):299-310.

15. Rivers FM, Gordon S, Speraw S, Reese S. U.S. Army nurses’ reintegration and homecoming experiences after Iraq and Afghanistan. Mil Med. 2013;178(2):166-173.

16. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington, VA: American Psychiatric Association; 2013.

17. Tanielian T, Jaycox LH, eds. Invisible Wounds of War: Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND Corporation; 2008.

18. Orff HJ, Hays CC, Heldreth AA, Stein MB, Twamley EW. Clinical considerations in the evaluation and management of patients following traumatic brain injury. Focus. 2013;11(3):328-340.

19. Morissette SB, Woodward M, Kimbrel NA, et al. Deployment-related TBI, persistent postconcussive symptoms, PTSD, and depression in OEF/OIF veterans. Rehabil Psychol. 2011;56(4):340-350.

20. Polusny MA, Kehle SM, Nelson NW, Erbes CR, Arbisi PA, Thuras P. Longitudinal effects of mild traumatic brain injury and posttraumatic stress disorder comorbidity on postdeployment outcomes in national guard soldiers deployed to Iraq. Arch Gen Psychiatry. 2011;68(1):79-89.

21. Wilk JE, Herrell RK, Wynn GH, Riviere LA, Hoge CW. Mild traumatic brain injury (concussion), posttraumatic stress disorder, and depression in U.S. soldiers involved in combat deployments: association with postdeployment symptoms. Psychosom Med. 2012;74(3):249-257.

22. Hoge CW, Grossman SH, Auchterlonie JL, Riviere LA, Milliken CS, Wilk JE. PTSD treatment for soldiers after combat deployment: low utilization of mental health care and reasons for dropout. Psychiatr Serv. 2014;65(8):997-1004.

23. Hines LA, Goodwin L, Jones M, et al. Factors affecting help seeking for mental health problems after deployment to Iraq and Afghanistan. Psychiatr Serv. 2014;65(1):98-105.

24. Gorman LA, Blow AJ, Ames BD, Read PL. National Guard families after combat: mental health, use of mental health services, and perceived treatment barriers. Psychiatr Serv. 2011;62(1):28-34.

25. Marek LI, Hollingsworth WG, D’Aniello C, et al. Returning home: what we know about the reintegration of deployed service members into their families and communities. https://www.ncfr.org/ncfr-report/focus/military-families/returninghome. Published March 1, 2012. Accessed June 13, 2017.

26. Bowling UB, Sherman MD. Welcoming them home: supporting service members and their families in navigating the tasks of reintegration. Prof Psychol Res Pr. 2008;39(4):451-458.

27. Jacobson IG, Ryan MA, Hooper TI, et al. Alcohol use and alcohol-related problems before and after military combat deployment. JAMA. 2008;300(6):663-675.

28. Seal KH, Metzler TH, Gima KS, Bertenthal D, Maguen S, Marmar CR. Trends and risk factors for mental health diagnoses among Iraq and Afghanistan veterans using Department of Veterans Affairs health care, 2002-2008. Am J Public Health. 2009;99(9):1651-1658.

29. Milliken CS, Auchterlonie JL, Hoge CW. Longitudinal assessment of mental health problems among active and reserve component soldiers returning from the Iraq war. JAMA. 2007;298(18):2141-2148.

Author and Disclosure Information

Dr. Mitchell is the VISN 22 Specialty Care Medicine Lead and former medical director of the Phoenix VAMC Postdeployment Clinic in Arizona.

Author disclosures
The author reports no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the author and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies.

Issue
Federal Practitioner - 34(7), pages 16-22

The rigid dichotomy between the combat deployment and postdeployment environments necessitates a multitude of cognitive, behavioral, and emotional adjustments before National Guard members and reservists can successfully resume civilian lifestyles. Reacclimating to the postdeployment world is not a quick process for these veterans, because adjusting from a deeply ingrained military combat mentality to civilian life takes time. The process of this reintegration into the civilian world is known as postdeployment transition.

More than half of post-9/11 combat veterans report at least some difficulty with postdeployment transition.1,2 Frequently encountered symptoms of this period include impaired sleep, low frustration tolerance, decreased attention, poor concentration, short-term memory deficits, and difficulty with emotional regulation.1,3,4 Veterans will have difficulty reintegrating into the family unit and society without successful coping strategies to address these symptoms. If transition symptoms are prolonged, veterans are at risk for developing chronic adjustment difficulty or mental health issues.

Although there is significant attention paid to postdeployment adjustment by military family advocacy groups, there is little information in the medical literature on what constitutes common, nonpathologic postdeployment reactions among combat veterans. When postdeployment transition symptoms are discussed, the medical literature tends to frame them in the context of a mental health disorder, such as posttraumatic stress disorder (PTSD), or a cognitive injury, such as traumatic brain injury.5-8 Without a balanced understanding of normal postdeployment transitions, a health care provider (HCP) may inappropriately equate transition symptoms with the presence of mental health disorders or cognitive injury and medicalize the coping strategies needed to promote healthy adjustment.

The purpose of this article is to promote HCP awareness of common, nonpathologic postdeployment transition symptoms in combat veterans who are National Guard members or reservists. Such knowledge will enable HCPs to evaluate transition symptoms among these combat veterans reentering the civilian world, normalize common transition reactions, and recognize when further intervention is needed. This article reflects the author’s experience as a medical director working in a VA postdeployment clinic combined with data available in the medical literature and lay press.

Postdeployment Transition Symptoms

Dysregulation of emotional expression in returning combat veterans potentially can be present throughout the postdeployment period of adjustment. Although individual experiences vary widely in intensity and frequency, during postdeployment transition veterans often note difficulty in adjusting emotional expression to match that of nonmilitary counterparts.1,9-11 These difficulties usually fall into 2 broad categories: (1) relative emotional neutrality to major life events that cause nonmilitary civilians great joy or sadness; and (2) overreaction to trivial events, causing significant irritation, anger, or sadness that normally would not produce such emotional reactions in nonmilitary civilians. The former is largely overlooked in medical literature to date except in relation to the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) categories, and the latter is often described in limited terms as increased irritability, restlessness, and low frustration tolerance. This emotional dysregulation creates confusing paradoxes for veterans. For example, a veteran might feel no strong emotion when notified of the death of a close relative and yet cry uncontrollably while watching a sad scene in a fictional movie.

Sleep difficulties are intrinsic to the postdeployment period.9-12 Sleep-wake cycles often are altered, reflecting residual effects of the rigid schedules required by military duties and poor sleep hygiene in the combat theater. Inadequate, nonrestful sleep is frequently reported on return to the civilian world. Difficulty falling asleep or difficulty staying asleep also commonly occurs. Nightmares may be present.

Transient difficulty with concentration and attention is often prominent within the postdeployment transition period.9-11,13 Manifestations are variable, but problems with focusing on minor tasks are commonly reported. A more intense effort to master new concepts may be required. Learning styles developed during predeployment phases may be altered so that more conscious effort is required to comprehend and retain new information.

Short-term memory frequently may be affected during postdeployment transition.9-11,13 Veterans often report postdeployment difficulty in recalling appointments or tasks that must be completed even if they had a keen sense of memory during predeployment or deployment. Veterans also may have difficulty recalling the details of specific routines that were done without hesitation during deployment. Compared with predeployment time frames, veterans may exert greater effort to recall newly learned material.

Automatic behaviors necessary for survival in a combat theater still may be prominent in the postdeployment period.10,11,14 Aggressive driving required to avoid deployment ambush may be problematic during the postdeployment transition. Steering clear of any roadside trash may be a residual instinctive drive postdeployment because of the risk of improvised explosive devices concealed by debris in the combat theater. Veterans may avoid sitting with their back to the exit as the result of military safety training. Carrying weapons to ensure safety may be a compelling urge, because being armed and ready at all times was necessary for survival during deployment. Avoiding large crowds may be another strong tendency, because throngs of people were associated with potential danger in the combat theater.

Decision making may be challenging to resume in the postdeployment phase.9-11,15 In the deployment theater, time is relatively structured with rules in place, whereas at home veterans face a myriad of choices and decisions that must be made to complete the responsibilities of everyday living. As a result, making decisions about what item to buy, which clothes to wear, or what activities to prioritize, though relatively minor, can be a source of significant frustration. It may be difficult to independently navigate the realm of options available for new employment, schooling, or benefits, especially when there is little or no prior experience with these issues.

Relationship of Symptoms to Mental Health Diagnoses

Postdeployment transition symptoms do not automatically indicate the presence of an underlying mental health diagnosis. However, persistent and/or severe symptoms of postdeployment transition can overlap with or contribute to the development of mental health concerns (Table 1).14 The effects of the emotional disconnect also can exacerbate underlying mental health diagnoses.

While postdeployment emotional numbness to major life events, irritability, sleep disturbances, and impaired concentration can be associated with acute stress disorder (ASD) or PTSD, a constellation of other symptoms must be present to diagnose these psychiatric conditions.16 Diagnostic criteria include persistent intrusive symptoms associated with the trauma, persistent avoidance of triggers/reminders associated with the trauma, significant changes in physiologic and cognitive arousal states, and negative changes in mood or cognition related to the trauma.16 The symptoms must cause significant impairment in some aspect of functioning on an individual, social, or occupational level. Acute stress disorder is diagnosed when the symptoms last from 3 days to 1 month, whereas PTSD is diagnosed when the symptoms persist longer than a month.

Impaired emotional regulation, sleep disturbances, and decreased concentration also can be associated with depression or anxiety but are insufficient in themselves to make the diagnosis of those disorders.16 For depression, at least a 2-week history of depressed mood or inability to experience interest or pleasure in activities must be present, along with 4 or more other symptoms affecting sleep, appetite, energy, movement, self-esteem, or suicidal thoughts. Anxiety disorders have varying specific diagnostic criteria, but recurrent excessive worrying is a hallmark. As with ASD and PTSD, the diagnostic symptoms of depression or anxiety disorders must cause significant impairment in functioning on an individual, social, or occupational level.

Irritability, sleep disturbances, agitation, memory impairment, and difficulty with concentration and attention can mimic the symptoms associated with mild-to-moderate traumatic brain injury (TBI).17,18 However, symptom onset must have a temporal relationship with a TBI. The presence of other TBI symptoms not associated with normal postdeployment transition usually can be used to differentiate between the diagnoses. Those TBI symptoms include recurrent headaches, poor balance, dizziness, tinnitus, and/or light sensitivity. In the majority of mild TBI cases, the symptoms resolve spontaneously within 3 months of TBI symptom manifestation.16,19 For those with persistent postconcussive syndrome, symptoms usually stabilize or improve over time.18,19 If symptoms worsen, there is often a confounding diagnosis such as PTSD or depression.17,20,21

Some returning combat veterans mistakenly believe postdeployment emotional transition symptoms are always a sign of a mental health disorder. Because there is a significant stigma associated with mental health disorders as well as potential repercussions on their service record if they use mental health resources, many reservists and National Guard members avoid accessing health care services if they are experiencing postdeployment adjustment issues, especially if those symptoms are related to emotional transitions.22-24 Unfortunately, such avoidance carries the risk that stress-inducing symptoms will persist and potentiate adjustment problems.

Course of Symptoms

The postdeployment adjustment period generally lasts 3 to 12 months but can extend longer, depending on individual factors.10,11,25 These factors include the presence of significant physical injury or illness, co-occurrence of mental health issues, underlying communication styles, and the efficacy of the coping strategies chosen. Although there is no clear-cut time frame, transition ideally is complete when the returning veteran successfully enters his or her civilian lifestyle roles and feels a sense of purpose and belonging in society.

Postdeployment transition symptoms occur on a continuum in terms of duration and intensity for reservists and National Guard members. It is difficult to predict how specific transition symptoms will affect a particular veteran. The degree to which those symptoms will complicate reintegration depends on the individual veteran’s ability to adapt within the psychosocial context in which the symptoms occur. For example, minor irritation may be short-lived if a veteran can employ techniques to diffuse that feeling. Alternatively, minor irritation also suddenly may explode into a powerful wave of anger if the veteran has significant underlying emotional tension. Similarly, impaired short-term memory may be limited to forgetting a few appointments or may be so common that the veteran is at risk of losing track of his or her day. The level of memory impairment depends on emotional functioning, co-occurring stressors, and use of adaptive strategies.

In general, as these veterans successfully take on civilian routines, postdeployment transition symptoms will improve. Although such symptom improvement may be a passive process for some veterans, others will need to actively employ strategies to help change the military combat mind-set. The goal is to initiate useful interventions early in transition before symptoms become problematic.14

There are numerous self-help techniques and mobile apps that can be applied to a wide range of symptoms. Viable strategies include exercise, yoga, meditation, mindfulness training, and cognitive reframing. Reaching out early to military assistance organizations that are well versed in postdeployment transition challenges often helps reduce stress and navigate postdeployment obstacles (Table 2).

Symptom Strain and Exacerbation

Whenever stumbling blocks are encountered during the postdeployment period, any transition symptom can persist and/or worsen.10,11,14 Emotional disconnect and other transition symptoms can be exacerbated by physical, psychological, and social stressors common in the postdeployment period. Insomnia, poor-quality sleep, or other sleep impairments that frequently occur as part of postdeployment transition can negatively affect the veteran’s ability to psychologically cope with daytime stressors. Poor concentration and short-term memory impairment noted by many reservists and National Guard members in the postdeployment phase can make it harder to stay attentive in the moment and complicate completion of routine tasks. These difficulties can compound frustration and irritation over minor events and make it hard to emotionally connect with more serious issues.

Concentration and attention to mundane activities may be further reduced if the veteran feels no connection to the civilian world and/or experiences the surreal sensation that he or she should be attending to more serious life and death matters, such as those experienced in the combat theater. Ongoing psychological adjustment to physical injuries sustained during deployment can limit emotional flexibility when adapting to either minor or major stressors. Insufficient financial resources, work issues, or school problems can potentiate irritation, anger, and sadness and create an overwhelming emotional overload, leading to helplessness and hopelessness.


Because of the overlap of postdeployment transition symptoms with mental health disorders and the relative frequency of those mental health disorders among combat veterans, HCPs should have a heightened awareness of the potential for co-occurring mental health difficulties in the postdeployment reservist and National Guard cohort. Health care providers should screen for depression, anxiety, and PTSD. Even if initial screening is done early within the transition period, repeat screening would be of benefit 6 months into the postdeployment period because of the tendency of mental health issues to develop during that time.28,29

By evaluating the impact of the transition symptom and coping strategies on these veterans’ lives, HCPs can better determine which strategies might adequately compensate for symptom effects. In general, informal counseling, even if just to help veterans normalize postdeployment transition symptoms and develop a plan to address such symptoms, can significantly minimize the negative impact of transition symptoms.14,26 Specific symptoms should be targeted by interventions that match the degree of symptom impact.

Symptoms to be aggressively addressed are those that significantly interfere with successful reintegration into the civilian world. For example, persistent sleep difficulties should be dealt with because they can worsen all other transition symptoms. However, the majority of strategies to address sleep do not require medication unless there are confounding factors such as severe nightmares. Minor memory issues attributed to the transition phase can be mitigated by several strategies to improve recall, including use of task lists, digital calendars, or other memory-prodding techniques. However, severe memory issues related to depression or anxiety likely would require pharmaceutical assistance and formal counseling in addition to other nonpharmacologic approaches.

Intermittent irritation or restlessness may be amenable to selfhelp strategies, but significant anger outbursts or aggression will require additional support, such as formal behavioral interventions to help identify the triggers and develop strategic plans to reduce emotional tension. A mild sense of not belonging may resolve without intervention, but a stronger sense of alienation will require further evaluation.

Conclusion

Civilian reintegration after combat deployment is a gradual process rather than a discrete event for reservists and National Guard members. There are common, nonpathologic postdeployment transition symptoms that, if misunderstood or inappropriately addressed, can complicate civilian reintegration. Health care providers are in the unique position to promote a healthy postdeployment transition by assisting veterans to recognize nonpathologic transition symptoms, select appropriate coping strategies, and seek further assistance for more complex problems.

The rigid dichotomy between combat deployment and postdeployment environments necessitates a multitude of cognitive, behavioral, and emotional adjustments for National Guard members and reservists to resume postdeployment civilian lifestyles successfully. Reacclimating to the postdeployment world is not a quick process for these veterans because of the time required to adjust from a deeply ingrained military combat mentality to civilian life. The process of this reintegration into the civilian world is known as postdeployment transition.

More than half of post-9/11 combat veterans report at least some difficulty with postdeployment transition.1,2 Frequently encountered symptoms of this period include impaired sleep, low frustration tolerance, decreased attention, poor concentration, short-term memory deficits, and difficulty with emotional regulation.1,3,4 Veterans will have difficulty reintegrating into the family unit and society without successful coping strategies to address these symptoms. If transition symptoms are prolonged, veterans are at risk for developing chronic adjustment difficulty or mental health issues.

Although there is significant attention paid to postdeployment adjustment by military family advocacy groups, there is little information in the medical literature on what constitutes common, nonpathologic postdeployment reactions among combat veterans. Frequently, when postdeployment transition symptoms are discussed, the medical literature tends to explain them in the context of a mental health disorder, such as posttraumatic stress disorder (PTSD), or a cognitive injury, such as traumatic brain injury.5-8 Without a balanced understanding of normal postdeployment transitions, a health care provider (HCP) inappropriately may equate transition symptoms with the presence of mental health disorders or cognitive injury and medicalize the coping strategies needed to promote healthy adjustment.

The purpose of this article is to promote HCP awareness of common, nonpathologic postdeployment transition symptoms in combat veterans who are National Guard members or reservists. Such knowledge will enable HCPs to evaluate transition symptoms among these combat veterans reentering the civilian world, normalize common transition reactions, and recognize when further intervention is needed. This article reflects the author’s experience as a medical director working in a VA postdeployment clinic combined with data available in the medical literature and lay press.

Postdeployment Transition Symptoms

Dysregulation of emotional expression in returning combat veterans potentially can be present throughout the postdeployment period of adjustment. Although individual experiences vary widely in intensity and frequency, during postdeployment transition veterans often note difficulty in adjusting emotional expression to match that of nonmilitary counterparts.1,9-11 These difficulties usually fall into 2 broad categories: (1) relative emotional neutrality to major life events that cause nonmilitary civilians great joy or sadness; and (2) overreaction to trivial events, causing significant irritation, anger, or sadness that normally would not produce such emotional reactions in nonmilitary civilians. The former is largely overlooked in medical literature to date except in relation to the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) categories, and the latter is often described in limited terms as increased irritability, restlessness, and low frustration tolerance. This emotional dysregulation creates confusing paradoxes for veterans. For example, a veteran might feel no strong emotion when notified of the death of a close relative and yet cry uncontrollably while watching a sad scene in a fictional movie.

Sleep difficulties are intrinsic to the postdeployment period.9-12 Sleep-wake cycles often are altered, reflecting residual effects of the rigid schedules required by military duties and poor sleep hygiene in the combat theater. Inadequate, nonrestful sleep is frequently reported on return to the civilian world. Difficulty falling asleep or difficulty staying asleep also commonly occurs. Nightmares may be present.

Transient difficulty with concentration and attention is often prominent within the postdeployment transition period.9-11,13 Manifestations are variable, but problems with focusing on minor tasks are commonly reported. A more intense effort to master new concepts may be required. Learning styles developed during predeployment phases may be altered so that more conscious effort is required to comprehend and retain new information.

Short-term memory frequently may be affected during postdeployment transition.9-11,13 Veterans often report postdeployment difficulty in recalling appointments or tasks that must be completed even if they had a keen sense of memory during predeployment or deployment. Veterans also may have difficulty recalling the details of specific routines that were done without hesitation during deployment. Compared with predeployment time frames, veterans may exert greater effort to recall newly learned material.

Automatic behaviors necessary for survival in a combat theater still may be prominent in the postdeployment period.10,11,14 Aggressive driving required to avoid deployment ambush may be problematic during the postdeployment transition. Steering clear of any roadside trash may be a residual instinctive drive postdeployment because of the risk of improvised explosive devices concealed by debris in the combat theater. Veterans may avoid sitting with their back to the exit as the result of military safety training. Carrying weapons to ensure safety may be a compelling urge, because being armed and ready at all times was necessary for survival during deployment. Avoiding large crowds may be another strong tendency, because throngs of people were associated with potential danger in the combat theater.

Decision making may be challenging to resume in the postdeployment phase.9-11,15 In the deployment theater, time is relatively structured with rules in place, whereas at home veterans face a myriad of choices and decisions that must be made to complete the responsibilities of everyday living. As a result, making decisions about what item to buy, which clothes to wear, or what activities to prioritize, though relatively minor, can be a source of significant frustration. It may be difficult to independently navigate the realm of options available for new employment, schooling, or benefits, especially when there is little or no prior experience with these issues.

Relationship of Symptoms to Mental Health Diagnoses

Postdeployment transition symptoms do not automatically indicate the presence of an underlying mental health diagnosis. However, persistent and/or severe symptoms of postdeployment transition can overlap with or contribute to the development of mental health concerns (Table 1).14 The effects of the emotional disconnect also can exacerbate underlying mental health diagnoses.

While postdeployment emotional numbness to major life events, irritability, sleep disturbances, and impaired concentration can be associated with acute stress disorder (ASD) or PTSD, there is a constellation of other symptoms that must be present to diagnose these psychiatric conditions.16 Diagnostic criteria include persistent intrusive symptoms associated with the trauma, persistent avoidance of triggers/reminders associated with the trauma, significant changes in physiologic and cognitive arousal states, and negative changes in mood or cognition related to the trauma.16 The symptoms must cause significant impairment in some aspect of functioning on an individual, social, or occupational level. Acute stress disorder is diagnosed when symptoms last from 3 days to 1 month after the trauma, whereas PTSD is diagnosed if the symptoms persist longer than a month.

Impaired emotional regulation, sleep disturbances, and decreased concentration also can be associated with depression or anxiety but are insufficient in themselves to make the diagnosis of those disorders.16 At least a 2-week history of depressed mood or inability to experience interest or pleasure in activities must be present as one of the criteria for depression as well as 4 or more other symptoms affecting sleep, appetite, energy, movement, self-esteem, or suicidal thoughts. Anxiety disorders have varying specific diagnostic criteria, but recurrent excessive worrying is a hallmark. Just like ASD or PTSD, the diagnostic symptoms of either depression or anxiety disorders must be causing significant impairment in functioning on an individual, social, or occupational level.

Irritability, sleep disturbances, agitation, memory impairment, and difficulty with concentration and attention can mimic the symptoms associated with mild-to-moderate traumatic brain injury (TBI).17,18 However, symptom onset must have a temporal relationship with a TBI. The presence of other TBI symptoms not associated with normal postdeployment transition usually can be used to differentiate between the diagnoses. Those TBI symptoms include recurrent headaches, poor balance, dizziness, tinnitus, and/or light sensitivity. In the majority of mild TBI cases, the symptoms resolve spontaneously within 3 months of TBI symptom manifestation.16,19 For those with persistent postconcussive syndrome, symptoms usually stabilize or improve over time.18,19 If symptoms worsen, there is often a confounding diagnosis such as PTSD or depression.17,20,21

Some returning combat veterans mistakenly believe postdeployment emotional transition symptoms are always a sign of a mental health disorder. Because there is a significant stigma associated with mental health disorders as well as potential repercussions on their service record if they use mental health resources, many reservists and National Guard members avoid accessing health care services if they are experiencing postdeployment adjustment issues, especially if those symptoms are related to emotional transitions.22-24 Unfortunately, such avoidance carries the risk that stress-inducing symptoms will persist and potentiate adjustment problems.

Course of Symptoms

The range for the postdeployment adjustment period generally falls within 3 to 12 months but can extend longer, depending on individual factors.10,11,25 Factors include presence of significant physical injury or illness, co-occurrence of mental health issues, underlying communication styles, and efficacy of coping strategies chosen. Although there is no clear-cut time frame for transition, ideally transition is complete when the returning veteran successfully enters his or her civilian lifestyle roles and feels a sense of purpose and belonging in society.

Postdeployment transition symptoms occur on a continuum in terms of duration and intensity for reservists and National Guard members. It is difficult to predict how specific transition symptoms will affect a particular veteran. The degree to which those symptoms will complicate reintegration depends on the individual veteran’s ability to adapt within the psychosocial context in which the symptoms occur. For example, minor irritation may be short-lived if a veteran can employ techniques to diffuse that feeling. Alternatively, minor irritation also suddenly may explode into a powerful wave of anger if the veteran has significant underlying emotional tension. Similarly, impaired short-term memory may be limited to forgetting a few appointments or may be so common that the veteran is at risk of losing track of his or her day. The level of memory impairment depends on emotional functioning, co-occurring stressors, and use of adaptive strategies.

In general, as these veterans successfully take on civilian routines, postdeployment transition symptoms will improve. Although such symptom improvement may be a passive process for some veterans, others will need to actively employ strategies to help change the military combat mind-set. The goal is to initiate useful interventions early in transition before symptoms become problematic.14

There are numerous self-help techniques and mobile apps that can be applied to a wide range of symptoms. Viable strategies include exercise, yoga, meditation, mindfulness training, and cognitive reframing. Reaching out for early assistance from various military assistance organizations that are well versed in dealing with postdeployment transition challenges often is helpful for reducing stress and navigating postdeployment obstacles (Table 2).

Symptom Strain and Exacerbation

Whenever stumbling blocks are encountered during the postdeployment period, any transition symptom can persist and/or worsen.10,11,14 Emotional disconnect and other transition symptoms can be exacerbated by physical, psychological, and social stressors common in the postdeployment period. Insomnia, poor quality sleep, or other sleep impairments that frequently occur as part of postdeployment transition can negatively impact the veteran’s ability to psychologically cope with daytime stressors. Poor concentration and short-term memory impairment noted by many reservists and National Guard members in the postdeployment phase can cause increased difficulty in attention to the moment and complicate completion of routine tasks. These difficulties can compound frustration and irritation to minor events and make it hard to emotionally connect with more serious issues.

Concentration and attention to mundane activities may be further reduced if the veteran feels no connection to the civilian world and/or experiences the surreal sensation that he or she should be attending to more serious life and death matters, such as those experienced in the combat theater. Ongoing psychological adjustment to physical injuries sustained during deployment can limit emotional flexibility when adapting to either minor or major stressors. Insufficient financial resources, work issues, or school problems can potentiate irritation, anger, and sadness and create an overwhelming emotional overload, leading to helplessness and hopelessness.

Perceived irregularities in emotional connection to the civilian world can significantly strain interpersonal relationships and be powerful impediments to successful reintegration.9,11,14 Failure to express emotions to major life events in the civilian world can result in combat veterans being viewed as not empathetic to others’ feelings. Overreaction to trivial events during postdeployment can lead to the veteran being labeled as unreasonable, controlling, and/or unpredictable. Persistent emotional disconnect with civilians engenders a growing sense of emotional isolation from family and friends when there is either incorrect interpretation of emotional transitions or failure to adapt healthy coping strategies. This isolation further enlarges the emotional chasm and may greatly diminish the veteran’s ability to seek assistance and appropriately address stressors in the civilian world.

Transition and the Family

Emotional disconnection may be more acutely felt within the immediate family unit.26 Redistribution of family unit responsibilities during deployment may mean that roles the veteran played during predeployment are now handled by a partner. On the veteran’s return to the civilian world, such circumstances require active renegotiation of duties. Interactions with loved ones, especially children, may be colored by the family members’ individual perspectives on deployment as well as by the veteran’s transition symptoms. When there is disagreement about role responsibilities and/or underlying family resentment about deployment, conditions are ripe for significant discord between the veteran and family members, loss of vital partner intimacy, and loss of the psychological safety needed to express feelings within the family unit. If there are concerns about infidelity by the veteran or significant other during the period of deployment, postdeployment tensions can escalate further. If unaddressed in the presence of emotional disconnect, any of these situations can raise the risk of domestic violence and the destruction of relationships.

Without adequate knowledge of common postdeployment transitions and coping strategies, the postdeployment transition period is often bewildering to returning veterans and their families. They are taken aback by postdeployment behaviors that do not conform to the veteran’s predeployment personality or mannerisms. Families may feel they have “lost” the veteran and view the emotionally distant postdeployment veteran as a stranger. Veterans mistakenly may view the postdeployment emotional disconnect as evidence that they were permanently altered by deployment and no longer can assimilate into the civilian world. Unless veterans and families develop an awareness of the postdeployment transition symptoms and healthy coping strategies, these perspectives can contribute to a veteran’s persistent feelings of alienation, significant sense of personal failure, and loss of vital social supports.

When transition symptoms are or have the potential to become significant stressors, veterans would benefit from mental health counseling either individually or with family members. Overcoming the stigma of seeking mental health services can prove challenging. Explaining that these postdeployment symptoms occur commonly, stem from military combat training, can be reversed, and when reversed will empower the individual to control his or her life may help veterans overcome the stigma and seek mental health services.

The fear of future career impairment with the military reserve or National Guard is another real concern for members of this cohort who are considering behavioral health care, especially because VA mental health medical records can be accessed by DoD officials through links with the VHA. Fortunately, this concern can be alleviated through the use of Vet Centers, free-standing counseling centers nationwide that offer no-cost individual and family counseling to veterans with combat exposure. Vet Center counseling records are completely confidential, never shared, and are not linked to the VHA electronic health record, the DoD, or any other entity. Although Vet Center providers do not prescribe medications, the counselors can actively address many issues for veterans and their families. For individuals who do not live near a Vet Center or for those who require psychiatric medications, a frank discussion on the benefits of treatment vs the risk of treatment avoidance must be held.

Assessing Symptoms and Coping Mechanisms

Postdeployment transition symptoms vary, depending on the nature and context of the symptom. Not only must the returning reservist and National Guard member be screened for symptoms, but HCPs also should assess the impact of those symptoms on the veteran and his or her interpersonal relationships. Some veterans will feel that the symptoms have relatively minor impact on their lives, because the veteran can easily compensate for the transient effects. Others may feel that the symptoms are somewhat burdensome because the issues are complicating the smooth transition to civilian roles. Still others will judge the symptoms to be devastating because of the negative effects on personal control, self-esteem, and emotional connection with family and friends.

In addition to screening for symptoms, HCPs should assess these veterans’ current coping adaptations to various transition symptoms. Whereas some activities may be functional and promote reintegration, other short-term coping solutions may cripple the veteran’s ability to successfully resume civilian life. Global avoidance of communication with others and/or retreating from all social situations is a destructive coping pattern that can further alienate veterans from their families and the civilian world. Reacting with anger to all stressful issues is another maladaptive pattern of coping with life’s frustrations. Because of the potential to self-medicate when dealing with social difficulties, depression, anxiety, or other mental health diagnoses, veterans may develop an inappropriate reliance on drugs or alcohol to handle postdeployment stressors.27 Therefore, HCP screening for substance use disorders (SUD) is important so that interventions can be initiated early.

Because of the overlap of postdeployment transition symptoms with mental health disorders and the relative frequency of those mental health disorders among combat veterans, HCPs should have a heightened awareness of the potential for co-occurring mental health difficulties in the postdeployment reservist and National Guard cohort. Health care providers should screen for depression, anxiety, and PTSD. Even if initial screening is done early within the transition period, repeat screening would be of benefit 6 months into the postdeployment period because of the tendency of mental health issues to develop during that time.28,29

By evaluating the impact of the transition symptom and coping strategies on these veterans’ lives, HCPs can better determine which strategies might adequately compensate for symptom effects. In general, informal counseling, even if just to help veterans normalize postdeployment transition symptoms and develop a plan to address such symptoms, can significantly minimize the negative impact of transition symptoms.14,26 Specific symptoms should be targeted by interventions that match the degree of symptom impact.

Symptoms to be aggressively addressed are those that significantly interfere with successful reintegration into the civilian world. For example, persistent sleep difficulties should be dealt with because they can worsen all other transition symptoms. However, the majority of strategies to address sleep do not require medication unless there are confounding factors such as severe nightmares. Minor memory issues attributed to the transition phase can be mitigated by several strategies to improve recall, including use of task lists, digital calendars, or other memory-prodding techniques. However, severe memory issues related to depression or anxiety likely would require pharmaceutical assistance and formal counseling in addition to other nonpharmacologic approaches.

Intermittent irritation or restlessness may be amenable to self-help strategies, but significant anger outbursts or aggression will require additional support, such as formal behavioral interventions to help identify triggers and develop strategic plans to reduce emotional tension. A mild sense of not belonging may resolve without intervention, but a stronger sense of alienation will require further evaluation.

Conclusion

Civilian reintegration after combat deployment is a gradual process rather than a discrete event for reservists and National Guard members. There are common, nonpathologic postdeployment transition symptoms that, if misunderstood or inappropriately addressed, can complicate civilian reintegration. Health care providers are in the unique position to promote a healthy postdeployment transition by assisting veterans to recognize nonpathologic transition symptoms, select appropriate coping strategies, and seek further assistance for more complex problems.

References

1. Pew Research Center. War and sacrifice in the post-9/11 era: executive summary. http://www.pewsocialtrends.org/2011/10/05/war-and-sacrifice-in-the-post-911-era. Published October 5, 2011. Accessed June 12, 2017.

2. Interian A, Kline A, Callahan L, Losonczy M. Readjustment stressors and early mental health treatment seeking by returning National Guard soldiers with PTSD. Psychiatr Serv. 2012;63(9):855-861.

3. Spelman JF, Hunt SC, Seal KH, Burgo-Black AL. Post deployment care for returning combat veterans. J Gen Intern Med. 2012;27(9):1200-1209.

4. Vasterling JJ, Daily ES, Friedman MJ. Posttraumatic stress reactions over time: the battlefield, homecoming, and long-term course. In: Ruzek JI, Schnurr PP, Vasterling JJ, Friedman MJ, eds. Caring for Veterans With Deployment-Related Stress Disorders: Iraq, Afghanistan, and Beyond. Washington, DC: American Psychological Association; 2011:chap 2.

5. Wilcox SL, Oh H, Redmon SA, Chicas J, Hassan AM, Lee PJ, Ell K. A scope of the problem: postdeployment reintegration challenges in a National Guard unit. Work. 2015;50(1):73-83.

6. Griffith J. Homecoming of citizen soldiers: postdeployment problems and service use among Army National Guard soldiers. Community Ment Health J. 2017. doi:10.1007/s10597-017-0132-9. [Epub ahead of print].

7. Schultz M, Glickman ME, Eisen SV. Predictors of decline in overall mental health, PTSD and alcohol use in OEF/OIF veterans. Compr Psychiatry. 2014;55(7):1654-1664.

8. Polusny MA, Kehle SM, Nelson NW, Erbes CR, Arbisi PA, Thuras P. Longitudinal effects of mild traumatic brain injury and posttraumatic stress disorder comorbidity on postdeployment outcomes in National Guard soldiers deployed to Iraq. Arch Gen Psychiatry. 2011;68(1):79-89.

9. U.S. Department of Veterans Affairs, National Center for PTSD. Returning from the war zone: a guide for military personnel. http://www.ptsd.va.gov/public/reintegration/guide-pdf/SMGuide.pdf. Updated January 2014. Accessed June 12, 2017.

10. Slone LB, Friedman MJ. After the War Zone: A Practical Guide for Returning Troops and their Families. Philadelphia, PA: Da Capo Press; 2008.

11. Ainspan ND, Penk WE, eds. When the Warrior Returns: Making the Transition at Home. Annapolis, MD: Naval Institute Press; 2012.

12. Yosick T, Bates M, Moore M, Crowe C, Phillips J, Davison J. A review of post-deployment reintegration: evidence, challenges, and strategies for program development. http://www.dcoe.mil/files/Review_of_Post-Deployment_Reintegration.pdf. Published February 10, 2012. Accessed June 12, 2017.

13. Vasterling JJ, Proctor SP, Amoroso P, Kane R, Heeren T, White RF. Neuropsychological outcomes of army personnel following deployment to the Iraq war. JAMA. 2006;296(5):519-529.

14. Castro CA, Kintzle S, Hassan AM. The combat veteran paradox: paradoxes and dilemmas encountered with reintegrating combat veterans and the agencies that support them. Traumatology. 2015;21(4):299-310.

15. Rivers FM, Gordon S, Speraw S, Reese S. U.S. Army nurses’ reintegration and homecoming experiences after Iraq and Afghanistan. Mil Med. 2013;178(2):166-173.

16. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington,VA: American Psychiatric Association;2013.

17. Tanielian T, Jaycox LH, eds. Invisible Wounds of War: Psychological and Cognitive Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: Rand Corporation, 2008.

18. Orff HJ, Hays CC, Heldreth AA, Stein MB, Twamley EW. Clinical considerations in the evaluation and management of patients following traumatic brain injury. Focus. 2013;11(3):328-340.

19. Morissette SB, Woodward M, Kimbrel NA, et al. Deployment-related TBI, persistent postconcussive symptoms, PTSD, and depression in OEF/OIF veterans. Rehabil Psychol. 2011;56(4):340-350.

20. Polusny MA, Kehle SM, Nelson NW, Erbes CR, Arbisi PA, Thuras P. Longitudinal effects of mild traumatic brain injury and posttraumatic stress disorder comorbidity on postdeployment outcomes in national guard soldiers deployed to Iraq. Arch Gen Psychiatry. 2011;68(1):79-89.

21. Wilk JE, Herrell RK, Wynn GH, Riviere LA, Hoge CW. Mild traumatic brain injury (concussion), posttraumatic stress disorder, and depression in U.S. soldiers involved in combat deployments: association with postdeployment symptoms. Psychosom Med. 2012;74(3):249-257.

22. Hoge CW, Grossman SH, Auchterlonie JL, Riviere LA, Milliken CS, Wilk JE. PTSD treatment for soldiers after combat deployment: low utilization of mental health care and reasons for dropout. Psychiatr Serv. 2014;65(8):997-1004.

23. Hines LA, Goodwin L, Jones M, et al. Factors affecting help seeking for mental health problems after deployment to Iraq and Afghanistan. Psychiatr Serv. 2014;65(1):98-105.

24. Gorman LA, Blow AJ, Ames BD, Read PL. National Guard families after combat: mental health, use of mental health services, and perceived treatment barriers. Psychiatr Serv. 2011;62(1):28-34.

25. Marek LI, Hollingsworth WG, D’Aniello C, et al. Returning home: what we know about the reintegration of deployed service members into their families and communities. https://www.ncfr.org/ncfr-report/focus/military-families/returninghome. Published March 1, 2012. Accessed June 13, 2017.

26. Bowling UB, Sherman MD. Welcoming them home: supporting service members and their families in navigating the tasks of reintegration. Prof Psychol Res Pr. 2008;39(4):451-458.

27. Jacobson IG, Ryan MA, Hooper TI, et al. Alcohol use and alcohol-related problems before
and after military combat deployment. JAMA. 2008;300(6):663-675.

28. Seal KH, Metzler TH, Gima KS, Bertenthal D, Maguen S, Marmar CR. Trends and risk factors for mental health diagnoses among Iraq and Afghanistan veterans Department of Veterans Affairs health care, 2002-2008. Am J Public Health. 2009;99(9):1651-1658.

29. Milliken CS, Auchterlonie JL, Hoge CW. Longitudinal assessment of mental health problems among active and reserve component soldiers returning from the Iraq war. JAMA. 2007;298(18):2141-2148.

Issue
Federal Practitioner - 34(7)
Page Number
16-22

Knotless Arthroscopic Reduction and Internal Fixation of a Displaced Anterior Cruciate Ligament Tibial Eminence Avulsion Fracture

Article Type
Changed
Thu, 09/19/2019 - 13:21
Display Headline
Knotless Arthroscopic Reduction and Internal Fixation of a Displaced Anterior Cruciate Ligament Tibial Eminence Avulsion Fracture

Take-Home Points

  • Technique provides optimal fixation while simultaneously protecting open growth plates.
  • Self-tensioning feature ensures both optimal ACL tension and fracture reduction.
  • No need for future hardware removal.
  • Cross-suture configuration optimizes strength of fixation for highly consistent results.
  • Use fluoroscopy to avoid violation of tibial physis.

Generally occurring in the 8- to 14-year-old population, tibial eminence avulsion (TEA) fractures are a common variant of anterior cruciate ligament (ACL) ruptures and represent 2% to 5% of all knee injuries in skeletally immature individuals.1,2 Compared with adults, children likely experience this anomaly more often because of the weakness of their incompletely ossified tibial plateau relative to the strength of their native ACL.3

The open repair techniques that have been described have multiple disadvantages, including open incisions, difficult visualization of the fracture owing to the location of the fat pad, and increased risk for arthrofibrosis. Arthroscopic fixation is considered the treatment of choice for TEA fractures because it allows for direct visualization of the injury, accurate reduction of fracture fragments, removal of loose fragments, and easy treatment of associated soft-tissue injuries.4-6 Several fixation techniques for ACL-TEA fractures have recently been described: arthroscopic reduction and internal fixation (ARIF) with Kirschner wires,7 cannulated screws,4 the Meniscus Arrow device (Bionx Implants),8 pull-out sutures,9,10 bioabsorbable nails,11 Herbert screws,12 TightRope fixation (Arthrex),13 and various other rotator cuff and meniscal repair systems.14,15 These approaches tend to have good outcomes for TEA fractures, but they carry risks related to ACL tensioning, potential tibial growth plate violation, and hardware problems. Moreover, no studies have evaluated large numbers of patients treated with these newer techniques, so the optimal method of reduction and fixation remains unknown.

In this article, we describe a new ARIF technique that involves 2 absorbable anchors with adjustable suture-tensioning technology. This technique optimizes reduction and helps surgeons avoid proximal tibial physeal damage, procedure-related morbidity, and additional surgery.

Case Report

History

The patient, an 8-year-old boy, sustained a noncontact twisting injury of the left knee during a cutting maneuver in a flag football game. He experienced immediate pain and subsequent swelling. Clinical examination revealed a moderate effusion with motion limitations secondary to swelling and irritability. The patient’s Lachman test result was 2+. Pivot shift testing was not possible because of guarding. The knee was stable to varus and valgus stress at 0° and 30° of flexion. Limited knee flexion prohibited placement of the patient in the position needed for anterior and posterior drawer testing. His patella was stable on lateral stress testing at 20° of flexion with no apprehension. Neurovascular status was intact throughout the lower extremity.

Anteroposterior and lateral radiographs showed a minimally displaced Meyers-McKeever type II TEA fracture (Figures 1A, 1B).

Figure 1.
Distal femoral and proximal tibial growth plates were wide open. Magnetic resonance imaging confirmed the displaced type II TEA fracture and showed good signal quality in the attached ACL (Figures 2A, 2B).
Figure 2.
The remaining ligamentous structures appeared without injury or signal change. No tear signal was seen in the imaging sequences of the medial and lateral meniscus.

After discussing potential treatment options with the parents, Dr. Smith proceeded with arthroscopic surgery for definitive reduction and internal fixation of the patient’s left knee displaced ACL-TEA fracture. The new adjustable suture-tensioning fixation technique was used. The patient’s guardian provided written informed consent for print and electronic publication of this case report.

Examination Under Anesthesia

Examination with the patient under general anesthesia revealed 3+ Lachman, 2+ pivot shift with foot in internal and external rotation, and 1+ anterior drawer with foot in neutral and internal rotation. The knee was stable to varus and valgus stress testing.

Surgical Technique

Proper patient positioning and padding of bony prominences were ensured, and the limb was sterilely prepared and draped.

Figure 3.
A standard lateral parapatellar portal was established for arthroscope placement; a medial parapatellar working portal was established as well. Thorough joint inspection revealed normal articular surfaces of patella, femur, and tibial plateau. Similarly, both menisci were intact without evidence of injury.
Figure 4.
With use of the probe, the ACL-TEA fracture could be elevated up to 2 cm toward the top of the notch (Figure 3). Further inspection of the ACL fibers revealed minimal hemorrhaging and no frank tearing (Figure 4).

Given the young age of the patient, it was imperative to avoid the open proximal tibial growth plate. The surgical plan for stabilization involved use of two 3.0-mm BioComposite Knotless SutureTak anchors (Arthrex). This anchor configuration is based on a No. 2 FiberWire suture shuttled through itself to create a locking splice mechanism that allows for adjustable tensioning. The anchors were placed on each side of the tibial bony avulsion site with two No. 2 FiberWire sutures and were then crossed about the avulsion fracture fragment in an “x-type” configuration to secure the ACL back down to the bony bed.

First, a curette was used to débride fibrous tissue on the underside of the fracture fragment and on the fracture bed. Minimal amounts of cancellous bone were débrided from the tibial fracture bed to optimize fracture reduction by slightly recessing the fracture fragment to ensure optimal ACL tensioning (Figure 5).

Figure 5.
Next, an 18-gauge needle was used to establish an accessory superior medial percutaneous portal to ensure a satisfactory drilling trajectory just medial to the fracture site. Under fluoroscopic guidance, a drill guide was placed, and a 2.4-mm bit was used to drill to a depth of 16 mm to accommodate the 12.7-mm anchor. Avoidance of the proximal tibial physis was confirmed with fluoroscopy (Figure 6).
Figure 6.
One of the SutureTak anchors was secured in this drill hole along the anteromedial avulsion fracture site. From the anteromedial portal, a curved needle tip suture passer was placed medially through the ACL fibers and bone, with the wire retrieved out of the superior medial accessory portal. Then, the drill guide was introduced through the lateral portal and positioned just lateral to the tibial avulsion site, a hole was drilled 16 mm deep, and fluoroscopy was used to confirm the physis was not violated. The second SutureTak anchor was placed in this anterolateral location. From the anterolateral portal, the curved needle tip suture passer was placed laterally through the ACL fibers and avulsion fragment, and the wire was passed and retrieved out the anteromedial portal and shuttled back to the anterolateral portal.

Next, from the accessory superior medial portal, the end of the wire that had been passed through the medial aspect of the bony avulsion was retrieved through the lateral portal. This wire was used to shuttle the repair suture from the laterally positioned SutureTak anchor over and through the medial aspect of the bony fragment out of the accessory superior medial portal (Figure 7).
Figure 7.
This suture was passed through the shuttling loop of the medially positioned SutureTak anchor to create the splice in the anchor for the adjustable fixation. This process was repeated through the lateral aspect of the bony fragment—the medial SutureTak repair suture was passed over the bone here. Thus, the lateral suture was over and through the bony fragment secured to the medial SutureTak anchor, and the medial suture was crossed over and through bone to the lateral SutureTak anchor. With the knee held in full extension, the bony avulsion fracture was easily reduced by alternating tension on the SutureTak limbs, which enabled controlled reduction of the TEA fracture (Figures 8A, 8B).
Figure 8.
An arthroscopic knot pusher was used for final tightening of the SutureTak fixation. An arthroscopic probe was used to confirm anatomical reduction of the fracture and restoration of ACL fiber tension (Figure 9).
Figure 9.
The knee was ranged from 0° to 120° of flexion with visual affirmation of the construct and maintenance of the reduction. Fluoroscopy confirmed anatomical reduction of the TEA fracture. The patient was immobilized in a long leg brace locked in 30° of flexion.

Follow-Up

Two weeks after surgery, the patient returned to clinic for suture removal. Four weeks after surgery, radiographs confirmed anatomical reduction of the TEA fracture, and outpatient physical therapy (range-of-motion exercises as tolerated) and isometric quadriceps strengthening were instituted. Twelve weeks after surgery, examination revealed full knee motion, negative Lachman and pivot shift test results, and residual quadriceps muscle atrophy, and radiographs confirmed complete fracture healing with maintenance of a normal proximal tibial growth plate (Figures 10A, 10B).

Figure 10.
Sixteen weeks after surgery, ligamentous examination findings were normal, and quadriceps muscle mass was good. In addition, on KT-1000 testing, the surgically repaired knee had only 1 more millimeter of laxity at the 30-pound pull, and equal displacement on the manual maximum test. The patient was allowed to return to full activities as tolerated.

Discussion

The highlight of this case is the simplicity of an excellent reduction of a displaced ACL-TEA fracture. Minimally invasive absorbable implants did not violate the proximal tibial physis, and the unique adjustable suture-tensioning technology allowed the degree of reduction and ACL tension to be “dialed in.” SutureTak implants have strong No. 2 FiberWire suture for excellent stability with an overall small suture load, and their small size avoids the risk of violating the proximal tibial physis and avoids potential growth disturbances.

Despite the obvious risks it poses to the open proximal tibial physis, surgical reduction of Meyers-McKeever type II and type III fractures is the norm for restoring ACL stability. Screws and suture fixation are the most common and reliable methods of TEA fracture reduction.16,17 In recent systematic reviews, however, Osti and colleagues17 and Gans and colleagues18 noted there is not enough evidence to warrant a “gold standard” in pediatric tibial avulsion cases.

Other fixation methods for TEA fractures must be investigated. Anderson and colleagues19 described the biomechanics of 4 different physeal-sparing avulsion fracture reduction techniques: an ultra-high-molecular-weight polyethylene (UHMWPE) suture-suture button, a suture anchor, a polydioxanone suture-suture button, and screw fixation. Using techniques described by Kocher and colleagues,4 Berg,20 Mah and colleagues,21 Vega and colleagues,22 and Lu and colleagues,23 Anderson and colleagues19 reduced TEA fractures in skeletally immature porcine knees. Compared with suture anchors, UHMWPE suture-suture buttons provided biomechanically superior cyclic and load-to-failure results as well as more consistent fixation.

Screw fixation has shown good results but has disadvantages. Incorrect positioning of a screw can lead to impingement and articular cartilage damage, and screw removal may be needed if discomfort at the fixation site persists.24,25 Likewise, screws generally are an option only for large fracture fragments, as there is an inherent risk of fracturing small TEA fractures, which can be common in skeletally immature patients.

Brunner and colleagues26 recently found that TEA fracture repair with absorbable sutures and distal bone bridge fixation yielded 3-month radiographic and clinical healing rates similar to those obtained with nonabsorbable sutures tied around a screw. However, other authors have reported growth disturbances with use of a similar technique, owing to a disturbance of the open proximal tibial growth plate.9 In that regard, a major advantage of this new knotless suturing technique is that distal fixation is not necessary.

The minimally invasive TEA fracture reduction technique described in this article has 6 advantages: It provides excellent fixation while avoiding proximal tibial growth plate injury; the degree of tensioning is easily controlled during reduction; it uses strong suture instead of metal screws or pins; the reduction construct is low-profile; distal fixation is unnecessary; and implant removal is unnecessary, thus limiting subsequent surgical intervention. With respect to long-term outcomes, however, it is not known how this procedure will compare with other commonly used ARIF methods in physeal-sparing techniques for TEA fracture fixation.

This case report highlights a novel technique for reduction of displaced pediatric ACL-TEA fractures that allows for adjustable reduction and ACL tensioning with strong suture fixation without violating the proximal tibial physis, which could make it invaluable in the surgical treatment of this injury in skeletally immature patients.

Am J Orthop. 2017;46(4):203-208. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Eiskjaer S, Larsen ST, Schmidt MB. The significance of hemarthrosis of the knee in children. Arch Orthop Trauma Surg. 1988;107(2):96-98.

2. Luhmann SJ. Acute traumatic knee effusions in children and adolescents. J Pediatr Orthop. 2003;23(2):199-202.

3. Woo SL, Hollis JM, Adams DJ, Lyon RM, Takai S. Tensile properties of the human femur-anterior cruciate ligament-tibia complex. The effects of specimen age and orientation. Am J Sports Med. 1991;19(3):217-225.

4. Kocher MS, Foreman ES, Micheli LJ. Laxity and functional outcome after arthroscopic reduction and internal fixation of displaced tibial spine fractures in children. Arthroscopy. 2003;19(10):1085-1090.

5. Lubowitz JH, Elson WS, Guttmann D. Part II: arthroscopic treatment of tibial plateau fractures: intercondylar eminence avulsion fractures. Arthroscopy. 2005;21(1):86-92.

6. Vargas B, Lutz N, Dutoit M, Zambelli PY. Nonunion after fracture of the anterior tibial spine: case report and review of the literature. J Pediatr Orthop B. 2009;18(2):90-92.

7. Sommerfeldt DW. Arthroscopically assisted internal fixation of avulsion fractures of the anterior cruciate ligament during childhood and adolescence [in German]. Oper Orthop Traumatol. 2008;20(4-5):310-320.

8. Wouters DB, de Graaf JS, Hemmer PH, Burgerhof JG, Kramer WL. The arthroscopic treatment of displaced tibial spine fractures in children and adolescents using Meniscus Arrows®. Knee Surg Sports Traumatol Arthrosc. 2011;19(5):736-739.

9. Ahn JH, Yoo JC. Clinical outcome of arthroscopic reduction and suture for displaced acute and chronic tibial spine fractures. Knee Surg Sports Traumatol Arthrosc. 2005;13(2):116-121.

10. Huang TW, Hsu KY, Cheng CY, et al. Arthroscopic suture fixation of tibial eminence avulsion fractures. Arthroscopy. 2008;24(11):1232-1238.

11. Liljeros K, Werner S, Janarv PM. Arthroscopic fixation of anterior tibial spine fractures with bioabsorbable nails in skeletally immature patients. Am J Sports Med. 2009;37(5):923-928.

12. Wiegand N, Naumov I, Vamhidy L, Not LG. Arthroscopic treatment of tibial spine fracture in children with a cannulated Herbert screw. Knee. 2014;21(2):481-485.

13. Faivre B, Benea H, Klouche S, Lespagnol F, Bauer T, Hardy P. An original arthroscopic fixation of adult’s tibial eminence fractures using the Tightrope® device: a report of 8 cases and review of literature. Knee. 2014;21(4):833-839.

14. Kluemper CT, Snyder GM, Coats AC, Johnson DL, Mair SD. Arthroscopic suture fixation of tibial eminence fractures. Orthopedics. 2013;36(11):e1401-e1406.

15. Ochiai S, Hagino T, Watanabe Y, Senga S, Haro H. One strategy for arthroscopic suture fixation of tibial intercondylar eminence fractures using the Meniscal Viper Repair System. Sports Med Arthrosc Rehabil Ther Technol. 2011;3:17.

16. Bogunovic L, Tarabichi M, Harris D, Wright R. Treatment of tibial eminence fractures: a systematic review. J Knee Surg. 2015;28(3):255-262.

17. Osti L, Buda M, Soldati F, Del Buono A, Osti R, Maffulli N. Arthroscopic treatment of tibial eminence fracture: a systematic review of different fixation methods. Br Med Bull. 2016;118(1):73-90.

18. Gans I, Baldwin KD, Ganley TJ. Treatment and management outcomes of tibial eminence fractures in pediatric patients: a systematic review. Am J Sports Med. 2014;42(7):1743-1750.

19. Anderson CN, Nyman JS, McCullough KA, et al. Biomechanical evaluation of physeal-sparing fixation methods in tibial eminence fractures. Am J Sports Med. 2013;41(7):1586-1594.

20. Berg EE. Pediatric tibial eminence fractures: arthroscopic cannulated screw fixation. Arthroscopy. 1995;11(3):328-331.

21. Mah JY, Otsuka NY, McLean J. An arthroscopic technique for the reduction and fixation of tibial-eminence fractures. J Pediatr Orthop. 1996;16(1):119-121.

22. Vega JR, Irribarra LA, Baar AK, Iniguez M, Salgado M, Gana N. Arthroscopic fixation of displaced tibial eminence fractures: a new growth plate-sparing method. Arthroscopy. 2008;24(11):1239-1243.

23. Lu XW, Hu XP, Jin C, Zhu T, Ding Y, Dai LY. Reduction and fixation of the avulsion fracture of the tibial eminence using mini-open technique. Knee Surg Sports Traumatol Arthrosc. 2010;18(11):1476-1480.

24. Bonin N, Jeunet L, Obert L, Dejour D. Adult tibial eminence fracture fixation: arthroscopic procedure using K-wire folded fixation. Knee Surg Sports Traumatol Arthrosc. 2007;15(7):857-862.

25. Senekovic V, Veselko M. Anterograde arthroscopic fixation of avulsion fractures of the tibial eminence with a cannulated screw: five-year results. Arthroscopy. 2003;19(1):54-61.

26. Brunner S, Vavken P, Kilger R, et al. Absorbable and non-absorbable suture fixation results in similar outcomes for tibial eminence fractures in children and adolescents. Knee Surg Sports Traumatol Arthrosc. 2016;24(3):723-729.

Author and Disclosure Information

Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.


First, a curette was used to débride fibrous tissue on the underside of the fracture fragment and on the fracture bed. Minimal amounts of cancellous bone were débrided from the tibial fracture bed to optimize fracture reduction by slightly recessing the fracture fragment to ensure optimal ACL tensioning (Figure 5).

Figure 5.
Next, an 18-gauge needle was used to establish an accessory superior medial percutaneous portal to ensure a satisfactory drilling trajectory just medial to the fracture site. Under fluoroscopic guidance, a drill guide was placed, and a 2.4-mm bit was used to drill to a depth of 16 mm to accommodate the 12.7-mm anchor. Avoidance of the proximal tibial physis was confirmed with fluoroscopy (Figure 6).
Figure 6.
One of the SutureTak anchors was secured in this drill hole along the anteromedial avulsion fracture site. From the anteromedial portal, a curved needle tip suture passer was placed medially through the ACL fibers and bone, with the wire retrieved out of the superior medial accessory portal. Then, the drill guide was introduced through the lateral portal and positioned just lateral to the tibial avulsion site, a hole was drilled 16 mm deep, and fluoroscopy was used to confirm the physis was not violated. The second SutureTak anchor was placed in this anterolateral location. From the anterolateral portal, the curved needle tip suture passer was placed laterally through the ACL fibers and avulsion fragment, and the wire was passed and retrieved out the anteromedial portal and shuttled back to the anterolateral portal.

Next, from the accessory superior medial portal, the end of the wire that had been passed through the medial aspect of the bony avulsion was retrieved through the lateral portal. This wire was used to shuttle the repair suture from the laterally positioned SutureTak anchor over and through the medial aspect of the bony fragment out of the accessory superior medial (Figure 7).
Figure 7.
This suture was passed through the shuttling loop of the medially positioned SutureTak anchor to create the splice in the anchor for the adjustable fixation. This process was repeated through the lateral aspect of the bony fragment—the medial SutureTak repair suture was passed over the bone here. Thus, the lateral suture was over and through the bony fragment secured to the medial SutureTak anchor, and the medial suture was crossed over and through bone to the lateral SutureTak anchor. With the knee held in full extension, the bony avulsion fracture was easily reduced by alternating tension on the SutureTak limbs, which enabled controlled reduction of the TEA fracture (Figures 8A, 8B).
Figure 8.
An arthroscopic knot pusher was used for final tightening of the SutureTak fixation. An arthroscopic probe was used to confirm anatomical reduction of the fracture and restoration of ACL fiber tension (Figure 9).
Figure 9.
The knee was ranged from 0° to 120° of flexion with visual affirmation of the construct and maintenance of the reduction. Fluoroscopy confirmed anatomical reduction of the TEA fracture. The patient was immobilized in a long leg brace locked in 30° of flexion.

 

 

Follow-Up

Two weeks after surgery, the patient returned to clinic for suture removal. Four weeks after surgery, radiographs confirmed anatomical reduction of the TEA fracture, and outpatient physical therapy (range-of-motion exercises as tolerated) and isometric quadriceps strengthening were instituted. Twelve weeks after surgery, examination revealed full knee motion, negative Lachman and pivot shift test results, and residual quadriceps muscle atrophy, and radiographs confirmed complete fracture healing with maintenance of a normal proximal tibial growth plate (Figures 10A, 10B).

Figure 10.
Sixteen weeks after surgery, ligamentous examination findings were normal, and quadriceps muscle mass was good. In addition, on KT-1000 testing, the surgically repaired knee had only 1 more millimeter of laxity at the 30-pound pull, and equal displacement on the manual maximum test. The patient was allowed to return to full activities as tolerated.

Discussion

The highlight of this case is the simplicity of an excellent reduction of a displaced ACL-TEA fracture. Minimally invasive absorbable implants did not violate the proximal tibial physis, and the unique adjustable suture-tensioning technology allowed the degree of reduction and ACL tension to be “dialed in.” SutureTak implants have strong No. 2 FiberWire suture for excellent stability with an overall small suture load, and their small size avoids the risk of violating the proximal tibial physis and avoids potential growth disturbances.

Despite the obvious risks it poses to the open proximal tibial physis, surgical reduction of Meyers-McKeever type II and type III fractures is the norm for restoring ACL stability. Screws and suture fixation are the most common and reliable methods of TEA fracture reduction.16,17 In recent systematic reviews, however, Osti and colleagues17 and Gans and colleagues18 noted there is not enough evidence to warrant a “gold standard” in pediatric tibial avulsion cases.

Other fixation methods for TEA fractures must be investigated. Anderson and colleagues19 described the biomechanics of 4 different physeal-sparing avulsion fracture reduction techniques: an ultra-high-molecular-weight polyethylene (UHMWPE) suture-suture button, a suture anchor, a polydioxanone suture-suture button, and screw fixation. Using techniques described by Kocher and colleagues,4 Berg,20 Mah and colleagues,21 Vega and colleagues,22 and Lu and colleagues,23 Anderson and colleagues19 reduced TEA fractures in skeletally immature porcine knees. Compared with suture anchors, UHMWPE suture-suture buttons provided biomechanically superior cyclic and load-to-failure results as well as more consistent fixation.

Screw fixation has shown good results but has disadvantages. Incorrect positioning of a screw can lead to impingement and articular cartilage damage, and screw removal may be needed if discomfort at the fixation site persists.24,25 Likewise, screws generally are an option only for large fracture fragments, as there is an inherent risk of fracturing small TEA fractures, which can be common in skeletally immature patients.

Brunner and colleagues26 recently found that TEA fracture repair with absorbable sutures and distal bone bridge fixation yielded 3-month radiographic and clinical healing rates similar to those obtained with nonabsorbable sutures tied around a screw. However, other authors have reported growth disturbances with use of a similar technique, owing to a disturbance of the open proximal tibial growth plate.9 In that regard, a major advantage of this new knotless suturing technique is that distal fixation is not necessary.

The minimally invasive TEA fraction reduction technique described in this article has 6 advantages: It provides excellent fixation while avoiding proximal tibial growth plate injury; the degree of tensioning is easily controlled during reduction; it uses strong suture instead of metal screws or pins; the reduction construct is low-profile; distal fixation is unnecessary; and implant removal is unnecessary, thus limiting subsequent surgical intervention. With respect to long-term outcomes, however, it is not known how this procedure will compare with other commonly used ARIF methods in physeal-sparing techniques for TEA fracture fixation.

This case report highlights a novel pediatric displaced ACL-TEA fracture reduction technique that allows for adjustable reduction and resultant ACL tensioning with excellent strong suture fixation without violating the proximal tibial physis, which could make it invaluable in the surgical treatment of this injury in skeletally immature patients.

Am J Orthop. 2017;46(4):203-208. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

Take-Home Points

  • The technique provides optimal fixation while protecting open growth plates.
  • The self-tensioning feature ensures both optimal ACL tension and fracture reduction.
  • No need for future hardware removal.
  • The crossed suture configuration optimizes fixation strength for highly consistent results.
  • Use fluoroscopy to avoid violating the tibial physis.

Generally occurring in the 8- to 14-year-old population, tibial eminence avulsion (TEA) fractures are a common variant of anterior cruciate ligament (ACL) ruptures and represent 2% to 5% of all knee injuries in skeletally immature individuals.1,2 Children likely experience this injury more often than adults because their incompletely ossified tibial plateau is weak relative to their native ACL.3

The open repair techniques that have been described have multiple disadvantages, including open incisions, difficult visualization of the fracture owing to the location of the fat pad, and increased risk for arthrofibrosis. Arthroscopic fixation is considered the treatment of choice for TEA fractures because it allows for direct visualization of the injury, accurate reduction of fracture fragments, removal of loose fragments, and easy treatment of associated soft-tissue injuries.4-6 Several fixation techniques for ACL-TEA fractures have recently been described: arthroscopic reduction and internal fixation (ARIF) with Kirschner wires,7 cannulated screws,4 the Meniscus Arrow device (Bionx Implants),8 pull-out sutures,9,10 bioabsorbable nails,11 Herbert screws,12 TightRope fixation (Arthrex),13 and various other rotator cuff and meniscal repair systems.14,15 These approaches tend to have good outcomes for TEA fractures, but they carry risks related to ACL tensioning, potential tibial growth plate violation, and hardware problems. Furthermore, no studies have evaluated large numbers of patients treated with these new techniques, so the optimal method of reduction and fixation remains unknown.

In this article, we describe a new ARIF technique that involves 2 absorbable anchors with adjustable suture-tensioning technology. This technique optimizes reduction and helps surgeons avoid proximal tibial physeal damage, procedure-related morbidity, and additional surgery.

Case Report

History

The patient, an 8-year-old boy, sustained a noncontact twisting injury of the left knee during a cutting maneuver in a flag football game. He experienced immediate pain and subsequent swelling. Clinical examination revealed a moderate effusion with motion limitations secondary to swelling and irritability. The patient’s Lachman test result was 2+. Pivot shift testing was not possible because of guarding. The knee was stable to varus and valgus stress at 0° and 30° of flexion. Limited knee flexion prohibited placement of the patient in the position needed for anterior and posterior drawer testing. His patella was stable on lateral stress testing at 20° of flexion with no apprehension. Neurovascular status was intact throughout the lower extremity.

Anteroposterior and lateral radiographs showed a minimally displaced Meyers-McKeever type II TEA fracture (Figures 1A, 1B).

Figure 1.
Distal femoral and proximal tibial growth plates were wide open. Magnetic resonance imaging confirmed the displaced type II TEA fracture and showed good signal quality in the attached ACL (Figures 2A, 2B).
Figure 2.
The remaining ligamentous structures appeared without injury or signal change. No tear signal was seen in the imaging sequences of the medial and lateral meniscus.

After discussing potential treatment options with the parents, Dr. Smith proceeded with arthroscopic surgery for definitive reduction and internal fixation of the patient’s left knee displaced ACL-TEA fracture. The new adjustable suture-tensioning fixation technique was used. The patient’s guardian provided written informed consent for print and electronic publication of this case report.

Examination Under Anesthesia

Examination with the patient under general anesthesia revealed 3+ Lachman, 2+ pivot shift with foot in internal and external rotation, and 1+ anterior drawer with foot in neutral and internal rotation. The knee was stable to varus and valgus stress testing.

Surgical Technique

Proper patient positioning and padding of bony prominences were ensured, and the limb was sterilely prepared and draped.

Figure 3.
A standard lateral parapatellar portal was established for arthroscope placement; a medial parapatellar working portal was established as well. Thorough joint inspection revealed normal articular surfaces of patella, femur, and tibial plateau. Similarly, both menisci were intact without evidence of injury.
Figure 4.
With use of the probe, the ACL-TEA fracture could be elevated up to 2 cm toward the top of the notch (Figure 3). Further inspection of the ACL fibers revealed minimal hemorrhaging and no frank tearing (Figure 4).

Given the young age of the patient, it was imperative to avoid the open proximal tibial growth plate. The surgical plan for stabilization involved use of two 3.0-mm BioComposite Knotless SutureTak anchors (Arthrex). This anchor configuration is based on a No. 2 FiberWire suture shuttled through itself to create a locking splice mechanism that allows for adjustable tensioning. The anchors were placed on each side of the tibial bony avulsion site with two No. 2 FiberWire sutures and were then crossed about the avulsion fracture fragment in an “x-type” configuration to secure the ACL back down to the bony bed.

First, a curette was used to débride fibrous tissue on the underside of the fracture fragment and on the fracture bed. Minimal amounts of cancellous bone were débrided from the tibial fracture bed to optimize fracture reduction by slightly recessing the fracture fragment to ensure optimal ACL tensioning (Figure 5).

Figure 5.
Next, an 18-gauge needle was used to establish an accessory superior medial percutaneous portal to ensure a satisfactory drilling trajectory just medial to the fracture site. Under fluoroscopic guidance, a drill guide was placed, and a 2.4-mm bit was used to drill to a depth of 16 mm to accommodate the 12.7-mm anchor. Avoidance of the proximal tibial physis was confirmed with fluoroscopy (Figure 6).
Figure 6.
One of the SutureTak anchors was secured in this drill hole along the anteromedial avulsion fracture site. From the anteromedial portal, a curved needle tip suture passer was placed medially through the ACL fibers and bone, with the wire retrieved out of the superior medial accessory portal. Then, the drill guide was introduced through the lateral portal and positioned just lateral to the tibial avulsion site, a hole was drilled 16 mm deep, and fluoroscopy was used to confirm the physis was not violated. The second SutureTak anchor was placed in this anterolateral location. From the anterolateral portal, the curved needle tip suture passer was placed laterally through the ACL fibers and avulsion fragment, and the wire was passed and retrieved out the anteromedial portal and shuttled back to the anterolateral portal.

Next, from the accessory superior medial portal, the end of the wire that had been passed through the medial aspect of the bony avulsion was retrieved through the lateral portal. This wire was used to shuttle the repair suture from the laterally positioned SutureTak anchor over and through the medial aspect of the bony fragment and out of the accessory superior medial portal (Figure 7).
Figure 7.
This suture was passed through the shuttling loop of the medially positioned SutureTak anchor to create the splice in the anchor for the adjustable fixation. The process was repeated through the lateral aspect of the bony fragment, where the medial SutureTak repair suture was passed over the bone. Thus, the lateral suture passed over and through the bony fragment and was secured to the medial SutureTak anchor, and the medial suture crossed over and through bone to the lateral SutureTak anchor. With the knee held in full extension, the bony avulsion fracture was easily reduced by alternating tension on the SutureTak limbs, which enabled controlled reduction of the TEA fracture (Figures 8A, 8B).
Figure 8.
An arthroscopic knot pusher was used for final tightening of the SutureTak fixation. An arthroscopic probe was used to confirm anatomical reduction of the fracture and restoration of ACL fiber tension (Figure 9).
Figure 9.
The knee was ranged from 0° to 120° of flexion with visual affirmation of the construct and maintenance of the reduction. Fluoroscopy confirmed anatomical reduction of the TEA fracture. The patient was immobilized in a long leg brace locked in 30° of flexion.

Follow-Up

Two weeks after surgery, the patient returned to clinic for suture removal. Four weeks after surgery, radiographs confirmed anatomical reduction of the TEA fracture, and outpatient physical therapy (range-of-motion exercises as tolerated) and isometric quadriceps strengthening were instituted. Twelve weeks after surgery, examination revealed full knee motion, negative Lachman and pivot shift test results, and residual quadriceps muscle atrophy, and radiographs confirmed complete fracture healing with maintenance of a normal proximal tibial growth plate (Figures 10A, 10B).

Figure 10.
Sixteen weeks after surgery, ligamentous examination findings were normal, and quadriceps muscle mass was good. In addition, on KT-1000 testing, the surgically repaired knee had only 1 more millimeter of laxity at the 30-pound pull, and equal displacement on the manual maximum test. The patient was allowed to return to full activities as tolerated.

Discussion

The highlight of this case is the simplicity with which an excellent reduction of a displaced ACL-TEA fracture was achieved. The minimally invasive absorbable implants did not violate the proximal tibial physis, and the unique adjustable suture-tensioning technology allowed the degree of reduction and ACL tension to be “dialed in.” SutureTak implants carry strong No. 2 FiberWire suture, providing excellent stability with a small overall suture load, and their small size avoids the proximal tibial physis and the growth disturbances that physeal violation can cause.

Despite the obvious risks it poses to the open proximal tibial physis, surgical reduction of Meyers-McKeever type II and type III fractures is the norm for restoring ACL stability. Screws and suture fixation are the most common and reliable methods of TEA fracture reduction.16,17 In recent systematic reviews, however, Osti and colleagues17 and Gans and colleagues18 noted there is not enough evidence to warrant a “gold standard” in pediatric tibial avulsion cases.

Other fixation methods for TEA fractures must be investigated. Anderson and colleagues19 described the biomechanics of 4 different physeal-sparing avulsion fracture reduction techniques: an ultra-high-molecular-weight polyethylene (UHMWPE) suture-suture button, a suture anchor, a polydioxanone suture-suture button, and screw fixation. Using techniques described by Kocher and colleagues,4 Berg,20 Mah and colleagues,21 Vega and colleagues,22 and Lu and colleagues,23 Anderson and colleagues19 reduced TEA fractures in skeletally immature porcine knees. Compared with suture anchors, UHMWPE suture-suture buttons provided biomechanically superior cyclic and load-to-failure results as well as more consistent fixation.

Screw fixation has shown good results but has disadvantages. Incorrect positioning of a screw can lead to impingement and articular cartilage damage, and screw removal may be needed if discomfort at the fixation site persists.24,25 Likewise, screws generally are an option only for large fracture fragments, as there is an inherent risk of fracturing small TEA fractures, which can be common in skeletally immature patients.

Brunner and colleagues26 recently found that TEA fracture repair with absorbable sutures and distal bone bridge fixation yielded 3-month radiographic and clinical healing rates similar to those obtained with nonabsorbable sutures tied around a screw. However, other authors have reported growth disturbances with use of a similar technique, owing to a disturbance of the open proximal tibial growth plate.9 In that regard, a major advantage of this new knotless suturing technique is that distal fixation is not necessary.

The minimally invasive TEA fracture reduction technique described in this article has 6 advantages: It provides excellent fixation while avoiding proximal tibial growth plate injury; the degree of tensioning is easily controlled during reduction; it uses strong suture instead of metal screws or pins; the reduction construct is low-profile; distal fixation is unnecessary; and implant removal is unnecessary, thus limiting subsequent surgical intervention. With respect to long-term outcomes, however, it is not known how this procedure will compare with other commonly used physeal-sparing ARIF methods for TEA fracture fixation.

This case report highlights a novel technique for reduction of a displaced pediatric ACL-TEA fracture that allows adjustable reduction and ACL tensioning with strong suture fixation, all without violating the proximal tibial physis. These features could make the technique invaluable in the surgical treatment of this injury in skeletally immature patients.

Am J Orthop. 2017;46(4):203-208. Copyright Frontline Medical Communications Inc. 2017. All rights reserved.

References

1. Eiskjaer S, Larsen ST, Schmidt MB. The significance of hemarthrosis of the knee in children. Arch Orthop Trauma Surg. 1988;107(2):96-98.

2. Luhmann SJ. Acute traumatic knee effusions in children and adolescents. J Pediatr Orthop. 2003;23(2):199-202.

3. Woo SL, Hollis JM, Adams DJ, Lyon RM, Takai S. Tensile properties of the human femur-anterior cruciate ligament-tibia complex. The effects of specimen age and orientation. Am J Sports Med. 1991;19(3):217-225.

4. Kocher MS, Foreman ES, Micheli LJ. Laxity and functional outcome after arthroscopic reduction and internal fixation of displaced tibial spine fractures in children. Arthroscopy. 2003;19(10):1085-1090.

5. Lubowitz JH, Elson WS, Guttmann D. Part II: arthroscopic treatment of tibial plateau fractures: intercondylar eminence avulsion fractures. Arthroscopy. 2005;21(1):86-92.

6. Vargas B, Lutz N, Dutoit M, Zambelli PY. Nonunion after fracture of the anterior tibial spine: case report and review of the literature. J Pediatr Orthop B. 2009;18(2):90-92.

7. Sommerfeldt DW. Arthroscopically assisted internal fixation of avulsion fractures of the anterior cruciate ligament during childhood and adolescence [in German]. Oper Orthop Traumatol. 2008;20(4-5):310-320.

8. Wouters DB, de Graaf JS, Hemmer PH, Burgerhof JG, Kramer WL. The arthroscopic treatment of displaced tibial spine fractures in children and adolescents using Meniscus Arrows®. Knee Surg Sports Traumatol Arthrosc. 2011;19(5):736-739.

9. Ahn JH, Yoo JC. Clinical outcome of arthroscopic reduction and suture for displaced acute and chronic tibial spine fractures. Knee Surg Sports Traumatol Arthrosc. 2005;13(2):116-121.

10. Huang TW, Hsu KY, Cheng CY, et al. Arthroscopic suture fixation of tibial eminence avulsion fractures. Arthroscopy. 2008;24(11):1232-1238.

11. Liljeros K, Werner S, Janarv PM. Arthroscopic fixation of anterior tibial spine fractures with bioabsorbable nails in skeletally immature patients. Am J Sports Med. 2009;37(5):923-928.

12. Wiegand N, Naumov I, Vamhidy L, Not LG. Arthroscopic treatment of tibial spine fracture in children with a cannulated Herbert screw. Knee. 2014;21(2):481-485.

13. Faivre B, Benea H, Klouche S, Lespagnol F, Bauer T, Hardy P. An original arthroscopic fixation of adult’s tibial eminence fractures using the Tightrope® device: a report of 8 cases and review of literature. Knee. 2014;21(4):833-839.

14. Kluemper CT, Snyder GM, Coats AC, Johnson DL, Mair SD. Arthroscopic suture fixation of tibial eminence fractures. Orthopedics. 2013;36(11):e1401-e1406.

15. Ochiai S, Hagino T, Watanabe Y, Senga S, Haro H. One strategy for arthroscopic suture fixation of tibial intercondylar eminence fractures using the Meniscal Viper Repair System. Sports Med Arthrosc Rehabil Ther Technol. 2011;3:17.

16. Bogunovic L, Tarabichi M, Harris D, Wright R. Treatment of tibial eminence fractures: a systematic review. J Knee Surg. 2015;28(3):255-262.

17. Osti L, Buda M, Soldati F, Del Buono A, Osti R, Maffulli N. Arthroscopic treatment of tibial eminence fracture: a systematic review of different fixation methods. Br Med Bull. 2016;118(1):73-90.

18. Gans I, Baldwin KD, Ganley TJ. Treatment and management outcomes of tibial eminence fractures in pediatric patients: a systematic review. Am J Sports Med. 2014;42(7):1743-1750.

19. Anderson CN, Nyman JS, McCullough KA, et al. Biomechanical evaluation of physeal-sparing fixation methods in tibial eminence fractures. Am J Sports Med. 2013;41(7):1586-1594.

20. Berg EE. Pediatric tibial eminence fractures: arthroscopic cannulated screw fixation. Arthroscopy. 1995;11(3):328-331.

21. Mah JY, Otsuka NY, McLean J. An arthroscopic technique for the reduction and fixation of tibial-eminence fractures. J Pediatr Orthop. 1996;16(1):119-121.

22. Vega JR, Irribarra LA, Baar AK, Iniguez M, Salgado M, Gana N. Arthroscopic fixation of displaced tibial eminence fractures: a new growth plate-sparing method. Arthroscopy. 2008;24(11):1239-1243.

23. Lu XW, Hu XP, Jin C, Zhu T, Ding Y, Dai LY. Reduction and fixation of the avulsion fracture of the tibial eminence using mini-open technique. Knee Surg Sports Traumatol Arthrosc. 2010;18(11):1476-1480.

24. Bonin N, Jeunet L, Obert L, Dejour D. Adult tibial eminence fracture fixation: arthroscopic procedure using K-wire folded fixation. Knee Surg Sports Traumatol Arthrosc. 2007;15(7):857-862.

25. Senekovic V, Veselko M. Anterograde arthroscopic fixation of avulsion fractures of the tibial eminence with a cannulated screw: five-year results. Arthroscopy. 2003;19(1):54-61.

26. Brunner S, Vavken P, Kilger R, et al. Absorbable and non-absorbable suture fixation results in similar outcomes for tibial eminence fractures in children and adolescents. Knee Surg Sports Traumatol Arthrosc. 2016;24(3):723-729.

Issue
The American Journal of Orthopedics - 46(4)
Page Number
203-208
Display Headline
Knotless Arthroscopic Reduction and Internal Fixation of a Displaced Anterior Cruciate Ligament Tibial Eminence Avulsion Fracture

A simple algorithm for predicting bacteremia using food consumption and shaking chills: a prospective observational study

Article Type
Changed
Wed, 07/19/2017 - 13:47
Display Headline
A simple algorithm for predicting bacteremia using food consumption and shaking chills: a prospective observational study

Fever in hospitalized patients is a nonspecific finding with many potential causes. Blood cultures (BCs) are commonly obtained prior to commencing parenteral antibiotics in febrile patients. However, as many as 35% to 50% of positive BCs represent contamination with organisms inoculated from the skin into the culture bottles at the time of sample collection.1-3 Such false-positive BCs can lead to unnecessary investigations and treatment.

In a recent review, Coburn et al. identified the severity of chills (graded on an ordinal scale) as the most useful predictor of true bacteremia (positive likelihood ratio [LR], 4.7; 95% confidence interval [CI], 3.0-7.2),4-6 and the absence of systemic inflammatory response syndrome (SIRS) criteria as the best negative indicator of true bacteremia (negative LR, 0.09; 95% CI, 0.03-0.3).6,7 We have also previously reported normal food consumption as a negative indicator of true bacteremia, with a negative predictive value of 98.3%.8 Henderson's Basic Principles of Nursing Care emphasizes the importance of evaluating whether a patient can eat and drink adequately,9 and the evaluation of a patient's food consumption is routine nursing practice in Japan, where it is treated as a vital sign, in contrast to nursing practice in the United States.

However, those data came from a single-center retrospective study relying on the nursing staff's assessment of food consumption and cannot be generalized to larger patient populations. Therefore, the aim of this prospective, multicenter study was to measure the accuracy of food consumption and shaking chills as predictive factors for true bacteremia.

METHODS

Study Design

This was a prospective multicenter observational study (UMIN ID: R000013768) involving 3 hospitals in Tokyo, Japan, that enrolled consecutive patients who had BCs obtained. The study was approved by the ethics committee at Juntendo University Nerima Hospital and at each of the participating centers and was conducted in accordance with the Declaration of Helsinki of 1975, as revised in 1983. We evaluated 2,792 consecutive hospitalized patients (mean age, 68.9 ± 17.1 years; 55.3% men) who had BCs obtained between April 2013 and August 2014, inclusive. The indication for BC acquisition was at the discretion of the treating physician; the study protocol and the indications for BCs are described in detail elsewhere.8 We excluded patients with anorexia-inducing conditions such as gastrointestinal disease, including gastrointestinal bleeding, enterocolitis, gastric ulceration, peritonitis, appendicitis, cholangitis, pancreatitis, diverticulitis, and ischemic colitis, as well as patients receiving chemotherapy for malignancy. True bacteremia was defined as isolation of identical organisms from 2 sets of blood cultures (a set comprising one aerobic and one anaerobic bottle); when only one set of blood cultures was acquired, the result was also considered true bacteremia if the identified pathogen could account for the clinical presentation. Contaminants were defined as organisms common to skin flora, including Bacillus species, coagulase-negative Staphylococcus, Corynebacterium species, and Micrococcus species, in a patient with incompatible clinical features and no risk factors for infection with the isolated organism, without isolation of an identical organism with the same antibiotic susceptibilities from another potentially infected site. Single positive BCs growing organisms unlikely to explain the patient's symptoms were also considered contaminants. Patients with contaminated BCs were excluded from the analyses.
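The classification rules above can be sketched as a small decision function. This is a hypothetical illustration of the stated definitions; the function and argument names are ours, not the study's analysis code:

```python
# Hypothetical sketch of the study's blood-culture classification rules.
# Function and argument names are illustrative only.

def classify_blood_culture(positive_sets: int, explains_presentation: bool) -> str:
    """Classify a positive blood culture per the study definitions."""
    if positive_sets >= 2:
        # Identical organisms isolated from 2 sets (aerobic + anaerobic bottles)
        return "true bacteremia"
    if explains_presentation:
        # A single positive set counts when the identified pathogen
        # accounts for the clinical presentation
        return "true bacteremia"
    # Otherwise the isolate (typically common skin flora such as Bacillus
    # species or coagulase-negative Staphylococcus) is a contaminant
    return "contaminant"
```

Patients whose cultures fell into the contaminant branch were excluded from the analyses.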

 

 

Structure of Reliability Study Procedures

Nurses in the 3 different hospitals performed daily independent food consumption ratings during each patient’s stay. Interrater reliability assessments were conducted in the morning or afternoon, and none of the raters had access to the other nurses’ scores at any time. The study nurses performed simultaneous ratings during these assessments (one interacted with and rated the patient while the other observed and rated the same patient).

Prediction Variables of True Bacteremia


1. Food consumption. Assessment of food consumption has been previously described in detail.8 Briefly, we characterized the patients’ oral intake based on the meal taken immediately prior to the BCs; for example, if a fever developed at 2 pm, lunch consumption was evaluated, and if a fever developed at 2 am, dinner consumption was evaluated. We categorized the patients into 3 groups: low food consumption (<50% consumed), moderate food consumption (50% to 80% consumed), and high food consumption (>80% consumed). To simplify our prediction rule, we subsequently divided food consumption into just 2 groups: high food consumption, referred to as the “normal food consumption group,” and the combination of low and moderate food consumption, referred to as the “poor food consumption group.”

2. Chills. As done previously, the physician evaluated the patient for a history of chills at the time of BCs and classified the patients into 1 of 4 grades4,5: “no chills,” the absence of any chills; “mild chills,” feeling cold, equivalent to needing an outer jacket; “moderate chills,” feeling very cold, equivalent to needing a thick blanket; and “shaking chills,” feeling extremely cold with rigors and generalized bodily shaking, even under a thick blanket. To distinguish between those patients who had shaking chills and those who did not, we divided the patients into 2 groups: the “shaking chills group” and the combination of none, mild, and moderate chills, referred to as the “negative shaking chills group.”

3. Other predictive variables. We considered the following additional predictive variables: age, gender, axillary body temperature (BT), heart rate (HR), systolic blood pressure (SBP), respiratory rate (RR), white blood cell count (WBC), and serum C-reactive protein level (CRP). These variables were obtained immediately prior to the BCs. We defined SIRS based on standard criteria (HR >90 beats/min, RR >20 breaths/min, BT <36°C or >38°C, and WBC <4 × 10³/μL or >12 × 10³/μL). Patients were subcategorized by age into 2 groups (≤69 years and ≥70 years). CRP levels were dichotomized as >10.0 mg/dL or ≤10.0 mg/dL. We reviewed the patients’ charts to determine whether they had received antibiotics. In the case of walk-in patients, we asked whether they had visited a clinic and, if so, whether any antibiotic therapy had been prescribed.
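The dichotomizations in items 1 to 3 can be expressed compactly. The following is a minimal sketch using the cutpoints stated above; the function names are our own illustration:

```python
# Illustrative encoding of the dichotomized predictors described above.
# Cutpoints follow the Methods; function names are our own.

def food_group(percent_consumed: float) -> str:
    """'normal' = high consumption (>80% of the last meal); otherwise 'poor'."""
    return "normal" if percent_consumed > 80 else "poor"

def chills_group(grade: str) -> str:
    """'shaking' vs the 'negative' group (no, mild, or moderate chills)."""
    return "shaking" if grade == "shaking" else "negative"

def sirs_criteria_met(hr: float, rr: float, bt: float, wbc: float) -> int:
    """Count of the 4 SIRS criteria used in the study (WBC in 10^3/uL)."""
    return sum([
        hr > 90,               # heart rate, beats/min
        rr > 20,               # respiratory rate, breaths/min
        bt < 36 or bt > 38,    # axillary body temperature, deg C
        wbc < 4 or wbc > 12,   # white blood cell count
    ])
```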

Statistical Analysis

Characteristics of Patients
Table 1

Continuous variables are presented as the mean with the associated standard deviation (SD). All potential variables predictive of true bacteremia are shown in Table 1. The variables were dichotomized at clinically meaningful thresholds and used as potential risk-adjustment variables. We calculated the sensitivity, specificity, and positive and negative predictive values for each criterion. Multiple logistic regression analysis was used to select components significantly associated with true bacteremia (the level of statistical significance, determined with maximum likelihood methods, was set at P < .05). To visualize and quantify other aspects of the prediction of true bacteremia, a recursive partitioning analysis (RPA) was used to build a decision tree model for true bacteremia. This nonparametric regression method produces a classification tree through a series of top-down binary splits. The tree-building process starts by considering a set of predictive variables and selects the variable that produces 2 subsets of participants with the greatest purity. Two factors are considered when splitting a node into its daughter nodes: the goodness of the split and the amount of impurity in the daughter nodes. The splitting process is repeated until further partitioning is no longer possible and the terminal nodes have been reached. Details of this method are discussed in Monte Carlo Calibration of Distributions of Partition Statistics (www.jmp.com).
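One split-selection step of such a partitioning can be illustrated with weighted Gini impurity, a common purity measure, applied to patient counts reconstructed from the Results below. Note that JMP's partition platform, which the authors used, ranks candidate splits by a different (chi-square-based) criterion, so this sketch shows the mechanics rather than reproducing Figure 2:

```python
# One recursive-partitioning step on the two binary predictors, using
# counts reconstructed from the Results section. Weighted Gini impurity
# is used here only as an illustrative split criterion; it is not the
# criterion of the JMP partition platform used by the authors.

def weighted_gini(groups):
    """groups: (n_patients, n_bacteremic) for each daughter node."""
    total = sum(n for n, _ in groups)
    impurity = 0.0
    for n, pos in groups:
        p = pos / n
        impurity += (n / total) * 2 * p * (1 - p)  # binary Gini = 2p(1-p)
    return impurity

# Candidate split 1: poor vs normal food consumption (1,270 vs 577 patients)
gini_food = weighted_gini([(1270, 207), (577, 14)])
# Candidate split 2: shaking chills vs none/mild/moderate (132 vs 1,715)
gini_chills = weighted_gini([(132, 52), (1715, 169)])
```

On these counts, plain Gini impurity actually scores the chills split slightly lower (purer daughter nodes) than the food split, which illustrates how different split criteria can order a tree differently.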

Differences were considered significant at P < .05, and all statistical tests were 2-tailed. Statistical analyses were conducted by a physician (KI) and an independent statistician (JM) using the SPSS® v.16.0 software package (SPSS Inc., Chicago, IL) and JMP® version 8.0.2 (SAS Institute, Cary, NC).

RESULTS

Patient Characteristics

Study population. During the study period, 2,792 patients were eligible for inclusion.
Figure 1

Of the 2,792 patients who met the inclusion criteria, 849 were excluded (see Figure 1 for the flow diagram). Among the remaining 1,943 patients, 317 had positive BCs, of which 221 (69.7%) were considered true-positive and 96 (30.3%) were considered contaminated. After excluding these 96 patients, the 221 patients with true bacteremia (true bacteremic group) were compared with 1,626 nonbacteremic patients (nonbacteremic group; Figure 1). The baseline characteristics of the subjects are shown in Table 1. The mean BT was 38.4 ± 1.2°C in the true bacteremic group and 37.9 ± 1.0°C in the nonbacteremic group; the mean serum CRP level was 11.6 ± 9.6 mg/dL and 7.3 ± 6.9 mg/dL, respectively. The true bacteremic group included 6 afebrile patients and 27 patients without leukocytosis. The pathogens most frequently identified from the true-positive BCs were Escherichia coli (n = 59, 26.7%), including extended-spectrum beta-lactamase-producing species; Staphylococcus aureus (n = 36, 16.3%), including methicillin-resistant S aureus; and Klebsiella pneumoniae (n = 22, 10.0%; Supplemental Table 1).

 

 

The underlying clinical diagnoses in the true bacteremic group included urinary tract infection (UTI), pneumonia, abscess, catheter-related bloodstream infection (CRBSI), cellulitis, osteomyelitis, infective endocarditis (IE), chorioamnionitis, iatrogenic infection at hemodialysis puncture sites, bacterial meningitis, septic arthritis, and infection of unknown cause (Supplemental Table 2).

Interrater Reliability Testing of Food Consumption

To assess the reliability of the food consumption evaluations, patients (separate from the main study) were selected randomly during their hospital stays and evaluated independently by 2 nurses at each of the 3 hospitals. The kappa scores for agreement between the nurses at the 3 hospitals were 0.83 (95% CI, 0.63-0.88), 0.90 (95% CI, 0.80-0.99), and 0.80 (95% CI, 0.67-0.99), respectively; thus, the interrater reliability of the nurses’ food consumption evaluation was very high at all participating hospitals (Supplemental Table 3).
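Cohen's kappa, the agreement statistic reported here, can be computed from paired ratings as follows. The implementation is a standard from-scratch sketch; the study's own paired ratings are not reproduced:

```python
# Cohen's kappa for two raters' food-consumption categories, computed
# from scratch. The kappa values reported in the text (0.83, 0.90, 0.80)
# came from the study's own paired ratings, which are not shown here.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same patients."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of patients rated identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independent raters with the same marginals
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A kappa of 1.0 indicates perfect agreement; values above roughly 0.8, as observed at all 3 hospitals, are conventionally read as almost perfect agreement.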

Food Consumption

The low, moderate, and high food consumption groups consisted of 964 (52.1%), 306 (16.6%), and 577 (31.2%) patients, respectively (Table 1). Of these, 174 (18.0%), 33 (10.8%), and 14 (2.4%) patients, respectively, had true bacteremia. The presence of poor food consumption had a sensitivity of 93.7% (95% CI, 89.4%-97.9%), specificity of 34.6% (95% CI, 33.0%-36.2%), and a positive LR of 1.43 (95% CI, 1.37-1.50) for predicting true bacteremia. Conversely, the absence of poor food consumption (ie, normal food consumption) had a negative LR of 0.18 (95% CI, 0.17-0.19).

Chills

The no, mild, moderate, and shaking chills groups consisted of 1,514 (82.0%), 148 (8.0%), 53 (2.9%), and 132 (7.1%) patients, respectively (Table 1). Of these, 136 (9.0%), 25 (16.9%), 8 (15.1%), and 52 (39.4%) patients, respectively, had true bacteremia. The presence of shaking chills had a sensitivity of 23.5% (95% CI, 22.5%-24.6%), a specificity of 95.1% (95% CI, 90.7%-99.4%), and a positive LR of 4.78 (95% CI, 4.56–5.00) for predicting true bacteremia. Conversely, the absence of shaking chills had a negative LR of 0.80 (95% CI, 0.77-0.84).

Prediction Model for True Bacteremia

Components of Predicting True Bacteremia Identified by Multiple Logistic Regression Method
Table 2

The components identified as significantly related to true bacteremia by multiple logistic regression analysis are shown in Table 2. The significant predictors of true bacteremia were shaking chills (odds ratio [OR], 5.6; 95% CI, 3.6-8.6; P < .01), SBP <90 mm Hg (OR, 3.1; 95% CI, 1.6-5.7; P < .01), CRP level >10.0 mg/dL (OR, 2.2; 95% CI, 1.6-3.1; P < .01), BT <36°C or >38°C (OR, 1.8; 95% CI, 1.3-2.6; P < .01), WBC <4 × 10³/μL or >12 × 10³/μL (OR, 1.6; 95% CI, 1.2-2.3; P = .003), HR >90 bpm (OR, 1.5; 95% CI, 1.1-2.1; P = .021), and female sex (OR, 1.4; 95% CI, 1.0-1.9; P = .036). An RPA to create an ideal prediction model for patients with true bacteremia or nonbacteremia is shown in Figure 2. The original group consisted of 1,847 patients, including 221 patients with true bacteremia. The pretest probability of true bacteremia was 2.4% (14/577) for those with normal food consumption (Group 1) and 2.4% (13/552) for those with both normal food consumption and the absence of shaking chills (Group 2). Conversely, the pretest probability of true bacteremia was 16.3% (207/1270) for those with poor food consumption and 47.7% (51/107) for those with both poor food consumption and shaking chills. The patients with true bacteremia despite normal food consumption and no shaking chills comprised 4 cases each of CRBSI and UTI, 2 cases of osteomyelitis, 1 case of IE, 1 case of chorioamnionitis, and 1 case in which the focus was unknown (Supplemental Table 4).
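The 2.4% figure for Group 1 is internally consistent with the likelihood ratios reported above: applying the negative LR for food consumption to the cohort's pretest odds recovers the node's probability. A sketch of that arithmetic (our check, using counts from the Results):

```python
# Checking that pretest odds x negative LR reproduces the 2.4% probability
# of true bacteremia in the normal-food-consumption node (Group 1).

bacteremic, nonbacteremic = 221, 1626
pretest_odds = bacteremic / nonbacteremic

# Negative LR of poor food consumption, i.e. (1 - sensitivity) / specificity,
# from the same 2x2 counts: 14/221 bacteremic and 563/1626 nonbacteremic
# patients had normal intake.
neg_lr = (14 / 221) / (563 / 1626)

posttest_odds = pretest_odds * neg_lr
posttest_prob = posttest_odds / (1 + posttest_odds)  # equals 14/577, i.e. ~2.4%
```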

Decision tree obtained from recursive partitioning analysis for predicting true bacteremia in patients with suspected true bacteremia.
Figure 2

DISCUSSION

In this observational study, we evaluated whether a simple algorithm using food consumption and shaking chills could identify patients with true bacteremia. This 2-item screening checklist (nursing assessment of food consumption plus shaking chills) had excellent statistical properties as a brief screening instrument for true bacteremia.

We have now prospectively validated that food consumption, as assessed by nurses, is a reliable predictor of true bacteremia.8 A previous single-center retrospective study showed similar findings, but these could not be generalized across institutions because of the limited nature of that study. In this multicenter study, we used 2 statistical methods to reduce selection bias. First, we performed a kappa analysis across the hospitals to evaluate the interrater reliability of the evaluation of food consumption. Second, we used an RPA (Figure 2), also known as a decision tree model. RPA is a step-by-step process by which a decision tree is constructed by either splitting or not splitting each node on the tree into 2 daughter nodes.10 Using this method, we generated an ideal approach to predicting true bacteremia from food consumption and shaking chills. After adjusting for food consumption and shaking chills, the number of patients with true bacteremia decreased sequentially, from 221 in the overall cohort to only 13 in Group 2.

Appetite is influenced by many factors that are integrated by the brain, most importantly within the hypothalamus. Signals that impinge on the hypothalamic center include neural afferents, hormones, cytokines, and metabolites.11 These factors elicit “sickness behavior,” which includes a decrease in food-motivated behavior.12 Furthermore, exposure to pathogenic bacteria increases serotonin, which has been shown to decrease metabolism in amphid neurons by transcriptional and post-transcriptional mechanisms.13 These mechanisms help explain why bacteremic patients lose their appetites while nonbacteremic patients retain them. Shaking chills are a well-known predictor of true bacteremia.4,5 Several cytokines, including tumor necrosis factor-alpha and interleukins 6 and 10, may be related to shaking chills.14 In their review, Coburn et al. found shaking chills to be useful for identifying true bacteremia (positive LR, 4.7; 95% CI, 3.0-7.2),5,6 similar to our study. In our study, the pretest probability of true bacteremia was the same whether shaking chills was included or not (ie, 2.4% for normal food consumption and 2.4% for normal food consumption plus the absence of shaking chills). This implies that assessing shaking chills adds little to food assessment alone when trying to rule out bacteremia; rather, shaking chills are more important for ruling bacteremia in. Moreover, a recent retrospective study found that age >60 years (OR, 2.75; 95% CI, 1.23-6.48; P = .015), female sex (OR, 2.21; 95% CI, 1.07-4.67; P = .038), heart rate >90 bpm (OR, 5.18; 95% CI, 2.25-12.48; P < .001), and neutrophil percentage >80% (OR, 3.61; 95% CI, 1.71-8.00; P = .001) were independent risk factors for true bacteremia.15 Conversely, the lack of SIRS criteria has been reported as the best negative indicator of true bacteremia, with a negative LR of 0.09 (95% CI, 0.03-0.3).6,7 However, evaluating the SIRS criteria requires the acquisition of laboratory data.
To our knowledge, no previous prospective studies have evaluated food consumption as a predictor of true bacteremia. This extremely simple model enables a physician to make a rapid bedside estimation of the risk of true bacteremia.

The strengths of this study include its relatively large sample size, multicenter design, uniformity of data collection across sites, and completeness of data collection from study participants. All of these factors allowed for a robust analysis.

However, this study has several limitations. First, the physicians or nurses asked the patients about the presence of shaking chills when they obtained the BCs. It may be difficult for patients, especially elderly patients, to provide this information promptly and accurately, and some patients did not call the nurse when they had shaking chills, so the chills were not witnessed by a healthcare provider. We mitigated this by using a specific definition of shaking chills: a feeling of being extremely cold with rigors and generalized bodily shaking, even under a thick blanket. Second, this algorithm is not applicable to patients with immunosuppressed states because none of the hospitals involved in this study perform bone marrow or organ transplantation. Third, although we included patients with dementia in our cohort, we did not specifically evaluate the performance of the algorithm in these patients; it is possible that the algorithm would not perform well in this subset owing to their unreliable appetite and food intake. Fourth, some medications may affect appetite, leading to reduced food consumption. Although we did not consider medication details in this study, we found that the pretest probability of true bacteremia was low for patients with normal food consumption regardless of whether a medication affected their appetite. Whether medications truly affect patients’ appetites concurrently with bacteremia would need to be addressed in a future study.

 

 

CONCLUSION

In conclusion, we have established a simple algorithm to identify patients with suspected true bacteremia who require the acquisition of blood cultures. This extremely simple model can enable physicians to make a rapid bedside estimation of the risk of true bacteremia.

Acknowledgment

The authors thank Drs. H. Honda and S. Saint, and Ms. A. Okada for their helpful discussions with regard to this study; Ms. M. Takigawa for the collection of data; and Ms. T. Oguri for providing infectious disease consultation on the pathogenicity of the identified organisms.

Disclosure

This work was supported by JSPS KAKENHI Grant Numbers 15K19294 (to TK) and 20590840 (to KI) from the Japan Society for the Promotion of Science. The authors report no potential conflicts of interest relevant to this article.

References

1. Weinstein MP, Towns ML, Quartey SM, et al. The clinical significance of positive blood cultures in the 1990s: a prospective comprehensive evaluation of the microbiology, epidemiology, and outcome of bacteremia and fungemia in adults. Clin Infect Dis. 1997;24:584-602.
2. Strand CL, Wajsbort RR, Sturmann K. Effect of iodophor vs iodine tincture skin preparation on blood culture contamination rate. JAMA. 1993;269:1004-1006.
3. Bates DW, Goldman L, Lee TH. Contaminant blood cultures and resource utilization. The true consequences of false-positive results. JAMA. 1991;265:365-369.
4. Tokuda Y, Miyasato H, Stein GH. A simple prediction algorithm for bacteraemia in patients with acute febrile illness. QJM. 2005;98:813-820.
5. Tokuda Y, Miyasato H, Stein GH, Kishaba T. The degree of chills for risk of bacteremia in acute febrile illness. Am J Med. 2005;118:1417.
6. Coburn B, Morris AM, Tomlinson G, Detsky AS. Does this adult patient with suspected bacteremia require blood cultures? JAMA. 2012;308:502-511.
7. Shapiro NI, Wolfe RE, Wright SB, Moore R, Bates DW. Who needs a blood culture? A prospectively derived and validated prediction rule. J Emerg Med. 2008;35:255-264.
8. Komatsu T, Onda T, Murayama G, et al. Predicting bacteremia based on nurse-assessed food consumption at the time of blood culture. J Hosp Med. 2012;7:702-705.
9. Henderson V. Basic Principles of Nursing Care. 2nd ed. Silver Spring, MD: American Nurses Association; 1969.
10. Therneau T, Atkinson EJ. An Introduction to Recursive Partitioning Using the RPART Routines. Mayo Foundation; 2017. https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf. Accessed May 5, 2017.
11. Pavlov VA, Wang H, Czura CJ, Friedman SG, Tracey KJ. The cholinergic anti-inflammatory pathway: a missing link in neuroimmunomodulation. Mol Med. 2003;9:125-134.
12. Hansen MK, Nguyen KT, Fleshner M, et al. Effects of vagotomy on serum endotoxin, cytokines, and corticosterone after intraperitoneal lipopolysaccharide. Am J Physiol Regul Integr Comp Physiol. 2000;278:R331-R336.
13. Zhang Y, Lu H, Bargmann CI. Pathogenic bacteria induce aversive olfactory learning in Caenorhabditis elegans. Nature. 2005;438:179-184.
14. Van Dissel JT, Schijf V, Vogtlander N, Hoogendoorn M, van’t Wout J. Implications of chills. Lancet. 1998;352:374.
15. Fukui S, Uehara Y, Fujibayashi K, et al. Bacteraemia predictive factors among general medical inpatients: a retrospective cross-sectional survey in a Japanese university hospital. BMJ Open. 2016;6:e010527.

Issue
Journal of Hospital Medicine 12(7)
Page Number
510-516


Interrater Reliability Testing of Food Consumption

Patients were evaluated during their hospital stays. The interrater reliability of the evaluation of food consumption was very high across all participating hospitals (Supplemental Table 3). To assess the reliability of the evaluations of food consumption, patients (separate from this main study) were selected randomly and evaluated independently by 2 nurses in 3 different hospitals. The kappa scores of agreement between the nurses at the 3 different hospitals were 0.83 (95% CI, 0.63-0.88), 0.90 (95% CI, 0.80-0.99), and 0.80 (95% CI, 0.67-0.99), respectively. The interrater reliability of food consumption evaluation by the nurses was very high at all participating hospitals.

Food Consumption

The low, moderate, and high food consumption groups consisted of 964 (52.1%), 306 (16.6%), and 577 (31.2%) patients, respectively (Table 1). Of these, 174 (18.0%), 33 (10.8%), and 14 (2.4%) patients, respectively, had true bacteremia. The presence of poor food consumption had a sensitivity of 93.7% (95% CI, 89.4%-97.9%), specificity of 34.6% (95% CI, 33.0%-36.2%), and a positive LR of 1.43 (95% CI, 1.37-1.50) for predicting true bacteremia. Conversely, the absence of poor food consumption (ie, normal food consumption) had a negative LR of 0.18 (95% CI, 0.17-0.19).

Chills

The no, mild, moderate, and shaking chills groups consisted of 1,514 (82.0%), 148 (8.0%), 53 (2.9%), and 132 (7.1%) patients, respectively (Table 1). Of these, 136 (9.0%), 25 (16.9%), 8 (15.1%), and 52 (39.4%) patients, respectively, had true bacteremia. The presence of shaking chills had a sensitivity of 23.5% (95% CI, 22.5%-24.6%), a specificity of 95.1% (95% CI, 90.7%-99.4%), and a positive LR of 4.78 (95% CI, 4.56–5.00) for predicting true bacteremia. Conversely, the absence of shaking chills had a negative LR of 0.80 (95% CI, 0.77-0.84).

Prediction Model for True Bacteremia

Components of Predicting True Bacteremia Identified by Multiple Logistic Regression Method
Table 2

The components identified as significantly related to true bacteremia by multiple logistic regression analysis are indicated in Table 2. The significant predictors of true bacteremia were shaking chills (odds ratio [OR], 5.6; 95% CI, 3.6-8.6; P < .01), SBP <90 mmHg (OR, 3.1; 95% CI, 1.6-5.7; P < 01), CRP levels >10.0 mg/dL (OR, 2.2; 95% CI, 1.6-3.1; P < .01), BT <36°C or >38°C (OR, 1.8; 95% CI, 1.3-2.6; P < .01), WBC <4 × 103/μL or >12 × 103/μL (OR, 1.6; 95% CI, 1.2-2.3; P = .003), HR >90 bpm (OR, 1.5; 95% CI, 1.1-2.1; P = .021), and female (OR, 1.4; 95% CI, 1.0-1.9; P = .036). An RPA to create an ideal prediction model for patients with true bacteremia or nonbacteremia is shown in Figure 2. The original group consisted of 1,847 patients, including 221 patients with true bacteremia. The pretest probability of true bacteremia was 2.4% (14/577) for those with normal food consumption (Group 1) and 2.4% (13/552) for those with both normal food consumption and the absence of shaking chills (Group 2). Conversely, the pretest probability of true bacteremia was 16.3% (207/1270) for those with poor food consumption and 47.7% (51/107) for those with both poor food consumption and shaking chills. The patients with true bacteremia with normal food consumption and without shaking chills consisted of 4 cases of CRBSI and UTI, 2 cases of osteomyelitis, 1 case of IE, 1 case of chorioamnionitis, and 1 case for which the focus was unknown (Supplemental Table 4).

Decision tree obtained from recursive partitioning analysis for predicting true bacteremia in patients with suspected true bacteremia.
Figure 2

DISCUSSION

In this observational study, we evaluated if a simple algorithm using food consumption and shaking chills was useful for assessing whether a patient had true bacteremia. A 2-item screening checklist (nursing assessment of food consumption and shaking chills) had excellent statistical properties as a brief screening instrument for true bacteremia.

We have prospectively validated that food consumption, as assessed by nurses, is a reliable predictor of true bacteremia.8 A previous single-center retrospective study showed similar findings, but these could not be generalized across all institutions because of the limited nature of the study. In this multicenter study, we used 2 statistical methods to reduce selection bias. First, we performed a kappa analysis across the hospitals to evaluate the interrater reliability of the evaluation of food consumption. Second, we used an RPA (Figure 2), also known as a decision tree model. RPA is a step-by-step process by which a decision tree is constructed by either splitting or not splitting each node on the tree into 2 daughter nodes.10 By using this method, we successfully generated an ideal approach to predict true bacteremia using food consumption and shaking chills. After adjusting for food consumption and shaking chills, groups 1 to 2 had sequentially decreasing diagnoses of true bacteremia, varying from 221 patients to only 13 patients.

Appetite is influenced by many factors that are integrated by the brain, most importantly within the hypothalamus. Signals that impinge on the hypothalamic center include neural afferents, hormones, cytokines, and metabolites.11 These factors elicit “sickness behavior,” which includes a decrease in food-motivated behavior.12 Furthermore, exposure to pathogenic bacteria increases serotonin, which has been shown to decrease metabolism in amphid neurons by transcriptional and post-transcriptional mechanisms.13 Therefore, nonbacteremic patients retain their appetites. Shaking chills are a well-known predictor of true bacteremia.4,5 Several cytokines, including tumor necrosis factor-alpha and interleukins 6 and 10, may be related to shaking chills.14 Coburn et al. reviewed that shaking chills appear to be useful for identifying true bacteremia (positive LR, 4.7; 95% CI, 3.0-7.2),5,6 similar to our study. In our study, the pretest probability of true bacteremia was the same whether shaking chills was included or not (ie, 2.4% for normal food consumption and 2.4% for normal food consumption plus absence of shaking chills). This would seem to imply that the assessment of shaking chills does not appear to add anything over food assessment alone when trying to rule out bacteremia. Rather, shaking chills seem more important for ruling in bacteremia rather than ruling it out. Moreover, the recent retrospective study revealed that age >60 years (OR = 2.75, 95% CI, 1.23-6.48, P = .015), female sex (OR = 2.21, 95% CI, 1.07- 4.67, P = .038), heart rate >90 bpm (OR = 5.18, 95% CI, 2.25-12.48, P < .001) and neutrophil percentage >80% (OR = 3.61, 95% CI, 1.71- 8.00, P = .001) were independent risk factors for true bacteremia.15 Conversely, the lack of the SIRS criteria was reported as the best negative indicator of true bacteremia with a negative LR of 0.09 (95% CI, 0.03-0.3).6,7 However, the evaluation of SIRS criteria requires the acquisition of laboratory data. 
To our knowledge, no previous prospective studies have evaluated food consumption in terms of a risk prediction for true bacteremia. This extremely simple model can enable a physician to make a rapid bedside estimation of the risk of true bacteremia.

The strengths of this study include its relatively large sample size, multicenter design, uniformity of data collection across sites, and completeness of data collection from study participants. All of these factors allowed for a robust analysis.

However, there are several limitations of this study. First, the physicians or nurses asked the patients about the presence of shaking chills when they obtained the BCs. It may be difficult for patients, especially elderly patients, to provide this information promptly and accurately. Some patients did not call the nurse when they had shaking chills, and the chills were not witnessed by a healthcare provider. However, we used a more specific definition for shaking chills: a feeling of being extremely cold with rigors and generalized bodily shaking, even under a thick blanket. Second, this algorithm is not applicable to patients with immunosuppressed states because none of the hospitals involved in this study perform bone marrow or organ transplantation. Third, although we included patients with dementia in our cohort, we did not specifically evaluate performance of the algorithm in patients with this medical condition. It is possible that the algorithm would not perform well in this subset of patients owing to their unreliable appetite and food intake. Fourth, some medications may affect appetite, leading to reduced food consumption. Although we have not considered the details of medications in this study, we found that the pretest probability of true bacteremia was low for those patients with normal food consumption regardless of whether the medication affected their appetites or not. However, the question of whether medications truly affect patients’ appetites concurrently with bacteremia would need to be specifically addressed in a future study.

 

 

CONCLUSION

In conclusion, we have established a simple algorithm to identify patients with suspected true bacteremia who require the acquisition of blood cultures. This extremely simple model can enable physicians to make a rapid bedside estimation of the risk of true bacteremia.

Acknowledgment

The authors thank Drs. H. Honda and S. Saint, and Ms. A. Okada for their helpful discussions with regard to this study; Ms. M. Takigawa for the collection of data; and Ms. T. Oguri for providing infectious disease consultation on the pathogenicity of the identified organisms.

Disclosure

This work was supported by JSPS KAKENHI Grant Number 15K19294 (to TK) and 20590840 (to KI) from the Japan Society for the Promotion of Science. The authors report no potential conflicts of interest relevant to this article.

Fever in hospitalized patients is a nonspecific finding with many potential causes. Blood cultures (BC) are commonly obtained prior to commencing parenteral antibiotics in febrile patients. However, as many as 35% to 50% of positive BCs represent a contamination with organisms inoculated from the skin into culture bottles at the time of sample collection.1-3 Such results represent false-positive BCs that can lead to unnecessary investigations and treatment.

In a recent review, Coburn et al. identified the severity of chills (graded on an ordinal scale) as the most useful predictor of true bacteremia (positive likelihood ratio [LR], 4.7; 95% confidence interval [CI], 3.0-7.2),4-6 and the absence of the systemic inflammatory response syndrome (SIRS) criteria as the best negative indicator of true bacteremia (negative LR, 0.09; 95% CI, 0.03-0.3).6,7 We have also previously reported normal food consumption as a negative indicator of true bacteremia, with a negative predictive value of 98.3%.8 Henderson's Basic Principles of Nursing Care emphasizes the importance of evaluating whether a patient can eat and drink adequately,9 and the evaluation of a patient's food consumption is routine nursing practice in Japan, where it is treated as a vital sign, in contrast to nursing practices in the United States.

However, these data were the result of a single-center retrospective study using the nursing staff’s assessment of food consumption, and they cannot be generalized to larger patient populations. Therefore, the aim of this prospective, multicenter study was to measure the accuracy of food consumption and shaking chills as predictive factors for true bacteremia.

METHODS

Study Design

This was a prospective multicenter observational study (UMIN ID: R000013768) involving 3 hospitals in Tokyo, Japan, that enrolled consecutive patients who had BCs obtained. This study was approved by the ethical committee at Juntendo University Nerima Hospital and each of the participating centers, and the study was conducted in accordance with the Declaration of Helsinki of 1975, as revised in 1983. We evaluated 2,792 consecutive hospitalized patients (mean age, 68.9 ± 17.1 years; 55.3% men) who had BCs obtained between April 2013 and August 2014, inclusive. The indication for BC acquisition was at the discretion of the treating physician. The study protocol and the indication for BCs are described in detail elsewhere.8 We excluded patients with anorexia-inducing conditions such as gastrointestinal disease, including gastrointestinal bleeding, enterocolitis, gastric ulceration, peritonitis, appendicitis, cholangitis, pancreatitis, diverticulitis, and ischemic colitis. We also excluded patients receiving chemotherapy for malignancy. In this study, true bacteremia was defined as identical organisms isolated from 2 sets of blood cultures (a set refers to one aerobic bottle and one anaerobic bottle). Moreover, when only one set of blood cultures was acquired, we also defined the result as true bacteremia if the identified pathogen could account for the clinical presentation. Briefly, contaminants were defined as organisms common to skin flora, including Bacillus species, coagulase-negative Staphylococcus, Corynebacterium species, and Micrococcus species, without isolation of an identical organism with the same antibiotic susceptibilities from another potentially infected site, in a patient with incompatible clinical features and no risk factors for infection with the isolated organism. Single BCs that were positive for organisms that were unlikely to explain the patient's symptoms were also considered contaminants.
Patients with contaminated BCs were excluded from the analyses.
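
The classification rules above can be expressed as a short decision function. This is an illustrative sketch only, not the study's actual procedure: the function name and its boolean inputs are hypothetical, and the full contaminant criteria (identical antibiotic susceptibilities, risk factors for infection) are simplified.

```python
# Illustrative classification of a positive blood culture per the study
# definitions (hypothetical function name and simplified criteria).
SKIN_FLORA = {
    "Bacillus species",
    "coagulase-negative Staphylococcus",
    "Corynebacterium species",
    "Micrococcus species",
}

def classify_positive_culture(organism: str, sets_positive: int,
                              explains_presentation: bool) -> str:
    """Return 'true bacteremia', 'contaminant', or 'indeterminate'."""
    if sets_positive >= 2:
        # Identical organisms isolated from 2 sets of blood cultures.
        return "true bacteremia"
    if explains_presentation:
        # A single positive set whose pathogen accounts for the clinical picture.
        return "true bacteremia"
    if organism in SKIN_FLORA:
        # Common skin flora without supporting clinical features.
        return "contaminant"
    return "indeterminate"
```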


Structure of Reliability Study Procedures

Nurses in the 3 different hospitals performed daily independent food consumption ratings during each patient’s stay. Interrater reliability assessments were conducted in the morning or afternoon, and none of the raters had access to the other nurses’ scores at any time. The study nurses performed simultaneous ratings during these assessments (one interacted with and rated the patient while the other observed and rated the same patient).

Prediction Variables of True Bacteremia


1. Food consumption. Assessment of food consumption has been previously described in detail.8 Briefly, we characterized the patients' oral intake based on the meal taken immediately prior to the BCs. For example, if a fever developed at 2 pm, lunch consumption was evaluated. If a fever developed at 2 am, dinner consumption was evaluated. We categorized the patients into 3 groups: low food consumption (<50% consumed), moderate food consumption (50% to 80% consumed), and high food consumption (>80% consumed). To simplify our prediction rule, we subsequently divided food consumption into just 2 groups: high food consumption, referred to as the "normal food consumption group," and the combination of low and moderate food consumption, referred to as the "poor food consumption group."
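
The grading and subsequent dichotomization described above can be sketched as follows. The function names are hypothetical, and treating intake of exactly 50% or 80% as moderate is our assumption, since the text leaves the boundaries ambiguous.

```python
# Sketch of the 3-level grading and 2-level collapse described above
# (hypothetical function names; boundary handling is an assumption).
def food_consumption_group(percent_consumed: float) -> str:
    """Grade the meal taken immediately prior to the blood cultures."""
    if percent_consumed < 50:
        return "low"
    if percent_consumed <= 80:
        return "moderate"
    return "high"

def dichotomize(group: str) -> str:
    """Collapse to the 2-level rule: high -> normal, low/moderate -> poor."""
    return "normal" if group == "high" else "poor"
```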

2. Chills. As done previously, the physician evaluated the patient for a history of chills at the time of BCs and classified the patients into 1 of 4 grades4,5: “no chills,” the absence of any chills; “mild chills,” feeling cold, equivalent to needing an outer jacket; “moderate chills,” feeling very cold, equivalent to needing a thick blanket; and “shaking chills,” feeling extremely cold with rigors and generalized bodily shaking, even under a thick blanket. To distinguish between those patients who had shaking chills and those who did not, we divided the patients into 2 groups: the “shaking chills group” and the combination of none, mild, and moderate chills, referred to as the “negative shaking chills group.”

3. Other predictive variables. We considered the following additional predictive variables: age, gender, axillary body temperature (BT), heart rate (HR), systolic blood pressure (SBP), respiratory rate (RR), white blood cell count (WBC), and serum C-reactive protein level (CRP). These predictive variables were obtained immediately prior to the BCs. We defined SIRS based on standard criteria (HR >90 beats/min, RR >20 breaths/min, BT <36°C or >38°C, and WBC <4 × 10³/μL or >12 × 10³/μL). Patients were subcategorized by age into 2 groups (≤69 years and ≥70 years). CRP levels were dichotomized as >10.0 mg/dL or ≤10.0 mg/dL. We reviewed the patients' charts to determine whether they had received antibiotics. In the case of walk-in patients, we interviewed the patients regarding whether they had visited a clinic; if they had, they were questioned as to whether any antibiotic therapy had been prescribed.
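
As a sketch, the SIRS check described above could be coded as below. The function name and argument units are our assumptions; the standard definition counts a patient as SIRS-positive when at least 2 of the 4 criteria are met.

```python
# Sketch of the standard SIRS criteria (hypothetical function name):
# a patient meets SIRS when >=2 of the 4 criteria are satisfied.
def meets_sirs(hr: float, rr: float, bt: float, wbc: float) -> bool:
    """hr in beats/min, rr in breaths/min, bt in deg C, wbc in 10^3/uL."""
    criteria = [
        hr > 90,
        rr > 20,
        bt < 36.0 or bt > 38.0,
        wbc < 4.0 or wbc > 12.0,
    ]
    return sum(criteria) >= 2
```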

Statistical Analysis

Table 1. Characteristics of Patients

Continuous variables are presented as the mean with the associated standard deviation (SD). All potential variables predictive of true bacteremia are shown in Table 1. The variables were dichotomized by clinically meaningful thresholds and used as potential risk-adjustment variables. We calculated the sensitivity, specificity, and positive and negative predictive values for each criterion. Multiple logistic regression analysis was used to select components that were significantly associated with true bacteremia (the level of statistical significance determined with maximum likelihood methods was set at P < .05). To visualize and quantify other aspects of the prediction of true bacteremia, a recursive partitioning analysis (RPA) was used to build a decision tree model for true bacteremia. This nonparametric regression method produces a classification tree through a series of top-down binary splits. The tree-building process starts by considering a set of predictive variables and selects the variable that produces 2 subsets of participants with the greatest purity. Two factors are considered when splitting a node into its daughter nodes: the goodness of the split and the amount of impurity in the daughter nodes. The splitting process is repeated until further partitioning is no longer possible and the terminal nodes have been reached. Details on this method are discussed in Monte Carlo Calibration of Distributions of Partition Statistics (www.jmp.com).
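
The node-splitting step of the RPA can be illustrated with a minimal sketch: among candidate binary predictors, choose the one whose split yields daughter nodes with the lowest weighted Gini impurity. This is a simplified stand-in for the JMP implementation, with hypothetical function names.

```python
# Minimal sketch of one recursive-partitioning split (hypothetical helper
# names, not the JMP implementation): pick the predictor whose binary
# split gives daughter nodes with the lowest weighted Gini impurity.
def gini(labels):
    """Gini impurity of a list of 0/1 outcome labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, outcome_key, predictor_keys):
    """Choose the predictor whose split yields the purest daughter nodes.

    rows: list of dicts mapping predictor names to booleans and
    outcome_key to a 0/1 label.
    """
    n = len(rows)
    best_key, best_impurity = None, float("inf")
    for key in predictor_keys:
        yes = [r[outcome_key] for r in rows if r[key]]
        no = [r[outcome_key] for r in rows if not r[key]]
        weighted = (len(yes) * gini(yes) + len(no) * gini(no)) / n
        if weighted < best_impurity:
            best_key, best_impurity = key, weighted
    return best_key
```

In a full RPA, this selection would be applied recursively to each daughter node until no further split improves purity.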

Probability was considered significant at a value of P < .05. All statistical tests were 2-tailed. Statistical analyses were conducted by a physician (KI) and an independent statistician (JM) with the use of the SPSS® v.16.0 software package (SPSS Inc., Chicago, IL) and JMP® version 8.0.2 (SAS Institute, Cary, NC).

RESULTS

Patient Characteristics

Study population. During the study period, 2,792 patients were eligible for inclusion (Figure 1).

Of the 2,792 patients who met the inclusion criteria for our study, 849 were excluded (see Figure 1 for flow diagram). Among the remaining 1,943 patients, there were 317 patients with positive BCs, of whom 221 (69.7%) were considered to have true-positive BCs and 96 (30.3%) were considered to have contaminated BCs. After excluding these 96 patients, 221 patients with true bacteremia (true bacteremic group) were compared with 1,626 nonbacteremic patients (nonbacteremic group; Figure 1). The baseline characteristics of the subjects are shown in Table 1. The mean BT was 38.4 ± 1.2°C in the true bacteremic group and 37.9 ± 1.0°C in the nonbacteremic group. The mean serum CRP level was 11.6 ± 9.6 mg/dL in the true bacteremic group and 7.3 ± 6.9 mg/dL in the nonbacteremic group. In the true bacteremic group, there were 6 afebrile patients and 27 patients without leukocytosis. The pathogens identified from the true-positive BCs were Escherichia coli (n = 59, 26.7%), including extended-spectrum beta-lactamase-producing strains; Staphylococcus aureus (n = 36, 16.3%), including methicillin-resistant S. aureus; and Klebsiella pneumoniae (n = 22, 10.0%; Supplemental Table 1).


The underlying clinical diagnoses in the true bacteremic group included urinary tract infection (UTI), pneumonia, abscess, catheter-related bloodstream infection (CRBSI), cellulitis, osteomyelitis, infective endocarditis (IE), chorioamnionitis, iatrogenic infection at hemodialysis puncture sites, bacterial meningitis, septic arthritis, and infection of unknown cause (Supplemental Table 2).

Interrater Reliability Testing of Food Consumption

To assess the reliability of the evaluations of food consumption, patients (separate from this main study) were selected randomly during their hospital stays and evaluated independently by 2 nurses at each of the 3 hospitals. The kappa scores of agreement between the nurses at the 3 hospitals were 0.83 (95% CI, 0.63-0.88), 0.90 (95% CI, 0.80-0.99), and 0.80 (95% CI, 0.67-0.99), respectively (Supplemental Table 3). The interrater reliability of the nurses' evaluation of food consumption was therefore very high at all participating hospitals.
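
Agreement between raters was quantified with the kappa statistic; a minimal sketch of Cohen's kappa for two raters (hypothetical function name) is:

```python
# Cohen's kappa: chance-corrected agreement between two raters who
# graded the same items (hypothetical function name).
def cohens_kappa(ratings_a, ratings_b):
    """Observed agreement corrected for agreement expected by chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    # Expected agreement: product of each rater's marginal proportions.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```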

Food Consumption

The low, moderate, and high food consumption groups consisted of 964 (52.1%), 306 (16.6%), and 577 (31.2%) patients, respectively (Table 1). Of these, 174 (18.0%), 33 (10.8%), and 14 (2.4%) patients, respectively, had true bacteremia. The presence of poor food consumption had a sensitivity of 93.7% (95% CI, 89.4%-97.9%), specificity of 34.6% (95% CI, 33.0%-36.2%), and a positive LR of 1.43 (95% CI, 1.37-1.50) for predicting true bacteremia. Conversely, the absence of poor food consumption (ie, normal food consumption) had a negative LR of 0.18 (95% CI, 0.17-0.19).
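
These diagnostic statistics follow directly from the 2 × 2 counts implied by Table 1 (207 bacteremic and 1,063 nonbacteremic patients with poor food consumption; 14 and 563, respectively, with normal consumption). A sketch of the calculation, with a hypothetical helper function:

```python
# Sensitivity, specificity, and likelihood ratios from 2x2 counts
# (hypothetical helper; counts reconstructed from Table 1).
def diagnostic_stats(tp, fn, fp, tn):
    """Return (sensitivity, specificity, LR+, LR-) for a binary test."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return sensitivity, specificity, lr_positive, lr_negative

# Poor food consumption as the "test" for true bacteremia:
sens, spec, lr_pos, lr_neg = diagnostic_stats(tp=207, fn=14, fp=1063, tn=563)
# sens ~ 0.937, spec ~ 0.346, LR+ ~ 1.43, LR- ~ 0.18, matching the
# point estimates reported above.
```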

Chills

The no, mild, moderate, and shaking chills groups consisted of 1,514 (82.0%), 148 (8.0%), 53 (2.9%), and 132 (7.1%) patients, respectively (Table 1). Of these, 136 (9.0%), 25 (16.9%), 8 (15.1%), and 52 (39.4%) patients, respectively, had true bacteremia. The presence of shaking chills had a sensitivity of 23.5% (95% CI, 22.5%-24.6%), a specificity of 95.1% (95% CI, 90.7%-99.4%), and a positive LR of 4.78 (95% CI, 4.56-5.00) for predicting true bacteremia. Conversely, the absence of shaking chills had a negative LR of 0.80 (95% CI, 0.77-0.84).

Prediction Model for True Bacteremia

Table 2. Components Predicting True Bacteremia Identified by Multiple Logistic Regression

The components identified as significantly related to true bacteremia by multiple logistic regression analysis are indicated in Table 2. The significant predictors of true bacteremia were shaking chills (odds ratio [OR], 5.6; 95% CI, 3.6-8.6; P < .01), SBP <90 mmHg (OR, 3.1; 95% CI, 1.6-5.7; P < .01), CRP levels >10.0 mg/dL (OR, 2.2; 95% CI, 1.6-3.1; P < .01), BT <36°C or >38°C (OR, 1.8; 95% CI, 1.3-2.6; P < .01), WBC <4 × 10³/μL or >12 × 10³/μL (OR, 1.6; 95% CI, 1.2-2.3; P = .003), HR >90 bpm (OR, 1.5; 95% CI, 1.1-2.1; P = .021), and female sex (OR, 1.4; 95% CI, 1.0-1.9; P = .036). The RPA used to create a prediction model for patients with true bacteremia or nonbacteremia is shown in Figure 2. The original group consisted of 1,847 patients, including 221 patients with true bacteremia. The pretest probability of true bacteremia was 2.4% (14/577) for those with normal food consumption (Group 1) and 2.4% (13/552) for those with both normal food consumption and the absence of shaking chills (Group 2). Conversely, the pretest probability of true bacteremia was 16.3% (207/1270) for those with poor food consumption and 47.7% (51/107) for those with both poor food consumption and shaking chills. The patients with true bacteremia despite normal food consumption and no shaking chills comprised 4 cases each of CRBSI and UTI, 2 cases of osteomyelitis, 1 case of IE, 1 case of chorioamnionitis, and 1 case for which the focus was unknown (Supplemental Table 4).
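
The two splits of the decision tree can be summarized as a simple bedside lookup. This sketch is illustrative only, not a validated calculator; the risk for poor food consumption without shaking chills is derived from the reported counts ((207 - 51)/(1,270 - 107)) rather than stated in the text.

```python
# Illustrative lookup for the two-question rule implied by Figure 2
# (hypothetical function name; not a validated calculator).
def bacteremia_risk(poor_food: bool, shaking_chills: bool) -> float:
    """Observed probability of true bacteremia for each branch of the tree."""
    if not poor_food:
        return 14 / 577    # normal food consumption, ~2.4%
    if shaking_chills:
        return 51 / 107    # poor food consumption plus shaking chills, ~47.7%
    return 156 / 1163      # poor food without shaking chills (derived), ~13.4%
```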

Figure 2. Decision tree obtained from recursive partitioning analysis for predicting true bacteremia in patients with suspected bacteremia.

DISCUSSION

In this observational study, we evaluated if a simple algorithm using food consumption and shaking chills was useful for assessing whether a patient had true bacteremia. A 2-item screening checklist (nursing assessment of food consumption and shaking chills) had excellent statistical properties as a brief screening instrument for true bacteremia.

We have prospectively validated that food consumption, as assessed by nurses, is a reliable predictor of true bacteremia. Our previous single-center retrospective study showed similar findings,8 but those results could not be generalized across institutions because of the limited nature of the study. In this multicenter study, we used 2 statistical methods to reduce selection bias. First, we performed a kappa analysis across the hospitals to evaluate the interrater reliability of the evaluation of food consumption. Second, we used an RPA (Figure 2), also known as a decision tree model. RPA is a step-by-step process by which a decision tree is constructed by either splitting or not splitting each node on the tree into 2 daughter nodes.10 By using this method, we generated a practical approach to predicting true bacteremia using food consumption and shaking chills. After stratifying by food consumption and shaking chills, the number of patients with true bacteremia decreased sequentially from 221 in the full cohort to only 13 in Group 2.

Appetite is influenced by many factors that are integrated by the brain, most importantly within the hypothalamus. Signals that impinge on the hypothalamic center include neural afferents, hormones, cytokines, and metabolites.11 These factors elicit "sickness behavior," which includes a decrease in food-motivated behavior.12 Furthermore, exposure to pathogenic bacteria increases serotonin, which has been shown to decrease metabolism in amphid neurons by transcriptional and post-transcriptional mechanisms.13 This may explain why nonbacteremic patients tend to retain their appetites. Shaking chills are a well-known predictor of true bacteremia.4,5 Several cytokines, including tumor necrosis factor-alpha and interleukins 6 and 10, may be related to shaking chills.14 In the review by Coburn et al., shaking chills appeared useful for identifying true bacteremia (positive LR, 4.7; 95% CI, 3.0-7.2),5,6 similar to our study. In our study, the pretest probability of true bacteremia was the same whether shaking chills was included or not (ie, 2.4% for normal food consumption and 2.4% for normal food consumption plus absence of shaking chills). This implies that assessing shaking chills adds little to the food assessment alone when trying to rule out bacteremia; rather, shaking chills appear more useful for ruling bacteremia in. Moreover, a recent retrospective study revealed that age >60 years (OR, 2.75; 95% CI, 1.23-6.48; P = .015), female sex (OR, 2.21; 95% CI, 1.07-4.67; P = .038), heart rate >90 bpm (OR, 5.18; 95% CI, 2.25-12.48; P < .001), and neutrophil percentage >80% (OR, 3.61; 95% CI, 1.71-8.00; P = .001) were independent risk factors for true bacteremia.15 Conversely, the lack of the SIRS criteria was reported as the best negative indicator of true bacteremia, with a negative LR of 0.09 (95% CI, 0.03-0.3).6,7 However, the evaluation of the SIRS criteria requires the acquisition of laboratory data.
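
The consistency between the reported likelihood ratios and the group probabilities can be checked with the usual odds conversion. This sketch, with hypothetical names, applies the cohort prevalence (221/1,847) to the LRs reported in the Results.

```python
# Sketch (hypothetical helper name): converting a pretest probability
# through a likelihood ratio via odds, post-odds = pre-odds x LR.
def post_test_probability(pretest: float, lr: float) -> float:
    """Apply a likelihood ratio to a pretest probability."""
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

prevalence = 221 / 1847                                     # ~12.0% overall
p_normal_food = post_test_probability(prevalence, 0.18)     # ~0.024
p_shaking_chills = post_test_probability(prevalence, 4.78)  # ~0.394
# Both agree with the 2.4% and 39.4% observed in the corresponding groups.
```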
To our knowledge, no previous prospective studies have evaluated food consumption as a predictor of true bacteremia. This extremely simple model enables a physician to make a rapid bedside estimation of the risk of true bacteremia.

The strengths of this study include its relatively large sample size, multicenter design, uniformity of data collection across sites, and completeness of data collection from study participants. All of these factors allowed for a robust analysis.

However, there are several limitations of this study. First, the physicians or nurses asked the patients about the presence of shaking chills when they obtained the BCs. It may be difficult for patients, especially elderly patients, to provide this information promptly and accurately. Some patients did not call the nurse when they had shaking chills, and the chills were not witnessed by a healthcare provider. However, we used a more specific definition for shaking chills: a feeling of being extremely cold with rigors and generalized bodily shaking, even under a thick blanket. Second, this algorithm is not applicable to patients with immunosuppressed states because none of the hospitals involved in this study perform bone marrow or organ transplantation. Third, although we included patients with dementia in our cohort, we did not specifically evaluate performance of the algorithm in patients with this medical condition. It is possible that the algorithm would not perform well in this subset of patients owing to their unreliable appetite and food intake. Fourth, some medications may affect appetite, leading to reduced food consumption. Although we have not considered the details of medications in this study, we found that the pretest probability of true bacteremia was low for those patients with normal food consumption regardless of whether the medication affected their appetites or not. However, the question of whether medications truly affect patients’ appetites concurrently with bacteremia would need to be specifically addressed in a future study.


CONCLUSION

In conclusion, we have established a simple algorithm to identify which patients with suspected bacteremia require the acquisition of blood cultures. This extremely simple model enables physicians to make a rapid bedside estimation of the risk of true bacteremia.

Acknowledgment

The authors thank Drs. H. Honda and S. Saint, and Ms. A. Okada for their helpful discussions with regard to this study; Ms. M. Takigawa for the collection of data; and Ms. T. Oguri for providing infectious disease consultation on the pathogenicity of the identified organisms.

Disclosure

This work was supported by JSPS KAKENHI Grant Number 15K19294 (to TK) and 20590840 (to KI) from the Japan Society for the Promotion of Science. The authors report no potential conflicts of interest relevant to this article.

References

1. Weinstein MP, Towns ML, Quartey SM, et al. The clinical significance of positive blood cultures in the 1990s: a prospective comprehensive evaluation of the microbiology, epidemiology, and outcome of bacteremia and fungemia in adults. Clin Infect Dis. 1997;24:584-602.
2. Strand CL, Wajsbort RR, Sturmann K. Effect of iodophor vs iodine tincture skin preparation on blood culture contamination rate. JAMA. 1993;269:1004-1006.
3. Bates DW, Goldman L, Lee TH. Contaminant blood cultures and resource utilization. The true consequences of false-positive results. JAMA. 1991;265:365-369.
4. Tokuda Y, Miyasato H, Stein GH. A simple prediction algorithm for bacteraemia in patients with acute febrile illness. QJM. 2005;98:813-820.
5. Tokuda Y, Miyasato H, Stein GH, Kishaba T. The degree of chills for risk of bacteremia in acute febrile illness. Am J Med. 2005;118:1417.
6. Coburn B, Morris AM, Tomlinson G, Detsky AS. Does this adult patient with suspected bacteremia require blood cultures? JAMA. 2012;308:502-511.
7. Shapiro NI, Wolfe RE, Wright SB, Moore R, Bates DW. Who needs a blood culture? A prospectively derived and validated prediction rule. J Emerg Med. 2008;35:255-264.
8. Komatsu T, Onda T, Murayama G, et al. Predicting bacteremia based on nurse-assessed food consumption at the time of blood culture. J Hosp Med. 2012;7:702-705.
9. Henderson V. Basic Principles of Nursing Care. 2nd ed. Silver Spring, MD: American Nurses Association; 1969.
10. Therneau T, Atkinson EJ. An Introduction to Recursive Partitioning Using the RPART Routines. Mayo Foundation; 2017. https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf. Accessed May 5, 2017.
11. Pavlov VA, Wang H, Czura CJ, Friedman SG, Tracey KJ. The cholinergic anti-inflammatory pathway: a missing link in neuroimmunomodulation. Mol Med. 2003;9:125-134.
12. Hansen MK, Nguyen KT, Fleshner M, et al. Effects of vagotomy on serum endotoxin, cytokines, and corticosterone after intraperitoneal lipopolysaccharide. Am J Physiol Regul Integr Comp Physiol. 2000;278:R331-R336.
13. Zhang Y, Lu H, Bargmann CI. Pathogenic bacteria induce aversive olfactory learning in Caenorhabditis elegans. Nature. 2005;438:179-184.
14. Van Dissel JT, Schijf V, Vogtlander N, Hoogendoorn M, van’t Wout J. Implications of chills. Lancet. 1998;352:374.
15. Fukui S, Uehara Y, Fujibayashi K, et al. Bacteraemia predictive factors among general medical inpatients: a retrospective cross-sectional survey in a Japanese university hospital. BMJ Open. 2016;6:e010527.

Issue
Journal of Hospital Medicine 12(7)
Page Number
510-516
Display Headline
A simple algorithm for predicting bacteremia using food consumption and shaking chills: a prospective observational study
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
*Address for correspondence and reprint requests: Kenji Inoue, Department of Cardiology, Juntendo University Nerima Hospital, 3-1-10, Takanodai, Nerimaku, Tokyo, 177-0033, Japan; Telephone: +81-3-5923-3111; Fax: +81-3-5923-3217; E-mail: [email protected]


Antiangiogenesis in Small-Cell Lung Cancer: Is There a Path Forward?

Article Type
Changed
Wed, 02/28/2018 - 14:08
Display Headline
Antiangiogenesis in Small-Cell Lung Cancer: Is There a Path Forward?

Study Overview

Objective. To evaluate efficacy of adding bevacizumab to first-line chemotherapy for treatment of extensive-disease small-cell lung cancer (ED-SCLC).

Design. Phase III prospective multicenter randomized clinical trial.

Setting and participants. The study was conducted at 29 Italian centers and was supported by the Agenzia Italiana del Farmaco. Study entry was limited to patients with histologically or cytologically documented ED-SCLC who were previously untreated with systemic therapy, were 18 years of age or older, and had an Eastern Cooperative Oncology Group performance status (ECOG PS) of 0 to 2. Adequate bone marrow, renal, and liver functions were required. Patients with asymptomatic, treated brain metastases were eligible for trial participation. Exclusions included the following: mixed histologic diagnosis of SCLC and non–SCLC; history of grade 2 hemoptysis; evidence of lung tumor cavitation; significant traumatic injury within the 4 weeks before first dose of study treatment; other active malignancies (previous or current); and any underlying medical condition that might be aggravated by treatment.

Intervention. Patients received a combination of intravenous cisplatin (25 mg/m2 on days 1 to 3), etoposide (100 mg/m2 on days 1 to 3), and bevacizumab (7.5 mg/kg intravenously on day 1) administered every 3 weeks (experimental arm); or the same cisplatin and etoposide chemotherapy regimen alone given every 3 weeks (control arm). Carboplatin (area under the curve 5 on day 1) could be substituted for cisplatin in case of cisplatin contraindications or cisplatin-associated toxicity. Tumor response, on the basis of investigator-assessed Response Evaluation Criteria in Solid Tumors (RECIST; version 1.1), was evaluated every 3 cycles during chemotherapy treatment. After 6 cycles of chemotherapy, tumor assessment was performed every 9 weeks in both arms. In the absence of progression, patients in the treatment arm continued bevacizumab alone until disease progression or for a maximum of 18 courses. Survival follow-up information was collected every 6 months after treatment termination or last dose of study drug, until death or loss to follow-up.

Main outcome measure. The primary end point was overall survival (OS). Response rate, toxicity, and progression-free survival (PFS) were secondary end points.

Main results. 205 patients were randomized between November 2009 and October 2015; 204 were included in the intent-to-treat analysis (103 in the control arm and 101 in the treatment arm). Most patients were male with an ECOG PS of 0 to 1, and the median age was 64 years. The median number of chemotherapy courses administered was 6 in both arms, and cisplatin was used in the majority of patients. Average relative dose intensities for all drugs were well balanced between the 2 groups. A lower percentage of patients in the treatment arm (14.7%) than in the control arm (22.3%) discontinued treatment because of radiologic disease progression, which was the main reason for treatment discontinuation.

At a median follow-up of 34.9 months, the median PFS was 5.7 months in the control arm and 6.7 months in the treatment arm (hazard ratio [HR], 0.72; 95% CI, 0.54 to 0.97; P = 0.030). Median OS times were 8.9 months and 9.8 months, and 1-year survival rates were 25% and 37% (HR, 0.78; 95% CI, 0.58 to 1.06; P = 0.113) in the control arm and treatment arm, respectively. A significant effect of maintenance bevacizumab on OS (HR, 0.60; 95% CI, 0.40 to 0.91; P = 0.011) was observed. A subgroup analysis revealed a statistically significant interaction between treatment and sex for OS: the addition of bevacizumab led to a significant survival benefit in men (HR, 0.55) and a possible detrimental effect in women (HR, 1.55; interaction test, P = 0.003).

The addition of bevacizumab did not increase hematologic toxicity such as anemia, neutropenia, or thrombocytopenia. Among nonhematologic toxicities, only hypertension was more frequent in the bevacizumab arm (6.3%) than in the chemotherapy-alone arm (1%). The rates of proteinuria and thrombosis were similar in both arms.

Conclusion. The addition of bevacizumab to cisplatin and etoposide in the first-line treatment of ED-SCLC had an acceptable toxicity profile and led to a statistically significant improvement in PFS, which, however, did not translate into a statistically significant increase in OS.

Commentary

SCLC currently accounts for approximately 12% to 15% of all lung cancers [1]. It is characterized by a rapid growth rate, metastasis at the time of diagnosis, sensitivity to first-line platinum-based chemotherapy, and invariable recurrence with progressive resistance to subsequent lines of therapy. A number of clinical trials over the past 2 decades have failed to produce outcomes superior to platinum-based doublet chemotherapy, leaving a significant unmet need [2]. Vascular endothelial growth factor (VEGF) is the most important proangiogenic factor and is implicated in tumor growth [3]. Bevacizumab, a humanized monoclonal antibody directed against VEGF, is now indicated in the treatment of several tumor types, including non-SCLC and breast, colorectal, kidney, and ovarian cancer. A positive signal with bevacizumab was seen in phase II studies, providing the rationale for this phase III trial [4,5].

The study by Tiseo and colleagues reported the outcomes of a randomized study that added bevacizumab to standard combination therapy with platinum and etoposide for the treatment of ED-SCLC. A small statistically significant improvement was seen in PFS (6.7 months vs. 5.7 months, favoring the bevacizumab group). However, the study failed to meet the primary end point of improved OS.

So where do antiangiogenesis agents go from here? Angiogenesis inhibitors with broader mechanisms of action are being explored in clinical trials. One such trial (ClinicalTrials.gov identifier: NCT02945852) is evaluating the tyrosine kinase inhibitor apatinib in combination with chemotherapy in ED-SCLC. Apatinib selectively inhibits vascular endothelial growth factor receptor-2 (VEGFR-2) and also inhibits the c-kit and c-SRC tyrosine kinases. It will be interesting to see whether antiangiogenic agents with broader mechanisms are more effective in SCLC. In addition, immunotherapy with checkpoint inhibitors such as nivolumab and pembrolizumab has revolutionized the lung cancer treatment paradigm, and whether bevacizumab can be safely added to these agents remains to be determined. The ongoing CheckMate 370 trial (ClinicalTrials.gov identifier: NCT02574078) is addressing this question by evaluating the safety of combining nivolumab with bevacizumab in non-SCLC.

Applications for Clinical Practice

The current study does not support the addition of bevacizumab as a standard therapeutic option in the first-line treatment of ED-SCLC. However, given that there was a trend towards improved OS, alternative strategies of incorporating antiangiogenesis agents should be considered in future clinical trials.

—Deval Rajyaguru, MD

 

References

1. Neal JW, Gubens MA, Wakelee HA. Current management of small cell lung cancer. Clin Chest Med 2011;32:853–63.

2. Bunn PA Jr, Minna JD, Augustyn A, et al. Small cell lung cancer. Can recent advances in biology and molecular biology be translated into improved outcomes? J Thorac Oncol 2016;11:453–74.

3. Ferrara N, Gerber HP, LeCouter J. The biology of VEGF and its receptors. Nat Med 2003;9:669–76.

4. Horn L, Dahlberg SE, Sandler AB, et al. Phase II study of cisplatin plus etoposide and bevacizumab for previously untreated, extensive-stage small-cell lung cancer: Eastern Cooperative Oncology Group Study E3501. J Clin Oncol 2009;27:6006–11.

5. Spigel DR, Townley PM, Waterhouse DM, et al. Randomized phase II study of bevacizumab in combination with chemotherapy in previously untreated extensive-stage small-cell lung cancer: Results from the SALUTE trial. J Clin Oncol 2011;29:2215–22.

Issue
Journal of Clinical Outcomes Management - June 2017, Vol. 24, No. 6