Commentary: Migraine and the relationship to gynecologic conditions, May 2023
The theme of this month's round-up is women's health, specifically as it relates to migraine. Three recent studies have highlighted the connection between estrogen and migraine, both in terms of increased risk for certain conditions, such as gestational hypertension and endometriosis, and in terms of potential therapies, such as calcitonin gene-related peptide (CGRP) antagonist medications for menstrual migraine.
Although most practitioners know that there is a deep connection between migraine and vascular risk, many are unfamiliar with the specifics of this interplay. Antihypertensive medications are common preventive treatments for migraine, and migraine itself has been associated with an increased risk for specific vascular issues in pregnancy, most notably venous sinus thrombosis. Crowe and colleagues specifically looked at whether women with migraine experience a higher risk for hypertensive disorders of pregnancy.
This was a UK-based prospective cohort study using a large longitudinal database, the Clinical Practice Research Datalink. Over 1 million live-born or stillborn deliveries from 1993 through 2020 were analyzed. The data were linked to migraine diagnosis codes and prescriptions filled or documented before 20 weeks of gestation and compared with diagnosis codes for hypertensive disorders occurring from week 20 through pregnancy and delivery. Regression models were then used to estimate risk ratios and confidence intervals. Only singleton pregnancies were counted because multiple-gestation pregnancies are already associated with a higher risk for most other vascular conditions.
Any history of migraine prior to pregnancy was associated with an increased risk for gestational hypertension, eclampsia, and preeclampsia (relative risk 1.17). The risk was greatest in women with higher migraine frequency, and migraine that persisted into the first trimester was associated with a relative risk of 1.84. The use of migraine medications, especially vasoconstrictive agents, was also associated with a higher risk compared with women without migraine.
Women with migraine frequently present before family planning to discuss the potential risks of pregnancy and the options for migraine treatment during pregnancy. In addition to discussing the most appropriate preventive and acute medications, it is also appropriate to discuss red-flag symptoms that may occur during pregnancy; in light of this study, that discussion should include the hypertensive disorders of pregnancy.
Since the advent of CGRP antagonist treatments for migraine, many practitioners and patients have wondered whether specific features of migraine respond better to this class of medication. There are now both acute and preventive CGRP antagonists, available as small-molecule agents and as monoclonal antibodies (mAb). Menstrual migraine in particular can be a more difficult-to-treat subtype; even when other triggers are controlled, hormonal fluctuation can remain a significant problem for many patients. Verhagen and colleagues set out to determine whether CGRP mAb are more or less effective for menstrually associated migraine.
This was a post hoc analysis of data from a single-arm study investigating the efficacy of two CGRP mAb medications, erenumab and fremanezumab. Patients were included if they had a history of migraine with > 8 monthly migraine days and had previously failed at least one antihypertensive or antiepileptic preventive treatment for migraine. Any other prophylactic medications were tapered before the trial started; patients were given a validated electronic diary, and adherence to the diary had to be > 80%. Women were excluded if they did not have regular menses (for instance, if they were on continuous hormonal contraception) or were postmenopausal. Logistic regression was used to compare the preventive effect of these medications on perimenstrual and non-perimenstrual migraine attacks.
A total of 45 women were included in this analysis. The relative reduction in total monthly migraine days was 31.4%: a 28% reduction during and around menses and a 32% reduction during other times of the menstrual cycle. Sensitivity analysis showed no significant difference between these two periods, and the ratio remained statistically similar as well.
The relative reduction in monthly migraine days thus did not appear to fluctuate across the menstrual cycle when patients were treated with a CGRP antagonist mAb. Although other classes of preventive medication, notably onabotulinumtoxinA (Botox), may affect menstrually associated migraine less potently, the CGRP antagonist class appears to be just as effective regardless of the underlying migraine trigger. It is worth considering a trial of a CGRP antagonist, or the addition of a CGRP mAb, if menstrual migraine remains significant despite otherwise effective preventive treatment.
Migraine is strongly affected by fluctuations in estrogen, and women with endometriosis often experience headaches associated with severe endometriosis attacks. Pasquini and colleagues looked at whether the headache associated with endometriosis could be better diagnosed: specifically, were these women experiencing migraine or another headache disorder?
This was a consecutive case-control series of 131 women admitted to a specialty endometriosis clinic. They were given a validated headache questionnaire that was reviewed by a neurologist to determine a diagnosis of migraine vs a diagnosis of another headache disorder. The case group included women with a history of endometriosis who were previously diagnosed with migraine, while the control group consisted of women with endometriosis only who did not have a history of headache.
A diagnosis of migraine was made in 53.4% of all patients; of those, 18.6% experienced pure menstrual migraine (defined as migraine occurring only perimenstrually), 46% had some menstrually associated migraine symptoms, and 36% had purely non-menstrual migraine. Painful periods and dysuria were more frequent in patients with endometriosis and migraine than in those without migraine. Other menstrually related factors, including the duration of endometriosis, the phenotype of endometriosis, the presence of other systemic comorbidities, and heavy menstrual bleeding, did not differ significantly between the migraine and non-migraine groups.
Women of reproductive age are the population most often seen for migraine and other headache conditions, much of which is related to menstrual migraine and the effect that hormonal fluctuation has on migraine frequency and severity. Most practitioners work closely with their patient's gynecologist to determine which hormonal and migraine treatments are most appropriate and safe for each individual situation. This study sheds light on the phenotypes of headache pain and the specific headache diagnoses that most women with endometriosis experience.
Commentary: New genetic information and treatments for DLBCL, May 2023
Diffuse large B-cell lymphoma (DLBCL) is both a clinically and molecularly heterogeneous disease. The International Prognostic Index (IPI), which is based on clinical and laboratory variables, is still used to delineate risk at the time of diagnosis. DLBCL can be further classified into either the germinal center B-cell (GCB) or the activated B-cell (ABC) subtype, known as the cell-of-origin (COO) classification, which has been prognostic in prior studies.1 COO is based on gene expression profiling (GEP), though it can be estimated by immunohistochemistry.
Although these classifications are available, treatment of DLBCL has largely remained uniform over the past few decades. Despite encouraging preclinical data and early trials, large randomized studies had not demonstrated an advantage of rituximab, cyclophosphamide, doxorubicin hydrochloride, vincristine sulfate, and prednisone (R-CHOP) plus X over R-CHOP alone.2,3 The REMoDL-B trial included 801 adult patients with DLBCL classified as ABC, GCB, or molecular high-grade (MHG) by GEP. Patients received one cycle of R-CHOP and were then randomly assigned to R-CHOP alone (n = 407) or bortezomib–R-CHOP (n = 394) for cycles 2-6. Initial reports did not demonstrate any clear benefit from the addition of bortezomib.4 More recently, however, 5-year follow-up data demonstrate that the addition of bortezomib confers an advantage over R-CHOP in patients with ABC and MHG DLBCL (Davies et al). Bortezomib–R-CHOP vs R-CHOP significantly improved 60-month progression-free survival (PFS) in the ABC (adjusted hazard ratio [aHR] 0.65; P = .041) and MHG (aHR 0.46; P = .011) groups and overall survival (OS) in the ABC group (aHR 0.58; P = .032). The GCB group showed no significant difference in PFS or OS.
Despite the results of REMoDL-B, this study is unlikely to change practice. GEP is not readily available, and with the approval of polatuzumab vedotin plus R-CHP (pola–R-CHP) on the basis of the POLARIX trial, there is a new option for patients with newly diagnosed DLBCL and a high IPI score. A recent meta-analysis of 12 randomized controlled trials (Sheng et al) involving 8376 patients with previously untreated ABC- or GCB-type DLBCL compared pola–R-CHP with other regimens. Pola–R-CHP prolonged PFS in patients with ABC-type DLBCL compared with bortezomib–R-CHOP (hazard ratio [HR] 0.52; P = .02), ibrutinib–R-CHOP (HR 0.43; P = .001), lenalidomide–R-CHOP (HR 0.51; P = .009), obinutuzumab–CHOP (HR 0.46; P = .008), R-CHOP (HR 0.40; P < .001), and bortezomib, rituximab, and cyclophosphamide (HR 0.44; P = .07). Pola–R-CHP had no PFS benefit in patients with GCB-type DLBCL. Although it is difficult to compare trials directly, these data suggest that pola–R-CHP is active in ABC-subtype DLBCL.
Together, these trials suggest that there still may be a role for more personalized therapy in DLBCL, though there may be room for improvement. Recent studies have suggested more complex genomic underpinnings in DLBCL beyond COO, which will hopefully be studied in the context of DLBCL trials.5
In the second line, patients with primary refractory or early-relapse DLBCL now have the option of anti-CD19 chimeric antigen receptor (CAR) T-cell therapy, based on the results of the ZUMA-7 and TRANSFORM studies.6,7 Lisocabtagene maraleucel (liso-cel) was also found to have a manageable safety profile in older patients with large B-cell lymphoma who were not transplant candidates in the PILOT study, leading to approval in this setting.8 More recently, axicabtagene ciloleucel (axi-cel) was found to be an effective second-line therapy with a manageable safety profile for patients aged ≥ 65 years as well (Westin et al). These findings are from a preplanned analysis of 109 patients aged ≥ 65 years from ZUMA-7 who were randomly assigned to receive second-line axi-cel (n = 51) or standard of care (SOC) (n = 58; two or three cycles of chemoimmunotherapy followed by high-dose chemotherapy with autologous stem-cell transplantation). At a median follow-up of 24.3 months, the median event-free survival was significantly longer with axi-cel than with SOC (21.5 vs 2.5 months; HR 0.276; descriptive P < .0001). Rates of grade 3 or higher treatment-emergent adverse events were 94% and 82% with axi-cel and SOC, respectively. Although these patients were considered transplant eligible, this study demonstrates that axi-cel can be safely administered to older patients.
Additional References
1. Rosenwald A, Wright G, Chan WC, et al; Lymphoma/Leukemia Molecular Profiling Project. The use of molecular profiling to predict survival after chemotherapy for diffuse large-B-cell lymphoma. N Engl J Med. 2002;346:1937-1947. doi: 10.1056/NEJMoa012914
2. Younes A, Sehn LH, Johnson P, et al; PHOENIX investigators. Randomized phase III trial of ibrutinib and rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone in non-germinal center B-cell diffuse large B-cell lymphoma. J Clin Oncol. 2019;37:1285-1295. doi: 10.1200/JCO.18.02403
3. Nowakowski GS, Chiappella A, Gascoyne RD, et al; ROBUST Trial Investigators. ROBUST: a phase III study of lenalidomide plus R-CHOP versus placebo plus R-CHOP in previously untreated patients with ABC-type diffuse large B-cell lymphoma. J Clin Oncol. 2021;39:1317-1328. doi: 10.1200/JCO.20.01366
4. Davies A, Cummin TE, Barrans S, et al. Gene-expression profiling of bortezomib added to standard chemoimmunotherapy for diffuse large B-cell lymphoma (REMoDL-B): an open-label, randomised, phase 3 trial. Lancet Oncol. 2019;20:649-662. doi: 10.1016/S1470-2045(18)30935-5
5. Crombie JL, Armand P. Diffuse large B-cell lymphoma's new genomics: the bridge and the chasm. J Clin Oncol. 2020;38:3565-3574. doi: 10.1200/JCO.20.01501
6. Locke FL, Miklos DB, Jacobson CA, et al; All ZUMA-7 Investigators and Contributing Kite Members. Axicabtagene ciloleucel as second-line therapy for large B-cell lymphoma. N Engl J Med. 2022;386:640-654. doi: 10.1056/NEJMoa2116133
7. Abramson JS, Solomon SR, Arnason JE, et al; TRANSFORM Investigators. Lisocabtagene maraleucel as second-line therapy for large B-cell lymphoma: primary analysis of phase 3 TRANSFORM study. Blood. 2023;141:1675-1684. doi: 10.1182/blood.2022018730
8. Sehgal A, Hoda D, Riedell PA, et al. Lisocabtagene maraleucel as second-line therapy in adults with relapsed or refractory large B-cell lymphoma who were not intended for haematopoietic stem cell transplantation (PILOT): an open-label, phase 2 study. Lancet Oncol. 2022;23:1066-1077. doi: 10.1016/S1470-2045(22)00339-4
Diffuse large B-cell lymphoma (DLBCL) is both a clinically and molecularly heterogeneous disease. The International Prognostic Index (IPI), which is based on clinical and laboratory variables, is still used to delineate risk at the time of diagnosis. DLBCL can be further classified by cell of origin (COO) into the germinal center B-cell (GCB) or activated B-cell (ABC) subtype, a classification that has been prognostic in prior studies.1 COO is based on gene expression profiling (GEP), though it can be estimated by immunohistochemistry.
Although these classifications are available, treatment of DLBCL has remained largely uniform over the past few decades. Despite encouraging preclinical data and early trials, large randomized studies have not demonstrated an advantage of rituximab, cyclophosphamide, doxorubicin hydrochloride, vincristine sulfate, and prednisone (R-CHOP) plus X over R-CHOP alone.2,3 The REMoDL-B trial included 801 adult patients with DLBCL classified by GEP as ABC, GCB, or molecular high-grade (MHG). All patients received one cycle of R-CHOP and were then randomly assigned to R-CHOP alone (n = 407) or bortezomib–R-CHOP (n = 394) for cycles 2-6. Initial reports did not demonstrate any clear benefit of the addition of bortezomib.4 More recently, however, 5-year follow-up data demonstrate that the addition of bortezomib confers an advantage over R-CHOP in patients with ABC and MHG DLBCL (Davies et al). Bortezomib–R-CHOP vs R-CHOP significantly improved 60-month progression-free survival (PFS) in the ABC (adjusted hazard ratio [aHR] 0.65; P = .041) and MHG (aHR 0.46; P = .011) groups and overall survival (OS) in the ABC group (aHR 0.58; P = .032). The GCB group showed no significant difference in PFS or OS.
Despite the results of REMoDL-B, it is unlikely that this study will change practice. GEP is not readily available, and with the approval of polatuzumab vedotin (pola)–R-CHP on the basis of the POLARIX trial, there is a new option available for patients with newly diagnosed DLBCL with a high IPI. A recent meta-analysis of 12 randomized controlled trials (Sheng et al), involving 8376 patients with previously untreated ABC- or GCB-type DLBCL who received pola–R-CHP or other regimens, showed that pola–R-CHP prolonged PFS in patients with ABC-type DLBCL compared with bortezomib–R-CHOP (hazard ratio [HR] 0.52; P = .02), ibrutinib–R-CHOP (HR 0.43; P = .001), lenalidomide–R-CHOP (HR 0.51; P = .009), obinutuzumab–CHOP (HR 0.46; P = .008), R-CHOP (HR 0.40; P < .001), and bortezomib, rituximab, and cyclophosphamide (HR 0.44; P = .07). Pola–R-CHP had no PFS benefit in patients with GCB-type DLBCL. Although it is difficult to compare trials directly, these data suggest that pola–R-CHP is active in ABC-subtype DLBCL.
Together, these trials suggest that there still may be a role for more personalized therapy in DLBCL, though there may be room for improvement. Recent studies have suggested more complex genomic underpinnings in DLBCL beyond COO, which will hopefully be studied in the context of DLBCL trials.5
In the second line, patients with primary refractory or early-relapse DLBCL now have the option of anti-CD19 chimeric antigen receptor (CAR) T-cell therapy, based on the results of the ZUMA-7 and TRANSFORM studies.6,7 Lisocabtagene maraleucel (liso-cel) was also found to have a manageable safety profile in older patients with large B-cell lymphoma who were not transplant candidates in the PILOT study, leading to approval in this setting.8 More recently, axicabtagene ciloleucel (axi-cel) was found to be an effective second-line therapy with a manageable safety profile for patients aged ≥ 65 years as well (Westin et al). These findings come from a preplanned analysis of 109 patients aged ≥ 65 years from ZUMA-7 who were randomly assigned to receive second-line axi-cel (n = 51) or standard of care (SOC; n = 58; two or three cycles of chemoimmunotherapy followed by high-dose chemotherapy with autologous stem-cell transplantation). At a median follow-up of 24.3 months, median event-free survival was significantly longer with axi-cel than with SOC (21.5 vs 2.5 months; HR 0.276; descriptive P < .0001). Rates of grade 3 or higher treatment-emergent adverse events were 94% and 82% with axi-cel and SOC, respectively. Although these patients were considered transplant eligible, this study demonstrates that axi-cel can be safely administered to older patients.
Additional References
1. Rosenwald A, Wright G, Chan WC, et al; Lymphoma/Leukemia Molecular Profiling Project. The use of molecular profiling to predict survival after chemotherapy for diffuse large-B-cell lymphoma. N Engl J Med. 2002;346:1937-1947. doi: 10.1056/NEJMoa012914
2. Younes A, Sehn LH, Johnson P, et al; PHOENIX investigators. Randomized phase III trial of ibrutinib and rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone in non-germinal center B-cell diffuse large B-cell lymphoma. J Clin Oncol. 2019;37:1285-1295. doi: 10.1200/JCO.18.02403
3. Nowakowski GS, Chiappella A, Gascoyne RD, et al; ROBUST Trial Investigators. ROBUST: a phase III study of lenalidomide plus R-CHOP versus placebo plus R-CHOP in previously untreated patients with ABC-type diffuse large B-cell lymphoma. J Clin Oncol. 2021;39:1317-1328. doi: 10.1200/JCO.20.01366
4. Davies A, Cummin TE, Barrans S, et al. Gene-expression profiling of bortezomib added to standard chemoimmunotherapy for diffuse large B-cell lymphoma (REMoDL-B): an open-label, randomised, phase 3 trial. Lancet Oncol. 2019;20:649-662. doi: 10.1016/S1470-2045(18)30935-5
5. Crombie JL, Armand P. Diffuse large B-cell lymphoma's new genomics: the bridge and the chasm. J Clin Oncol. 2020;38:3565-3574. doi: 10.1200/JCO.20.01501
6. Locke FL, Miklos DB, Jacobson CA, et al; All ZUMA-7 Investigators and Contributing Kite Members. Axicabtagene ciloleucel as second-line therapy for large B-cell lymphoma. N Engl J Med. 2022;386:640-654. doi: 10.1056/NEJMoa2116133
7. Abramson JS, Solomon SR, Arnason JE, et al; TRANSFORM Investigators. Lisocabtagene maraleucel as second-line therapy for large B-cell lymphoma: primary analysis of phase 3 TRANSFORM study. Blood. 2023;141:1675-1684. doi: 10.1182/blood.2022018730
8. Sehgal A, Hoda D, Riedell PA, et al. Lisocabtagene maraleucel as second-line therapy in adults with relapsed or refractory large B-cell lymphoma who were not intended for haematopoietic stem cell transplantation (PILOT): an open-label, phase 2 study. Lancet Oncol. 2022;23:1066-1077. doi: 10.1016/S1470-2045(22)00339-4
Surprising brain activity moments before death
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson of the Yale School of Medicine.
All the participants in the study I am going to tell you about this week died. And three of them died twice. But their deaths provide us with a fascinating window into the complex electrochemistry of the dying brain. What we might be looking at, indeed, is the physiologic correlate of the near-death experience.
The concept of the near-death experience is culturally ubiquitous. And though the content seems to track along culture lines – Western Christians are more likely to report seeing guardian angels, while Hindus are more likely to report seeing messengers of the god of death – certain factors seem to transcend culture: an out-of-body experience; a feeling of peace; and, of course, the light at the end of the tunnel.
As a materialist, I won’t discuss the possibility that these commonalities reflect some metaphysical structure to the afterlife. More likely, it seems to me, is that the commonalities result from the fact that the experience is mediated by our brains, and our brains, when dying, may be more alike than different.
We are talking about this study, appearing in the Proceedings of the National Academy of Sciences, by Jimo Borjigin and her team.
Dr. Borjigin studies the neural correlates of consciousness, perhaps one of the biggest questions in all of science today.
The study in question follows four unconscious patients – comatose patients, really – as life-sustaining support was withdrawn, up until the moment of death. Three had suffered severe anoxic brain injury in the setting of prolonged cardiac arrest. Though the heart was restarted, the brain damage was severe. The fourth had a large brain hemorrhage. All four patients were thus comatose and, though not brain-dead, unresponsive – with the lowest possible Glasgow Coma Scale score. No response to outside stimuli.
The families had made the decision to withdraw life support – to remove the breathing tube – but agreed to enroll their loved one in the study.
The team applied EEG leads to the head, EKG leads to the chest, and other monitoring equipment to observe the physiologic changes that occurred as the comatose and unresponsive patient died.
They watched as the heart rhythm deteriorated and eventually stopped.
But this is a study about the brain, not the heart.
Prior to the withdrawal of life support, the brain electrical signals looked like this:
What you see is the EEG power at various frequencies, with red being higher. All the red was down at the low frequencies. Consciousness, at least as we understand it, is a higher-frequency phenomenon.
Right after the breathing tube was removed, the power didn’t change too much, but you can see some increased activity at the higher frequencies.
But in two of the four patients, something really surprising happened. Watch what happens as the brain gets closer and closer to death.
Here, about 300 seconds before death, there was a power surge at the high gamma frequencies.
This spike in power occurred in the somatosensory cortex and the dorsolateral prefrontal cortex, areas that are associated with conscious experience. It seems that this patient, 5 minutes before death, was experiencing something.
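For readers curious how a gamma-band power surge like this is quantified, here is a minimal sketch using simulated data. This is illustrative only – it is not the study's actual analysis pipeline, and the sampling rate, band limits, and signal parameters are assumptions:

```python
import numpy as np
from scipy import signal

fs = 256  # assumed EEG sampling rate (Hz); illustrative only
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Simulated EEG: slow 2 Hz (delta-range) background plus noise,
# with a 40 Hz gamma "surge" appearing in the second half
eeg = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(t.size)
eeg[fs * 5:] += 0.8 * np.sin(2 * np.pi * 40 * t[fs * 5:])

def band_power(x, fs, lo, hi):
    """Average spectral power of x between lo and hi Hz (Welch estimate)."""
    f, pxx = signal.welch(x, fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

# Gamma-band (here, 25-100 Hz) power before vs. during the surge
before = band_power(eeg[: fs * 5], fs, 25, 100)
after = band_power(eeg[fs * 5:], fs, 25, 100)
print(after > before)  # the surge shows up as a gamma-band power increase
```

A time-resolved version of the same idea (a spectrogram) is what produces the red-at-high-frequencies picture described above.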
But I know what you’re thinking. This is a brain that is not receiving oxygen. Cells are going to become disordered quickly and start firing randomly – a last gasp, so to speak, before the end. Meaningless noise.
But connectivity mapping tells a different story. The signals seem to have structure.
Those high-frequency power surges increased connectivity in the posterior cortical “hot zone,” an area of the brain many researchers feel is necessary for conscious perception. This figure is not a map of raw brain electrical output like the one I showed before, but of coherence between brain regions in the consciousness hot zone. Those red areas indicate cross-talk – not the disordered scream of dying neurons, but a last set of messages passing back and forth from the parietal and posterior temporal lobes.
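The "cross-talk" in that coherence map can be sketched in the same spirit: coherence measures, per frequency, how consistently two channels' signals are coupled. Again, a toy example with simulated channels, not the study's method:

```python
import numpy as np
from scipy import signal

fs = 256  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0, 8, 1 / fs)

# Two "electrodes" sharing a common 40 Hz gamma rhythm,
# each with its own independent noise
shared = np.sin(2 * np.pi * 40 * t)
ch_a = shared + 0.5 * rng.standard_normal(t.size)
ch_b = shared + 0.5 * rng.standard_normal(t.size)

f, coh = signal.coherence(ch_a, ch_b, fs, nperseg=fs)
gamma_coh = coh[np.argmin(np.abs(f - 40))]     # coherence at the shared rhythm
noise_coh = coh[(f >= 1) & (f <= 10)].mean()   # low band: independent noise only

print(gamma_coh > noise_coh)  # coupled gamma activity -> high coherence there
```

High coherence at a specific band between regions is what distinguishes structured messaging from the "disordered scream of dying neurons."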
In fact, the electrical patterns of the brains in these patients looked very similar to the patterns seen in dreaming humans, as well as in patients with epilepsy who report sensations of out-of-body experiences.
It’s critical to realize two things here. First, these signals of consciousness were not present before life support was withdrawn. These comatose patients had minimal brain activity; there was no evidence that they were experiencing anything before the process of dying began. These brains are behaving fundamentally differently near death.
But second, we must realize that, although the brains of these individuals, in their last moments, appeared to be acting in a way that conscious brains act, we have no way of knowing if the patients were truly having a conscious experience. As I said, all the patients in the study died. Short of those metaphysics I alluded to earlier, we will have no way to ask them how they experienced their final moments.
Let’s be clear: This study doesn’t answer the question of what happens when we die. It says nothing about life after death or the existence or persistence of the soul. But what it does do is shed light on an incredibly difficult problem in neuroscience: the problem of consciousness. And as studies like this move forward, we may discover that the root of consciousness comes not from the breath of God or the energy of a living universe, but from very specific parts of the very complicated machine that is the brain, acting together to produce something transcendent. And to me, that is no less sublime.
Dr. Wilson is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator, Yale University, New Haven, Conn. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn’t, is available now. Dr. Wilson has disclosed no relevant financial relationships.
Cancer pain declines with cannabis use in a study
Physician-prescribed cannabis, particularly cannabinoids, has been shown to ease cancer-related pain in adult cancer patients, who often obtain inadequate pain relief from medications, including opioids, wrote Saro Aprikian, MSc, a medical student at the Royal College of Surgeons, Dublin, and colleagues in their paper.
However, real-world data on the safety and effectiveness of cannabis in the cancer population and the impact on use of other medications are lacking, the researchers said.
In the study, published in BMJ Supportive & Palliative Care, the researchers reviewed data from 358 adults with cancer who were part of a multicenter cannabis registry in Canada between May 2015 and October 2018.
The average age of the patients was 57.6 years, and 48% were men. The top three cancer diagnoses in the study population were genitourinary, breast, and colorectal.
Pain was the most common reason for obtaining a medical cannabis prescription, cited by 72.4% of patients.
Data were collected at follow-up visits conducted every 3 months over 1 year. Pain was assessed via the Brief Pain Inventory (BPI) and revised Edmonton Symptom Assessment System (ESAS-r) questionnaires and compared to baseline values. Patients rated their pain intensity on a sliding scale of 0 (none) to 10 (worst possible). Pain relief was rated on a scale of 0% (none) to 100% (complete).
Compared with baseline scores, patients showed significant decreases at 3, 6, and 9 months in BPI worst pain (5.5 at baseline; 3.6 at 3, 6, and 9 months), average pain (4.1 at baseline; 2.4, 2.3, and 2.7 at 3, 6, and 9 months, respectively), overall pain severity (2.7 at baseline; 2.3, 2.3, and 2.4 at 3, 6, and 9 months, respectively), and pain interference with daily life (4.3 at baseline; 2.4, 2.2, and 2.4 at 3, 6, and 9 months, respectively; P < .01 for all four pain measures).
“Pain severity as reported in the ESAS-r decreased significantly at 3-month, 6-month and 9-month follow-ups,” the researchers noted.
In addition, total medication burden based on the medication quantification scale (MQS) and morphine equivalent daily dose (MEDD) were recorded at 3, 6, 9, and 12 months. MQS scores decreased compared to baseline at 3, 6, 9, and 12 months in 10%, 23.5%, 26.2%, and 31.6% of patients, respectively. Also compared with baseline, 11.1%, 31.3%, and 14.3% of patients reported decreases in MEDD scores at 3, 6, and 9 months, respectively.
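As background on how a morphine equivalent daily dose is tallied, here is a hedged sketch. The conversion factors below approximate widely published equianalgesic tables; they are not taken from the study, real factors vary by guideline, and the registry's exact method is not described here:

```python
# Illustrative morphine-equivalent-daily-dose (MEDD) tally.
# Conversion factors are approximations for illustration only;
# they are NOT from the study and should not guide prescribing.
CONVERSION = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def medd(daily_doses_mg: dict) -> float:
    """Sum each opioid's daily dose (mg) times its morphine conversion factor."""
    return sum(CONVERSION[drug] * mg for drug, mg in daily_doses_mg.items())

# A hypothetical patient on oxycodone 20 mg/day and codeine 60 mg/day
print(medd({"oxycodone": 20, "codeine": 60}))  # 20*1.5 + 60*0.15 = 39.0
```

A drop in this single number over follow-up is how a reduced opioid burden, like the one reported above, would register.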
Overall, products with equal amounts of the active ingredients tetrahydrocannabinol (THC) and cannabidiol (CBD) were more effective than those with a predominance of either THC or CBD, the researchers wrote.
Medical cannabis was well-tolerated; a total of 15 side effects were reported by 11 patients, 13 of which were minor. The most common side effects were sleepiness and fatigue, and five patients discontinued their medical cannabis because of side effects. The two serious side effects reported during the study period – pneumonia and a cardiovascular event – were deemed unlikely to be related to the patients’ medicinal cannabis use.
The findings were limited by several factors, including the observational design, which prevented conclusions about causality, the researchers noted. Other limitations included the loss of many patients to follow-up and incomplete data on other prescription medications in many cases.
The results support the use of medical cannabis by cancer patients as an adjunct pain relief strategy and a way to potentially reduce the use of other medications such as opioids, the authors concluded.
The study was supported by the Canadian Consortium for the Investigation of Cannabinoids, Collège des Médecins du Québec, and the Canopy Growth Corporation. The researchers had no financial conflicts to disclose.
FROM BMJ SUPPORTIVE & PALLIATIVE CARE
New outbreaks of Marburg virus disease: What clinicians need to know
What do green monkeys, fruit bats, and python caves all have in common? All have been implicated in outbreaks as transmission sources of the rare but deadly Marburg virus. Marburg virus is in the same Filoviridae family of highly pathogenic RNA viruses as Ebola virus, and similarly can cause a rapidly progressive and fatal viral hemorrhagic fever.
In the first reported Marburg outbreak in 1967, laboratory workers in Marburg and Frankfurt, Germany, and in Belgrade, Yugoslavia, developed severe febrile illnesses with massive hemorrhage and multiorgan system dysfunction after contact with infected African green monkeys imported from Uganda.
The majority of Marburg virus disease (MVD) outbreaks have occurred in sub-Saharan Africa, primarily in three countries: Angola, the Democratic Republic of the Congo, and Uganda. These sporadic outbreaks have had high case-fatality rates (up to 80%-90%) and have been linked to human exposure to the oral secretions or urinary/fecal droppings of Egyptian fruit bats (Rousettus aegyptiacus), the animal reservoir for Marburg virus. These exposures have occurred primarily among miners or tourists frequenting bat-infested mines or caves, including Uganda’s Python Cave, where Centers for Disease Control and Prevention (CDC) investigators have conducted ecological studies on Marburg-infected bats. Person-to-person transmission occurs through direct contact with the blood or bodily fluids of an infected person or with a contaminated object (for example, the unsterilized needles and syringes implicated in a large nosocomial outbreak in Angola).
On April 6, 2023, the CDC issued a Health Advisory for U.S. clinicians and public health departments regarding two separate MVD outbreaks in Equatorial Guinea and Tanzania. These first-ever MVD outbreaks in both West and East African countries appear to be epidemiologically unrelated. As of March 24, 2023, in Equatorial Guinea, a total of 15 confirmed cases, including 11 deaths, and 23 probable cases, all deceased, have been identified in multiple districts since the outbreak declaration in February 2023. In Tanzania, a total of eight cases, including five deaths, have been reported among villagers in a northwest region since the outbreak declaration in March 2023. While so far cases in the Tanzania MVD outbreak have been epidemiologically linked, in Equatorial Guinea some cases have no identified epidemiological links, raising concern for ongoing community spread.
To date, no cases in these outbreaks have been reported in the United States or outside the affected countries. Overall, the risk of MVD in nonendemic countries, like the United States, is low but there is still a risk of importation. As of May 2, 2023, CDC has issued a Level 2 travel alert (practice enhanced precautions) for Marburg in Equatorial Guinea and a Level 1 travel watch (practice usual precautions) for Marburg in Tanzania. Travelers to these countries are advised to avoid nonessential travel to areas with active outbreaks and practice preventative measures, including avoiding contact with sick people, blood and bodily fluids, dead bodies, fruit bats, and nonhuman primates. International travelers returning to the United States from these countries are advised to self-monitor for Marburg symptoms during travel and for 21 days after country departure. Travelers who develop signs or symptoms of MVD should immediately self-isolate and contact their local health department or clinician.
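The 21-day post-travel self-monitoring window described above is simple date arithmetic. A minimal sketch follows; the helper function and example dates are illustrative only, not a CDC tool:

```python
from datetime import date, timedelta

# The 21-day window comes from the CDC guidance above; everything else here
# (function name, example departure date) is a hypothetical illustration.
MONITORING_DAYS = 21

def monitoring_end(departure: date) -> date:
    """Last day of post-travel Marburg symptom self-monitoring."""
    return departure + timedelta(days=MONITORING_DAYS)

# A traveler departing an affected country on May 2, 2023 would
# self-monitor through May 23, 2023.
print(monitoring_end(date(2023, 5, 2)))  # 2023-05-23
```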
So, how should clinicians manage such return travelers? In the setting of these new MVD outbreaks in sub-Saharan Africa, what do U.S. clinicians need to know? Clinicians should consider MVD in the differential diagnosis of ill patients with a compatible exposure history and clinical presentation. A detailed exposure history should be obtained to determine if patients have been to an area with an active MVD outbreak during their incubation period (in the past 21 days), had concerning epidemiologic risk factors (for example, presence at funerals, health care facilities, in mines/caves) while in the affected area, and/or had contact with a suspected or confirmed MVD case.
Clinical diagnosis of MVD is challenging because the initial “dry” symptoms of infection are nonspecific (fever, influenza-like illness, malaise, anorexia) and can resemble other febrile infectious illnesses. Similarly presenting alternative or concurrent infections, particularly in febrile returning travelers, include malaria, Lassa fever, typhoid, and measles. From these nonspecific symptoms, patients with MVD can then progress to the more severe “wet” symptoms (for example, vomiting, diarrhea, and bleeding). Common clinical features of MVD have been described based on the presentation and course of cases in prior outbreaks. Notably, in the original Marburg outbreak, maculopapular rash and conjunctival injection were early symptoms, and most deaths occurred during the second week of illness.
Supportive care, including aggressive fluid replacement, is the mainstay of therapy for MVD. Currently, there are no Food and Drug Administration–approved antiviral treatments or vaccines for Marburg virus. Despite their viral similarities, vaccines against Ebola virus have not been shown to be protective against Marburg virus. Marburg virus vaccine development is ongoing, with a few promising candidate vaccines in early phase 1 and 2 clinical trials. In 2022, in response to MVD outbreaks in Ghana and Guinea, the World Health Organization convened an international Marburg virus vaccine consortium which is working to promote global research collaboration for more rapid vaccine development.
In the absence of definitive therapies, early identification of patients with suspected MVD is critical for preventing the spread of infection to close contacts. Like Ebola virus–infected patients, only symptomatic MVD patients are infectious and all patients with suspected MVD should be isolated in a private room and cared for in accordance with infection control procedures. As MVD is a nationally notifiable disease, suspected cases should be reported to local or state health departments as per jurisdictional requirements. Clinicians should also consult with their local or state health department and CDC for guidance on testing patients with suspected MVD and consider prompt evaluation for other infectious etiologies in the patient’s differential diagnosis. Comprehensive guidance for clinicians on screening and diagnosing patients with MVD is available on the CDC website at https://www.cdc.gov/vhf/marburg/index.html.
Dr. Appiah (she/her) is a medical epidemiologist in the division of global migration and quarantine at the CDC. Dr. Appiah holds adjunct faculty appointment in the division of infectious diseases at Emory University, Atlanta. She also holds a commission in the U.S. Public Health Service and is a resident advisor, Uganda, U.S. President’s Malaria Initiative, at the CDC.
10 popular diets for heart health ranked
An evidence-based analysis of 10 popular dietary patterns shows that some promote heart health better than others.
A new American Heart Association scientific statement concludes that the Mediterranean, Dietary Approaches to Stop Hypertension (DASH), pescatarian, and vegetarian eating patterns most strongly align with the heart-healthy eating guidelines the AHA issued in 2021, whereas the popular paleolithic (paleo) and ketogenic (keto) diets fall short.
“The good news for the public and their clinicians is that there are several dietary patterns that allow for substantial flexibility for following a heart healthy diet – DASH, Mediterranean, vegetarian,” writing-group chair Christopher Gardner, PhD, with Stanford (Calif.) University, told this news organization.
“However, some of the popular diets – particularly paleo and keto – are so strictly restrictive of specific food groups that when these diets are followed as intended by their proponents, they are not aligned with the scientific evidence for a heart-healthy diet,” Dr. Gardner said.
The statement was published online in Circulation.
A tool for clinicians
“The number of different, popular dietary patterns has proliferated in recent years, and the amount of misinformation about them on social media has reached critical levels,” Dr. Gardner said in a news release.
“The public – and even many health care professionals – may rightfully be confused about heart-healthy eating, and they may feel that they don’t have the time or the training to evaluate the different diets. We hope this statement serves as a tool for clinicians and the public to understand which diets promote good cardiometabolic health,” he noted.
The writing group rated on a scale of 1-100 how well 10 popular diets or eating patterns align with AHA dietary advice for heart-healthy eating.
That advice includes consuming a wide variety of fruits and vegetables; choosing mostly whole grains instead of refined grains; using liquid plant oils rather than tropical oils; eating healthy sources of protein, such as from plants, seafood, or lean meats; minimizing added sugars and salt; limiting alcohol; choosing minimally processed foods instead of ultraprocessed foods; and following this guidance wherever food is prepared or consumed.
The 10 diets/dietary patterns were DASH, Mediterranean-style, pescatarian, ovo-lacto vegetarian, vegan, low-fat, very low–fat, low-carbohydrate, paleo, and very low–carbohydrate/keto patterns.
The diets were divided into four tiers on the basis of their scores, which ranged from a low of 31 to a high of 100.
Only the DASH eating plan got a perfect score of 100. This eating pattern is low in salt, added sugar, tropical oil, alcohol, and processed foods and high in nonstarchy vegetables, fruits, whole grains, and legumes. Proteins are mostly plant-based, such as legumes, beans, or nuts, along with fish or seafood, lean poultry and meats, and low-fat or fat-free dairy products.
The Mediterranean eating pattern received a score of 89 because, unlike DASH, it allows moderate alcohol consumption and does not address added salt.
The other two top tier eating patterns were pescatarian, with a score of 92, and vegetarian, with a score of 86.
“If implemented as intended, the top-tier dietary patterns align best with the American Heart Association’s guidance and may be adapted to respect cultural practices, food preferences and budgets to enable people to always eat this way, for the long term,” Dr. Gardner said in the release.
Vegan and low-fat diets (each with a score of 78) fell into the second tier.
Though these diets emphasize fruits, vegetables, whole grains, legumes, and nuts while limiting alcohol and added sugars, the vegan diet is so restrictive that it could be challenging to follow long-term or when eating out and may increase the risk for vitamin B12 deficiency, which can lead to anemia, the writing group notes.
There also are concerns that low-fat diets treat all fats equally, whereas the AHA guidance calls for replacing saturated fats with healthier fats, they point out.
The third tier includes the very low–fat diet (score 72) and low-carb diet (score 64), whereas the paleo and very low–carb/keto diets fall into the fourth tier, with the lowest scores of 53 and 31, respectively.
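The tiering described above can be illustrated with the scores reported in the statement. Note that the tier cutoffs in this sketch are assumptions inferred from the article’s groupings, not thresholds published by the AHA:

```python
# Scores as reported in the AHA statement summary above. The cutoffs in
# tier() are assumptions chosen to reproduce the article's groupings; the
# AHA did not publish explicit tier thresholds.
scores = {
    "DASH": 100, "Pescatarian": 92, "Mediterranean": 89, "Vegetarian": 86,
    "Vegan": 78, "Low-fat": 78, "Very low-fat": 72, "Low-carb": 64,
    "Paleo": 53, "Keto": 31,
}

def tier(score: int) -> int:
    """Assign a tier (1 = best alignment with AHA guidance), assumed cutoffs."""
    if score >= 85:
        return 1
    if score >= 75:
        return 2
    if score >= 55:
        return 3
    return 4

# Group diets by tier, highest score first.
tiers: dict[int, list[str]] = {}
for diet, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    tiers.setdefault(tier(score), []).append(diet)

for t in sorted(tiers):
    print(f"Tier {t}: {', '.join(tiers[t])}")
```

With these assumed cutoffs, the grouping matches the article: DASH, pescatarian, Mediterranean, and vegetarian in tier 1; vegan and low-fat in tier 2; very low-fat and low-carb in tier 3; paleo and keto in tier 4.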
Dr. Gardner said that it’s important to note that all 10 diet patterns “share four positive characteristics: more veggies, more whole foods, less added sugars, less refined grains.”
“These are all areas for which Americans have substantial room for improvement, and these are all things that we could work on together. Progress across these aspects would make a large difference in the heart-healthiness of the U.S. diet,” he told this news organization.
This scientific statement was prepared by the volunteer writing group on behalf of the AHA Council on Lifestyle and Cardiometabolic Health, the Council on Cardiovascular and Stroke Nursing, the Council on Hypertension, and the Council on Peripheral Vascular Disease.
A version of this article first appeared on Medscape.com.
An evidence-based analysis of 10 popular dietary patterns shows that some promote heart health better than others.
A new American Heart Association scientific statement concludes that the Mediterranean, Dietary Approaches to Stop Hypertension (DASH), pescatarian, and vegetarian eating patterns most strongly align with heart-healthy eating guidelines issued by the AHA in 2021, whereas the popular paleolithic (paleo) and ketogenic (keto) diets fall short.
“The good news for the public and their clinicians is that there are several dietary patterns that allow for substantial flexibility for following a heart healthy diet – DASH, Mediterranean, vegetarian,” writing-group chair Christopher Gardner, PhD, with Stanford (Calif.) University, told this news organization.
“However, some of the popular diets – particularly paleo and keto – are so strictly restrictive of specific food groups that when these diets are followed as intended by their proponents, they are not aligned with the scientific evidence for a heart-healthy diet,” Dr. Gardner said.
The statement was published online in Circulation.
A tool for clinicians
“The number of different, popular dietary patterns has proliferated in recent years, and the amount of misinformation about them on social media has reached critical levels,” Dr. Gardner said in a news release.
“The public – and even many health care professionals – may rightfully be confused about heart-healthy eating, and they may feel that they don’t have the time or the training to evaluate the different diets. We hope this statement serves as a tool for clinicians and the public to understand which diets promote good cardiometabolic health,” he noted.
The writing group rated on a scale of 1-100 how well 10 popular diets or eating patterns align with AHA dietary advice for heart-healthy eating.
That advice includes consuming a wide variety of fruits and vegetables; choosing mostly whole grains instead of refined grains; using liquid plant oils rather than tropical oils; eating healthy sources of protein, such as from plants, seafood, or lean meats; minimizing added sugars and salt; limiting alcohol; choosing minimally processed foods instead of ultraprocessed foods; and following this guidance wherever food is prepared or consumed.
The 10 diets/dietary patterns were DASH, Mediterranean-style, pescatarian, ovo-lacto vegetarian, vegan, low-fat, very low–fat, low-carbohydrate, paleo, and very low–carbohydrate/keto patterns.
The diets were divided into four tiers on the basis of their scores, which ranged from a low of 31 to a high of 100.
Only the DASH eating plan got a perfect score of 100. This eating pattern is low in salt, added sugar, tropical oil, alcohol, and processed foods and high in nonstarchy vegetables, fruits, whole grains, and legumes. Proteins are mostly plant-based, such as legumes, beans, or nuts, along with fish or seafood, lean poultry and meats, and low-fat or fat-free dairy products.
The Mediterranean eating pattern achieved a slightly lower score of 89 because, unlike DASH, it allows for moderate alcohol consumption and does not address added salt.
The other two top-tier eating patterns were pescatarian, with a score of 92, and vegetarian, with a score of 86.
“If implemented as intended, the top-tier dietary patterns align best with the American Heart Association’s guidance and may be adapted to respect cultural practices, food preferences and budgets to enable people to always eat this way, for the long term,” Dr. Gardner said in the release.
Vegan and low-fat diets (each with a score of 78) fell into the second tier.
Though these diets emphasize fruits, vegetables, whole grains, legumes, and nuts while limiting alcohol and added sugars, the vegan diet is so restrictive that it could be challenging to follow long-term or when eating out and may increase the risk for vitamin B12 deficiency, which can lead to anemia, the writing group notes.
There also are concerns that low-fat diets treat all fats equally, whereas the AHA guidance calls for replacing saturated fats with healthier fats, they point out.
The third tier includes the very low–fat diet (score 72) and low-carb diet (score 64), whereas the paleo and very low–carb/keto diets fall into the fourth tier, with the lowest scores of 53 and 31, respectively.
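The reported scores and tier groupings can be reproduced with simple threshold bucketing. The sketch below is purely illustrative and not from the AHA statement; the cutoff values are inferred from the published groupings, not defined by the writing group.

```python
# Alignment scores reported in the AHA statement (0-100 scale).
SCORES = {
    "DASH": 100,
    "Pescatarian": 92,
    "Mediterranean": 89,
    "Vegetarian": 86,
    "Vegan": 78,
    "Low-fat": 78,
    "Very low-fat": 72,
    "Low-carbohydrate": 64,
    "Paleo": 53,
    "Keto": 31,
}

def tier(score: int) -> int:
    """Assign a tier from an alignment score (illustrative cutoffs only)."""
    if score >= 85:
        return 1
    if score >= 75:
        return 2
    if score >= 60:
        return 3
    return 4

# Bucket every pattern; reproduces the four tiers described in the article.
tiers = {diet: tier(s) for diet, s in SCORES.items()}
```

Any cutoffs that preserve the ordering of the ten scores would yield the same grouping; the values above are one such choice.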
Dr. Gardner said that it’s important to note that all 10 diet patterns “share four positive characteristics: more veggies, more whole foods, less added sugars, less refined grains.”
“These are all areas for which Americans have substantial room for improvement, and these are all things that we could work on together. Progress across these aspects would make a large difference in the heart-healthiness of the U.S. diet,” he told this news organization.
This scientific statement was prepared by the volunteer writing group on behalf of the AHA Council on Lifestyle and Cardiometabolic Health, the Council on Cardiovascular and Stroke Nursing, the Council on Hypertension, and the Council on Peripheral Vascular Disease.
A version of this article first appeared on Medscape.com.
Medications provide best risk-to-benefit ratio for weight loss, says expert
Lifestyle changes result in the least weight loss and may be safest, while surgery provides the most weight loss and has the greatest risk. Antiobesity medications, especially the newer ones used in combination with lifestyle changes, can provide significant and sustained weight loss with manageable side effects, said Daniel Bessesen, MD, a professor of endocrinology, diabetes, and metabolism at the University of Colorado at Denver, Aurora.
New and more effective antiobesity medications have given internists more potential options to discuss with their patients, Dr. Bessesen said. He reviewed the pros and cons of the different options.
Medications are indicated for patients with a body mass index (BMI) of 30 or greater, or 27 or greater with a weight-related comorbidity, Dr. Bessesen said. The average weight loss is 5%-15% over 3-6 months but may vary greatly. Insurance often does not cover the medication costs.
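As a quick sketch, the conventional eligibility criterion from FDA labeling for antiobesity pharmacotherapy (BMI ≥ 30, or BMI ≥ 27 with at least one weight-related comorbidity) can be expressed directly; the helper names below are hypothetical, and the thresholds reflect common labeling rather than a recommendation from this talk.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def medication_candidate(weight_kg: float, height_m: float,
                         has_comorbidity: bool) -> bool:
    """True if the patient meets the conventional BMI criterion for
    antiobesity pharmacotherapy (>= 30, or >= 27 with a comorbidity)."""
    b = bmi(weight_kg, height_m)
    return b >= 30 or (b >= 27 and has_comorbidity)
```

For example, a 90-kg patient who is 1.75 m tall (BMI about 29.4) would meet the criterion only if a weight-related comorbidity such as hypertension were present.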
Older FDA-approved antiobesity medications
Phentermine is the most widely prescribed antiobesity medication, partly because it is the only option most people can afford out of pocket. Dr. Bessesen presented recent data showing that long-term use of phentermine was associated with greater weight loss and that patients continuously taking phentermine for 24 months lost 7.5% of their weight.
Phentermine suppresses appetite by increasing norepinephrine production. Dr. Bessesen warned that internists should be careful when prescribing it to patients with mental conditions, because it acts as a stimulant. Early studies raised concerns about the risk of cardiovascular disease (CVD) in patients taking phentermine. However, analysis of data from over 13,000 individuals showed no evidence of a relationship between phentermine exposure and CVD events.
“These data provide some reassurance that it could be used in patients with CVD risk,” he noted. Phentermine can also be combined with topiramate extended release, a combination that provides greater efficacy (up to 10% weight loss) with fewer side effects. However, this combination is less effective in patients with diabetes than in those without.
Additional treatment options include orlistat and naltrexone sustained release (SR)/bupropion SR. Orlistat is a good alternative for patients with constipation and is the safest of the older antiobesity medications, whereas naltrexone SR/bupropion SR may be useful in patients with food cravings. However, individual-level benefit varies more with these agents than with phentermine or phentermine/topiramate ER, Dr. Bessesen said.
Newer antiobesity medications
Liraglutide, an agent used for the management of type 2 diabetes, has recently been approved for weight loss. Liraglutide causes moderate weight loss, and it may reduce the risk of CVD. However, there are tolerability issues, such as nausea and other risks, and Dr. Bessesen advises internists to “start at low doses and increase slowly.”
Semaglutide is the newest and most effective antiobesity drug approved by the Food and Drug Administration, providing sustained weight loss of 8% for up to 48 weeks after starting treatment. Although its efficacy is lower in patients with diabetes, Dr. Bessesen noted that “this is common for antiobesity agents, and clinicians should not refrain from prescribing it in this population.”
Setmelanotide is another new medication approved for chronic weight management in patients with monogenic obesity. This medication can be considered for patients with early-onset severe obesity with abnormal feeding behavior.
Commenting on barriers to access to new antiobesity medications, Dr. Bessesen said that “the high cost of these medications is a substantial problem, but as more companies become involved and products are on the market for a longer period of time, I am hopeful that prices will come down.”
Emerging antiobesity medications
Dr. Bessesen presented recent phase 3 data showing that treatment with tirzepatide provided sustained weight loss and improved cardiometabolic measures. Tirzepatide, which targets receptors for glucagonlike peptide–1 and glucose-dependent insulinotropic polypeptide, is used for the management of type 2 diabetes and is expected to be reviewed soon by the FDA for use in weight management.
A semaglutide/cagrilintide combination may also provide a new treatment option for patients with obesity. In a phase 1b trial, semaglutide/cagrilintide treatment resulted in up to 17% weight loss in patients with obesity who were otherwise healthy; however, phase 2 and 3 data are needed to confirm its efficacy.
A ‘holistic approach’
Dr. Bessesen noted that medications produce greater weight loss than exercise alone. Factors to consider when deciding whether to prescribe drugs, and which ones, include cost, local regulatory guidelines, the need for long-term use, and patient comorbidities.
He also stated that lifestyle changes, such as adopting healthy nutrition and exercising regularly, are also important and can enhance weight loss when combined with medications.
Richele Corrado, DO, MPH, agreed that lifestyle management in combination with medications may provide greater weight loss than each of these interventions alone.
“If you look at the data, exercise doesn’t help you lose much weight,” said Dr. Corrado, a staff internist and obesity medicine specialist at Walter Reed National Military Medical Center in Bethesda, Md., who spoke at the same session. She added that she has many patients who struggle to lose weight despite having a healthy lifestyle. “It’s important to discuss with these patients about medications and surgery.”
Dr. Bessesen noted that management of mental health and emotional well-being should also be an integral part of obesity management. “Treatment for obesity may be more successful when underlying psychological conditions such as depression, childhood sexual trauma, or anxiety are addressed and treated,” he said.
Dr. Bessesen was involved in the study of the efficacy of semaglutide/cagrilintide. He has received research grants and honoraria from Novo Nordisk and has served as a consultant for Eli Lilly; he reported no financial conflicts with the companies that make the other medications mentioned. Dr. Corrado reported no relevant financial conflicts.
AT INTERNAL MEDICINE 2023
AHA backs screening for cognitive impairment after stroke
Screening for cognitive impairment should be part of multidisciplinary care for stroke survivors, the American Heart Association says in a new scientific statement.
“Cognitive impairment after stroke is very common, is associated with other post-stroke outcomes, and often has significant impact on the quality of life,” Nada El Husseini, MD, MHSc, chair of the scientific statement writing group, told this news organization.
“It is important to screen stroke survivors for cognitive impairment as well as for associated comorbidities such as mood and sleep disorders,” said Dr. El Husseini, associate professor of neurology at Duke University Medical Center in Durham, N.C.
The scientific statement was published online in Stroke. It’s the first to specifically focus on the cognitive impairment resulting from an overt stroke (ischemic or hemorrhagic).
‘Actionable’ considerations for care
The writing group performed a “scoping” review of the literature on the prevalence, diagnosis, and management of poststroke cognitive impairment (PSCI) to provide a framework for “actionable considerations” for clinical practice as well as to highlight gaps needing additional studies, Dr. El Husseini explained.
PSCI, ranging from mild to severe, occurs in up to 60% of stroke survivors in the first year after stroke, yet it is often underreported and underdiagnosed, the writing group notes.
Up to 20% of stroke survivors who experience mild cognitive impairment fully recover cognitive function, and cognitive recovery is most likely within the first 6 months after a stroke.
However, improvement in cognitive impairment without return to prestroke levels is more frequent than is complete recovery. As many as one in three stroke survivors may develop dementia within 5 years of stroke.
The writing group also notes that PSCI is often associated with other conditions, including physical disability, sleep disorders, behavioral and personality changes, depression, and other neuropsychological changes – each of which may contribute to lower quality of life.
Currently, there is no “gold standard” for cognitive screening following stroke, but several brief cognitive screening tests, including the Mini–Mental State Examination and the Montreal Cognitive Assessment, are widely used to identify cognitive impairment after stroke.
The statement also highlights the importance of assessing cognitive changes over time after stroke. Stroke survivors who experience unexplained difficulties with cognitive-related activities of daily living, following care instructions, or providing a reliable health history may be candidates for additional cognitive screening.
Manage risk factors to prevent repeat stroke
“Anticipatory guidance regarding home and driving safety and return to work (if applicable), along with interdisciplinary collaboration among different medical and ancillary specialists in the diagnosis and management of cognitive impairment, is key for the holistic care of stroke survivors,” Dr. El Husseini told this news organization.
The multidisciplinary poststroke health care team could include neurologists, occupational therapists, speech therapists, nurses, neuropsychologists, gerontologists, and primary care providers.
“Because recurrent stroke is strongly associated with the development of cognitive impairment and dementia, prevention of recurrent strokes should be sought to decrease that risk,” Dr. El Husseini said. This includes addressing stroke risk factors, including high blood pressure, high cholesterol, type 2 diabetes, and atrial fibrillation.
The writing group says research is needed in the future to determine how cognitive impairment develops after stroke and the impact of nonbrain factors, including infection, frailty, and social factors.
Further research is also needed to determine best practices for cognitive screening after stroke, including the development and use of screening instruments that consider demographic, cultural, and linguistic factors in determining “normal” function.
“Perhaps the most pressing need, however, is the development of effective and culturally relevant treatments for poststroke cognitive impairment,” Dr. El Husseini said in a news release.
“We hope to see big enough clinical trials that assess various techniques, medications, and lifestyle changes in diverse groups of patients that may help improve cognitive function,” she added.
This scientific statement was prepared by the volunteer writing group on behalf of the AHA Stroke Council, the Council on Cardiovascular Radiology and Intervention, the Council on Hypertension, and the Council on Lifestyle and Cardiometabolic Health.
New tool accurately predicts suicide risk in serious mental illness
The 17-question Oxford Mental Illness and Suicide Tool (OxMIS) assessment is designed to predict 12-month suicide risk in people with schizophrenia spectrum disorders and bipolar disorder based on risk factors such as familial traits, antisocial traits, and information about self-harm.
“We have demonstrated the clinical utility of OxMIS in two separate studies and countries. As with any clinical risk prediction tool, it will not improve outcomes unless coupled with effective interventions,” lead investigator Amir Sariaslan, PhD, a senior research fellow in psychiatric epidemiology at the University of Oxford, England, told this news organization.
The findings were published online in Translational Psychiatry.
Twice validated
Dr. Sariaslan and his team originally developed and validated the OxMIS in a cohort of 75,000 people with serious mental illness (SMI) in Sweden. Recognizing the lack of externally validated prognostic models in the mental health field, the team wanted to validate the instrument in a new, population-based sample in Finland.
The investigators accessed information about patient diagnosis and treatment from the Finnish Care Register for Health Care, which contains de-identified information for all individuals between ages 15 and 65 years diagnosed with an SMI between Jan. 1, 1996, and Dec. 31, 2017.
They included 137,000 patients with schizophrenia spectrum disorders or bipolar disorder for a total of more than 5 million episodes of inpatient or outpatient treatment. Investigators linked the cohort to the Causes of Death Register to identify those who had died by suicide within 12 months of an index treatment episode, which investigators randomly selected for each person.
The investigators found that 1,475 individuals in the sample died by suicide within 1 year of their index episode (1.1%).
Each patient was assigned a clinical suicide risk score based on their clinical information, familial traits, prescription information, and comorbid conditions. The investigators found that the instrument predicted suicide with an area under the curve (AUC) of 0.70.
In other words, in 70% of the instances where the investigators randomly selected two people from the sample, one of whom died by suicide and the other of whom did not, the individual who died by suicide had a higher OxMIS risk score.
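The pairwise reading of AUC described in that paragraph can be sketched in a few lines of Python. The risk scores below are invented purely for illustration; they are not OxMIS outputs or study data:

```python
# AUC as a pairwise comparison: the probability that a randomly chosen
# case (here, a person who died by suicide) receives a higher risk
# score than a randomly chosen non-case. Ties count as half a "win".
def pairwise_auc(case_scores, noncase_scores):
    wins = sum(
        1.0 if c > n else 0.5 if c == n else 0.0
        for c in case_scores
        for n in noncase_scores
    )
    return wins / (len(case_scores) * len(noncase_scores))

# Hypothetical scores, not study data:
died = [0.8, 0.6, 0.7, 0.5]
survived = [0.3, 0.5, 0.2, 0.6]
print(pairwise_auc(died, survived))  # → 0.875
```

An AUC of 0.70, as reported for OxMIS, means the case outscores the non-case in 70% of such random pairings.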
The investigators note the model overestimated the risk for patients who were at extremely high risk for suicide (those with a predicted suicide risk of > 5%). “In our complementary sensitivity analysis, we observed improved calibration in these patients when we assigned them a suicide risk prediction of no more than 5%,” they write.
Dr. Sariaslan said that the findings highlight the importance of safety planning interventions. “It is also essential to remember that OxMIS is not intended to replace clinical decision-making, but rather to support it,” he said.
As to whether the tool could be used in other populations, such as in the United States, Dr. Sariaslan said, “there is no good evidence that the contribution of risk factors to suicide in this population is different in the U.S. than in northern Europe, so there is no a priori reason to have to do multiple external validations before it can be used for research or clinical purposes.”
One size does not fit all
Commenting on the study, Ronald Kessler, PhD, McNeil Family Professor, department of health care policy at Harvard Medical School, Boston, said that he’d be “surprised” if OxMIS was adopted in the United States because there is already an existing tool that is “slightly more accurate,” which he helped develop.
“In addition, when we start thinking about uses for such scales, it becomes clear that different scales should be used for different segments of the population, depending on intervention options,” Dr. Kessler said.
“So, for example, a different scale would probably be optimal in deciding how to manage psychiatric inpatients in the transition back to the community after hospital discharge than [it would be], say, in deciding how to respond to suicidality among patients presenting at an emergency department. No one scale will fit for all the scenarios in which prediction is desired,” he added.
The study was funded by the Academy of Finland. Dr. Kessler receives funding from the National Institute of Mental Health, Department of Defense, and Veterans Administration to develop suicide prediction models. Dr. Sariaslan has no disclosures to report.
A version of this article first appeared on Medscape.com.
FROM TRANSLATIONAL PSYCHIATRY
Experts debate reducing ASCT for multiple myeloma
NEW YORK –
Hematologist-oncologists whose top priority is ensuring that patients have the best chance of progression-free survival (PFS) will continue to choose autologous stem cell transplantation (ASCT) as a best practice, argued Amrita Krishnan, MD, hematologist at the Judy and Bernard Briskin Center for Multiple Myeloma Research, City of Hope Comprehensive Cancer Center, Duarte, Calif.
A differing perspective was presented by C. Ola Landgren, MD, PhD, hematologist at the Sylvester Comprehensive Cancer Center at the University of Miami. Dr. Landgren cited evidence that, for newly diagnosed multiple myeloma (MM) patients treated successfully with modern combination therapies, ASCT is not a mandatory treatment step before starting maintenance therapy.
Making a case for ASCT as the standard of care (SoC), Dr. Krishnan noted, “based on the DETERMINATION trial [DT], there is a far superior rate of PFS in patients who get ASCT up front, compared with patients who get only conventional chemotherapy with lenalidomide, bortezomib, and dexamethasone [RVd]. PFS is the endpoint we look for in our treatment regimens.
“If you don’t use ASCT up front, you may lose the opportunity at later relapse. This is not to say that transplant is the only tool at our disposal. It is just an indispensable one. The GRIFFIN trial [GT] has shown us that robust combinations of drugs [both RVd and daratumumab + RVd] can improve patient outcomes both before and after ASCT,” Dr. Krishnan concluded.
In his presentation, Dr. Landgren stated that, in the DT, while PFS is prolonged by the addition of ASCT to RVd, adding ASCT did not significantly increase overall survival (OS) rates. He added that treatment-related adverse events (AEs) of grade 3+ occurred in 78.2% of patients on RVd alone versus 94.2% of RVd + ASCT patients.
“ASCT should not be the SoC frontline treatment in MM because it does not prolong OS. The IFM trial and the DT both show that there is no difference in OS between drug combination therapy followed by transplant and maintenance versus combination therapy alone, followed by transplant and maintenance. Furthermore, patients who get ASCT have higher risk of developing secondary malignancies, worse quality of life, and higher long-term morbidity with other conditions,” Dr. Landgren said.
He cited the MAIA trial, which administered daratumumab and lenalidomide plus dexamethasone (DRd) to patients who were too old or too frail to qualify for ASCT. More than half of the patients in the DRd arm of MAIA were estimated to remain progression free at 60 months.
“Furthermore, GT and the MANHATTAN clinical trials showed that we can safely add CD38-targeted monoclonal antibodies to standard combination therapies [carfilzomib, lenalidomide, and dexamethasone (KRd)], resulting in higher rates of minimal-residual-disease (MRD) negativity. That means modern four-drug combination therapies [DR-RVd and DR-KRd] will allow more [and more newly diagnosed] MM patients to achieve MRD negativity in the absence of ASCT,” Dr. Landgren concluded.
Asked to comment on the two viewpoints, Joshua Richter, MD, director of myeloma treatment at the Blavatnik Family Chelsea Medical Center at Mount Sinai, New York, said: “With some patients, we can get similar outcomes, whether or not we do a transplant. Doctors need to be better at choosing who really needs ASCT. Older people with standard-risk disease or people who achieve MRD-negative status after pharmacological treatment might not need to receive a transplant as much as those who have bulk disease or high-risk cytogenetics.
“Although ASCT might not be the best frontline option for everyone, collecting cells from most patients and storing them has many advantages. It allows us to have the option of ASCT in later lines of therapy. In some patients with low blood counts, we can use stored cells to reboot their marrow and make them eligible for trials of promising new drugs,” Dr. Richter said.
Dr. Krishnan disclosed relationships with Takeda, Amgen, GlaxoSmithKline, Bristol-Myers Squibb, Sanofi, Pfizer, Adaptive, Regeneron, Janssen, AstraZeneca, Artiva, and Sutro. Dr. Landgren reported ties with Amgen, BMS, Celgene, Janssen, Takeda, Glenmark, Juno, Pfizer, Merck, and others. Dr. Richter disclosed relationships with Janssen, BMS, and Takeda.
NEW YORK –
NEW YORK – Hematologist-oncologists whose top priority is ensuring that patients have the best chance of progression-free survival (PFS) will continue to choose autologous stem cell transplantation (ASCT) as a best practice, argued Amrita Krishnan, MD, hematologist at the Judy and Bernard Briskin Center for Multiple Myeloma Research, City of Hope Comprehensive Cancer Center, Duarte, Calif.
A differing perspective was presented by C. Ola Landgren, MD, PhD, hematologist at the Sylvester Comprehensive Cancer Center at the University of Miami. Dr. Landgren cited evidence that, for patients with newly diagnosed multiple myeloma (MM) who are treated successfully with modern combination therapies, ASCT is not a mandatory treatment step before starting maintenance therapy.
Making the case for ASCT as the standard of care (SoC), Dr. Krishnan noted: “Based on the DETERMINATION trial [DT], there is a far superior rate of PFS among patients who get ASCT up front, compared with patients who got only conventional chemotherapy with lenalidomide, bortezomib, and dexamethasone [RVd]. PFS is the endpoint we look for in our treatment regimens.
“If you don’t use ASCT up front, you may lose the opportunity at later relapse. This is not to say that transplant is the only tool at our disposal. It is just an indispensable one. The GRIFFIN trial [GT] has shown us that robust combinations of drugs [both RVd and daratumumab plus RVd (D-RVd)] can improve patient outcomes both before and after ASCT,” Dr. Krishnan concluded.
In his presentation, Dr. Landgren noted that, in the DT, although PFS was prolonged by the addition of ASCT to RVd, adding ASCT did not significantly increase overall survival (OS). He added that treatment-related adverse events of grade 3 or higher occurred in 78.2% of patients on RVd alone versus 94.2% of patients on RVd plus ASCT.
“ASCT should not be the SoC frontline treatment in MM because it does not prolong OS. The IFM trial and the DT both show that there is no difference in OS between combination therapy followed by transplant and maintenance versus combination therapy and maintenance alone. Furthermore, patients who get ASCT have a higher risk of developing secondary malignancies, worse quality of life, and higher long-term morbidity from other conditions,” Dr. Landgren said.
He also cited the MAIA trial, which administered daratumumab plus lenalidomide and dexamethasone (DRd) to patients who were too old or too frail to qualify for ASCT. More than half of the patients in the DRd arm of MAIA were estimated to remain progression free at 60 months.
“Furthermore, the GT and MANHATTAN clinical trials showed that we can safely add CD38-targeted monoclonal antibodies to standard combination therapies [RVd and carfilzomib, lenalidomide, and dexamethasone (KRd)], resulting in higher rates of minimal residual disease [MRD] negativity. That means modern four-drug combination therapies [D-RVd and D-KRd] will allow more [and more newly diagnosed] MM patients to achieve MRD negativity in the absence of ASCT,” Dr. Landgren concluded.
Asked to comment on the two viewpoints, Joshua Richter, MD, director of myeloma treatment at the Blavatnik Family – Chelsea Medical Center at Mount Sinai, New York, said: “With some patients, we can get similar outcomes whether or not we do a transplant. Doctors need to be better at choosing who really needs ASCT. Older people with standard-risk disease, or people who achieve MRD-negative status after pharmacological treatment, might not need a transplant as much as those who have bulky disease or high-risk cytogenetics.
“Although ASCT might not be the best frontline option for everyone, collecting cells from most patients and storing them has many advantages. It allows us to have the option of ASCT in later lines of therapy. In some patients with low blood counts, we can use stored cells to reboot their marrow and make them eligible for trials of promising new drugs,” Dr. Richter said.
Dr. Krishnan disclosed relationships with Takeda, Amgen, GlaxoSmithKline, Bristol-Myers Squibb, Sanofi, Pfizer, Adaptive, Regeneron, Janssen, AstraZeneca, Artiva, and Sutro. Dr. Landgren reported ties with Amgen, BMS, Celgene, Janssen, Takeda, Glenmark, Juno, Pfizer, Merck, and others. Dr. Richter disclosed relationships with Janssen, BMS, and Takeda.
AT 2023 GREAT DEBATES AND UPDATES HEMATOLOGIC MALIGNANCIES CONFERENCE