Smartphone apps may aid home rheumatoid arthritis monitoring
BIRMINGHAM, ENGLAND – Researchers in the United Kingdom are looking at how smartphone technology can help to improve how patients with rheumatoid arthritis (RA) monitor their disease at home and between clinic visits.
As part of the Remote Monitoring of Rheumatoid Arthritis (REMORA) study, a team led by Will Dixon, MD, at the University of Manchester (England), has developed an app that links directly into electronic patient records to help collect information from patients between their regular clinic visits for both self-monitoring and research purposes.
“REMORA is motivated by the need to learn about what happens to patients in between clinic visits, and that’s both for clinical care and for research but also to have the opportunity to support self-management, so we’ve designed the study to meet those three needs,” Dr. Dixon, professor and chair of digital epidemiology in Manchester University’s division of musculoskeletal and dermatological sciences, said in an interview at the British Society for Rheumatology annual conference.
Dr. Dixon explained that when patients are seen only every few months, they may forget or underplay events that could have significance for their clinical care. Use of the beta version of the app between clinic consultations during the study confirmed that such recall error occurs.
“In the consultation, we’d ask people how they’d been doing before looking at the graphs in the app, and even people who had said they’d been absolutely fine since they’d last been seen, even in the previous month of beta-testing, have signs that they could have been [having] pain flares,” Dr. Dixon said. This sort of prospective data collection by the app could enable discussion of any irregularities even if more stoic patients reported having no problems.
The responses showed some similarities in the information that clinicians, researchers, and patients wanted to record, but also some key differences.
All groups wanted the app to be able to collect information about possible changes in disease activity (indicated by levels of pain, joint swelling, or disease flares) and the impact that these had on physical and emotional well-being.
Patients were open to regular monitoring, if not too burdensome, but would prefer to note things down “when something happened.” On the other hand, clinicians and researchers wanted regularity and consistency in the monitoring, although they saw the benefit of a more “ad hoc” approach.
Clinicians and researchers felt no need to “reinvent the wheel” and indicated that existing validated tools could be used to collect the information. Conversely, patients preferred a more pictorial or free-text approach, although they were aware of some standardized tools in common use.
Daily, weekly, and monthly question sets were developed, with a diary that uses emojis to indicate how people using the app are feeling and a free-text area to allow them to note down any significant health events or thoughts.
Pilot testing of the app has been done in one hospital so far, but it was so well received that patients did not want to have to stop using it at the end of the study, Dr. Austin said in an interview.
Linking into the patient records is a unique approach, and if it proves successful in RA, it could be rolled out across the country’s National Health Service (NHS) and perhaps even into other chronic conditions where self-monitoring is needed.
“We all know we have a limited time in our consultations, so we need to develop a system whereby a clinician, in the 15 minutes they have got for a follow-up appointment, can set somebody up with an ‘app prescription,’ ” Dr. Dixon said. “We’re looking to really develop a blueprint for how apps can successfully connect into the NHS.” At present, however, the next step is to try to scale up the app for use in several hospitals within an area rather than roll it out nationally, he said.
Using built-in accelerometer for research
Another approach to harnessing smartphone technology is being taken by researchers at the University of Southampton (England), where engineering postgraduate student Jimmy Caroupapoulle and his collaborators are working on an app that continually uses the built-in sensors in a phone to detect movements, and thus how physically active someone is.
“What we are trying to achieve is to develop an application that can just run in the background so people do not have to do too much,” Mr. Caroupapoulle explained in an interview at his poster presentation.
Using the app, called RApp, patients will be able to answer daily questions based on existing tools (RAPID3 and MDHAQ) to record their levels of pain, joint inflammation, and physical activity. The latter would be recorded via the phone’s onboard accelerometer to give a more objective view of whether the patient is moving around, as well as the patient’s speed in getting up from a seated position. The app also collects data using the 28-joint Disease Activity Score to give an indication of the severity of joint pain or inflammation.
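The poster did not detail RApp’s signal processing, but as a rough, hypothetical sketch of the kind of measure described above, the snippet below reduces a phone’s 3-axis accelerometer stream to a sit-to-stand “briskness” value and a crude daily-activity total. The function names, the 0.5 m/s² movement threshold, and the 50-Hz sampling rate are illustrative assumptions, not details taken from RApp.

```python
# Hypothetical sketch (not RApp source code): estimating how briskly a person
# rises from a chair from a phone's built-in 3-axis accelerometer stream.
import math

GRAVITY = 9.81          # m/s^2, subtracted to isolate movement
SAMPLE_RATE_HZ = 50     # assumed smartphone sampling rate

def movement_magnitude(samples):
    """Gravity-corrected acceleration magnitude for each (x, y, z) sample."""
    return [abs(math.sqrt(x * x + y * y + z * z) - GRAVITY) for x, y, z in samples]

def sit_to_stand_peak(samples):
    """Peak movement acceleration (m/s^2) over a window: a crude proxy for
    how quickly the user stood up."""
    return max(movement_magnitude(samples))

def active_seconds(samples, threshold=0.5):
    """Seconds in which movement exceeded a threshold: a rough activity measure."""
    active = sum(1 for m in movement_magnitude(samples) if m > threshold)
    return active / SAMPLE_RATE_HZ

# Synthetic example: 2 seconds of near-rest followed by 1 second of a brisk rise.
window = [(0.1, 0.2, 9.8)] * 100 + [(1.5, 2.0, 11.0)] * 50
print(f"Peak sit-to-stand acceleration: {sit_to_stand_peak(window):.2f} m/s^2")
print(f"Active time in window: {active_seconds(window):.1f} s")
```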
The aim is to give the patients the power to monitor themselves but also to facilitate discussion with their physicians. Data from the app will be integrated into an online portal so that patients and their doctors can see the information provided.
So far, five patients with RA have tested the application, and the next stage is to release it to a wider group, perhaps 20 patients, Mr. Caroupapoulle said.
“There are lots of apps out there, but this is something that looks at the quality of movement,” consultant rheumatologist Christopher Edwards, MD, a member of the team behind RApp, said in an interview.
As opposed to pedometers or other devices that monitor physical activity to varying degrees of accuracy, RApp looks at how people accelerate as they stand up or move, which can be important for those with arthritis, and how that relates to their disease activity, said Dr. Edwards, professor of rheumatology at the University of Southampton.
“You can’t guarantee that someone always had their phone in their hand or in their bag,” Dr. Edwards said, “so what you want to do is get a sample from time during the day that gives you an overall representation, even if that is a very short period, just once during the day, then we’ll see if that makes a difference over time and whether that correlates with someone’s disease activity.”
The REMORA study is sponsored by Arthritis Research UK and the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care Greater Manchester. RApp is being developed without commercial funding. Dr. Austin, Dr. Dixon, Mr. Caroupapoulle, and Dr. Edwards stated they had no conflicts of interest.
Confronting the open chest – Samuel J. Meltzer and the first AATS annual meeting
In retrospect, the founding of the American Association for Thoracic Surgery (AATS) in 1917 may seem surprisingly optimistic, given the status of cardiothoracic surgery as a discipline at that time. While important strides had been made in dealing with open chest wounds, to the modern eye, the field in the second decade of the 20th century seems more characterized by what was not yet possible rather than by what was.
One of the most critical issues holding back the development of cardiothoracic surgery in this early period was the problem of acute pneumothorax that occurred whenever the chest was opened.
As Willy Meyer (1858-1932), second president of the AATS, described the problem at the first AATS annual meeting in 1917: “What is it that happens when the thorax is opened, let us say [for example] by a stab wound in an intercostal space in an affray on the street? Immediately air rushes into the pleural cavity and this normal atmospheric pressure, being greater than the normal pressure within ... the lung contracts to a very small organ around its hilum. Air fills the space formerly occupied by the lung. This condition, with its immediate clinical pathologic consequences, is called ‘acute pneumothorax.’ It has been the stumbling block for almost a century to the proper development of the surgery of the chest. … Carbonic acid is retained in the blood … The accumulation of CO2, with its deleterious effect increases, finally ending in the patient’s death.”
But in the first decade of the 20th century, two major and competing techniques evolved to solve the problem, each one represented by the first and second presidents of the fledgling AATS. For a short period of time a controversy seemed to separate the two men, but their views were expressly reconciled at the first annual meeting of the AATS.
The first of these, intratracheal insufflation, was introduced by Samuel J. Meltzer, MD, and John Auer, MD, in 1909: a continuous stream of air and anesthetic vapor, delivered under slight pressure through a catheter in the trachea, kept the lungs inflated while the chest was open. The Meltzer/Auer technique was significantly improved upon by the addition of a carbon dioxide absorption method and the creation of a closed-circuit apparatus by Dennis Jackson, MD (1878-1980), in 1915. “This fulfilled the criteria of oxygen supply, carbon dioxide absorption, and ether regulation with a hand bag-breather. With this apparatus, respiration could be maintained with the open thorax,” said pioneering thoracic surgeon Rudolf Nissen, MD, and Roger H.L. Wilson, MD, in their Pages in the History of Chest Surgery (Springfield, IL: Charles C. Thomas, 1960).
However, insufflation was not universally applauded when it was first introduced. It was considered a poor second by many who instead embraced the alternative method of preventing lung collapse – the differential pressure–maintaining Sauerbruch chamber. The Sauerbruch chamber was developed by Ernst Ferdinand Sauerbruch (1875-1951) and first reported in 1904 in his paper, “The pathology of open pneumothorax and the bases of my methods for its elimination.”
As described by Nissen and Wilson, “He transformed the operating room into a kind of extended or enlarged pleural cavity, lowering the atmospheric pressure by vacuum. The head of the patient was outside the operating room, tightly sealed at the neck. This ‘pneumatic chamber’ solved in an ideal way the problem of negative pleural pressure.”
Sauerbruch was aware of the Meltzer/Auer technique, but specifically rejected it, and his powerful influence in Germany helped to prevent it from being adopted there.
Among the earliest and most vocal advocates of using the Sauerbruch negative pressure chamber approach in the United States was Dr. Meyer. Both he and Dr. Meltzer addressed the issue and the controversy at the first meeting of the AATS in 1917.
“You probably remember the little battle between differential pressure and intratracheal insufflation. It occurred only 8 years ago; but it seems now like history. When I presented my paper on intratracheal insufflation at the New York Academy of Medicine, my views were opposed, in the interest of conservatism in surgery, by three able surgeons,” Dr. Meltzer said in his address.
“Now, these same surgeons are among the principal founders of the American Association for Thoracic Surgery, and my being the first presiding officer of the Association is due exclusively to their generous spirit and not to any merits of mine. This is my little story of how the introduction of a stomach tube carried a mere medical man into the presidential chair of a national surgical association.”
Dr. Meyer, one of the three surgeons mentioned by Dr. Meltzer, responded shortly thereafter in his own speech at the meeting: “Dr. Meltzer mentioned in his inaugural address today that, in the discussion following his presentation of the matter before the New York Academy of Medicine his views were opposed, in the interest of conservatism in surgery, by three surgeons.
“Inasmuch as I was one of the three, I would, in explanation, here state that ... at that very time it was reported to me that Dr. Meltzer had stated that in his opinion thoracic operations on human beings could be done in a much simpler way than by working in the negative chamber; that a catheter in the trachea and bellows was all that was needed. He, a physiologist who had always done scientific surgical work on animals, certainly found these paraphernalia sufficient. I personally had meanwhile seen and learned to admire the absolutely reliable working of the mechanism of the chamber, without the possibility of doing the slightest harm to the patient.
“In my remarks on that memorable evening at the New York Academy of Medicine, I therefore tried to impress upon my colleagues the great importance of absolute safety. I stated that no matter what apparatus we might use in thoracic surgery on the usually much run down human being, it must be so constructed that it could not possibly do harm to the patient. I further stated that I would be only too happy to personally use intratracheal insufflation as soon as it was sufficiently perfected to render it safe under all conditions. … I want to lay stress upon the statement that I for my part have never been in opposition, but rather in full accord with his splendid discovery. The fact is that I personally have been among the very first in New York to use intratracheal insufflation in thoracic operations upon the human subject,” said Dr. Meyer.
“But, please bear in mind … that only the use of the differential pressure method – no matter what the apparatus – enables the surgeon to work in the thorax with the same equanimity and tranquility as in the abdomen,” he summarized.
So, by the early years of the AATS, whatever barriers remained, thoracic surgery could be attempted with the same confidence that had long existed for abdominal surgery. That confidence permitted the fledgling association to move forward with an optimism that had not existed before, when opening the chest in the operating room was generally considered deadly.
Sources
Meltzer, S. J., 1917. First President’s Address. http://t.aats.org/annualmeeting/Program-Books/50th-Anniversary-Book/First-Presidents-Address.cgi
Meyer, W., 1917. Surgery Within the Past Fourteen Years. http://t.aats.org/annualmeeting/Program-Books/50th-Anniversary-Book/A-Review-of-the-Evolution-of-Thoracic-Surgery-Within-the-Pas.cgi
Nissen, R., Wilson, R.H.L. Pages in the History of Chest Surgery. Springfield, IL: Charles C. Thomas, 1960.
Corticosteroids effective just hours before preterm delivery
Antenatal corticosteroids may significantly decrease neonatal mortality and morbidity even when they are given just hours before preterm delivery, according to a report published online May 15 in JAMA Pediatrics.
In women at risk of preterm delivery, antenatal corticosteroids given between 1 and 7 days before birth reduce infant mortality by an estimated 31%, respiratory distress syndrome by 34%, intraventricular hemorrhage by 46%, and necrotizing enterocolitis by 54%. But until now their effect when given less than 24 hours before preterm birth has been described as “suboptimal,” “partial,” or “incomplete,” said Mikael Norman, MD, PhD, of Karolinska Institutet in Stockholm, Sweden, and his associates.
“Our findings challenge current thinking about the optimal timing” of antenatal corticosteroids and encourage “a more proactive management of women at risk for imminent preterm birth, which may help reduce infant mortality and severe neonatal brain injury,” the investigators said.
To further examine this issue, they assessed the effects of antenatal corticosteroids when given at different intervals before preterm birth, using data from a prospective cohort study of perinatal intensive care. That study involved 10,329 very preterm births throughout 11 countries in Europe during a 1-year period.
For their analysis, Dr. Norman and his associates focused on 4,594 singleton births at 24-31 weeks’ gestation. They classified the timing of antenatal corticosteroids into four categories: no injections (662 infants, or 14.4% of the study population), first injection at less than 24 hours before birth (1,111 infants, or 24.2%), first injection at the recommended 1-7 days before birth (1,871 infants, or 40.7%), and first injection more than 7 days before birth (950 infants, or 20.7%).
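As a quick arithmetic check (not part of the published analysis), the four category counts reported above can be totaled and converted back to the stated percentages:

```python
# Illustrative recalculation of the timing-category breakdown reported above.
categories = {
    "no antenatal corticosteroids": 662,
    "first injection <24 hours before birth": 1111,
    "first injection 1-7 days before birth": 1871,
    "first injection >7 days before birth": 950,
}
total = sum(categories.values())        # 4,594 singleton births, matching the cohort
for label, n in categories.items():
    print(f"{label}: {n} ({100 * n / total:.1f}%)")
```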
Receiving antenatal corticosteroids at any interval before preterm birth was associated with lower infant mortality, a lower rate of severe neonatal morbidity, and a lower rate of severe neonatal brain injury, compared with not receiving any antenatal corticosteroids. The largest reduction in risk (more than 50%) occurred at the recommended interval of 1-7 days before birth. However, receiving the treatment less than 24 hours before birth also significantly reduced these risks (JAMA Pediatr. 2017 May 15. doi: 10.1001/jamapediatrics.2017.0602).
Using their findings on treatment intervals and effectiveness, the investigators created a simulation model for the 661 infants in this cohort who did not receive any antenatal corticosteroids. Their model predicted that if these infants had received treatment at least 3 hours before delivery, overall mortality would have decreased by 26%. If they had received treatment 3-5 hours before delivery, mortality would have decreased by 37%, and if they had received treatment at 6-12 hours before delivery it would have decreased by 51%.
At the other end of the timing spectrum, infant mortality increased 40% when corticosteroids were given more than 7 days before delivery, compared with when they were given within the recommended 1-7 days. This represents a substantial number of infants – approximately 20% of the study cohort.
The study was supported by the European Union, the French Institute of Public Health, the Polish Ministry of Science and Higher Education, the Karolinska Institutet, and other nonindustry sources. Dr. Norman and his associates reported having no relevant financial disclosures.
Key clinical point: Antenatal corticosteroids were associated with significantly lower neonatal mortality and morbidity even when given less than 24 hours before preterm delivery.
Major finding: A simulation model predicted that if untreated infants had received antenatal corticosteroids 6-12 hours before delivery, overall mortality would have decreased by 51%.
Data source: A secondary analysis of data from a population-based cohort study of perinatal intensive care across Europe, involving 4,594 preterm singleton births.
Disclosures: The study was supported by the European Union, the French Institute of Public Health, the Polish Ministry of Science and Higher Education, the Karolinska Institutet, and other nonindustry sources. Dr. Norman and his associates reported having no relevant financial disclosures.
One-third of drug postmarket studies go unpublished
More than one-third of completed postmarket studies conducted after drug approval go unpublished, according to new research.
Investigators examined a Food and Drug Administration internal database to identify all postmarket drug studies between 2009 and 2013 identified by the agency as completed, with a follow-up search to determine whether and where the results of the studies were published.
“As of July 2016, 183 of the 288 postmarket studies (63.5%) meeting inclusion criteria were published in either the scientific literature or on the ClinicalTrials.gov website,” Marisa Cruz, MD, medical officer in the Food and Drug Administration’s Office of Public Health Strategy and Analysis, and her colleagues wrote in a research letter published online May 15 in JAMA Internal Medicine (doi: 10.1001/jamainternmed.2017.1313).
More studies were published in journals (175) than in the agency’s clinical trial registry (87), and the 183 interventional clinical trials had a higher overall publication rate (87.4%) than the other 105 studies combined (21.9%).
Of the 69 interventional clinical trials that were focused on efficacy, 82.6% (57 trials) were categorized as having results that were favorable to the trial sponsor. However, these 57 trials with positive results were no more likely to be published than the 12 trials with negative results, Dr. Cruz and colleagues noted.
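As a back-of-the-envelope consistency check on the figures reported above (the per-group published counts below are rounded inferences, not numbers stated in the research letter), the reported rates fit together as follows:

```python
# Rough consistency check of the publication figures reported above.
total_studies = 288
published_total = 183
print(f"Overall publication rate: {100 * published_total / total_studies:.1f}%")  # 63.5%

interventional_trials, other_studies = 183, 105            # 183 + 105 = 288 studies
published_trials = round(0.874 * interventional_trials)    # ~160 published trials (inferred)
published_other = round(0.219 * other_studies)             # ~23 published other studies (inferred)
print(f"Implied published studies: {published_trials + published_other}")  # 183, matches the total

favorable, unfavorable = 57, 12                             # efficacy trials by result
print(f"Favorable share: {100 * favorable / (favorable + unfavorable):.1f}%")  # 82.6%
```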
The findings are consistent with previous research, the researchers noted, with the analysis demonstrating “that postmarket study results are not consistently disseminated, either through journal publication or trial registries.”
“Despite calls for data sharing and publication of all clinical trial results, publication rates for completed postmarket studies required by the FDA remain relatively low,” the researchers wrote.
While the FDA could publish the data itself, “this approach would likely require new regulations,” the authors noted. “Alternatively, increased sponsor commitment to submitting to journals and to publish all clinical trial results on trial registries, regardless of whether publication is legally required, may serve to promote dissemination of scientific knowledge.”
The researchers reported no conflicts of interest.
Value-based care didn’t trigger spikes in patient dismissals
Fears that the transition to value-based care could lead doctors to dismiss from their practices patients who could adversely affect their reimbursement did not come to fruition in a recent federal value-based initiative.
“Patient dismissal could be an unintended consequence of this shift as clinicians face (or perceive they face) pressure to limit their panel to patients for whom they can readily demonstrate value in order to maximize revenue,” Ann S. O’Malley, MD, senior fellow at Mathematica Policy Research, and her colleagues wrote in a research letter published online May 15 in JAMA Internal Medicine (doi: 10.1001/jamainternmed.2017.1309).
“A similar portion and distribution of CPC and comparison practices reported ever dismissing patients in the past 2 years” – 89% and 92%, respectively – the researchers reported, referring to practices participating in the Comprehensive Primary Care (CPC) initiative and a set of comparison practices.
CPC and comparison practices “dismissed patients for similar reasons,” Dr. O’Malley and colleagues added, noting the exception that more comparison practices reported dismissing patients for violating bill payment policies than CPC practices did – 43% vs. 35%, respectively.
Other reasons for dismissing patients included patients being extremely disruptive and/or behaving inappropriately toward clinicians or staff, patients violating chronic pain/controlled substances policies, patients repeatedly missing appointments, patients not following recommended lifestyle changes, and patients making frequent emergency department visits and/or frequent self-referrals to specialists.
Practices participating in the CPC initiative were also asked if participation in the value-based payment model would make them more or less likely to dismiss patients.
“According to most CPC practices, the initiative had no effect or made them less likely to dismiss patients,” the researchers found.
The CMS Center for Medicare & Medicaid Innovation funded the study. The study authors reported no conflicts of interest.
Pediatric Dermatology Consult - May 2017
BY LAWRENCE F. EICHENFIELD, MD, and JENNA BOROK
Nevus sebaceous (NS) is considered a subtype of epidermal nevus, a benign hamartoma that includes an excess or deficiency of structural elements of the skin, such as the epidermis, sebaceous glands, and apocrine glands.1
Nevus sebaceous, also known as nevus sebaceous of Jadassohn, was first described in 1895 by Josef Jadassohn as a nevoid growth composed predominantly of sebaceous glands.2 Sebaceous glands are found wherever hair grows on the body; they lie adjacent to hair follicles and connect to them by ducts through which sebum flows.3
NS clinically appears as a waxy, yellowish-orange to pink, hairless plaque, ranging from 1 cm to over 10 cm in size and usually located on the scalp, face, neck, or trunk.1 When the lesions are linear, they typically follow Blaschko lines.1 The lesions change over time: in an infant, the lesion is slightly raised and may be barely noticeable, and a scalp lesion remains hairless. The nevus may thicken during later childhood and become thicker and more verrucous during adolescence.
Histologically, NS is a benign hamartoma with epidermal, follicular, sebaceous, and apocrine elements.4 The malformation is primarily within the individual folliculosebaceous units.1 An infantile lesion deviates little from normal skin because the follicular units are so small.1 During childhood, misshapen follicles and a thickened epidermis are easier to see microscopically.1 During adolescence, the histologic pattern resembles that of an epidermal nevus, with acanthosis and fibroplasia of the papillary dermis.
Several lines of evidence now show that nevus sebaceous is caused by postzygotic mosaic mutations in the mitogen-activated protein kinase (MAPK) pathway, specifically activating mutations in RAS genes, including HRAS and KRAS.5-7 Because NS arises from somatic mosaicism, several syndromic findings are associated with NS, depending on the timing of the mutation during development and whether single or multiple tissues are affected.8 Specifically, NS has been associated with Schimmelpenning-Feuerstein-Mims syndrome, which includes NS along with skeletal, ocular, and neurologic abnormalities.1,7 A study found that cutaneous skeletal hypophosphatemia syndrome, which manifests as NS and vitamin D–resistant rickets, harbors identical RAS mutations in both skin and bone tissues, providing further support that these syndromic findings result from a multilineage somatic RAS mutation.8
Diagnosis
NS is typically diagnosed clinically, based on the presence of a circumscribed, thin, yellow-orange, oval, round, or linear plaque, usually on the scalp or face. During infancy or the first stage, the lesions remain stable. In the second stage, during puberty, the lesions thicken and become verrucous or nodular because of changes in sebaceous gland activity that are driven by hormones. In the third stage of their natural course, benign and malignant epithelial neoplasms can develop, including trichoblastoma, syringocystadenoma papilliferum, sebaceous epithelioma, basal cell carcinoma, and trichilemmoma.9
The associated syndrome, nevus sebaceous syndrome, also known as Schimmelpenning syndrome, has more extensive cutaneous lesions along Blaschko lines and presents with CNS, ocular, or skeletal defects. The CNS abnormalities include mental retardation, seizures, and hemimegalencephaly.1 Therefore, a thorough neurologic and ophthalmologic examination should be performed in patients with suspected nevus sebaceous syndrome.
Aplasia cutis congenita is the congenital absence of skin, with presentations ranging from an erosion or deep ulceration to a scar or a defect covered by an epidermal membrane.1 Some aplasia cutis lesions present as a smooth, hairless surface at birth, making differentiation from nevus sebaceous difficult; the pinkish, orange to yellow hue of NS may help distinguish the two. Juvenile xanthogranuloma is a fairly common histiocytosis and is the most common histiocytic disease of childhood.1 It is a benign proliferation of dermal dendrocytes, presenting in infants as many red to yellow papules or a few nodules on the head and neck. Many lesions remain undetected.1
Syringocystadenoma papilliferum is a benign neoplasm with apocrine differentiation and presents as a papule or plaque on the head and neck. It can be associated with nevus sebaceous.1
Treatment
The definitive treatment is excision, although the necessity and timing of excision for these lesions are still under debate.9 Secondary neoplasms do arise from nevus sebaceous, although the incidence is low before puberty.1 It has been estimated that 16% of cases develop benign tumors and 8% develop malignant tumors.9 The majority of malignant lesions are basal cell carcinomas, and a large recent study found that only 1% of patients had malignancies.10 While most of these tumors develop in adulthood, there have been reports of malignancies in children.10
An additional reason to excise NS is that, over time, they grow more verrucous, become inflamed, bleed with trauma, and may be unappealing cosmetically.1 Some experts recommend earlier excision in childhood, especially with larger lesions or facial lesions where minimizing deformity is important and with the possibility of less noticeable scarring.9 The prophylactic removal of NS remains controversial, and there is a lack of consensus among experts about the timing of excision. It is recommended that each lesion be evaluated on a case-by-case basis.
Dr. Eichenfield is chief of pediatric and adolescent dermatology at Rady Children’s Hospital–San Diego and professor of dermatology and pediatrics at the University of California, San Diego. Ms. Borok is a medical student at the University of California, Los Angeles. Dr. Eichenfield and Ms. Borok said they had no relevant financial disclosures. Email them at [email protected].
References
1. “Dermatology.” 3rd ed. (St Louis, Mo: Elsevier, 2012).
2. Arch Dermatol Res. 1895. doi: 10.1007/BF01842810.
3. J Invest Dermatol. 2004 Jul;123(1):1-12.
4. J Cutan Pathol. 1984;11(5):396-414.
5. J Invest Dermatol. 2013 Mar;133(3):827-30.
6. J Invest Dermatol. 2013 Mar;133(3):597-600.
7. Nat Genet. 2012 Jun 10;44(7):783-7.
8. J Am Acad Dermatol. 2016 Aug;75(2):420-7.
9. Pediatr Dermatol. 2012 Jan-Feb;29(1):15-23.
10. Pediatr Dermatol. 2009 Nov-Dec;26(6):676-81.
A 4-year-old boy presents with a yellow-orange lesion on his scalp. His mother states that it was present at birth and does not seem to bother him. The mother also complains that he is not growing hair over the bump area on his head. The bump has grown proportionally with the patient since birth.
He is otherwise healthy and has no other bumps like this one on his body. He was born at term with an unremarkable perinatal history.
During the physical exam, you find a yellow-orange, smooth, hairless plaque on the scalp. The plaque is 3 cm in size and well circumscribed. The patient’s general physical, skin, and neurological exams are otherwise normal. He has no skeletal defects.
Mycobacteria subset plagues pulmonary patients
Nontuberculous mycobacteria account for an increasing percentage of pulmonary disease, and nonsurgical treatment alone has not shown effectiveness, according to data from a meta-analysis of 24 studies and 1,224 patients. The study results were published online in Chest.
Data on therapeutic successes in cases of nontuberculous mycobacteria (NTM)–related pulmonary disease are limited, in particular for those species not belonging to the Mycobacterium avium complex (non-MAC), wrote Roland Diel, MD, of University Medical Hospital Schleswig-Holstein, Germany, and his colleagues.
In particular, non-MAC species Mycobacterium xenopi (MX), Mycobacterium abscessus, Mycobacterium malmoense, and Mycobacterium kansasii (MK) were addressed in the studies, which included 16 retrospective chart reviews, 5 randomized trials, and 3 prospective, nonrandomized studies (Chest 2017. doi: 10.1016/j.chest.2017.04.166).
Treatment success was measured by rates of sputum culture conversion (SCC).
Overall, the average proportion of SCC for patients with M. abscessus was 41% after subtraction for posttreatment relapses, but reached 70% for subspecies M. massiliense in macrolide-containing treatments. The average proportion of SCC was 80% for patients with M. kansasii, 32% for those with MX, and 54% for those with M. malmoense.
Treatment success ranged from 9% to 73% for M. xenopi patients, but all-cause mortality was 69%. Notably, a 100% success rate was reported in M. kansasii patients treated with a three-drug TB regimen of isoniazid, rifampicin, and ethambutol, or with a combination of ethambutol, rifampicin, and clarithromycin, the researchers noted.
The percentage of SCC in 55 patients with lung resection and either MX or M. abscessus was considered high at 76%.
The study findings were limited by the diverse definitions of treatment success and by the variety of treatments, and “an optimal multidrug treatment cannot be derived from the few studies and has yet to be determined,” the researchers said. In the absence of optimal drug therapy, functional and quality-of-life elements deserve greater consideration when evaluating outcomes in patients with non-MAC NTM pulmonary disease, they added.
Dr. Diel reported receiving lecturing and/or consulting fees from Insmed and Riemser.
FROM CHEST
Key clinical point: An optimal multidrug treatment has not yet been found for patients with nontuberculous mycobacteria (NTM)–related pulmonary disease.
Major finding: The average proportion of sputum culture conversion (SCC) for patients with M. abscessus was 41% after subtraction for posttreatment relapses, but reached 70% for subspecies M. massiliense in macrolide-containing treatments. The average proportion of SCC was 80% for patients with M. kansasii, 32% for those with M. xenopi, and 54% for those with M. malmoense.
Data source: A meta-analysis of 24 studies and 1,224 patients.
Disclosures: Dr. Roland Diel reported receiving lecturing and/or consulting fees from Insmed and Riemser.
Digital transference: New dangers in a new world
We are in a new age of psychiatric practice caught in the wider shift from an industrial to a technology-based society. Although this transformation has been occurring over the past half-century, the last decade has seen a rapid acceleration driven by mobile phones, social networking, and the Internet.
Thomas Friedman, in his book “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” (New York: Farrar, Straus and Giroux, 2016), cites 2007 as the year our world changed, with the launch of the iPhone, the globalization of Facebook and Twitter, the release of the Kindle and Android, the founding of Airbnb, Google’s purchase of YouTube, and IBM’s creation of its AI system, Watson. Psychiatry has been gradually incorporating technology into everyday practice using mobile devices, email, videoconferencing, the Internet, and electronic medical records, as well as being affected by more rapidly evolving technologies, such as texting and social networking platforms.
Transference remains a core tenet in the psychiatric conceptualization of the psychiatrist-patient relationship. There are numerous formal definitions of this phenomenon. This article will use a broad reductionist definition of transference as the “unconscious projection of a past relationship/experience onto a current relationship” and combine the terms transference (from patient to psychiatrist) and countertransference (from psychiatrist to patient; often defined as a psychiatrist’s reaction to a patient’s transference).
How do a psychiatrist and patient dyad’s previous experiences with technology and technology-based relationships affect a current clinical relationship? How does the type of technology being used influence shared meanings and assumptions? Does technology introduce new implicit biases that go unrecognized? Does distant communication increase the risk of missing contextual clues more apparent for in-person interactions? These critical questions have largely gone unaddressed, but what is known raises concerns. The question is not whether to use these technologies, which have demonstrated utility to transform care. Rather, concerns around our lack of understanding of the technologies’ strengths, weaknesses, and influences on the doctor-patient relationship need to be explored. Below we will briefly examine each of these questions.
A relatively new paradigm describing patients’ previous technology experiences has been making its way from the field of education into medicine. “Digital immigrants” is a term for those who did not grow up with today’s technology and began using our current technologies as adults. They contrast with “digital natives,” who have grown up incorporating technology into their daily lives. Broad assumptions are that digital natives tend to be more comfortable, flexible, and adaptable with technologies, compared with digital immigrants, who are more hesitant and slower to adopt and integrate technology. However, a specific patient’s experience with technology is multifactorial and more nuanced than the digital native vs. digital immigrant classification. Some argue that technology use from an early age is altering, on a biological level, the way the human brain processes both information and emotion. Depending on their experiences and backgrounds (immigrant vs. native), a psychiatrist and patient using videoconferencing to enable remote access could have initial as well as ongoing positive or negative transferences to treatment.
The specific technology being used also sets parameters for communication that influence interpretation. Text and email communication are very different from live interactive video conferencing and involve use of language that may not be shared between the psychiatrist and patient, such as text abbreviations and emojis. Lack of visual and auditory information necessitates more interpretation by the receiver to fill in tone, meaning, and intent drawn from their past conscious and unconscious experiences and assumptions. The opportunity for misinterpretation is further compounded by implicit bias built into the technology. Although biases embedded in medical technologies have yet to be examined, there are some alarming examples from society in general.
A recent report by Georgetown University’s Center on Privacy & Technology drew attention to inherent racial bias in facial recognition technology used by law enforcement agencies. This bias was a product both of the underlying software and programming and of the real-world implementation of these systems. As the field of medicine increasingly turns to artificial intelligence for help with pattern recognition, data management, and population health, what implicit biases are being built into these systems? Could a web-assisted, evidence-based therapy that uses an algorithmic approach have built-in biases against certain patient populations, affecting the therapeutic interaction?
A final issue worth considering is the power of technology to distort shared context. When a psychiatrist meets with a patient in person, they share the same environmental context at the same point in time during treatment. When communicating over distance, they occupy different environments and, with asynchronous communication (for example, email), different points in time. These disparate contexts may lend themselves to additional assumptions that get projected onto the clinical relationship. For example, a telepsychiatrist working with Northern Plains Indian Communities via videoconferencing has a new patient in a new clinic setting that looks similar to other clinics the psychiatrist has visited in the past. If not mindful of context, the telepsychiatrist risks making unwarranted assumptions about the patient’s environmental context based on the physician’s previous work. In a different example, a psychiatrist sees a patient for an in-person visit and then reads an email, sent by the patient 12 hours before the visit, expressing upset at the psychiatrist’s structuring of treatment. This issue was not addressed in the session that just ended. What is the impact of this email on both the psychiatrist and the patient, and on their current feelings about the therapeutic relationship? Is this now current or past context for the patient and psychiatrist?
For many, questions about bias, context, and previous experiences with technology can be seen as “grist for the mill” that helps psychiatrists understand the transferences and other processes within doctor-patient relationships. This knowledge can then be leveraged to appropriately attend to the therapeutic relationship. The danger in the age of hybrid relationships arises when there are embedded issues that psychiatry as a field and individual psychiatrists are unaware of and not attending to in treatment. As medicine’s acknowledged experts on the doctor-patient relationship, psychiatrists need to take leadership roles in better understanding the impact of technologies on clinical processes – both those on the surface and those that lurk beneath the digital waves.
Dr. Shore chairs the American Psychiatric Association’s Committee on Telepsychiatry and is director of telemedicine at the Helen & Arthur E. Johnson Depression Center at the University of Colorado at Denver, Aurora. He also serves as associate professor of psychiatry at the university.
We are in a new age of psychiatric practice caught in the wider shift from an industrial to a technology-based society. Although this transformation has been occurring over the past half-century, the last decade has seen a rapid acceleration driven by mobile phones, social networking, and the Internet.
Thomas Friedman, in his book “Thank you for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” (New York: Farrar, Straus & Girous, 2016), cites 2007 as the year our world changed with the launching of the iPhone, the globalization of Facebook and Twitter, the release of the Kindle and Android, the founding of Airbnb, Google’s purchase of YouTube, and IBM’s creation of its AI system, Watson. Psychiatry has been gradually incorporating technology into everyday practice using mobile devices, email, videoconferencing, Internet, and electronic medical records, as well as being impacted by more rapidly evolving technologies, such as texting and social networking platforms.
Transference remains a core tenant in the psychiatric conceptualization of the psychiatrist-patient relationship. There are numerous formal definitions of this phenomenon. This article will use a broad reductionist definition of transference as the “unconscious projection of a past relationship/experience onto a current relationship” and combine the terms transference (from patient to psychiatrist) and countertransference (from psychiatrist to patient; often defined as a psychiatrist’s reaction to a patient’s transference).
How do a psychiatrist and patient dyad’s previous experiences with technology and technology-based relationships affect a current clinical relationship? How does the type of technology being used influence shared meanings and assumptions? Does technology introduce new implicit biases that go unrecognized? Does distant communication increase the risk of missing contextual clues more apparent for in-person interactions? These critical questions have largely gone unaddressed, but what is known raises concerns. The question is not whether to use these technologies, which have demonstrated utility to transform care. Rather, concerns around our lack of understanding of the technologies’ strengths, weaknesses, and influences on the doctor-patient relationship need to be explored. Below we will briefly examine each of these questions.
A relatively new paradigm has been inserting itself from the field of education into medicine that describes a patient’s previous technology experiences. “Digital immigrants” is a term for those who did not grow up with today’s technology and began using our current technologies as adults. They contrast with “digital natives,” who have grown up incorporating technology into their daily lives. Broad assumptions are that digital natives tend to be more comfortable, flexible, and adaptable with technologies, compared with digital immigrants, who are more hesitant and slower to adopt and integrate technology. However, the experience of a specific patient with technology is multifactorial and more nuanced than the digital native vs. digital immigrant classification. There are those who argue that technology use from an early age is altering on a biological level the way the human brain processes both information and emotion. Depending on their experiences and backgrounds (immigrant vs. native), a psychiatrist and patient using videoconferencing to enable remote access could have initial as well as ongoing positive or negative transferences to treatment.
The specific technology being used also sets parameters for communication that influence interpretation. Text and email communication are very different from live interactive video conferencing and involve use of language that may not be shared between the psychiatrist and patient, such as text abbreviations and emojis. Lack of visual and auditory information necessitates more interpretation by the receiver to fill in tone, meaning, and intent drawn from their past conscious and unconscious experiences and assumptions. The opportunity for misinterpretation is further compounded by implicit bias built into the technology. Although biases embedded in medical technologies have yet to be examined, there are some alarming examples from society in general.
A recent report by the Georgetown University’s Center on Privacy & Technology drew attention to inherent racial bias in facial recognition technology used by law enforcement agencies. This bias was a product of both the underlying software and programming, as well as the real world implementation of these systems. As the field of medicine increasingly turns to artificial intelligence for help with pattern recognition, data management, and population health, what implicit biases are being built into these systems? Could a web-assisted, evidence-based therapy that uses an algorithmic approach have built-in biases for certain populations of patients, affecting the therapeutic interaction?
A final issue worth considering is the power of technology to distort shared context. When a psychiatrist meets with a patient in person, they are sharing the same environmental context at the same point of time during treatment. When communicating over distance, they are occupying different environments and, with asynchronous communication (for example, email), different points in time. These disparate contexts may lend themselves to additional assumptions that get projected onto the clinical relationship. For example, a telepsychiatrist working with Northern Plains Indian Communities via videoconferencing has a new patient in a new clinic setting visually similar to other clinics they have visited in the past. If not mindful of context, the telepsychiatrist may risk making unwarranted assumptions about the patient’s environmental context based on the physician’s previous work. In a different example, a psychiatrist sees a patient for an in-person visit and then reads an email sent 12 hours prior to the visit by the patient expressing upset at psychiatrist’s structuring of treatment. This issue was not addressed in the session that just ended. What is the impact of this email to both the psychiatrist and patient, and their current feelings about the therapeutic relationship? Is this now current or past context for the patient and psychiatrist?
For many, questions about bias, context, and previous experiences with technology can be seen as “grist for the mill” for psychiatrists to understand the transferences and other processes within doctor-patient relationships. This knowledge can then be leveraged to appropriately attend to the therapeutic relationship. The danger in the age of hybrid relationships is when there are embedded issues that psychiatry as a field and individual psychiatrists are unaware of and not attending to in treatment. As the acknowledged experts in medicine in the doctor-patient relationship say, psychiatrists need to take leadership roles in better understanding the impact of technologies on clinical processes – both for those processes on the surface, as well as those that lurk beneath the digital waves.
Dr. Shore chairs the American Psychiatric Association’s Committee on Telepsychiatry and is director of telemedicine at the Helen & Arthur E. Johnson Depression Center at the University of Colorado at Denver, Aurora. He also serves as associate professor of psychiatry at the university.
We are in a new age of psychiatric practice caught in the wider shift from an industrial to a technology-based society. Although this transformation has been occurring over the past half-century, the last decade has seen a rapid acceleration driven by mobile phones, social networking, and the Internet.
Thomas Friedman, in his book “Thank you for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations” (New York: Farrar, Straus & Girous, 2016), cites 2007 as the year our world changed with the launching of the iPhone, the globalization of Facebook and Twitter, the release of the Kindle and Android, the founding of Airbnb, Google’s purchase of YouTube, and IBM’s creation of its AI system, Watson. Psychiatry has been gradually incorporating technology into everyday practice using mobile devices, email, videoconferencing, Internet, and electronic medical records, as well as being impacted by more rapidly evolving technologies, such as texting and social networking platforms.
Transference remains a core tenant in the psychiatric conceptualization of the psychiatrist-patient relationship. There are numerous formal definitions of this phenomenon. This article will use a broad reductionist definition of transference as the “unconscious projection of a past relationship/experience onto a current relationship” and combine the terms transference (from patient to psychiatrist) and countertransference (from psychiatrist to patient; often defined as a psychiatrist’s reaction to a patient’s transference).
How do a psychiatrist and patient dyad’s previous experiences with technology and technology-based relationships affect a current clinical relationship? How does the type of technology being used influence shared meanings and assumptions? Does technology introduce new implicit biases that go unrecognized? Does distant communication increase the risk of missing contextual clues more apparent for in-person interactions? These critical questions have largely gone unaddressed, but what is known raises concerns. The question is not whether to use these technologies, which have demonstrated utility to transform care. Rather, concerns around our lack of understanding of the technologies’ strengths, weaknesses, and influences on the doctor-patient relationship need to be explored. Below we will briefly examine each of these questions.
A relatively new paradigm has been inserting itself from the field of education into medicine that describes a patient’s previous technology experiences. “Digital immigrants” is a term for those who did not grow up with today’s technology and began using our current technologies as adults. They contrast with “digital natives,” who have grown up incorporating technology into their daily lives. Broad assumptions are that digital natives tend to be more comfortable, flexible, and adaptable with technologies, compared with digital immigrants, who are more hesitant and slower to adopt and integrate technology. However, the experience of a specific patient with technology is multifactorial and more nuanced than the digital native vs. digital immigrant classification. There are those who argue that technology use from an early age is altering on a biological level the way the human brain processes both information and emotion. Depending on their experiences and backgrounds (immigrant vs. native), a psychiatrist and patient using videoconferencing to enable remote access could have initial as well as ongoing positive or negative transferences to treatment.
The specific technology being used also sets parameters for communication that influence interpretation. Text and email communication are very different from live interactive video conferencing and involve use of language that may not be shared between the psychiatrist and patient, such as text abbreviations and emojis. Lack of visual and auditory information necessitates more interpretation by the receiver to fill in tone, meaning, and intent drawn from their past conscious and unconscious experiences and assumptions. The opportunity for misinterpretation is further compounded by implicit bias built into the technology. Although biases embedded in medical technologies have yet to be examined, there are some alarming examples from society in general.
A recent report by the Georgetown University’s Center on Privacy & Technology drew attention to inherent racial bias in facial recognition technology used by law enforcement agencies. This bias was a product of both the underlying software and programming, as well as the real world implementation of these systems. As the field of medicine increasingly turns to artificial intelligence for help with pattern recognition, data management, and population health, what implicit biases are being built into these systems? Could a web-assisted, evidence-based therapy that uses an algorithmic approach have built-in biases for certain populations of patients, affecting the therapeutic interaction?
A final issue worth considering is the power of technology to distort shared context. When a psychiatrist meets with a patient in person, they share the same environment at the same point in time during treatment. When communicating over distance, they occupy different environments and, with asynchronous communication (for example, email), different points in time. These disparate contexts may lend themselves to additional assumptions that get projected onto the clinical relationship. For example, a telepsychiatrist working with Northern Plains Indian Communities via videoconferencing sees a new patient in a new clinic setting that looks much like other clinics the psychiatrist has visited in the past. If not mindful of context, the telepsychiatrist risks making unwarranted assumptions about the patient’s environment based on that previous work. In a different example, a psychiatrist sees a patient for an in-person visit and then reads an email, sent by the patient 12 hours before the visit, expressing upset at the psychiatrist’s structuring of treatment, an issue that was not addressed in the session that just ended. What is the impact of this email on both the psychiatrist and the patient, and on their current feelings about the therapeutic relationship? Is this now current or past context for the patient and psychiatrist?
For many, questions about bias, context, and previous experiences with technology can be seen as “grist for the mill” that helps psychiatrists understand the transferences and other processes within doctor-patient relationships. That knowledge can then be leveraged to appropriately attend to the therapeutic relationship. The danger in the age of hybrid relationships lies in embedded issues that psychiatry as a field, and individual psychiatrists, are unaware of and not attending to in treatment. As medicine’s acknowledged experts on the doctor-patient relationship, psychiatrists need to take leadership roles in better understanding the impact of technologies on clinical processes – both those on the surface and those that lurk beneath the digital waves.
Dr. Shore chairs the American Psychiatric Association’s Committee on Telepsychiatry and is director of telemedicine at the Helen & Arthur E. Johnson Depression Center at the University of Colorado at Denver, Aurora. He also serves as associate professor of psychiatry at the university.
The Personal Health Inventory: Current Use, Perceived Barriers, and Benefits
To better meet the needs and values of patients, the VA has been promulgating a paradigm shift away from the disease-focused model toward a whole health, patient-centered focus.1 To achieve this goal, the VA Office of Patient Centered Care and Cultural Transformation has advocated the use of the personal health inventory (PHI). This inventory asks patients to mindfully assess why their health is important to them and to determine where they feel they are and where they want to be with respect to 8 areas of self-care (working the body, physical and emotional surroundings, personal development, food and drink, sleep, human relationships, spirituality/purpose, and awareness of relationship between mind and body).
Personal health inventory written responses are then discussed with a member of the health care team to develop a proactive, patient-driven health plan unique to that veteran’s circumstances and aspirations.2 The PHI is applicable not only to veterans but also to patients in primary care and other practices outside the VA, where it can improve shared decision making and produce more effective clinician-patient partnerships.
After national PHI promotion by the VA, the authors observed that there was not widespread adoption of this practice at their institution, despite its introduction and discussion at several primary care staff meetings. The authors surveyed primary care providers (PCPs) at VA Connecticut Healthcare System (VACHS) to understand perceived barriers and benefits to the use of PHIs in clinical practice.
Methods
The authors surveyed PCPs at VACHS sites about their current use of the PHI as well as their perceptions of barriers to and benefits of future implementation of the PHI in clinical settings. Current use of the PHI was captured in a free-response question. The authors assessed comfort with the PHI using a 5-point Likert scale, asking participants how comfortable they would feel explaining the PHI to a patient and/or a coworker (1 = very uncomfortable, 5 = very comfortable). Barriers and benefits of future PHI implementation were chosen from preselected lists (Figure 1). Participants also were asked how important they felt it was for VA PCPs to use the PHI (1 = very unimportant, 5 = very important).
Finally, participants were asked whether they plan to use the PHI with their patients and how often (1 = less than once a month, 5 = daily). Participants were initially asked at staff meetings to complete the survey in a paper format. Nonrespondents then were asked to complete the survey electronically. This research protocol was reviewed and approved by the institutional review board of the participating institutions.
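As a concrete illustration of how such scaled responses might be tallied, the minimal Python sketch below collapses the top two categories of a 5-point comfort item, mirroring the “very or somewhat comfortable” figures reported in the Results; the item name and values are hypothetical and are not drawn from the survey data.

# Minimal sketch of tallying a 5-point Likert item like those described above.
# The item name and example values are hypothetical, not the survey's data.
from collections import Counter

comfort_explaining_to_patient = [5, 4, 2, 5, 3, 4, 1, 5, 4]  # 1 = very uncomfortable, 5 = very comfortable
counts = Counter(comfort_explaining_to_patient)
n = len(comfort_explaining_to_patient)

# Share answering 4 ("somewhat comfortable") or 5 ("very comfortable"),
# the same collapsing of the top categories used when reporting results.
top_two = sum(counts[k] for k in (4, 5)) / n
print(f"very/somewhat comfortable: {top_two:.0%}")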
Study Population
The survey was delivered to all PCPs in the VACHS, which consisted of 2 main facilities (West Haven and Newington campuses) and 7 community-based outpatient clinics. The VACHS provides care to > 55,000 enrolled veterans in Connecticut. Survey participants included physicians, physician assistants, and nurse practitioners. Trainees were excluded.
Statistical Analyses
Summary statistics were calculated to assess current use of the PHI, barriers to and benefits of future implementation, and other scaled responses. Chi-square tests were used to compare the responses of participants who were completing the survey online with those completing it on paper for major study outcomes. Mann-Whitney tests were conducted to assess whether responses to certain questions (eg, future plans to use the PHI) were associated with responses to other related questions (eg, importance of VACHS providers pursuing the PHI). Significance was determined as P ≤ .05.
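For readers who want to see what these two tests look like in practice, here is a hedged sketch using scipy; the counts and ratings below are invented for illustration and are not the study data.

import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Chi-square: compare a yes/no outcome (e.g., plans to use the PHI)
# between online and paper respondents. Rows = survey mode, columns = yes/no.
table = np.array([[7, 6],      # online respondents: yes, no
                  [12, 13]])   # paper respondents:  yes, no
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square p = {p_chi2:.3f}")

# Mann-Whitney: test whether a 1-5 rating (e.g., importance of pursuing the PHI)
# differs between respondents who do and do not plan to use it.
importance_planners = [5, 4, 5, 5, 4, 3, 5]
importance_nonplanners = [3, 4, 2, 3, 4, 3, 2, 4]
stat, p_mw = mannwhitneyu(importance_planners, importance_nonplanners, alternative="two-sided")
print(f"Mann-Whitney p = {p_mw:.3f}")  # judged against the alpha of .05 noted above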
Results
Thirty-eight (53%) of 72 PCPs completed the survey. Thirteen providers completed the survey in the online format and 25 on paper. There was no significant difference between participants who completed the survey online vs paper for each of the major outcomes assessed. Most participants were aged between 40 and 60 years (64%), female (70%), and white (76%), similar to the entire PCP population at VACHS. The majority of participants worked in a hospital-based outpatient primary care setting (58%) (Table).
Current Use of PHI
Of respondents, 84% stated that they had heard of the PHI. Of those, 68% felt very or somewhat comfortable explaining the PHI to a patient, with slightly fewer, 64%, very or somewhat comfortable explaining the PHI to a coworker. Forty-eight percent stated that they had implemented the PHI in their clinical practices. Examples of current use included “can refer to RN to complete a true PHI,” “giving blank PHI to patients to fill out and bring back/mail,” and “occasional patient who I am trying to achieve some sort of lifestyle modification or change in behavior.”
PHI Barriers and Benefits
Almost all participants (95%) stated that lack of time was a barrier to using the PHI in their clinical settings (Figure 2). The next most common barriers were cumbersome paper forms (37%) and lack of support from upper management (24%). Very few participants listed discomfort as a reason for not discussing the PHI with patients (5%).
Respondents were divided evenly when identifying the benefits of the PHI. The top 3 selections were greater focus on what patients want (55%), greater patient engagement (55%), and improved patient/provider communication (53%) (Figure 3).
PHI Importance and Future Use
The majority of participants (71%) stated that it was very or somewhat important for VA PCPs to pursue the PHI. However, only 45% planned to use the PHI with their patients. Respondents who said they had implemented the PHI in the past were not more likely than others to state that pursuing the PHI was very important (P = .81). However, respondents who stated that it was very important to pursue the PHI were significantly more likely to plan to implement the PHI (P = .04). Of those planning on its use, the frequency of expected use varied from 31% planning to use the PHI daily with patients to 25% expecting to use it less than once a month.
Discussion
The traditional model of care has been fraught with problems. For example, patients are frequently nonadherent to medical therapies and lifestyle recommendations.3-6 Clearly, changes need to be made. To improve health care outcomes by delivering more patient-centered care, the VA initiated the PHI.7
Although nearly three-fourths of the respondents believed that the PHI was an important tool that the VA should pursue, more than half of all respondents did not intend to use it. Of those planning on using it, a large proportion planned on using it infrequently.
The authors found that, despite PCPs’ knowledge of the PHI and its acceptance as a tool to focus more on what patients want to accomplish, to enhance patient engagement, and to improve communication between patients and providers, time constraints were a near-universal barrier to implementation, followed by cumbersome paper forms and a perceived lack of support from local upper management.
Measures that decrease PCP time investment and reliance on paper forms, such as having the patient complete the PHI outside of an office visit, whether at home, with the assistance of a team member with less training than a PCP, or electronically, could help address the most commonly identified barrier. Further, if the PHI is to be adopted more broadly, local upper management should be enlisted to advocate its use, to make clear that it is essential to enhancing care, and to introduce an organizational system for its effective implementation.
Interestingly, only about one-third of respondents believed that use of the PHI would lead to better health outcomes for patients. Future studies should address whether use of the PHI improves surrogate measures, such as cholesterol levels, blood pressure, hemoglobin A1c, or medication adherence, as well as hard outcomes, such as cardiovascular events, diabetic complications, and mortality.
Limitations
The questionnaire was used at only 1 health care system within the VA. Whether the results are generalizable to PCPs with different baseline demographic characteristics, to other VA facilities, or to non-VA facilities is not known. Because the survey was administered only to PCPs, the authors also do not know the impact of implementing the PHI in specialty settings.
Conclusion
Although the concept of the PHI is favored by the majority of PCPs within VACHS, significant barriers, the most common being time constraints, need to be overcome before it is widely adopted. Implementation of novel collaborative systems of PHI administration may be needed.
1. U.S. Department of Veterans Affairs. VA patient centered care. http://www.va.gov/patientcenteredcare/about.asp. Updated March 3, 2016. Accessed March 30, 2017.
2. U.S. Department of Veterans Affairs. MyStory: personal health inventory. http://www.va.gov/patientcenteredcare/docs/va-opcc-personal-health-inventory-final-508.pdf. Published October 7, 2013. Accessed March 30, 2017.
3. Martin LR, Williams SL, Haskard KB, Dimatteo MR. The challenge of patient adherence. Ther Clin Risk Manag. 2005;1(3):189-199.
4. Nieuwlaat R, Wilczynski N, Navarro T, et al. Interventions for enhancing medication adherence. Cochrane Database Syst Rev. 2014;(11):CD000011.
5. Iuga AO, McGuire MJ. Adherence and health care costs. Risk Manag Healthc Policy. 2014;7:35-44.
6. Viswanathan M, Golin CE, Jones CD, et al. Interventions to improve adherence to self-administered medications for chronic diseases in the United States: a systematic review. Ann Intern Med. 2012;157(11):785-795.
7. Simmons LA, Drake CD, Gaudet TW, Snyderman R. Personalized health planning in primary care settings. Fed Pract. 2016;33(1):27-34.
CMV matching improves survival in HSCT recipients
Matching the cytomegalovirus (CMV) status of the donor and recipient of a hematopoietic stem cell transplant (HSCT) can significantly improve the recipient’s survival, according to a study published in Bone Marrow Transplantation.
In fact, researchers said they found evidence to suggest that CMV matching may abrogate the effect of a human leukocyte antigen (HLA) mismatch.
“This breakthrough will help us discover new and more effective ways to make sure patients in need of a transplant get the best possible match to cure blood cancer and blood disorders,” said study author Steven Marsh, PhD, of Anthony Nolan Research Institute, Royal Free Hospital in London, UK.
Dr Marsh and his colleagues studied 1271 patients who received T-cell-depleted grafts to treat a hematologic disorder, including acute or chronic leukemia, lymphoma, myeloma, myelodysplasia, and “other” disorders.
The 5-year overall survival in these patients was 40.6%.
The researchers found HSCT recipients with a 10/10 HLA-matched donor had significantly better overall survival (OS) and lower non-relapse mortality (NRM) than patients with a mismatched donor.
The 5-year OS was 43.1% for a 10/10 match, 35.6% for a 9/10 match, and 28.4% for a match less than 9/10 (P=0.001). NRM at 1 year was 20.3%, 26.0%, and 33.4%, respectively (P=0.007).
Similarly, HSCT recipients with a CMV-matched donor had significantly better OS and significantly lower NRM than recipients with a CMV-mismatched donor.
The 5-year OS was 44.1% for recipients with a CMV-matched donor and 32.2% for patients with a mismatched donor (P<0.001). NRM at 1 year was 19.1% and 30.4%, respectively (P<0.001).
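The article does not say how overall survival was estimated; the Kaplan-Meier product-limit method is the standard approach for figures such as the 5-year OS quoted above, and the minimal hand-rolled Python sketch below shows the idea. The follow-up times and event flags are invented for illustration and are not the study's data.

# Kaplan-Meier product-limit sketch for overall survival. Follow-up times (years)
# and event flags are invented for illustration, not taken from the study.
def kaplan_meier(times, events):
    """Return (time, estimated survival) at each observed death time (assumes untied times)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i]:                      # death observed at this time
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                       # dead or censored: subject leaves the risk set
    return curve

times = [0.8, 1.5, 2.0, 3.2, 4.0, 4.5, 5.5, 6.0]   # years of follow-up
events = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = died, 0 = censored alive
for t, s in kaplan_meier(times, events):
    print(f"t = {t:.1f} y, S(t) = {s:.2f}")

In practice, survival analyses of this kind are run with an established package rather than hand-rolled code, and a quantity such as NRM is usually estimated with methods that account for relapse as a competing event.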
Most of the associations between CMV/HLA matching and OS/NRM remained significant in multivariate analyses.
For recipients with more than 1 HLA mismatch, the relative risk (RR) of death was 1.43 (P=0.016), and the RR for NRM was 1.59 (P=0.028), when compared to patients who had received a 10/10 HLA-matched graft.
For recipients with a single mismatch, the RR for death was 1.21 (P=0.042), and the RR for NRM was 1.24 (P=0.14).
For recipients with a CMV mismatched donor, the RR for death was 1.40 (P<0.001), and the RR for NRM was 1.63 (P<0.001).
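The RRs above come from the authors’ multivariate analysis, which the news piece does not detail. Purely to illustrate how a relative risk is read, the sketch below computes a crude (unadjusted) RR of death from a hypothetical 2x2 table; these are not the study’s methods or numbers.

# Crude (unadjusted) relative risk of death from a hypothetical 2x2 table,
# shown only to illustrate interpretation; the study's RRs are model-adjusted.
deaths_mismatched, n_mismatched = 140, 300   # hypothetical CMV-mismatched pairs
deaths_matched, n_matched = 100, 300         # hypothetical CMV-matched pairs

risk_mismatched = deaths_mismatched / n_mismatched
risk_matched = deaths_matched / n_matched
rr = risk_mismatched / risk_matched
print(f"crude RR of death = {rr:.2f}")       # RR > 1 means higher risk with a CMV mismatch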
The researchers also assessed CMV and HLA status together. Compared to fully HLA-matched and CMV-matched recipients, the RRs for death were:
- 1.36 (P=0.003) for HLA matched/CMV mismatched
- 1.22 (P=0.062) for HLA mismatched/CMV matched
- 1.81 (P=0.001) for HLA and CMV mismatched.
The researchers said these results suggest it is possible to improve survival rates for patients with no HLA-matched donor by matching the CMV status of the donor and recipient.
As a result of the findings, experts at Anthony Nolan are exploring how to type donors for CMV when they join the stem cell donor register, so that CMV status can be taken into account when transplant centers select potential donors for a patient.
“[B]y establishing that CMV matching has a significant impact on patient outcomes, we are making it easier for transplant centers to make informed choices about the donors they select for their patients,” Dr Marsh said.