In 2016, results from the LEADER trial of liraglutide in patients with type 2 diabetes helped jump-start awareness of the potential role of this new class of drugs, the glucagonlike peptide–1 receptor agonists, for reducing cardiovascular events. The randomized, placebo-controlled trial enrolled more than 9,000 patients at more than 400 sites in over 30 countries, and took nearly 6 years from the start of patient enrollment to publication of the landmark results.
In December 2020, an independent team of researchers published results from a study with a design identical to LEADER's, but with data that came not from a massive, global, years-long trial but from already-existing records culled from three large U.S. insurance claims databases. The result of this emulation using real-world data was virtually identical to what the actual trial showed, replicating both the direction and the statistical significance of the original finding of the randomized controlled trial (RCT).
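For readers curious about the mechanics, the sketch below illustrates one common emulation workflow on claims data: assemble new-user cohorts for the drug and its comparator, balance them on baseline covariates with propensity scores, and estimate the endpoint hazard ratio the way the trial's analysis would. It is a minimal sketch under stated assumptions, not the published study's code; the column names, the weighting shortcut, and the emulate_trial function are all hypothetical.

```python
# Minimal sketch of a trial emulation on insurance-claims data.
# Column names and modeling choices are hypothetical, not the
# published RCT DUPLICATE code (which used propensity-score matching;
# inverse-probability weighting is a simpler stand-in here).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def emulate_trial(claims: pd.DataFrame, confounders: list[str]) -> CoxPHFitter:
    """Estimate the treated-vs-comparator hazard ratio for the endpoint.

    Expects one row per new user with columns:
      treated       - 1 if initiated the study drug, 0 if the comparator
      followup      - days from initiation to event or censoring
      event         - 1 if the endpoint (e.g., MACE) occurred
      <confounders> - baseline covariates measured before initiation
    """
    # 1. Propensity score: probability of starting the study drug given
    #    baseline covariates (stands in for randomization at baseline).
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(claims[confounders], claims["treated"])
    claims = claims.assign(ps=ps_model.predict_proba(claims[confounders])[:, 1])

    # 2. Inverse-probability-of-treatment weights to balance the arms.
    claims["iptw"] = (claims["treated"] / claims["ps"]
                      + (1 - claims["treated"]) / (1 - claims["ps"]))

    # 3. Weighted Cox model for the endpoint, as the trial analysis would use.
    cph = CoxPHFitter()
    cph.fit(claims[["followup", "event", "treated", "iptw"]],
            duration_col="followup", event_col="event",
            weights_col="iptw", robust=True)
    return cph  # cph.hazard_ratios_["treated"] approximates the trial's HR
```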
What if research proved that this sort of RCT emulation could be done reliably and routinely? What might it mean for regulatory decisions on drugs and devices, which historically have been based entirely on efficacy evidence from RCTs?
Making the most of a sea of observational data
Medicine in the United States has become awash in a sea of observational data collected from sources that include electronic health records, insurance claims, and, increasingly, personal health-monitoring devices.
The Food and Drug Administration is now in the process of trying to figure out how it can legitimately harness this tsunami of real-world data to make efficacy decisions, essentially creating a new category of evidence to complement traditional data from randomized trials. It’s an opportunity that agency staff and their outside advisors have been keen to seize, especially given the soaring cost of prospective, randomized trials.
Recognition of this untapped resource helped drive one of the many initiatives included in the 21st Century Cures Act, passed in December 2016. Among the Act’s mandates was that, by the end of 2021, the FDA issue guidance on when drug sponsors could use real-world evidence (RWE) either to help support a new indication for an already approved drug or to help satisfy postapproval study requirements.
The initiative recognizes that this approach is not appropriate for initial drug approvals, which remain exclusively reliant on evidence from RCTs. Instead, it seems best suited to support expanding indications for already approved drugs.
Although FDA staff have made progress in identifying the challenges and broadening their understanding of how to best handle real-world data that come from observing patients in routine practice, agency leaders stress that this complex issue will likely not be fully resolved by their guidance to be published later this year. The FDA released a draft of the guidance in May 2019.
Can RWE be ‘credible and reliable’?
“Whether observational, nonrandomized data can become credible enough to use is what we’re talking about. These are possibilities that need to be explained and better understood,” said Robert Temple, MD, deputy director for clinical science of the FDA Center for Drug Evaluation and Research.
“Since the 1970s, the FDA has recognized historical controls as legitimate, so it’s possible [for RWE] to be credible. The big test is when is it credible and reliable enough [to assess efficacy]?” wondered Dr. Temple during a 2-day workshop on the topic held in mid-February and organized by Duke University’s Margolis Center for Health Policy.
“We’re approaching an inflection point regarding how observational studies are generated and used, but our evidentiary standards will not lower, and it will be a case-by-case decision” by the agency as they review future RWE submissions, said John Concato, MD, the FDA’s associate director for real-world evidence, during the workshop.
“We are working toward guidance development, but also looking down the road to what we need to do to enable this,” said Dr. Concato. “It’s a complicated issue. If it was easy, it would have already been fixed.” He added that the agency will likely release a “portfolio” of guidance for submitting real-world data and RWE. Real-world data are raw information that, when analyzed, become RWE.
In short, the FDA seems headed toward guidance that won’t spell out a pathway that guarantees success using RWE but will at least open the door to consideration of this unprecedented application.
Not like flipping a switch
The guidance will not activate acceptance of RWE all at once. “It’s not like a light switch,” cautioned Adam Kroetsch, MPP, research director for biomedical innovation and regulatory policy at Duke-Margolis in Washington, D.C. “It’s an evolutionary process,” and the upcoming guidance will provide “just a little more clarity” on what sorts of best practices using RWE the FDA will find persuasive. “It’s hard for the FDA to clearly say what it’s looking for until they see some good examples,” Dr. Kroetsch said in an interview.
What will change is that drug sponsors will be able to submit applications that use RWE, and the FDA “will have a more open-minded view,” predicted Sebastian Schneeweiss, MD, ScD, a workshop participant and chief of pharmacoepidemiology and pharmacoeconomics at Brigham and Women’s Hospital in Boston. “For the first time, a law required [the FDA] to take a serious look” at observational data for efficacy assessment.
“The FDA has had a bias against using RWE for evidence of efficacy but has long used it to understand drug safety. Now the FDA is trying to wrap its arms around how to best use RWE” for efficacy decisions, said Joseph S. Ross, MD, another workshop participant and professor of medicine and public health at Yale University, New Haven, Conn.
The agency’s cautious approach is reassuring, Dr. Ross noted in an interview. “There was worry that the 21st Century Cures Act would open the door to allowing real-world data to be used in ways that weren’t very reliable. Very quickly, the FDA started trying to figure out the best ways to use these data in reasonable ways.”
Duplicating RCTs with RWE
To help better understand the potential use of RWE, the FDA sponsored several demonstration projects. Researchers presented results from three of these projects during the workshop in February. All three examined whether RWE, plugged into the design of an actual RCT, can produce roughly similar results when similar patients are used.
A generally consistent finding from the three demonstration projects was that “when the data are fit for purpose” the emulated or duplicated analyses with RWE “can come to similar conclusions” as the actual RCTs, said Dr. Schneeweiss, who leads one of the demonstration projects, RCT DUPLICATE.
At the workshop he reported results from RWE duplications of 20 different RCTs using insurance claims data from U.S. patients. The findings came from 10 duplications already reported in Circulation in December 2020 (including a duplication of the LEADER trial), and an additional 10 as yet unpublished RCT duplications. In the next few months, the researchers intend to assess a final group of 10 more RCT duplications.
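The RCT DUPLICATE team judges each emulation against its trial with prespecified agreement criteria. As a rough illustration of one such criterion, the sketch below checks whether an emulation reproduces both the direction and the statistical significance of the trial result; the function and its decision rule are simplified assumptions, not the project’s published definition, and the example numbers are only patterned on LEADER’s reported result.

```python
# Simplified check of "regulatory agreement" between a trial result and
# its real-world emulation: same direction and same statistical
# significance. Inputs are hazard ratios with 95% CIs; the exact
# published metric may differ from this sketch.

def regulatory_agreement(trial_hr, trial_ci, rwe_hr, rwe_ci, null=1.0):
    def significant(ci):
        lo, hi = ci
        return hi < null or lo > null  # CI excludes the null value

    same_direction = (trial_hr < null) == (rwe_hr < null)
    if significant(trial_ci):
        # Trial was significant: emulation must be significant on the same side.
        return significant(rwe_ci) and same_direction
    # Trial was null: emulation agrees by also being non-significant.
    return not significant(rwe_ci)

# LEADER-like example (illustrative numbers only):
print(regulatory_agreement(0.87, (0.78, 0.97), 0.90, (0.82, 0.98)))  # True
```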
Workshop participants also presented results from two other FDA demonstration projects: the OPERAND program run by the Multi-Regional Clinical Trials Center of Brigham and Women’s Hospital and Harvard; and the CERSI program based at Yale and the Mayo Clinic in Rochester, Minn. Both are smaller in scale than RCT DUPLICATE, incorporate lab data in addition to claims data, and in some cases test how well RWE can emulate RCTs that are not yet completed.
Collectively, results from these demonstration projects suggest that RWE can successfully emulate the results of an RCT, said Dr. Ross, a coinvestigator on the CERSI study. But the CERSI findings also highlighted how an RCT can fall short of clinical relevance.
“One of our most important findings was that RCTs don’t always represent real-world practice,” he said. His group attempted to replicate the 5,000-patient GRADE trial of four different drug options added to metformin in patients with type 2 diabetes. One of the four options was insulin glargine (Lantus), and the attempt to emulate the study with RWE hit a snag: no relevant patients in their U.S. claims database had actually received that formulation.
That means the GRADE trial “is almost meaningless. It doesn’t reflect real-world practice,” Dr. Ross noted.
Results from the three demonstration projects “highlight the gaps we still have,” summed up Dr. Kroetsch. “They show where we need better data” from observational sources that function as well as data from RCTs.
Still, the demonstration project results are “an important step forward in establishing the validity of real-world evidence,” commented David Kerr, MBChB, an endocrinologist and director of research and innovation at the Sansum Diabetes Research Institute in Santa Barbara, Calif.
‘Target trials’ tether RWE
The target trial approach to designing an observational study is a key tool for boosting reliability and applicability of the results. The idea is to create a well-designed trial that could be the basis for a conventional RCT, and then use observational data to flesh out the target trial instead of collecting data from prospectively enrolled patients.
Designing observational studies that emulate target trials allows causal inferences, said Miguel A. Hernán, MD, DrPH, a professor of biostatistics and epidemiology at the Harvard School of Public Health, Boston. Plugging real-world data into the framework of an appropriately designed target trial substantially cuts the risk of a biased analysis, he explained during the workshop.
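To make that prespecification concrete, the sketch below records a target trial protocol as an explicit object with the framework’s standard components (eligibility, treatment strategies, assignment, follow-up, outcome, causal contrast, analysis plan) and applies the eligibility step to an observational dataset. The class, field names, and example values are this article’s invention, not code from Dr. Hernán’s group.

```python
# A target trial protocol as an explicit, prespecified object. The
# components mirror the target-trial framework; all names are illustrative.
from dataclasses import dataclass
from typing import Callable
import pandas as pd

@dataclass
class TargetTrialProtocol:
    eligibility: Callable[[pd.DataFrame], pd.Series]  # who would enroll
    treatment_strategies: tuple[str, str]             # e.g., drug vs. usual care
    assignment: str       # how arms are classified at baseline ("time zero")
    follow_up: str        # start and end of follow-up
    outcome: str          # endpoint column in the data
    causal_contrast: str  # e.g., observational analog of intention-to-treat
    analysis_plan: str    # prespecified statistical analysis

    def enroll(self, data: pd.DataFrame) -> pd.DataFrame:
        """Apply the eligibility criteria to the observational data."""
        return data[self.eligibility(data)].copy()

# Hypothetical use: a pragmatic trial of drug A versus usual care.
protocol = TargetTrialProtocol(
    eligibility=lambda d: (d["age"] >= 50) & (d["prior_mi"] == 0),
    treatment_strategies=("initiate drug A", "usual care"),
    assignment="classify by prescription filled at baseline",
    follow_up="baseline until event, disenrollment, or 5 years",
    outcome="mace",
    causal_contrast="observational analog of intention-to-treat",
    analysis_plan="inverse-probability-weighted Cox model",
)
```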
However, the approach has limitations. The target trial must be a pragmatic trial, and the approach does not work for placebo-controlled trials, although it can accommodate a usual-care control arm. It also usually precludes patient blinding, testing treatments not used in routine practice, and close monitoring of patients in ways that are uncommon in usual care.
The target trial approach received broad endorsement during the workshop as the future for observational studies destined for efficacy consideration by the FDA.
“The idea of prespecifying a target trial is a really fantastic place to start,” commented Robert Ball, MD, deputy director of the FDA Office of Surveillance and Epidemiology. “There is still a whole set of questions once the trial is prespecified, but prespecification would be a fantastic step forward,” he said during the workshop.
Participants also endorsed other important steps to boost the value of observational studies for regulatory reviews, including preregistering the study on a site such as clinicaltrials.gov; being fully transparent about the origins of observational data; using data that match the needs of the target trial; not reviewing the data in advance to avoid cherry picking and gaming the analysis; and reporting neutral or negative results when they occur, something often not currently done for observational analyses.
But although there was clear progress and much agreement among thought leaders at the workshop, FDA representatives stressed caution in moving forward.
‘No easy answer’
“With more experience, we can learn what works and what doesn’t work in generating valid results from observational studies,” said Dr. Concato. “Although the observational results have upside potential, we need to learn more. There is no easy answer, no checklist for fit-for-use data, no off-the-shelf study design, and no ideal analytic method.”
Dr. Concato acknowledged that the FDA’s goal is clear given the 2016 legislation. “The FDA is embracing our obligations under the 21st Century Cures Act to evaluate use of real-world data and real-world evidence.”
He also suggested that researchers “shy away from a false dichotomy of RCTs or observational studies and instead think about how and when RCTs and observational studies can be designed and conducted to yield trustworthy results.” Dr. Concato’s solution: “a taxonomy of interventional or noninterventional studies.”
“The FDA is under enormous pressure to embrace real-world evidence, both because of the economics of running RCTs and because of the availability of new observational data from electronic health records, wearable devices, claims, etc.,” said Dr. Kerr, who did not participate in the workshop but coauthored an editorial that calls for using real-world data in regulatory decisions for drugs and devices for diabetes. These factors create an “irresistible force” spurring the FDA to consider observational, noninterventional data.
“I think the FDA really wants this to go forward,” Dr. Kerr added in an interview. “The FDA keeps telling us that clinical trials do not have enough women or patients from minority groups. Real-world data is a way to address that. This will not be the death of RCTs, but this work shines a light on the deficiencies of RCTs and how the deficiencies can be dealt with.”
Dr. Kroetsch has reported no relevant financial relationships. Dr. Schneeweiss has reported being a consultant to and holding equity in Aetion and receiving research funding from the FDA. Dr. Ross has reported receiving research funding from the FDA, Johnson & Johnson, and Medtronic. Dr. Hernán has reported being a consultant for Cytel. Dr. Kerr has reported being a consultant for Ascensia, EOFlow, Lifecare, Merck, Novo Nordisk, Roche Diagnostics, and Voluntis. Dr. Temple, Dr. Concato, and Dr. Ball are FDA employees.
A version of this article first appeared on Medscape.com.