Data on drug efficacy and safety are assessed using controlled trials as well as studies based on data from medical records. Randomized clinical trials, which randomly assign participants to treatment and control groups, are the standard method for assessing a drug’s efficacy. By contrast, real-world evidence studies analyze data from electronic patient records, insurance claims data, medical devices, and other sources. The findings of real-world evidence studies sometimes contradict the results of randomized clinical trials. This raises the question of how to reconcile the two.
What Causes Contradictory Data?
Consider a real-world evidence study of chronic heart failure that analyzed the clinical records of 2.5 million patients in the Italian National Health Service (NHS). Of these 2.5 million patients, 54,059 had been hospitalized for heart failure. Researchers found that the patients hospitalized for heart failure were older and more likely to be female than the results of randomized clinical trials would predict. These differences may be explained by differences in the makeup of the two cohorts. The authors of the study noted that “[p]atients with chronic heart failure (HF) in controlled trials do not fully represent real population followed in clinical practice.”
Even in cases where clinical trial participants do reflect the general population, other differences between controlled clinical trial conditions and real-world conditions can lead to different outcomes. For example, clinical studies find that osteoporosis therapy can reduce the risk of fractures, but a real-world study found that poor adherence to medication can increase fracture risk by approximately 30 percent.
Differences between clinical trials and real-world cohorts and conditions aren’t the only reasons for the differences in results. There may also be problems with the real-world study, including “uncertainty about their internal validity, inaccurate recording of health events, missing data, and opaque reporting of conduct and results,” according to an assessment by two pharmacological professional groups.
When results are inconsistent, it helps to assess the methodologies used in the studies. Researchers have developed tools that can assess studies and identify potential issues with study design and statistical methods. For example, the Downs & Black appraisal tool is a set of 27 questions grouped into five categories: study quality, external validity, study bias, confounding and selection bias, and statistical power. The Newcastle-Ottawa appraisal, which is designed for nonrandomized trials such as cohort and case-control studies, considers study features such as case definition, representativeness of cases, and selection of controls. In both cases, the assessment can identify weaknesses in studies, such as a lack of detail about confounders in each group of patients or an insufficient description of patient characteristics. The findings of studies with identified weaknesses have to be considered in light of those limitations. More weight can be given to studies without such weaknesses.
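To make the checklist idea concrete, here is a minimal sketch of how an appraisal like Downs & Black can be tallied in code. The category names follow the article’s description; the per-category question counts, the answer format, and the “flag categories scoring under half” rule are illustrative assumptions, not part of the official instrument.

```python
# Illustrative sketch only: tally 'yes' answers per appraisal category
# and flag potentially weak areas of a study. Question counts and the
# flagging threshold are assumptions for demonstration purposes.

CATEGORIES = {
    "study quality": 10,
    "external validity": 3,
    "study bias": 7,
    "confounding and selection bias": 6,
    "statistical power": 1,
}  # 27 questions in total, per the article

def appraise(answers):
    """answers maps each category to the number of 'yes' responses.
    Returns (scores, weak): per-category (yes, total) pairs and a list
    of categories scoring below half their questions."""
    scores = {}
    weak = []
    for category, total in CATEGORIES.items():
        yes = answers.get(category, 0)
        scores[category] = (yes, total)
        if yes < total / 2:  # assumed threshold for flagging a weakness
            weak.append(category)
    return scores, weak

# Example: a hypothetical study with little reporting on confounders
scores, weak = appraise({
    "study quality": 8,
    "external validity": 1,
    "study bias": 6,
    "confounding and selection bias": 2,
    "statistical power": 1,
})
print(weak)  # categories answering 'yes' on fewer than half their questions
```

In this hypothetical run, the flagged categories would direct a reader to weigh the study’s conclusions against its weak external validity and its handling of confounding, mirroring the qualitative judgment the article describes.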
While researchers may choose among several tools for analyzing the quality of studies, a 2015 study found that “[t]here is no consensus on a preferred instrument that allows for the assessment of all types of RW evidence.” However, there is now some guidance on how to conduct real-world studies. In 2017, a joint task force of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the International Society for Pharmacoepidemiology (ISPE) developed a set of good practices for real-world data studies. These best practices are documented in ISPOR Good Practices for Outcomes Research Reports and cover areas such as comparative effectiveness research methods, economic evaluation methods, clinical outcomes assessment, and modeling methods.
While clinical studies and real-world evidence complement each other, their results are not always consistent. In those cases, it helps to analyze the results of the real-world studies using established appraisal tools and best practices.