Admin · Feb 23, 2019 · 5 min read

The epidemiology of databases: Part II: Four principles of generating real-world evidence

In a recent comment letter to the U.S. Food and Drug Administration, Aetion outlined key considerations to help guide the agency’s formal exploration of the use of real-world evidence (RWE) in regulatory decision-making for drugs and biologics.

In Part I of this series, we outlined key principles for the selection and storage of real-world data (RWD), and Part III presents applications of RWE. But principled database epidemiology doesn’t stop there.

To generate regulatory-level confidence in the real-world evidence—the results produced from analyses of RWD—we offer four principles governing the design, quality, and reporting of these studies.

Adopting these principles will increase confidence that RWD analyses are transparent, auditable, and reproducible—the foundations of good science.

1. Statistical diagnostics and sensitivity analyses should be pre-specified in RWD analyses

To achieve full transparency and capture the investigator’s intent, analysis plans should be pre-specified: a step-by-step procedure for addressing the research question, together with the statistics required to produce a meaningful answer. The plan must also include diagnostics to confirm that the analysis met its stated goals.

In addition, pre-specified sensitivity analyses are particularly critical to assessing data relevance: among other benefits, they provide a scale on which to calibrate confidence in the study’s findings.
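To make pre-specification concrete, here is a minimal sketch of how an analysis plan, its diagnostics, and its sensitivity analyses might be captured before any data are analyzed. All field names and values are hypothetical illustrations, not a prescribed schema.

```python
from dataclasses import dataclass

# A minimal, hypothetical pre-specified analysis plan. Real protocols are far
# more detailed; this sketch only shows that the question, the statistics, the
# diagnostics, and the sensitivity analyses are all fixed up front.
@dataclass(frozen=True)  # frozen: the plan is locked before analysis begins
class AnalysisPlan:
    research_question: str
    exposure: str
    comparator: str
    outcome: str
    primary_estimator: str            # the pre-specified statistic
    diagnostics: tuple = ()           # checks that the analysis met its goals
    sensitivity_analyses: tuple = ()  # pre-specified design variations

plan = AnalysisPlan(
    research_question="Does drug A lower 1-year risk of outcome Y vs. drug B?",
    exposure="new users of drug A",
    comparator="new users of drug B",
    outcome="hospitalization for Y within 365 days of cohort entry",
    primary_estimator="propensity-score-matched Cox hazard ratio",
    diagnostics=(
        "covariate balance: absolute standardized differences < 0.1",
        "propensity score overlap between treatment arms",
    ),
    sensitivity_analyses=(
        "vary washout window: 90 vs. 180 days",
        "as-treated vs. intention-to-treat follow-up",
    ),
)
```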

2. RWE must be transparent and reproducible

If the results of nonrandomized studies from health care databases are to be reliable sources for decision-making, they must be reported with sufficient transparency for an independent group to reproduce them. Investigators should document the methodological choices they make and, when possible, include references to studies that validate those methods. Such transparency allows other investigators to independently verify the findings and judge the scientific merit of the design and analysis.

To this end, a joint task force of the International Society for Pharmacoepidemiology (ISPE) and the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) took an important step by agreeing on a set of parameters that must be reported so that a decision-maker can understand the investigator’s study implementation, and so that the study can be reproduced from that same parameter list.
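As a rough illustration of what such a parameter list enables, the sketch below records a handful of reproducibility-critical study parameters in machine-readable form. The parameter names and values are invented for this example; the task force’s actual recommendation is a published catalog of reportable parameters, not a code schema.

```python
# Hypothetical subset of reportable study parameters. Given the same list, an
# independent group should be able to re-implement the study on the same data.
study_parameters = {
    "data_source": "US commercial claims database (illustrative)",
    "study_period": ("2015-01-01", "2018-12-31"),
    "cohort_entry": "first dispensing of drug A or drug B (new-user design)",
    "washout_days": 180,  # no prior use of either drug before entry
    "follow_up": "day 1 after entry until outcome, disenrollment, or day 365",
    "outcome_definition": "hospitalization with ICD-10 I21.x (illustrative)",
}
```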

3. RWD analyses should be compliant with all relevant scientific design and reporting standards, including:

Best practices for designing and conducting drug safety studies based on RWD, including:

  • Guidance from the FDA, the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP), and ISPE.

Best practices for designing and conducting comparative effectiveness research, including:

  • The Agency for Healthcare Research and Quality’s user guide for developing a protocol for comparative effectiveness research (CER), Good ReseArch for Comparative Effectiveness (GRACE), and ISPOR’s series of good research practices for retrospective data analyses.

Requisite reporting standards, including:

  • Consolidated Standards of Reporting Trials (CONSORT), STrengthening the Reporting of Observational Studies in Epidemiology (STROBE), REporting of studies Conducted using Observational Routinely-collected Data (RECORD) and its pharmacoepidemiology extension (RECORD-PE), and the joint ISPE-ISPOR task force recommendations.

4. In comparative studies, study diagnostics should be applied to determine whether the resulting RWE is capable of answering the question of interest

Covariate balance diagnostics are of particular importance in comparative studies: investigators should evaluate how much balance in baseline covariates can be achieved between treatment arms before computing any association with the outcome data.
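As a concrete illustration, the sketch below computes the standardized mean difference, one common balance diagnostic, on simulated data. The covariate, arm sizes, and the 0.1 rule of thumb are illustrative conventions, not requirements from the text above.

```python
import numpy as np

def standardized_difference(x_treated, x_control):
    """Standardized mean difference (SMD) for one baseline covariate.
    A common rule of thumb treats |SMD| < 0.1 as acceptable balance."""
    x_t = np.asarray(x_treated, dtype=float)
    x_c = np.asarray(x_control, dtype=float)
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2.0)
    return (x_t.mean() - x_c.mean()) / pooled_sd

# Evaluate balance for each baseline covariate BEFORE looking at outcomes.
rng = np.random.default_rng(0)
age_treated = rng.normal(64, 10, 500)  # simulated baseline age, arm A
age_control = rng.normal(60, 10, 500)  # simulated baseline age, arm B
smd = standardized_difference(age_treated, age_control)
print(f"age SMD: {smd:.2f}  balanced: {abs(smd) < 0.1}")
```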

Propensity score matching and weighting methods are particularly well suited to supporting this type of diagnostic, and they can serve as comprehensive tests of a study’s ability to identify true effects without falsely flagging spurious associations.
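For example, a minimal sketch of one such method, inverse-probability-of-treatment weighting on simulated data, might look like the following. The simulated covariates and the scikit-learn model choice are assumptions for illustration; a real study would use the pre-specified covariate set and estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: treatment assignment depends on baseline covariates, so the
# raw arms are confounded and need balancing before any outcome analysis.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))  # baseline covariates
p_treat = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
treated = rng.binomial(1, p_treat)

# Propensity score: estimated P(treatment | baseline covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Stabilized inverse-probability-of-treatment weights. After weighting,
# re-run the balance diagnostics above before estimating any effect.
p_marginal = treated.mean()
weights = np.where(treated == 1, p_marginal / ps,
                   (1 - p_marginal) / (1 - ps))
print(f"weight range: {weights.min():.2f} to {weights.max():.2f}")
```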

Study diagnostics could provide a final decision point for regulators and investigators to determine whether the selected RWD source and study methodology are capable of producing estimates of causal treatment effects.

How software platforms ensure study quality and governance

As software products connected to one or more RWD sources, platforms ensure reliability, transparency, and reproducibility in the following ways:

  • Understandability: Platforms allow a study to be specified in terms that decision-makers can understand, rather than in programming code that only statistical programmers can read.
  • End-to-end validation: Instead of validating studies one by one, the platform itself can be validated, so every study implementation created on it inherits that validation and is correct end to end.
  • Validation against randomized controlled trials (RCTs) to show that RWE studies are “fit for purpose”: Ongoing scientific validation against RCTs and against other RWE studies recognized as substantial evidence for regulatory decision-making will reconfirm that the platform can validly generate evidence given “fit-for-purpose” RWD.
  • Good RWE study practice: Platforms can guide users to follow recognized paradigms in implementing comparative studies and limit them to scientifically valid analytic workflows.
  • Use of sensitivity analyses: The scale platforms make possible encourages relevant sensitivity analyses, which explore meaningful variations in design choices to inform decision-makers’ ultimate confidence in study findings.

Platforms also ensure good study governance in these ways:

  • User authentication: User permissions ensure that only qualified, authorized people are able to access, create, and modify the study.
  • Transparent study implementation plan: An RWE study implementation plan is always prepared and logged before the analysis is run.
  • Verifiable achievement of stated study intentions: Reporting features in platforms enable verification of the study implementation against the study protocol by other investigators, not only those able to read a particular programmer’s code.
  • Audit capability: Audit logs allow traceability and verification of what was done in the analysis, when, and by whom; a minimal sketch of such a record follows this list.
  • Long-term data capture and storage: Capturing and storing all study elements (cohorts, measurements, etc.) in the platform ensures long-term access to study materials and dependable reproducibility.
  • Transparent data transformation: Version histories and other provenance information for all study elements show changes (and rationale for changes) over time.
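To illustrate the kind of traceability an audit log provides, here is a minimal sketch of a log record; the schema and field names are hypothetical, not any platform’s actual format.

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-log record capturing what was done, when, and by whom.
# The schema is invented for illustration; real platforms define their own.
def audit_entry(user: str, action: str, study_id: str, detail: str) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                                         # by whom
        "action": action,                                     # what was done
        "study_id": study_id,
        "detail": detail,
    })

print(audit_entry("j.smith", "modify_cohort_definition", "study-042",
                  "extended exposure assessment window to 180 days"))
```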

These are a distillation of core scientific principles as they apply to RWE (and as they are implemented in software platforms). By applying them consistently, and by being clear about where RWE can and cannot address a research question, we can generate reliable RWE to answer critical questions in health care.
