CohortConstructor benchmarking results

Introduction

Cohorts are a fundamental building block for studies that use the OMOP CDM, identifying people who satisfy one or more inclusion criteria for a duration of time based on their clinical records. Currently, cohorts are typically built using CIRCE, which allows complex cohorts to be represented as JSON. This JSON is then converted to SQL for execution against a database containing data mapped to the OMOP CDM. CIRCE JSON can be created via the ATLAS GUI or programmatically via the Capr R package. However, although CIRCE is a powerful tool for expressing and operationalising cohort definitions, the SQL it generates can be cumbersome, especially for complex cohort definitions. Moreover, cohorts are instantiated independently of one another, leading to duplicated work.

The CohortConstructor package offers an alternative approach, emphasising cohort building in a pipeline format. It first creates base cohorts and then applies specific inclusion criteria. Unlike the “by definition” approach, where cohorts are built independently, CohortConstructor follows a “by domain/table” approach, which minimises redundant queries to large OMOP tables. More details on this approach can be found in the Introduction vignette.
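To give a flavour of this pipeline style, the sketch below builds a base cohort from a codelist and then applies inclusion criteria with require* functions. Names here are illustrative: asthma_codes is assumed to be a vector of standard concept IDs created beforehand (for example with CodelistGenerator), and exact arguments may differ slightly between package versions.

# illustrative sketch of the pipeline approach; "asthma_codes" is assumed to be
# a vector of standard concept IDs created beforehand
cdm$asthma <- conceptCohort(cdm = cdm,
                            conceptSet = list(asthma = asthma_codes),
                            name = "asthma")

# refine the base cohort with a pipeline of inclusion criteria
cdm$asthma <- cdm$asthma |>
  requireIsFirstEntry() |>
  requireDemographics(ageRange = list(c(18, 150)), sex = "Both") |>
  requireInDateRange(dateRange = as.Date(c("2010-01-01", "2022-12-31")))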

To test the performance of the package, a benchmarking function is provided which uses nine phenotypes from the OHDSI Phenotype Library that cover a range of concept domains, entry and inclusion criteria, and cohort exit options. We replicated these cohorts using CohortConstructor to assess computational time and the agreement between CIRCE and CohortConstructor.

library(CDMConnector)
library(CodelistGenerator)
library(PatientProfiles)
library(CohortConstructor)
library(dplyr)

# connect to a duckdb database containing the Eunomia synthetic dataset
con <- DBI::dbConnect(duckdb::duckdb(), 
                      dbdir = eunomiaDir())
# create the cdm reference, prefixing any tables written to the database
cdm <- cdmFromCon(con, cdmSchema = "main", writeSchema = "main", 
                  writePrefix = "my_study_")

Once we have created our cdm reference we can run the benchmark. When it finishes we will have a set of results with the time taken to run the different tasks. For this example we will just run the task of creating all the cohorts at once using the CohortConstructor by domain approach.

benchmark_results <- benchmarkCohortConstructor(
  cdm,
  runCIRCE = FALSE,
  runCohortConstructorDefinition = FALSE,
  runCohortConstructorDomain = TRUE
)
benchmark_results |> 
  glimpse()

Code and collaboration

The benchmarking code is available on the BenchmarkCohortConstructor repository on GitHub.

If you are interested in running the code on your database, feel free to reach out to us for assistance, and we can also update the vignette with your results! :)

The benchmark script was executed against the following four databases:

The table below presents the number of records in the OMOP tables used in the benchmark script for each of the participating databases.

Cohorts

We replicated the following cohorts from the OHDSI Phenotype Library: COVID-19 (ID 56), inpatient hospitalisation (ID 23), new users of beta blockers nested in essential hypertension (ID 1049), transverse myelitis (ID 63), major non-cardiac surgery (ID 1289), asthma without COPD (ID 27), endometriosis procedure (ID 722), new fluoroquinolone users (ID 1043), and acquired neutropenia or unspecified leukopenia (ID 213).

The COVID-19 cohort was used to evaluate the performance of common cohort stratifications. To compare the package with CIRCE, we created definitions in Atlas, stratified by age groups and sex, which are available in the benchmark GitHub repository with the benchmark code.

Cohort counts and overlap

The following table displays the number of records and subjects for each cohort across the participating databases:

We also computed the overlap between patients in CIRCE and CohortConstructor cohorts, with results shown in the plot below:

Performance

To evaluate CohortConstructor performance, we generated each of the CIRCE cohorts using functionality provided by both CodelistGenerator and CohortConstructor, and measured the computational time taken.
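For instance, the codelist behind a drug-based definition such as new fluoroquinolone users can be derived with CodelistGenerator and then passed to conceptCohort(), as in the pipeline sketch above. The call below is illustrative; the exact concept sets used in the benchmark are available in the repository linked above.

# illustrative sketch: derive ingredient codes with CodelistGenerator
fluoroquinolone_codes <- getDrugIngredientCodes(cdm = cdm, 
                                                name = "ciprofloxacin")
fluoroquinolone_codes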

Two different approaches with CohortConstructor were tested:

By definition

The following plot shows the time taken to create each cohort using CIRCE and CohortConstructor when each cohort was created separately.

By domain

The table below shows the total time taken to create the nine cohorts when using the by domain approach with CohortConstructor.

Cohort stratification

Cohorts are often stratified in studies. With Atlas cohort definitions, each stratum requires a new CIRCE JSON to be instantiated, while CohortConstructor allows stratifications to be generated from an overall cohort. The following table shows the time taken to create age and sex stratifications for the COVID-19 cohort with both CIRCE and CohortConstructor.
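As an illustration, the sketch below stratifies an existing cohort by sex and age group in a single pipeline. It assumes a cohort table named covid already exists in the cdm reference; addAge() and addSex() come from PatientProfiles, and the age groups shown are illustrative rather than those used in the benchmark.

# illustrative sketch: stratify an existing "covid" cohort by sex and age group
cdm$covid_strata <- cdm$covid |>
  addAge(ageGroup = list(c(0, 17), c(18, 64), c(65, 150))) |>
  addSex() |>
  stratifyCohorts(strata = list("sex", "age_group"), name = "covid_strata")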

Use of SQL indexes

For PostgreSQL databases, the package uses indexes in conceptCohort() by default. To evaluate how much these indexes reduce computation time, we instantiated a subset of concept sets from the benchmark, both with and without indexes.

Four calls were made to conceptCohort, each involving a different number of OMOP tables. The combinations were:

  1. Drug exposure

  2. Drug exposure + condition occurrence

  3. Drug exposure + condition occurrence + procedure occurrence

  4. Drug exposure + condition occurrence + procedure occurrence + measurement
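As an illustration, a call corresponding to the second scenario might look roughly like the sketch below, where drug_codes and condition_codes are hypothetical codelists whose concepts come from the drug and condition domains, so that conceptCohort() queries both the drug exposure and condition occurrence tables.

# illustrative sketch of scenario 2: concepts from two domains mean two OMOP
# tables (drug_exposure and condition_occurrence) are queried
cdm$index_test <- conceptCohort(
  cdm = cdm,
  conceptSet = list(drug_cohort = drug_codes,
                    condition_cohort = condition_codes),
  name = "index_test"
)
# depending on the installed version, index creation can be switched off via a
# package option, e.g. options("CohortConstructor.use_indexes" = FALSE)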

The plot below shows the computation time with and without SQL indexes for each scenario:
