Academy Course Overview

EHDEN Foundation

The aim of the EHDEN foundation course is to explain, first and foremost, what the EHDEN project aims to achieve, how it will do this and what the final result will be. However, no project stands alone, and therefore we provide more context on how EU research is conducted, why real-world evidence is important and how we collaborate with OHDSI. While this course is written from a non-technical, managerial point of view, the context it provides is important for understanding what drives EHDEN and why EHDEN works the way it does. The tools and other technical aspects will be discussed in more detail in the other courses.

Extract, Transform and Load

This course describes the Extract, Transform, and Load (ETL) processes used to take your database from raw observational data to the OMOP Common Data Model. Steps 1 and 2 focus on the tools available for designing ETL logic in a way that communicates to a developer how source data should be handled during conversion to the OMOP Common Data Model. This includes a discussion of practical methods for mapping source vocabulary values to Standard OMOP Concepts and a demonstration of how to handle situations where pre-specified mappings do not exist. Steps 3 and 4 cover examples and guidelines for implementing and testing the ETL logic.
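
As a flavour of what such a mapping looks like in practice, the sketch below follows the 'Maps to' relationship in the OMOP vocabulary tables to find the Standard Concept for a source code. The table and column names are the standard OMOP vocabulary tables; the connection details, schema name ("cdm") and the example ICD-10-CM code are assumptions for illustration only.

```r
# Minimal sketch: look up the standard OMOP concept for a source code,
# using the OHDSI DatabaseConnector package. Connection details are placeholders.
library(DatabaseConnector)

connectionDetails <- createConnectionDetails(
  dbms     = "postgresql",            # assumption: a PostgreSQL CDM instance
  server   = "localhost/ohdsi",
  user     = "ohdsi_user",
  password = Sys.getenv("CDM_PASSWORD")
)
conn <- connect(connectionDetails)

# Follow the 'Maps to' relationship from a non-standard source concept
# (e.g. ICD10CM 'E11.9') to its standard OMOP concept.
sql <- "
  SELECT src.concept_code   AS source_code,
         src.concept_name   AS source_name,
         std.concept_id     AS standard_concept_id,
         std.concept_name   AS standard_concept_name
  FROM cdm.concept src
  JOIN cdm.concept_relationship rel
    ON rel.concept_id_1 = src.concept_id
   AND rel.relationship_id = 'Maps to'
   AND rel.invalid_reason IS NULL
  JOIN cdm.concept std
    ON std.concept_id = rel.concept_id_2
  WHERE src.vocabulary_id = 'ICD10CM'
    AND src.concept_code  = 'E11.9';
"
querySql(conn, sql)
disconnect(conn)
```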

OMOP CDM and Standardised Vocabularies

This course gives an introduction to the history of the Observational Medical Outcomes Partnership (OMOP) and the birth of Observational Health Data Sciences and Informatics (OHDSI). It highlights OHDSI’s vision as well as how OHDSI is trying to drive the future of observational research through network studies. The course then moves on to the OMOP Vocabularies, as it is important to understand how the vocabulary is used in examples before moving on to the OMOP Common Data Model (CDM). The final section of the course explains in detail the layout of the CDM as well as some key concepts such as era logic.
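
To give an idea of what era logic means in practice, the sketch below collapses successive drug exposures into drug eras, assuming the conventional CDM persistence window of 30 days between exposures. The data, and the example ingredient concept_id, are made up for illustration; the course explains the full conventions.

```r
# Minimal sketch of drug-era logic: exposures to the same ingredient are merged
# into one era when the gap between them is 30 days or less.
library(dplyr)

exposures <- tibble::tribble(
  ~person_id, ~ingredient_concept_id, ~start_date,           ~end_date,
  1,          1125315,                as.Date("2021-01-01"), as.Date("2021-01-30"),
  1,          1125315,                as.Date("2021-02-10"), as.Date("2021-03-10"),  # 11-day gap -> same era
  1,          1125315,                as.Date("2021-06-01"), as.Date("2021-06-30")   # > 30-day gap -> new era
)

persistence_window <- 30  # days

drug_eras <- exposures %>%
  arrange(person_id, ingredient_concept_id, start_date) %>%
  group_by(person_id, ingredient_concept_id) %>%
  mutate(
    gap     = as.numeric(start_date - lag(end_date)),
    new_era = is.na(gap) | gap > persistence_window,
    era_id  = cumsum(new_era)
  ) %>%
  group_by(person_id, ingredient_concept_id, era_id) %>%
  summarise(
    drug_era_start_date = min(start_date),
    drug_era_end_date   = max(end_date),
    .groups = "drop"
  )

drug_eras
```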

Infrastructure

This course describes the infrastructure and tools used to work with patient-level data converted to the OMOP Common Data Model (CDM). Within this course we cover the process for installing and configuring OHDSI's WebAPI and Atlas tools. OHDSI's WebAPI is a Java-based application that provides a web-based application programming interface (API) for standardized analytics against one or more OMOP CDMs. Next we install the OHDSI Atlas web application and configure it to use WebAPI. Atlas is used to design and execute observational analyses against the OMOP CDM. This course also covers the installation of R and RStudio and how to use some of the OHDSI R libraries for designing database queries against the OMOP CDM.
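
As a small taste of the query-design workflow, the sketch below uses the OHDSI SqlRender package: parameterised SQL is written once, then rendered and translated to the dialect of your database. The schema name and target dialect are assumptions for illustration.

```r
# Minimal sketch: design a CDM query with SqlRender, then translate it for a
# specific database dialect before executing it with DatabaseConnector.
library(SqlRender)

sql <- "
  SELECT gender_concept_id, COUNT(*) AS persons
  FROM @cdm_schema.person
  GROUP BY gender_concept_id;
"

rendered   <- render(sql, cdm_schema = "cdm")           # fill in the @cdm_schema parameter
translated <- translate(rendered, targetDialect = "postgresql")
cat(translated)

# The translated SQL can then be run against the CDM, e.g. with
# DatabaseConnector::querySql(conn, translated), using a connection as shown earlier.
```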

OHDSI-in-a-box

In this course you will learn how to quickly deploy OHDSI-in-a-Box, a single-instance implementation of the OHDSI tools and sample data for personal learning and training environments. OHDSI-in-a-Box contains SynPUF data in the OMOP CDM, an RDBMS including a query client, WebAPI, ATLAS, R and Python. At the end of the course we optionally introduce OHDSI on AWS, an enterprise-class, multi-user, scalable and fault-tolerant OHDSI environment on AWS.

ATLAS

ATLAS is a free, publicly available, web-based, open-source software application developed by the OHDSI community to support the design and execution of observational analyses that generate real-world evidence from patient-level observational data. In this course, Patrick Ryan will walk you through the key features of ATLAS and how to use them to design analytical studies on OMOP CDM data sources.

Phenotype Definition, Characterisation and Evaluation

In this module you will gain deeper insights into defining, characterising and evaluating phenotypes using the OHDSI tools Cohort Diagnostics, PheValuator, and ATLAS. In a stepwise approach, you will be guided through how to use the various R tools and how Cohort Diagnostics can support characterising a cohort and evaluating its attributes and potential flaws. You will learn the rationale for utilising phenotype algorithms, as well as how to evaluate them using the PheValuator tool. Cohort design itself is conducted in the ATLAS tool, taking you through concept sets, groups, and the creation of a cohort. By the end of the module you will be proficient in creating initial cohorts and comfortable with critiquing them.
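
To make the idea of an initial cohort concrete, the sketch below shows what a very simple phenotype amounts to at the SQL level: persons entering the cohort at their first occurrence of a concept in a concept set. The schema name and the example concept_id (201826, type 2 diabetes mellitus) are assumptions; in the course this design is done in ATLAS and then inspected with Cohort Diagnostics.

```r
# Minimal sketch of an initial phenotype cohort: first diagnosis of a condition.
sql <- "
  SELECT person_id,
         MIN(condition_start_date) AS cohort_start_date
  FROM cdm.condition_occurrence
  WHERE condition_concept_id IN (201826)   -- concept set: type 2 diabetes mellitus
  GROUP BY person_id;
"
# Run against the CDM with DatabaseConnector::querySql(conn, sql) as shown earlier,
# then characterise and evaluate the resulting cohort with Cohort Diagnostics.
```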

Population-level Effect Estimation

In this module you will gain deeper insights into population-level effect estimation. We cover one specific type of population-level effect estimation method for generating clinical evidence: comparative cohort studies. The course presents a framework for formulating comparative effect research questions and discusses what makes good and bad questions. It then describes how the OHDSI software tools can be used to design and execute a comparative effect study. This includes the use of propensity scores to adjust for confounding, varying times at risk, and different types of outcome models, such as Cox or logistic regressions. An existing published study is analyzed to demonstrate the applicability of the framework. Evaluation of a study based on study diagnostics, negative controls and empirical calibration is discussed at length.
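
The sketch below outlines, at a high level, how the comparative cohort workflow looks with the OHDSI CohortMethod package: extract the data, build a propensity model, match, and fit a Cox outcome model. Cohort IDs, schema names and settings are placeholders, and exact argument lists vary between package versions; the course covers the full, supported workflow.

```r
# Schematic sketch of a comparative cohort study with CohortMethod (placeholders throughout).
library(CohortMethod)

cmData <- getDbCohortMethodData(
  connectionDetails      = connectionDetails,   # as created with DatabaseConnector
  cdmDatabaseSchema      = "cdm",
  targetId               = 1,                   # target cohort id
  comparatorId           = 2,                   # comparator cohort id
  outcomeIds             = 3,                   # outcome cohort id
  exposureDatabaseSchema = "results",
  exposureTable          = "cohort",
  outcomeDatabaseSchema  = "results",
  outcomeTable           = "cohort",
  covariateSettings      = FeatureExtraction::createDefaultCovariateSettings()
)

studyPop <- createStudyPopulation(cmData, outcomeId = 3,
                                  riskWindowStart = 1, riskWindowEnd = 365)

ps      <- createPs(cmData, studyPop)                    # propensity score model
matched <- matchOnPs(ps, maxRatio = 1)                   # 1:1 propensity-score matching
model   <- fitOutcomeModel(matched, modelType = "cox")   # Cox outcome model
summary(model)
```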

Patient-Level Prediction

In this module you will gain deeper insights into patient-level prediction modelling using OHDSI tools, in particular ATLAS, focusing on:

  • Defining prediction questions
  • Identifying useful prediction questions of most relevance to prediction modelling
  • Developing prediction models
  • Evaluating prediction models
  • Presenting prediction models

This course was originally delivered in a group setting but has been adapted for individual study. The aim of the Patient-Level Prediction work in OHDSI is to standardize the prediction process so that predictions can be run at large scale, which in turn allows investigation of, among other things, best practices and design choices in model development. The standardized data format is made possible by the OMOP CDM.
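
The sketch below illustrates, with simulated data, the standardised framing the module uses: within a target cohort, predict a binary outcome during a time-at-risk window from baseline covariates, then evaluate discrimination on held-out data. The data and covariates are made up; the actual course uses the PatientLevelPrediction package on OMOP CDM data.

```r
# Minimal, self-contained sketch of the standardised prediction framing (simulated data).
set.seed(42)
n <- 5000
cohort <- data.frame(
  age             = rnorm(n, 65, 10),   # baseline covariates
  female          = rbinom(n, 1, 0.5),
  prior_condition = rbinom(n, 1, 0.2)
)
risk <- plogis(-3 + 0.03 * cohort$age + 0.8 * cohort$prior_condition)
cohort$outcome <- rbinom(n, 1, risk)    # outcome during the time-at-risk window

train <- cohort[1:4000, ]
test  <- cohort[4001:n, ]

model <- glm(outcome ~ age + female + prior_condition,
             data = train, family = binomial())
pred  <- predict(model, newdata = test, type = "response")

# Discrimination: AUC as the probability that a case is ranked above a non-case
pos <- pred[test$outcome == 1]
neg <- pred[test$outcome == 0]
auc <- mean(outer(pos, neg, ">")) + 0.5 * mean(outer(pos, neg, "=="))
auc
```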

R for Patient-Level Prediction

The primary goals of the R for Patient-Level Prediction course are to explain how to install and set up the PLP package, to develop a prediction model using R, and to develop and edit a prediction study package using the ATLAS interface. You will learn to create an R package to externally validate models, to add study packages to the “OHDSI-Studies” GitHub repository, and to contribute to the PLP package (for example by adding new classifiers or other enhancements).
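
As a starting point, the sketch below shows one way to install the PatientLevelPrediction (PLP) package from the OHDSI GitHub organisation. The repository name is real, but prerequisites (such as a working Java installation for the DatabaseConnector dependency) and recommended installation steps may change, so check the course and package documentation.

```r
# Minimal sketch: install the PLP package from GitHub and load it.
install.packages("remotes")
remotes::install_github("OHDSI/PatientLevelPrediction")

library(PatientLevelPrediction)
# From here the course walks through developing a model in R and generating a
# prediction study package via the ATLAS interface.
```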