A Clinical Research Site Uses Mendel to Augment Pre-screening and Feasibility Assessments with AI

Augmenting human resources at clinical research departments with Mendel's clinical trial matching solution yields sizable improvements over standard practice in recruitment and feasibility assessments.


- Increase in the number of eligible patients identified
- Reduction in pre-screening costs versus manual methods

Understanding The Fully-Loaded Cost Of Determining Clinical Research Site Eligibility

Research site pre-screening and feasibility assessment is a costly and time-consuming endeavor for every player in clinical research.

To determine whether participating in a clinical trial is feasible and profitable, a research site must first spend thousands of hours manually poring over patient records and capturing clinical information locked in unstructured free text or scanned documents (e.g., PDF images of radiology reports). Only once this is done can a research site begin searching for suitable patients, another process that takes research coordinators hundreds of hours of manual review and is prone to human error and omissions.

40% of sites in a multi-center trial will under-enroll compared to plan, and 10% will fail to enroll a single patient. When costs associated with investing in pre-screening and eligibility determination for all these studies with eventual no-go decisions are considered, the true cost per patient enrolled in a study can be devastating for research sites.

The inherent inefficiency in pre-screening and feasibility assessments has a ripple effect on research sponsors. Valuable time and effort are lost on research sites that will eventually under-enroll, or not enroll any patients at all.

Statistics show that, for the average study, 50% of sites never recruit a single patient.

At Mendel, we saw the opportunity for our AI to massively benefit both research sites and sponsors by eliminating these unnecessary costs and time from the pre-screening and feasibility assessment process.

Overcoming The Stigma Of Past AI Failures With Mendel Recruit

When we launched Mendel Recruit to support clinical research sites in pre-screening and eligibility assessment, there was rightfully a healthy dose of skepticism about the ability of AI to add value. After much fanfare, IBM Watson's AI had failed to deliver results, leaving clinical experts hesitant to investigate newer approaches.

So, we at Mendel did it for them.

We applied Mendel’s Recruit retroactively to two completed oncology studies (one breast, one lung), and one (lung) study that failed to enroll at the Comprehensive Blood and Cancer Center, in order to enable an apples-to-apples comparison of results between standard pre-screening practices and Mendel’s AI.

Using proprietary artificial intelligence algorithms that pair text recognition in scanned documents with natural language understanding of clinical text and automated clinical reasoning, Mendel’s AI interpreted both structured and unstructured medical records and cross-referenced these with protocol eligibility criteria to evaluate patient eligibility automatically.
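As a purely illustrative sketch (not Mendel's actual implementation), cross-referencing extracted patient attributes against protocol eligibility criteria can be thought of as evaluating a set of structured predicates over each record. All attribute names and criteria below are hypothetical:

```python
# Hypothetical sketch of eligibility cross-referencing: extracted patient
# attributes are checked against structured protocol criteria.
# Illustrative only; not Mendel's implementation.

def is_potentially_eligible(patient, criteria):
    """Return True if the patient satisfies every structured criterion.

    `patient` maps attribute names (as extracted from the record) to values;
    `criteria` maps attribute names to predicate functions.
    """
    return all(check(patient.get(attr)) for attr, check in criteria.items())

# Example protocol criteria (hypothetical values).
criteria = {
    "age": lambda v: v is not None and 18 <= v <= 75,
    "diagnosis": lambda v: v == "non-small cell lung cancer",
    "ecog_status": lambda v: v is not None and v <= 1,
}

patient = {"age": 62, "diagnosis": "non-small cell lung cancer", "ecog_status": 1}
print(is_potentially_eligible(patient, criteria))  # True for this record
```

In practice, each predicate would be far richer (temporal constraints, lab-value ranges, exclusion criteria), and the attribute values themselves would come from the text-recognition and language-understanding steps described above.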

Using Mendel Recruit, we were able to pre-screen patient records quickly and accurately, even before formal screening began.

Results that speak for themselves

As can be seen in the table below for the high-enrolling trial (Protocol 1) and the low-enrolling trial (Protocol 2), use of Mendel Recruit resulted in a 24% to 50% increase in the number of patients correctly identified as potentially eligible for clinical trial participation.

Compared to the breast and lung oncology trials that had previously enrolled patients manually, Mendel Recruit increased the number of eligible patients identified by 24% and 50%, respectively.

To be a contender against human-only screening, Mendel's AI also needed to demonstrate high precision: the ability to select only patients who are truly eligible. This was clearly demonstrated in the third study, for which manual efforts had found no eligible patients. Mendel reached the same conclusion, but in minutes rather than days.

For this study, both Mendel Recruit and standard practice identified no potentially suitable patients; for Mendel Recruit, establishing the absence of suitable patients required a total of 1.3 man-hours.

Quantifying the ongoing benefits of automation

Looking past initial screening and eligibility determination, the elapsed time between the date a patient could first be characterized as potentially eligible from the medical record and the date the patient was actually screened for the trial varied widely under manual methods.

In contrast, Mendel Recruit continuously absorbed data from new patients, along with any updates on existing patients, to detect eligibility on an ongoing basis.

As shown in Table 1, for Protocol 1 the mean elapsed time was 19 days, with a standard deviation of 28 days. Twelve patients were enrolled within one week of becoming eligible; for these patients, the mean lapse between determination of potential eligibility and the screening visit was 1.2 days. For the remaining 13 patients, the mean lapse was 37 days. For Protocol 2, the mean elapsed time was 263 days, with a standard deviation of 22 days.
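The Protocol 1 subgroup figures above are internally consistent. Assuming the two subgroups (12 fast-screened patients and 13 slower ones) make up the full enrolled cohort, their weighted average reproduces the reported overall mean of roughly 19 days:

```python
# Sanity check on the Protocol 1 elapsed-time figures reported above.
# Assumption: the 12 fast-screened and 13 slower patients together
# constitute the full enrolled cohort of 25.
fast_n, fast_mean = 12, 1.2    # enrolled within 1 week of eligibility
slow_n, slow_mean = 13, 37.0   # remaining patients

overall_mean = (fast_n * fast_mean + slow_n * slow_mean) / (fast_n + slow_n)
print(round(overall_mean, 1))  # 19.8, consistent with the reported mean of ~19 days
```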

Mendel Recruit increased the number of patients identified as potentially eligible, reduced the man-hours expended on both site identification and elimination, and shortened the elapsed time between patient eligibility and identification. Finally, Mendel Recruit created an ongoing benefit to the rate of enrollment through its ability to continuously re-evaluate patients for eligibility as new data became available.

A new reality for research sponsors with Mendel

The newfound ability to quickly and accurately pre-screen patient records at sites, even before screening begins, improves site selection, enhances the ability to forecast enrollment both overall and by patient subset (e.g., demographic), and even allows the impact of different eligibility criteria to be modeled during protocol development.