The Customer is a leading healthcare company providing outcomes measurement and predictive analytics for value-based and personalized healthcare. By leveraging big clinical data, standardized outcome measures, and artificial intelligence, the company delivers a robust approach to improving healthcare outcomes, powered by more precise information.
The quest for better data to develop more precise information is what brought them to Mendel.
While preparing an FDA submission for a Phase IV study, the Customer had acquired over 90,000 images and over 157,000 RTF files with embedded images from scanned or faxed medical reports (such as pathology and cytology reports) as well as imaging data (such as ultrasounds), all of which required digitization.
As the saying goes, “no one ever got fired for buying IBM.” In that spirit, the Customer was considering prominent OCR solutions such as Google’s OCR as the low-risk option. The Mendel team knew they could do much better, but the results needed to speak for themselves.
We evaluated Mendel Retina against Google’s OCR system on 430 images containing nearly 150,000 words. The evaluation showed a 25% error rate for Google’s OCR compared to a 6% error rate for Mendel Retina. Table 1 shows detailed results in terms of Precision, Recall, and F-scores.
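As a rough sketch of how word-level Precision, Recall, and F-scores like those in Table 1 are typically computed (the counts below are illustrative placeholders, not the actual evaluation data):

```python
# Illustrative sketch of word-level OCR evaluation metrics.
# The counts used below are hypothetical, not Table 1's actual data.

def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative word counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 940 words recognized correctly, 30 spurious, 60 missed.
p, r, f = prf(940, 30, 60)
print(f"Precision={p:.3f}  Recall={r:.3f}  F1={f:.3f}")
```

Precision penalizes spurious output (false positives), recall penalizes missed words (false negatives), and F1 balances the two, which is why an OCR comparison reports all three rather than a single error rate.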
The results weren’t a fluke. Unlike Optical Character Recognition (OCR) systems, which read one character at a time, Mendel Retina analyzes the meaning of the full sentence as well as the intent of the document while recognizing words and phrases. The result is unparalleled Precision and Recall in OCR. Translation: less data loss and fewer OCR errors.
Character-by-character systems cannot reason about what they are reading; that’s simply not the case with Mendel. We’ve built an engine that can be asked highly specific questions, such as “What are the modes of transmission of a virus?”, and that understands “no intrauterine infections have been recorded” is a relevant answer.
With demonstrably better OCR accuracy than Google’s OCR, the Customer chose Mendel Retina as their platform of choice for digitizing over 250,000 documents containing scans and images.
When we inquired what was next for this digitized data, the Customer’s plan was to use human abstractors to turn OCR’d content into analytics-ready data. This wasn’t a surprise: traditionally, companies seeking abstraction have had only two choices: off-the-shelf Big Tech abstraction technology that lacks the clinical understanding needed to create rich data, or human abstraction, which is high quality but time- and resource-intensive. At Mendel, we believed that Mendel Read, an AI custom-built for clinical understanding through years of R&D and millions of dollars of investment, could offer our customer both the quality of human abstraction and the scale and cost-efficiency of automated solutions.
Recognizing the Customer’s unrelenting focus on data quality, Mendel needed to prove that AI would not compromise data richness in any way. For a blind test, we randomly selected medical documents containing around 8,000 concept occurrences representing all concepts (we made sure rare concepts were represented, sometimes by including all instances that did not exist in our AI training data). None of the files in the blind test set exist in the training data. We asked our QC team to manually label these instances and review each other’s work until we were satisfied that we had reached a “ground truth.” In the following sections, we present two evaluations by comparing the ground truth to two outputs:
• Intrinsic evaluation: comparing the ground truth to Mendel Read’s output before human review.
• Extrinsic evaluation: comparing the ground truth to the output that was submitted to the Customer.
How did Mendel’s abstraction AI fare so much better than other technologies? Traditional approaches to extracting data points from medical text use taxonomies and a search engine in combination with regular expressions for textual pattern matching (e.g., Linguamatics and Averbis). These solutions, while called “clinical NLP” by the companies that offer them, actually fall under traditional Information Retrieval (IR) techniques.
Such approaches return many irrelevant results (false positives) and miss many relevant results (false negatives). In contrast, Mendel Read’s AI reads medical documents for a given patient and extracts “concepts” (data elements that are the study’s endpoints, e.g., “Squamous Carcinoma of the Cervix”). It also provides source-document verification by pointing back to the exact location of each concept (highlighting the source text) in the original de-identified documents.
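The kind of structured output described above, an extracted concept plus a pointer back to its source span, can be illustrated with a minimal sketch. The field names here are hypothetical, not Mendel Read’s actual schema:

```python
# Hypothetical sketch of an extracted concept with source verification;
# field names are illustrative, not Mendel Read's real output format.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractedConcept:
    concept: str        # normalized data element, e.g. a study endpoint
    document_id: str    # de-identified source document
    start: int          # character offset where the evidence begins
    end: int            # character offset where the evidence ends

    def highlight(self, document_text: str) -> str:
        """Return the exact source text this concept points back to."""
        return document_text[self.start:self.end]

doc = "Biopsy confirms squamous carcinoma of the cervix, stage IB."
c = ExtractedConcept("Squamous Carcinoma of the Cervix", "doc_001", 16, 48)
print(c.highlight(doc))  # → squamous carcinoma of the cervix
```

Keeping character offsets alongside each normalized concept is what makes source verification possible: a reviewer can jump from any abstracted data point straight to the highlighted evidence in the original document.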
With Mendel’s abstraction AI matching the precision and recall of human-only abstraction, the Customer decided to switch to Mendel Read for all abstraction. With abstracted data available in minutes, the Customer ran 11 iterations of research in 5 days, a speed that would have been unimaginable with human-only abstraction.
The Customer had been using their own workforce for patient de-identification. As the last piece of the puzzle in shifting human effort away from data busy-work and toward actual research, the Customer turned to Mendel Redact for de-identification. Mirador Analytics reported 100% Precision and 99.85% Recall (99.93% F1-score), exceeding the threshold for HIPAA compliance. Results are reported in the table below, copied from Mirador’s statistical verification report (the full report is available upon request).
[Table from Mirador’s statistical verification report to be inserted]
Mendel has transformed data OCR, abstraction, and redaction for the Customer, eliminating thousands of hours and millions of dollars in spend on human-only efforts.