Dr. Karim Galil: Welcome to the Patientless Podcast. We discuss the Good, the Bad, and the Ugly about Real World Data and AI in clinical research.
This is your host, Karim Galil, cofounder and CEO of Mendel AI.
I invite key thought leaders across the broad spectrum of believers and dissenters of AI to share their experiences with actual AI and real world data initiatives.
Hi everyone, and welcome to a new episode of the Patientless podcast. Today's guest is Dr. Richard Gliklich. He's the founder and CEO of OM1, but he's also a physician. He's a professor of Otolaryngology at Harvard Medical School, and before starting OM1 he founded a company called Outcome in around 1998. That company was eventually sold to what is now known as IQVIA, and built the Phase IV, or outcomes, piece of that business. He has led several key national and international efforts focused on evaluating the safety, effectiveness, value, and quality of healthcare.
So we're very honored to have him today on our podcast. Richard, thank you so much and welcome to the Patientless podcast.
Richard Gliklich, OM1: Well, thank you. Thank you for inviting me. I look forward to speaking to you and your listeners.
Dr. Karim Galil: Obviously Richard is a role model for someone like me. I also went to med school and decided to go into business. I mean, I obviously didn't end up being a professor at Harvard, but it's very inspiring to see your journey between the clinical side of medicine and also the business side of it.
Can we start off touching on that? How did you end up starting a company?
Richard Gliklich, OM1: Well, it's actually a great question. So in the 1990s, I was very active in outcomes research, and I had a lab that was focused on outcomes research and what later became what we call patient registries today. And the hospital was looking to spin off companies. So I actually had a knock on my door one day from the new head of business development, who asked me if I had any technology that they could license.
And one thing led to another. And that's how my first company spun off my research lab and got me into the business world.
Dr. Karim Galil: Wow. That was a good investment from their side. So why don't we start off by you telling us more about OM1: how you started OM1, and what the mission of OM1 is as an outcomes company?
Richard Gliklich, OM1: Yeah. No, absolutely. So our vision is really to improve health outcomes through data, and that sort of encapsulates what we're trying to do. From a mission perspective, we are harnessing the power of data for measuring and improving patient outcomes. That was our first goal, along with accelerating medical research and improving clinical decision making. So that's the mission of the company. Where we started from is that after I'd sold my first company, I was in the venture capital world, and I became very interested in healthcare IT and big data. With the massive digitization of healthcare data that followed ARRA and the HITECH Act during the last recession, the Great Recession, I felt there was an opportunity to leverage information more automatically than I had done with my previous research and business, which was much more manual, to still drive the same goal of being able to measure patient outcomes. And if you could measure them, ultimately you could predict them. So that's what led to the concept, and it's really all about how we measure outcomes.
And what we learned along the way is that while there was a lot of data out there, data liquidity, being able to access data, wasn't the problem. It was really being able to find really strong, deep levels of information. In fact, we were able to develop a database of almost 250 million unique individuals in the US pretty rapidly.
And there's a lot you can do with that, but it took much longer, frankly, to develop much more sophisticated data sets that could ultimately be used for medical research and personalization in very specific areas of healthcare, mostly chronic conditions, which cost a lot.
Dr. Karim Galil: That's actually a great point. So "healthcare data" is a very generic term, and it's used quite loosely. In many instances people mean ICD codes and very structured data that are meant for billing, but that doesn't necessarily capture the clinical aspect of a patient journey.
I think you've touched on that. You said it's easy to collect data, but it's not easy to get deep clinical insights about a patient. How do you define deep? How do you define a comprehensive data set about a patient?
Richard Gliklich, OM1: Yeah. I mean, I think you know this from your medical training as well, but it's getting to the nuance of what a clinician means when they are seeing a patient and entering information about that patient. In the US, that nuance is still generally captured in a dictated note or a written note. There's certainly information in the laboratories, the coding information, the billing information, pathology information, and so forth. But if you want to get to the nuance, which is the clinical understanding of how the physician is viewing that patient, you have to get deeper into that data. And that requires a lot more effort.
Dr. Karim Galil: It was very interesting for me that sometimes there are no billing codes for a subtype of a lung cancer. Is that true? Like, you cannot capture a non-small-cell lung cancer in some sort of a billing code. Is that still the case? What's your experience with that?
Richard Gliklich, OM1: I think the billing codes have improved. I mean, not to bore your listeners, but the billing coding system gained granularity going from ICD-9 to ICD-10. But even with that granularity, you're still picking from a list of possibilities. And while there may be a code for catching fire while on a surfboard, and I actually think there may be a code for that, there may not be one for certain subtypes of lung cancer or lupus or whatever the condition may be, because it's not been critical to those paying the bills to have that information.
Dr. Karim Galil: That's quite interesting. I didn't know that there's a code for catching fire on a surfboard. One of the questions we always ask is: is real world data and outcomes research a vitamin or a painkiller? What I mean by that question is: is it something good to have, or is it becoming a must-have for a pharma company or for a decision maker in healthcare?
Richard Gliklich, OM1: I think you already made the comment that real world data from one source is entirely different from data from another. I do think that the opportunities to leverage the natural experiment of the real world are unlimited. Like, literally unlimited. So I'd say it's more than a vitamin, but it's probably not quite a panacea; I think we're just beginning to tap it.
And I do think there is strategic value in this deep, clinically focused data, because we still want to add in social determinants of disease, information about providers, and information from other types of encounters, both within and outside the healthcare system. But I do believe that the opportunity is there to revolutionize what we're doing, both on the medical research side and the clinical care side.
I'm a true believer. And as a result, it's a strategic investment across these companies, and the pharma companies that are moving fastest have a huge advantage.
Dr. Karim Galil: So there's this really big debate, obviously, at every conference we go to: randomized controlled trials versus real world data, or more like data-driven trials. And obviously RCTs have been the gold standard, and the naysayers are more skeptical about the clinical validity of whatever you get out of real world data.
What's your take on that? Is it one or the other or is it both of them coming together? Where do you stand in this debate?
Richard Gliklich, OM1: Yeah, I don't see it so much as a debate. I think they're complementary sources of information. Real world data enables us to see what actually happens in the real world, in the large, actual, natural experiments that are occurring. It allows us to see what happens with drugs and devices being used in populations that are not generally recruited into clinical trials, in which middle-aged white males are the predominant group in the US. It enables us to look at combinations of drugs, and also to look at the real use patterns. Like, if I have a patient who's going to come in and see me for a trial visit every two weeks, they're sure as heck going to take their medication and fill out their forms if that's the requirement, but that's not what happens in the real world.
And so understanding those things is complementary to what we learn in a clinical trial, which is critical for really handling bias and knowing what works and what doesn't. So I believe strongly in randomized clinical trials, but I also believe very strongly in the importance of real world data and the need for that complementarity, because you need to know the extremes of the populations.
Who's getting it, who's not getting it, and how you can look at things like comparative effectiveness in the real world, which is very, very hard to do in drug studies. There are very few head-to-head drug studies being done. So to get to the patient choices that are really necessary out there, we need real world data, but I absolutely believe that randomized trials are critical for knowing what works as well.
Dr. Karim Galil: So this is more toward Phase IV as the role of real world data. But how do you see the role, if any, of real world data in things like Phase II or Phase III of clinical approval, like someone trying to seek an FDA approval for either a new use, like extended labeling, or even a new compound altogether?
Richard Gliklich, OM1: Yeah. So what we see with our clients is that many of them will look for us to generate real world data sets as they're getting their Phase II results, and they do that for a number of reasons. They want to compare what they're seeing in the Phase II to try to plan the Phase III.
They want to utilize it for protocol development. They want to look at it to understand what it will be like to recruit for the further Phase II or Phase III trials as they try to put them into place. And then, as we go toward Phase III, what we are seeing is that there are certain scenarios in which the FDA will accept real world data for either a new approval or a label expansion.
One is the new approval scenario for populations that may be rare, small, and difficult to define. The example of Ibrance getting an approval for men with breast cancer is a good example of the use of real world data for a label expansion to a new population, facilitated by real world data.
Another example would be creating external control arms that can actually be provided to augment the placebo arm, to get comparators against the active treatment arm within a trial. So those are all good uses. Another use for expanded label is when you have a natural experiment happening, meaning when you have a drug or device being used off label, but having good results.
And we have one scenario currently where we're providing data in exactly the situation where there is a reluctance to randomize because there's already a bias that's been developed among the clinicians who feel it would be unethical to randomize patients based on what they've already seen.
So there's certain scenarios where it becomes really smart to use real world data and certainly acceptable. But in any of those situations, the sponsor needs to engage in conversations with the FDA. They have to understand what their appetite is for real world data, how they'll evaluate it, what they want to see. Because if it's practical to do a randomized trial, that's generally going to be the preferred option for the FDA.
Dr. Karim Galil: Do you guys at OM1 help sponsors make the case to the FDA of "Hey, this is a study where real world data will be really good for it," or is your role more after they figure it out with the FDA, where you come in and execute on it?
Richard Gliklich, OM1: We're generally involved all along the way. We'll go with them to the agency to sort of explain what our role is and how we view the data and the quality of the data. There's often a lot of questions and good, smart questions from the FDA about how the data are being collected and processed and how you're ensuring that they're meeting appropriate quality standards and audit-ability, traceability, and so forth. So, we are partnered with them typically from the pre-submission all the way through.
Dr. Karim Galil: Today, the concept of real world data is obviously trendy; everyone is talking about it. But in 1998, that was not the case, and that's when you started Outcome. Can you walk me through the adoption curve from the 2000s? I mean, you started in outcomes research when that was not a sexy term, not something everyone was talking about, and today you have a very well established player at a time when there is more appetite and more adoption. Can you walk me through what's different between the early 2000s and now, coming into 2020?
Richard Gliklich, OM1: Yeah, absolutely. So if you can bear with me, I'll give you a lot longer history than you might want to hear. I did a specialized fellowship in outcomes research in the middle of medical school with a fellow named Sam Martin. I was at the University of Pennsylvania.
He was actually on the board, I think, of SmithKline back in those days as well. And he had been the chairman of medicine, I believe at Duke, and he told me that when he had been a chair of medicine years earlier, he always questioned how well patients were actually doing on treatment.
And so he told all of the staff that worked for him: whenever a patient refuses treatment, send them to me and I'll just follow them. And he did that. He said that we really don't know what works and what doesn't work, and that the only way to understand that is to follow patients in the real world.
So that got me on the path of trying to track patients in the real world. Initially that was through developing technology, internet-based technology, to track patients in registries, and the initial uptake was with medical and surgical specialty societies. So programs like the American Heart Association's Get With The Guidelines program, work with the American College of Surgeons, and so forth.
And there wasn't a huge amount of interest outside of that. But it enabled us to build a global network to collect data. What happened was, when Vioxx was voluntarily withdrawn by Merck in 2004, that immediately caused the entire industry to need real world data on what was happening in the real world, for a number of reasons.
And that's what opened the market; everything changed overnight, frankly. So, better lucky than good, as they say. That's how I got into it.
Dr. Karim Galil: So that specific event was a turning point for the industry. In the early 2000s there wasn't really wide adoption of EMRs. How were you even able to track patients in a real-world setting?
Richard Gliklich, OM1: Yeah. So back then we had to set up what's now called electronic data capture for registries and post-marketing surveillance. Similar to the way EDC is used today for clinical trials, we had created our own system back then to do it in the real world. The EMRs were not prevalent.
We actually created an EMR system at one point. We had a few thousand users of our own EMR system, to try to enable data collection to happen more fluidly, but EMRs weren't supported significantly until 2007, 2008. So that part of the business didn't do well, and we actually abandoned it, foolishly. Then from 2008, over the next five years, everything started moving toward EMRs.
So now clinicians don't want to double-enter information, both into the EMR and into somebody's EDC system. So we must work from the EMR if we're going to maintain the research infrastructure, I believe.
Dr. Karim Galil: That comes with complexity, because, as you said, the nuances are mostly captured in a very unstructured way. It's not something that a computer can analyze; it's not something that you can plug into SAS or SQL. What are the options today? How can someone get around that?
Richard Gliklich, OM1: Meaning getting data from the electronic medical record?
Dr. Karim Galil: Like capturing the nuance out of a non-computer readable format. If you have, say a study that has like 5,000 patients, that translates into a few thousands of PDFs, a few thousands of doctor notes, how can someone lean relevant information out of that?
Richard Gliklich, OM1: Yeah. So there are really just a few ways it's done today. One way is you put nurses or other clinical abstractors on it to review the information, infer from it, and fill out a case report form in an electronic data capture system. That has some utility, and you can have more than one abstractor do it and measure their inter-rater reliability. Another way is to force parallel entry, which was typically the more standard approach, but sites are generally going to revolt against that over time; you're collecting EDC by having people re-enter information.
A third is templated EMRs, where a certain amount of structure is put into an EMR to capture structured information, but clinicians still don't like to enter data that way. They don't like to click; they'd rather talk. And the fourth is collecting the data and using language processing to pull information from the unstructured text and turn it into structured variables.
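[Editor's note: as a rough illustration of that fourth approach, here is a minimal rule-based sketch that turns note text into structured variables. Real clinical NLP systems are far more sophisticated than pattern matching; every pattern, field name, and the sample note below are assumptions for demonstration only.]

```python
import re

# Hypothetical example note; not real patient data.
NOTE = (
    "62 y/o male with stage IV non-small cell lung cancer. "
    "ECOG performance status 1. Former smoker, quit 2015."
)

# Illustrative patterns mapping target variables to regexes.
PATTERNS = {
    "stage": re.compile(r"stage\s+(I{1,3}V?|IV)", re.IGNORECASE),
    "histology": re.compile(
        r"non-small cell lung cancer|small cell lung cancer", re.IGNORECASE
    ),
    "ecog": re.compile(r"ECOG (?:performance status\s*)?(\d)", re.IGNORECASE),
}

def extract_variables(note: str) -> dict:
    """Map each target variable to its first matching span, or None."""
    out = {}
    for field, pat in PATTERNS.items():
        m = pat.search(note)
        # Use the capture group when the pattern defines one.
        out[field] = (m.group(1) if pat.groups else m.group(0)) if m else None
    return out

print(extract_variables(NOTE))
```

The appeal of the approach is that nothing has to be re-entered: the structured variables come straight from the note the clinician already dictated.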
Dr. Karim Galil: One of the biggest questions for me about outcome research is whether you're using AI or human abstraction or whatever tool to extract data, you have to approach the problem knowing what data to capture to begin with. What endpoints, or what data parameters are of interest for that.
Yet I find it hard to reconcile that with the fact that we don't know what we don't know, right? We don't know what we should be capturing to begin with. How can someone reconcile these two concepts: approaching an experiment with curiosity, and at the same time being bounded by the idea that you have to abstract, that you have to pick some data points to extract and disregard others?
Richard Gliklich, OM1: Yeah, that's a great question, because there are some subtleties. I do think that you need to know intentionally what you're trying to capture for the purposes of a study, meaning when you look at information and try to extract it, whether by curation or by AI. However, using AI directly allows a tremendous opportunity for hypothesis generation, with more unsupervised techniques.
And we have found that to be extremely valuable in finding correlations and things one might not have otherwise considered. But just like any other research, there are studies that are hypothesis generating, and then you want to move toward a hypothesis-testing type of study to confirm it.
Same thing when we're looking at unstructured information: if we're trying to learn something about the data, we'll let the data tell us what it can tell us, but then study the results in a more hypothesis-testing way once the hypothesis generation is done.
So both can coexist, but they're different mindsets from the start of those evaluations.
Dr. Karim Galil: You have a company that generates outcomes research, but you're also a physician. Where is the gap between the bedside of the patient and where the industry is? In other words, do you see any time soon a world where a physician prescribes a treatment plan based only on the aggregate wisdom of all the outcomes research out there? Or is it still going to be more subjective and dependent on their own experience?
Richard Gliklich, OM1: I think it's changing very rapidly. The biggest barrier is identifying when, say, FDA approval is needed for something being software as a medical device. We have programs now that will help assist a clinician or a patient with decision making. I won't name the institution or the subject because it hasn't been published yet, but we just had an academic institution complete a randomized trial using the output from a set of models that provide personalized predictions in the clinical setting to assist informed consent, and it not only improved the process of that consultation but actually improved patients' outcomes in a randomized trial. So I think the opportunity to bring this to the bedside is tremendous.
There are some barriers and regulatory questions that need to be addressed and so forth. But I think as soon as we hit the tipping point, which may be in three to five years, most of our clinical decisions are going to be assisted with personalized information based on big data, real world data, and AI.
Dr. Karim Galil: Wow, three to five years. I expected something like 10 to 30 years. So you're seeing that it is that fast? The movement is that accelerated right now?
Richard Gliklich, OM1: It's not yet, but we're going to hit the tipping point when some of these studies, like this one, start demonstrating to people, within three to five years, that the standard of care can be changed by personalization. Right now, I mean, your team is heavily involved in oncology, and in oncology personalization has changed everything in the last 15 years. But outside of oncology, in areas that we work in tremendously, like immunology, rheumatology, and cardiovascular, you can't separate the DNA of the disease from the DNA of the patient.
So it's a tougher math problem, and you need more data. But ultimately the personalization information we'll get from those data will be equal to what we're currently able to do in cancer. And it will change those diseases; it will change their treatment; it will change clinical interactions.
And I think that it's happening at quantum speed.
Dr. Karim Galil: It takes around 10 to 20 years for a physician to build enough experience. I think real world data and outcomes research is going to get there way faster, because now, with the click of a button, you can get access to what happened worldwide with patients who had the same phenotypic or even genomic profile as the patient sitting in front of you.
But the bigger question is, a physician has maybe 10 minutes with a patient. How fast is this data going to be delivered? Is it going to be delivered in the sense that he puts the data in the EMR and gets a recommendation? Or is it more like what we see in oncology, where you gather every Monday, you have the board, and then you start discussing your patients and doing this kind of matchmaking?
Richard Gliklich, OM1: I think decision-making only needs to be accelerated when there's a life-and-death reason for it to be accelerated. What we see in the clinic, when AI-based decision making is brought to the clinic, is that they do exactly what you just said. In the spine work that we do, which is another area we're in, they'll meet as a council and review each patient and say this one or that one has a particular reason that we need to do something different. And it needs to work within the clinical workflow. That's why it'll take a few years to hit the tipping point.
Not because there is not already the ability to generate some incredibly valuable predictive information.
Dr. Karim Galil: I'm a big fan of OM1. Every time I talk to you, you guys are touching all aspects of the business. I didn't even know that you're also in the business of helping physicians; that's very, very intriguing. So what is 2025 looking like for clinical research? In the next five years, how is it going to look?
Richard Gliklich, OM1: A lot more modeling and real world data. A lot more automation, meaning making research much more of a technological effort than a human effort. I think that's going to be the expectation of centers. COVID is proving that you don't always have humans available to do the research work, for anything other than COVID research. And the automation we have in place to bring data in from centers and to process it and so forth enables us to continue to do work even though the clinical research infrastructures have slowed down. So that's a good lesson to keep in mind. I think there will be more acceptance of real world data and real-world research, and its importance, by the FDA and industry in appropriate places. But also, as we just talked about, more research on what we call implementation science: how do we bring the data from bench to bedside as quickly as possible?
Dr. Karim Galil: If you can Zoom call any living person today, who would it be and why?
Richard Gliklich, OM1: Well, one question I would want to ask, and it has nothing really to do with real world data: I'd probably want to Zoom the Dalai Lama and ask him if we've just taken a side path off of our karma, or if we're going to get back onto some good karma, because we need some good karma as a planet right now.
Dr. Karim Galil: It's crazy. What's happening out there. Have you seen what happened in Lebanon today in Beirut?
Richard Gliklich, OM1: I saw the videos, unbelievably horrible.
Dr. Karim Galil: It's like a nuclear bomb or something. It's crazy. I couldn't even believe it. Yeah. I hope the best for the world. This 2020 has been a very rough year, obviously for everyone.
Hey Richard, thank you so much; this has been incredible. Thank you for your take on the different aspects of outcomes research and the history of outcomes research, and all the good luck to OM1. I root for you guys. Thank you so much.
Richard Gliklich, OM1: You too. And thank you, we’re excited to be partners with you.
Dr. Karim Galil: I appreciate it. Thank you. Bye bye.
We’ve changed our look. Our goal remains the same: make medicine objective. The new site highlights the way our proprietary AI enables organizations to achieve quality and scale when structuring unstructured data. It comes down to supercharging your clinical abstraction. We’ve validated that our human-in-the-loop abstraction approach can support a machine that understands medical context like a physician. In our own experiments, the number of variables needing correction decreased by 40%. High quality abstraction = high quality data for cohort selection, real-world evidence, and registries.
The customer, a key player in the genomics space, had a strategic initiative to build a clinico-genomic database to support their life sciences customers.
One clinical trial organization was using manual chart review and was looking to reduce the time it takes to find eligible patients.
From the Desk of the AI Team
Organizations that use patient data for internal or external research need to take steps to prevent the exposure of PHI to those who are not authorized to view it. They do this by redacting specific categories of identifiers from every patient document. Once the identifiers are masked, the risk profile of these datasets is significantly reduced. But how do you ensure that redaction engines are working to the highest accuracy?
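One simple way to check a redaction engine is to run it over text containing known identifiers and measure how many of them survive. The sketch below is a minimal illustration, not Mendel's engine; the two identifier categories, patterns, and the sample note are assumptions for demonstration only.

```python
import re

# Illustrative identifier categories and patterns (assumptions only).
PHI_PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,8}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with its category label."""
    for label, pat in PHI_PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

def redaction_recall(text: str, gold_identifiers: list) -> float:
    """Fraction of hand-labeled identifiers no longer present after redaction."""
    redacted = redact(text)
    caught = sum(1 for ident in gold_identifiers if ident not in redacted)
    return caught / len(gold_identifiers)

note = "Seen 03/14/2021, MRN: 1234567, follow up in clinic."
print(redact(note))
print(redaction_recall(note, ["03/14/2021", "1234567"]))
```

In practice, accuracy is measured this way against a hand-annotated gold set per identifier category, since a single missed identifier can change the risk profile of an entire dataset.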
The Mendel team is still buzzing from our week-long retreat in Cairo. The theme of the retreat was “coming together,” and it was the first time the American and other remote employees were united with their Egyptian counterparts. Although there were many adventures (missing flights, seeing the pyramids, haggling at Khan el-Khalili), the highlight of the trip was collaborating as one global organization.
Competence via comprehension
Artificial intelligence (AI) is playing an increasingly important role in the healthcare industry. But to fully leverage the potential of AI, it must be equipped with clinical reasoning skills: the ability to truly comprehend clinical data, or in other words, to read it as a doctor would. When it comes to data processing tools, only a tool capable of clinical reasoning can effectively process unstructured clinical data.
Sailu Challapalli, our Chief Product Officer, spoke at a recent Harvard Business School Healthcare panel. The event brought together different healthcare and AI experts to discuss large language models and their impact.
Manually abstracting patient data at scale is a herculean task for humans alone. It is slow, expensive, and difficult, and it requires extreme precision and accuracy. Organizations have to choose between breadth and depth when it comes to making data useful for decision making. Because of these challenges, the Mendel team created Carbon, an easy-to-use workspace that allows clinical abstraction teams to efficiently curate high quality clinical datasets at scale. The foundation of Carbon is Mendel’s AI: Carbon pulls directly from Mendel’s AI platform to give abstractors a head start in identifying relevant data elements within a patient’s chart.
Within the real world evidence space, the generally accepted process for creating a regulatory grade dataset is to have two human abstractors work with the same set of documents and bring in a third reviewer to adjudicate the differences. These datasets also serve a second purpose: as a reference standard against which the performance of human abstractors can be measured. Although this remains the industry standard, it is expensive, time consuming, and difficult to scale.
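The dual-abstraction workflow can be made concrete with a small sketch: two abstractors code the same charts, agreement is quantified with Cohen's kappa (a standard chance-corrected agreement statistic), and disagreements are queued for the third adjudicator. The abstractor labels below are hypothetical.

```python
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement between two raters' label sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical abstractions of the same six charts for one variable.
abstractor_1 = ["pos", "neg", "neg", "pos", "neg", "neg"]
abstractor_2 = ["pos", "neg", "pos", "pos", "neg", "neg"]

kappa = cohens_kappa(abstractor_1, abstractor_2)
# Charts where the two abstractors disagree go to the adjudicator.
to_adjudicate = [
    i for i, (x, y) in enumerate(zip(abstractor_1, abstractor_2)) if x != y
]
print(round(kappa, 3), to_adjudicate)
```

The expense the passage describes is visible even here: every chart is read at least twice, and each disagreement costs a third read.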
From the Desk of the AI Team
AI projects have created tangible results for a wide range of industries. Despite the innovation, it is important to remember that AI is not a magic wand that will solve every problem in every industry with a single wave.
Before embarking on any new endeavor or enterprise, certain questions come to mind: How are we going to handle this? Does our team have the expertise, bandwidth, resources, and time to handle this undertaking on our own? When it comes to finding a scalable way to structure your unstructured healthcare data, the answers to these questions will impact when, and whether, you deliver a top-tier product for your clients.
Human abstraction has long been considered the gold standard for extracting high quality information from EHR data. With the rise of NLP and machine learning, how should we evaluate these new technologies and are human abstractors still the correct comparison?
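One common way to make that comparison concrete is to score an extractor against an adjudicated human reference standard, treating each extracted data element as a prediction and computing precision, recall, and F1. The data elements below (patient, variable, value triples) are hypothetical.

```python
def prf1(predicted: set, reference: set):
    """Precision, recall, and F1 of predicted elements vs. a reference standard."""
    tp = len(predicted & reference)  # elements the extractor got exactly right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if precision + recall
        else 0.0
    )
    return precision, recall, f1

# Hypothetical adjudicated reference standard and extractor output.
reference = {("pt1", "stage", "IV"), ("pt1", "ecog", "1"), ("pt2", "stage", "II")}
predicted = {("pt1", "stage", "IV"), ("pt2", "stage", "III")}

print(prf1(predicted, reference))
```

The same scoring applies equally to an NLP system or a human abstractor, which is what makes the question of the "correct comparison" tractable: both can be measured against the same adjudicated reference.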