Could AI Plus EHR Equal Better Healthcare?


By Paul Nicolaus

August 8, 2018 | In some respects, an electronic health record (EHR) is a goldmine of information. Wrapped up in this digital version of a patient’s record is everything from medical history and medications to diagnoses and treatment plans.

The big hope, of course, is that new and improved technologies can harness this electronic information to improve upon healthcare as we know it and maybe even cut costs along the way. And yet, there are a number of reasons why the valuable data stored within can remain buried.

“To clinicians, I think EHR data presents a problem because it is packaged in a way that is not easily accessible,” explained Ken Carson, senior medical director at Flatiron Health—a healthcare technology and services company based in New York City.

Most clinicians don’t have access to the technical support that allows them to unlock the potential of their EHR. “I think this was a big promise,” he said, “and everybody thought it would be very easy, but in truth, it’s quite difficult.”

Part of the underlying challenge relates to the way in which EHRs get customized. The saying goes, “when you’ve seen one EHR, you’ve seen one EHR,” Carson joked. Just because you’re on Epic, for example, that doesn’t mean you’ve got the same Epic build as the hospital down the street. “As a result, the interoperability is not always what people had expected.”

From a research perspective, there are similar issues. Gaining access to the data is challenging because the right engineering support is needed to pull out the critical information without also pulling out material that would obscure the understanding of the data.

To take it one step further, one of the difficulties with EHR data is that certain elements are structured and others are not. Structured data fields capture elements that are inherently a yes-or-no answer or a number. Is the patient a smoker, for example? What is the patient’s tumor marker value? A diagnostic code is another example of a structured data field.


“But in oncology, which is where Flatiron works, we’ve found that much of the value is actually in the unstructured data,” he said, which can be found in clinician notes, radiology reports, discharge summaries, and pathology reports.

For a patient with cancer, it is possible to capture in the structured fields that a particular chemotherapy was given, but what you really want to know is the genetic makeup of that cancer. Does the patient have particular mutations of interest? When did the disease progress on a CT scan? Was the patient tested for X, Y, or Z?

“And all of those are still wrapped up in the unstructured notes,” Carson added, “which makes the electronic health record, for lack of a better term, like a fancy electronic filing cabinet.”
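To make the structured-versus-unstructured distinction concrete, here is a minimal sketch in Python. The record, field names, and note text are all invented for illustration, not drawn from any real EHR:

```python
# Hypothetical sketch of the same encounter seen two ways.
# Field names and note text are illustrative, not from any real EHR.

# Structured fields: discrete answers a database can query directly.
structured_record = {
    "patient_id": "12345",
    "smoker": False,                # inherently a yes/no question
    "icd10_diagnosis": "C34.90",    # a diagnostic code
    "chemotherapy_given": "carboplatin",
}

# Unstructured text: free prose that holds much of the clinical story.
unstructured_note = """
Patient seen in follow-up. CT chest shows interval progression of the
right upper lobe mass. Molecular testing returned an EGFR exon 19
deletion; will discuss switching therapy at the next visit.
"""

# The mutation and the progression finding live only in the note; no
# simple query over the structured fields can surface them.
print(any("EGFR" in str(v) for v in structured_record.values()))  # False
print("EGFR" in unstructured_note)                                # True
```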

Emerging Approaches

How, then, can that cabinet be unlocked and the information pulled out? To date, Flatiron Health has used a part-technology, part-human approach. The company’s software organizes the unstructured notes so that a human being can go in and pull out the data elements of interest.

Once there is a high-quality abstracted data set, it can be turned into a training data set. From there, it is possible to start experimenting with more advanced techniques such as machine learning. The company is using these advanced technologies more and more, according to Carson, and the intent is to continue moving in that direction.
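Flatiron has not published the details of its pipeline, but the general pattern Carson describes, in which human-abstracted labels become training data for a model, can be sketched in a few lines. This toy version assumes scikit-learn is installed and uses made-up note snippets and labels:

```python
# Minimal sketch of the general pattern: human-abstracted labels become
# a training set for a text classifier. Data and labels are invented;
# this is not Flatiron's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Note snippets a human abstractor has already labeled for progression.
notes = [
    "CT shows interval progression of the right upper lobe mass",
    "no evidence of disease progression on today's imaging",
    "new liver lesions concerning for progressive disease",
    "stable disease, continue current regimen",
]
labels = [1, 0, 1, 0]  # 1 = progression documented, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# The trained model can then suggest labels for new, unseen notes,
# with humans reviewing the output rather than reading every chart.
print(model.predict(["imaging demonstrates progression of disease"]))
```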

Cambridge, England-based Linguamatics is another company working to pull information from unstructured text-based EHRs and transform it into actionable insights. Natural language processing (NLP) is a subcomponent of artificial intelligence (AI), explained Simon Beaulah, senior director of healthcare, and it deals with the understanding of language and the extraction of concepts from language to turn those into discrete data elements.

After information is extracted from the text and turned into structured elements, it then feeds into the machine learning algorithms to enable users to better predict patient outcomes. “When we’re thinking about the use of natural language processing in association with electronic health records, it is very much about finding and flagging high-risk individuals,” he said.

A pulmonary nodule in a radiology report is often an incidental finding, for instance, tangential to the primary analysis. If someone is involved in a traffic accident and arrives at an emergency room, he or she may have X-rays that reveal broken ribs. The emergency room’s focus will be on that acute care, making sure the ribs are appropriately treated. If the radiologist notices a nodule or lesion on the lung, though, he or she can flag it in the report and make sure it is evident to the clinician.

“What we’re seeing is that a number of health systems that we’ve worked with are looking at using natural language processing to monitor every radiology report,” Beaulah said, “to look for these potential early danger signs and to route those specific reports over to a care coordinator.”
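Production clinical NLP handles negation, synonyms, and context, but a toy version of the monitor-and-route idea can be sketched with simple keyword rules. Everything here, from the patterns to the report text, is invented for illustration:

```python
# Toy version of the monitor-and-route idea. Real clinical NLP handles
# negation, synonyms, and context; this keyword scan is only a sketch.
import re

# Invented phrases that might signal an incidental pulmonary finding.
DANGER_PATTERNS = [
    r"pulmonary nodule",
    r"lung (nodule|lesion|mass)",
]

def flag_report(report_text: str) -> bool:
    """Return True if the report mentions a potential early danger sign."""
    return any(re.search(p, report_text, re.IGNORECASE)
               for p in DANGER_PATTERNS)

report = (
    "Trauma series: acute fractures of right ribs 4-6. "
    "Incidental note is made of an 8 mm pulmonary nodule "
    "in the left lower lobe."
)

if flag_report(report):
    # In a deployed system this would route the report to a care
    # coordinator's queue rather than just print a message.
    print("Flagged for care-coordinator follow-up")
```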

NLP technologies are essentially supporting human review and helping identify high-risk individuals, but a human still determines whether follow-up is needed. “We talk about artificial intelligence,” Beaulah added. “I’d like to think of this as intelligence augmentation.”

At the Massachusetts General Hospital (MGH) and Brigham and Women’s Hospital (BWH) Center for Clinical Data Science (CCDS), work revolves around the creation of AI models. Recently, for example, the Center came up with a model that automatically detects areas where there may be pinched nerves in patients with back pain who have had an MRI or a CT scan.

“One thing that we increasingly are working on is how to integrate those solutions into the healthcare system in an easy way,” Executive Director Mark Michalski said. “While many of these models are very effective, we’re still quite early in their development and application, so validating the models and making sure that they’re safe is a really critical part of what we do.”


Founded in 2016, the CCDS has grown to include nearly three dozen clinicians, researchers, data scientists, and product development and translational experts all working to advance medicine using AI. It began in the department of radiology at Mass General, and the initial focus was on the application of machine learning to medical images.

Since its inception, the Center has collaborated with other academic partners, laboratories, and provider networks. It has also worked with companies like NVIDIA, and last year Partners HealthCare and GE Healthcare announced a 10-year collaboration to develop, validate, and integrate deep learning technology across the entire continuum of care. Along the way, the focus on diagnostics has remained even as the CCDS has begun to branch out into other areas like clinical informatics, operations, and population health.

One task of interest is drawing a line around a tumor. With the help of machine learning, this might go faster because the line could be drawn in a semi-automated way. It also has the potential to become more quantitative, because the line could be drawn the same way every time. “There’s really power in that,” Michalski said.
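The CCDS models are deep networks trained on real scans, but the reproducibility point can be illustrated with something far simpler: a deterministic threshold-and-label segmentation of a synthetic image. This sketch assumes NumPy and SciPy and is not how clinical segmentation is actually done:

```python
# Toy illustration of reproducible, semi-automated boundary drawing:
# threshold a synthetic "scan" and keep the largest connected region.
# Real tumor segmentation uses trained deep networks on actual images.
import numpy as np
from scipy import ndimage

# Synthetic 2D image: dim background plus one bright blob as the "tumor".
rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, size=(64, 64))
image[20:35, 25:40] += 0.6

# Deterministic rule: same image in, same contour out, every time.
mask = image > 0.5
labeled, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labeled, range(1, n + 1))
tumor = labeled == (np.argmax(sizes) + 1)

print(f"segmented area: {tumor.sum()} pixels")  # a quantitative readout
```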

He thinks there may be aspects within those images that are hard or even impossible for humans to see, and machine learning might be able to do better. There is also the possibility of combining image information with pathology, demographic, EHR, and other forms of data.

When that happens, he said there could be an opportunity to get to precision health “in a way that we’ve kind of dreamt for a long time.”

Lingering Challenges

The road leading to that dream is filled with obstructions, however. “Data access is a big challenge,” Michalski said, even for those who work at a healthcare system like he does. Access to clinicians and expert users who can validate AI models is another significant hurdle. “We have to rely on clinical experts to understand and test these models,” he said, “and that’s not a trivial thing. That takes a lot of effort.”

From Beaulah’s perspective, interoperability continues to raise issues for everyone working in this space. There have been steps in the right direction, he explained, like the Fast Healthcare Interoperability Resources (FHIR) interfaces and Consolidated-Clinical Document Architecture (C-CDA) export formats, but it’s still difficult to move data from the EHR to external systems.
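FHIR, at least, exposes EHR data over a plain REST interface. Here is a minimal sketch of a FHIR search in Python using the requests library, pointed at the public HAPI FHIR test server rather than a production EHR:

```python
# Minimal sketch of a FHIR search: GET [base]/Patient?_count=1.
# Points at the public HAPI FHIR test server; a production EHR would
# sit behind authentication (e.g., SMART on FHIR). Assumes `requests`.
import requests

BASE = "http://hapi.fhir.org/baseR4"  # public test server, not a real EHR

resp = requests.get(
    f"{BASE}/Patient",
    params={"_count": 1},
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
bundle = resp.json()

# Search results come back as a FHIR Bundle resource containing entries.
print(bundle["resourceType"])  # "Bundle"
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient["resourceType"], patient.get("id"))
```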

Current applications of technology, whether NLP or machine learning, have been limited because they can only get you to a certain quality level, Carson pointed out. That can be helpful for gaining a general sense of certain concepts or ideas. “But when you’re talking about using real-world data to generate real-world evidence that is supporting regulatory applications and it’s being used by the FDA, your quality bar has to be so much higher,” he said. At that level, 85% or 90% specificity is not good enough and could lead to flawed conclusions.
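A quick back-of-the-envelope calculation, with purely illustrative numbers, shows why: at 90% sensitivity and specificity, screening a large record set for a relatively rare finding leaves most of the flags wrong.

```python
# Illustrative numbers only: why 90% specificity is not good enough.
# Suppose 10,000 records are screened for a finding present in 5% of them.
n, prevalence, sensitivity, specificity = 10_000, 0.05, 0.90, 0.90

true_pos  = n * prevalence * sensitivity              # 450 correctly flagged
false_pos = n * (1 - prevalence) * (1 - specificity)  # 950 wrongly flagged

ppv = true_pos / (true_pos + false_pos)
print(f"Only {ppv:.0%} of flagged records are truly positive.")  # ~32%
```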

“In the future I think we will be able to use some of those techniques more frequently,” Carson said, “but part of what was lacking historically has just been the high-quality data set that can train the algorithms.”

Using advanced technologies for treatment decision support or medical diagnostics is what Carson views as the ultimate objective. But what is it going to take to get to the point where the last patient’s experience informs the next patient’s?

“I think we have to be very careful that we don’t misinterpret data as evidence,” he explained. “I think that just because your last patient had a certain response to a therapy or a lack of response doesn’t by itself necessarily mean the next patient will.”

It is important to understand that many different variables influence that exposure-outcome relationship, and to process the data in a way that takes those factors into account before it is fed back into a decision support or diagnostic system.
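A tiny simulation, with entirely made-up numbers, illustrates the trap Carson is describing: when sicker patients are steered toward one therapy, that therapy can look worse in raw outcome data even if it makes no difference at all.

```python
# Illustrative simulation: why raw outcomes are data, not evidence.
# Sicker patients get therapy B more often, so B looks worse even
# though outcomes here depend only on severity, never on therapy.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    severe = random.random() < 0.5
    therapy = "B" if random.random() < (0.8 if severe else 0.2) else "A"
    recovered = random.random() < (0.3 if severe else 0.9)
    rows.append((therapy, severe, recovered))

for t in ("A", "B"):
    got = [r for r in rows if r[0] == t]
    rate = sum(r[2] for r in got) / len(got)
    print(f"therapy {t}: raw recovery rate {rate:.0%}")
# Therapy B appears far worse, yet therapy never affected outcomes;
# severity (a confounder) drives both treatment choice and recovery.
```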

“Doing this right and doing it safely, I think, is the number one thing that has to be done throughout healthcare if this is going to be adopted,” Carson added. “If there are egregious errors or mistakes, it could very well tarnish this whole endeavor.”

Paul Nicolaus is a freelance writer specializing in science, nature, and health. Learn more at www.nicolauswriting.com.