
AI Used to Scan Images and Detect COVID-19

By Paul Nicolaus

July 14, 2020 | Several groups across the globe are pursuing the combination of chest imaging and artificial intelligence (AI) to improve workflow efficiency, support clinical decision-making, and slow the spread of COVID-19.

A reverse transcriptase polymerase chain reaction (RT-PCR) test is commonly used for diagnosis, but results can take hours or days. That lag time, along with testing supply shortages in some areas, sparked interest in the use of imaging as yet another tool.

While imaging is not currently a standard method for diagnosing COVID-19 in the United States, it has been used for other purposes, like confirming diagnoses made by other means, determining severity, guiding treatment, and monitoring patient progress.

And because the pandemic has revved up demand for radiologic services, some say the automation of chest scan analysis could help reduce the burden placed on human resources and identify patients with early-stage disease, among other possibilities.

Imaging + AI for COVID-19 Diagnosis

NVIDIA, together with the National Institutes of Health (NIH), has developed two AI models for detecting COVID-19 in CT images. The models, available in the latest release of Clara Imaging, can help researchers study the severity of COVID-19 and create new tools to better understand, measure, and detect infections.

There’s an old adage, explained Mona Flores, global head of medical AI at NVIDIA: garbage in, garbage out. “A model is as strong as the data that you put into it,” she told Diagnostics World, “and one of the advantages of this NIH model is that it was trained on a very variable data set from many different locations.”

The AI models are not intended to be used clinically. Instead, they are meant to be a building block for researchers and scientists who may wish to fine-tune them, add them to other models, and “build something that is useful for them or for the world,” she said.

The Imaging COVID-19 AI initiative is working to create a deep learning model for the automated detection and classification of COVID-19 on CT scans and for the assessment of disease severity in the lungs of infected patients.

Hospitals and institutions across Europe are collaborating on the project, which is supported by the European Society of Medical Imaging Informatics and coordinated by the Netherlands Cancer Institute. The endeavor includes companies such as Microsoft and Google, which have contributed GPU and storage resources, and NVIDIA, which has contributed DGX-2 superservers.

Mount Sinai researchers, meanwhile, have come up with AI algorithms that integrate chest CT findings with clinical information such as age, temperature, symptoms, and exposure history to diagnose patients with COVID-19. Their study, published May 19 in Nature Medicine (doi: 10.1038/s41591-020-0931-3), includes the scans of over 900 patients received from collaborators in China.

The AI model was found to be as accurate as an experienced radiologist at diagnosing the disease, and even better in scenarios where COVID-19-positive patients presented with normal-appearing CT scans. In these instances, the model recognized 17 of 25 (68%) COVID-19-positive cases, whereas a senior thoracic radiologist and a thoracic radiology fellow interpreted all of them as negative.
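In broad strokes, a model like this pairs an image encoder with a small network for the clinical variables and learns from both inputs at once. The following is a minimal sketch, in PyTorch, of one way such image-plus-clinical fusion can be wired together; the backbone, feature sizes, and clinical inputs are illustrative assumptions, not the Mount Sinai team's published architecture.

```python
# Minimal sketch of image + clinical-data fusion for COVID-19 classification.
# Illustrative only: backbone, feature sizes, and inputs are assumptions,
# not the Mount Sinai model described in the Nature Medicine paper.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageClinicalFusion(nn.Module):
    def __init__(self, num_clinical_features=4):
        super().__init__()
        # Image branch: a ResNet-18 whose final layer outputs a 64-d embedding
        # (ImageNet- or CT-pretrained weights could be loaded here)
        backbone = models.resnet18()
        backbone.fc = nn.Linear(backbone.fc.in_features, 64)
        self.image_branch = backbone
        # Clinical branch: e.g., age, temperature, symptom score, exposure history
        self.clinical_branch = nn.Sequential(
            nn.Linear(num_clinical_features, 16),
            nn.ReLU(),
        )
        # Joint head combines both embeddings into a single COVID-19 probability
        self.classifier = nn.Sequential(
            nn.Linear(64 + 16, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, image, clinical):
        img_feat = self.image_branch(image)           # (batch, 64)
        clin_feat = self.clinical_branch(clinical)    # (batch, 16)
        fused = torch.cat([img_feat, clin_feat], dim=1)
        return torch.sigmoid(self.classifier(fused))  # probability of COVID-19

# Example: one CT slice (as a 3-channel image) plus four clinical variables
model = ImageClinicalFusion(num_clinical_features=4).eval()
scan = torch.randn(1, 3, 224, 224)                 # placeholder image tensor
clinical = torch.tensor([[63.0, 38.2, 1.0, 1.0]])  # age, temp, symptoms, exposure
with torch.no_grad():
    print(model(scan, clinical))
```

Whatever the exact architecture, the design point is the one Fayad describes below: imaging features and clinical variables are combined before the final decision, so the model can lean on whichever signal is more informative for a given patient.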

The tool could be useful in scenarios where material or human resources are in short supply, whether that means a scarcity of the supplies needed to perform RT-PCR testing or a lack of radiologists, Zahi Fayad, director of the Biomedical Engineering and Imaging Institute and professor of Radiology, Medicine, and Cardiology at Mount Sinai, told Diagnostics World. “This is really a way of integrating the best data that you have to come up with a diagnosis,” he said.

The study does have limitations, however, such as the relatively small sample size and bias toward patients with COVID-19 in the training data, which could limit the current AI model’s ability to distinguish COVID-19 from other causes of respiratory failure.

Fayad also mentioned the reliance on CT imaging as a possible limitation of this study. Although some countries—such as Italy, China, and the Netherlands—decided to move forward with the deployment of CT, that has not been the case in many parts of the world, he explained. In the United States, for instance, chest x-rays (CXR) have been more commonly used as a first line of attack due to workflow considerations like accessibility, portability, and cleaning time.

Some hospitals and practices, on the other hand, have made use of both. “That’s what we’ve done at Mount Sinai,” he said. “We’ve supplemented the chest X-rays with CT scanning.” The researchers are now evaluating the AI model tested in their recently published paper. “And we also have a very nice way at Mount Sinai to deploy these models immediately in our hospitals,” he added.

A Lawrence Berkeley National Laboratory data scientist is exploring whether image recognition algorithms and a data analysis pipeline can help distinguish COVID-19 abnormalities in CT scans and CXRs from other respiratory illnesses.

Before the pandemic hit, Daniela Ushizima was working with researchers at the University of California, San Francisco, to develop algorithms that can search CT scans to detect cancerous tumors. She hopes to leverage these prior efforts to classify COVID-19 lesions in CT scans and CXRs, she said in a statement.

The research team is working to collect CXR data into a central database. Relevant information will be stored at computing facilities that are part of the COVID-19 HPC Consortium, and researchers will be able to use it to test their own image recognition algorithms.

Other groups have focused on CXR in particular.

South Korea-based Lunit, which is working in partnership with French teleradiology firm Vizyon, announced that its AI solution for CXR analysis is being used and tested in over 10 countries. And GE Healthcare, in partnership with Lunit, launched a new AI suite to detect CXR abnormalities, including pneumonia caused by COVID-19.

Elsewhere, researchers at Simon Fraser University in Canada are developing an AI tool in collaboration with Providence Health Care intended to speed diagnosis by trimming the time healthcare professionals spend distinguishing between COVID-19 pneumonia and non-COVID-19 cases. The tool allows clinicians to enter patients’ CXR images into a computer, run a bio-image detection analysis, and find positive pneumonia cases consistent with COVID-19.

Although it is not a standalone clinical diagnosis solution, it can be used to help confirm clinicians’ hunches alongside other tools, like CT scans. The AI system can also help less experienced doctors examine a data set and make a quick diagnosis before a senior doctor is able to step in. A beta version has been uploaded to the United Nations Global Platform, and following approval, the intent is to make it freely available with U.N. support.

Still another example is the work of researchers at the University of Waterloo in Canada and startup company DarwinAI, which introduced COVID-Net, an AI system designed to detect COVID-19 cases from CXR images to improve screening.

In addition to the deep-learning AI software, the researchers have made a dataset and scientific paper on their work publicly available on GitHub. The COVID-Net models, which are at the research stage and not yet meant for direct clinical diagnosis, are intended to be used as reference models that can be built upon and enhanced as new data becomes available.
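As a rough illustration of what building on a reference model can look like, the transfer-learning sketch below freezes a pretrained backbone and retrains only a new classification head on newly collected CXR images. It is a generic pattern sketched in PyTorch; the backbone, class labels, and data loader are hypothetical stand-ins rather than the COVID-Net repository's actual models or training code.

```python
# Generic transfer-learning sketch: adapt a pretrained image model to a
# three-class CXR screening task (normal / non-COVID pneumonia / COVID-19).
# Hypothetical stand-in, not the COVID-Net repository's actual code.
import torch
import torch.nn as nn
import torchvision.models as models

# A DenseNet-121 backbone stands in for a pretrained reference CXR model
model = models.densenet121()

# Freeze the pretrained feature extractor so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the three illustrative screening classes
model.classifier = nn.Linear(model.classifier.in_features, 3)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    """Fine-tune the new head on a DataLoader of (image, label) CXR batches."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```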

Meanwhile, a team of Italian researchers has studied deep learning techniques for the detection of COVID-19 in lung ultrasonography (LUS) images. Their work, published in IEEE Transactions on Medical Imaging (doi: 10.1109/TMI.2020.2994459), presents an annotated dataset of LUS images gathered from several Italian hospitals and introduces deep models for automatic analysis of the imaging.

Beyond Diagnosis

As new solutions continue to emerge, there remains skepticism and debate about the role that imaging ought to play during the pandemic. A normal chest CT does not mean that an individual does not have COVID-19 infection, some have pointed out, and an abnormal CT is not specific for COVID-19 diagnosis, given that a variety of other infections look similar.

Back in March, several leading radiology organizations released statements discouraging the use of imaging for certain COVID-19 purposes. “CT should not be used to screen for or as a first-line test to diagnose COVID-19,” the American College of Radiology said in a position statement. Similarly, the Royal College of Radiologists said “there is no current role for CT in the diagnostic assessment of patients with suspected coronavirus infection in the UK.”

Individuals have spoken out as well. One radiologist in Australia explained in a blog post why he believes CT—with or without AI—is not a worthwhile option for the screening and diagnosis of COVID-19. The role of imaging is evolving, others have noted. When RT-PCR kits were not readily available, imaging was used as part of the screening process. But now that testing kits are more widely available, AI has the potential to extend the role of chest imaging beyond the realm of diagnosis.

Possibilities include AI’s ability to facilitate risk stratification, treatment monitoring, and the discovery of novel therapeutic targets, according to researchers from Johns Hopkins, Cleveland Clinic, Emory University, and the University of Pennsylvania. Machine learning could, for instance, help predict ventilator requirements over the course of ICU admissions, they explained in a report published in Radiology: Artificial Intelligence (doi: 10.1148/ryai.2020200053).
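As a toy example of that kind of risk stratification, the sketch below fits a simple classifier to a handful of made-up patient records and scores a new one; every feature, value, and label is hypothetical and none of it is drawn from the cited report.

```python
# Toy risk-stratification sketch: predict ventilator need from tabular features.
# All data below is made up for illustration; it is not from the cited report.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per patient:
# age, oxygen saturation (%), respiratory rate (breaths/min), imaging severity score (0-5)
X = np.array([
    [54, 94, 22, 2],
    [71, 88, 28, 4],
    [39, 97, 18, 1],
    [66, 85, 30, 5],
    [48, 92, 24, 3],
    [80, 83, 32, 5],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = went on to require mechanical ventilation

model = LogisticRegression().fit(X, y)

# Estimated probability of ventilator need for a new (hypothetical) patient
new_patient = np.array([[62, 89, 27, 4]])
print(model.predict_proba(new_patient)[0, 1])
```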

Other imaging-related resources are emerging that, while not useful for diagnosing the living, may help in the overall battle against COVID-19. One significant challenge when it comes to learning more about this illness is that there are so few facilities with the capacity to perform autopsies and collect tissues from patients who have succumbed to the disease. Yet these tissues are an important means of exploring the pathology—as well as the prevention and treatment—of COVID-19 infection.

Indica Labs, a provider of computational pathology software, and Octo, an information systems provider, have announced the creation of an online COVID Digital Pathology Repository (COVID-DPR) hosted at NIH and available as a shared resource. This virtual collection of high-resolution microscopic human tissue images from patients infected with COVID-19 includes initial data sets from infectious disease labs across Europe, Australia, and North America.

“These images that are coming, they’re essentially sectioned tissues, and they’re scanned at very high resolution,” Kate Lillard Tunstall, chief scientific officer at Indica Labs, told Diagnostics World. “They’re pyramidal so that you can zoom in and zoom out of the tissues.”

Because the imaging is at such high resolution and the files are so large, there needs to be a way to stream the information so that the images can be viewed across the globe. And that’s essentially what HALO Link—the “backbone” of the COVID-DPR resource—does. It manages the images, she explained, and allows that data to be delivered around the world so that anyone can view it as if they were standing right in front of the microscope where the slide was sitting.
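To get a feel for how a pyramidal image supports that kind of zooming, the sketch below uses the open-source OpenSlide library to list a slide's resolution levels and read one full-resolution tile. The file name is hypothetical, and this is a generic whole-slide-imaging example rather than a description of how HALO Link itself serves the COVID-DPR images.

```python
# Generic pyramidal whole-slide image example using OpenSlide.
# The file name is hypothetical; this is not how HALO Link serves COVID-DPR data.
import openslide

slide = openslide.OpenSlide("covid_lung_section.svs")

# Each level of the pyramid stores the same tissue at a different resolution
for level, (width, height) in enumerate(slide.level_dimensions):
    downsample = slide.level_downsamples[level]
    print(f"level {level}: {width} x {height} pixels (downsample {downsample:.0f}x)")

# "Zooming in" means reading a small region from a high-resolution level;
# coordinates are given in level-0 (full-resolution) pixel space.
tile = slide.read_region((10000, 10000), 0, (1024, 1024))
tile.convert("RGB").save("zoomed_in_tile.png")

slide.close()
```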

In the long term, as research ramps up, the resource is expected to be used in studies to help identify pathologies and perhaps to build AI algorithms to identify them automatically. For the time being, though, it’s mainly an educational tool that can help pathologists learn more about the damaging effects of COVID-19 on various organs.

Paul Nicolaus is a freelance writer specializing in science, nature, and health. Learn more at www.nicolauswriting.com.