By Deborah Borfitz
May 27, 2021 | Over the past year, Michigan Medicine has been collaborating with a company generating synthetic data to improve the rate at which brain tumors get accurately diagnosed intraoperatively. The initiative involves using a generative adversarial network to “grow” data since large public repositories do not contain the kind of images needed to guide real-time decision-making about which surgery is best for patients, says Todd Hollon, M.D., a neurosurgeon who leads the medical center’s Machine Learning in Neurosurgery Laboratory (MLiNS).
Knowing which course of action to take can be a diagnostic conundrum because fewer than 10% of patients have a biopsy done prior to surgery—a situation that has remained unchanged since the specialty of neurosurgery was born in the early 1900s, Hollon says. Consequently, maximal safe resection is sometimes performed unnecessarily and non-operative therapies tend to be underutilized.
More than 150 different brain tumor types have been documented, but only a handful of the most common types are important for surgical decision-making, says Hollon, pointing to the role of domain knowledge in the classification exercise. The initial prototype tumor classification system currently being tested includes four types that cover over 90% of the brain tumors that get diagnosed in the U.S.
Michigan Medicine’s partner in the effort is Synthetaic, a startup launched in 2019 to create imaging data that can be used to train artificial intelligence (AI). On its own, or in collaboration with partners, the company seeks out applications in different verticals that could benefit from AI tools with synthetic data running in the background, says CEO and Founder Corey Jaskolski.
Before its collaboration with Michigan Medicine, the company was working mostly with AI across geospatial imaging data and conservation applications. It was not much of a leap from there to medicine because “data is data” and AI principles and tools are universal, says Jaskolski. “AI is amazing at finding patterns and features in data that unlock the ability to detect things that are hard or tedious to detect as humans, and the synthetic data concept expands the datasets for greater [predictive] accuracy.”
One of the company’s key corporate missions is “to leave the world a better place than we came into it,” Jaskolski adds. In both the conservation and medicine arenas, AI has the potential to make a significant positive impact.
Hollon says his 30-year ambition is to see AI-based decision support tools running on every patient’s electronic medical record (EMR) to make it prescriptive as well as descriptive. Today, the few such tools that exist are narrow in scope because they are rules-based systems whose utility is circumstantial.
On the Michigan Medicine side, the collaboration is a team effort that includes Siri Sahib S. Khalsa, M.D., an MLiNS neurosurgery resident with a computational modeling background who initiated and leads the project with Synthetaic. The work is supported by Sandra Camelo-Piragua, M.D., associate professor of neuropathology.
‘Great AI Equalizer’
Among the handful of other synthetic data vendors on the market, most focus on tabular or text-based data rather than imagery, says Jaskolski. One good example is the use of synthetic financial transactions to build AI for fraud detection, since credit card companies would not want to give developers access to all the real financial transactions of their customers. Another is the generation of synthetic medical records for veterans to tease out risk factors for suicide without Veterans Affairs having to disclose any protected personal information.
When image-based synthetic data is used, typically it is in the form of 3D video game graphics, he continues. That is how NVIDIA, for example, makes self-driving car simulators. Synthetaic, in contrast, uses AI rather than video game rendering software to generate images.
That is a critically important difference when working with slides of human brain tissue, says Jaskolski. Video game technology lends itself to rules-based scenarios where rigid objects move in more predeterminable ways, but not the complexity and uncertainty of human microscopy.
The promise of AI has been discussed for decades and recently success stories have emerged, but the use cases have involved “tons of annotated data,” says Jaskolski. Self-driving cars can in many cases outperform human drivers, for example, but only because of the millions of hours of dash cam footage that someone turned into millions or even billions of labeled pieces of data to train a computer to accurately recognize pedestrians, bicyclists, crosswalks, buildings, and other vehicles.
Almost all other applications do not have nearly as much data available to drive performance efficiency, Jaskolski says, which is where synthetic data comes in. “It’s the great AI equalizer.”
Synthetaic is effectively using small statistical islands of images to grow large, high-quality datasets that are completely computer imagined and thus not subject to privacy and confidentiality regulations, says Jaskolski. The approach involves a generative adversarial network named MEGAN [massively extensible generative adversarial network].
Like all generative adversarial networks, MEGAN includes a generator and a discriminator that compete against each other, he explains. This is often characterized as a game of cat and mouse where a counterfeiter is learning to pass bogus money and a police officer is learning to spot the fakes, but Jaskolski dislikes the analogy.
In his portrayal, someone is asked to draw a rare Wisconsin daisy sight unseen. The generator makes many attempts to render an increasingly convincing image—e.g., a longer stem, broader leaves, and more color variability in the petals—based on feedback from the discriminator about how much the picture has improved. Ultimately, the methodology would likely produce a “pretty good image of this fictional flower,” Jaskolski says.
“What’s really cool is that it’s an unsupervised technique,” he adds. The system learns by itself without a human telling it “you’re getting closer” or “that’s not very good at all.” That means MEGAN can train unattended for a long time on huge amounts of starter data, efficiently building better, higher-quality data at scale.
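The adversarial loop Jaskolski describes can be sketched in miniature. The toy example below is an illustrative one-dimensional GAN in plain NumPy, not Synthetaic’s MEGAN: a one-line generator learns to mimic samples from a “real” distribution, guided only by the discriminator’s feedback, with no human labels anywhere in the loop.

```python
import numpy as np

# Illustrative 1-D GAN sketch (an assumption for exposition, not MEGAN).
# "Real" data: samples from N(4, 1). The generator must learn to match it.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(3000):
    # Discriminator update: learn to tell real samples from fakes
    xr = rng.normal(4.0, 1.0, batch)       # real samples
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b                         # fake samples
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # gradient ascent on log d(real) + log(1 - d(fake))
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # Generator update: learn to fool the discriminator
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    # gradient ascent on log d(fake) (non-saturating generator loss)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generated mean: {fake_mean:.2f}")  # drifts toward the real mean of 4
```

Neither network ever sees a label; the generator improves purely because the discriminator keeps raising the bar, which is the unsupervised dynamic that lets a system like MEGAN train at scale unattended.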
Early indications are that training a predictive model on synthetic data resembling frozen section images can boost its diagnostic accuracy by 40% or more, and by over 90% for some tumor subtypes, says Hollon. As a tool for biomedical image analysis and pathology image support, it has been “a real game-changer.”
The application of AI in pathology has to date involved formalin-fixed, rather than frozen, section images, he says. They are “completely different ways of processing tissue… comparable but not the same.”
Based on experiments conducted in multiple labs, training a machine learning algorithm on large public pathology datasets containing images of formalin-fixed paraffin-embedded tissue results in “very poor” diagnostic prediction using frozen section images, says Hollon. The MLiNS at Michigan Medicine has tried to generate a dataset of frozen images, which was “much easier said than done” because the frozen section technique is done far less often than conventional specimen analysis.
Other hospitals and medical centers are now being recruited to give the research team prospective testing data they can use to validate the new model trained on the synthetic data, since the utility of the tumor classification system will be tied to its generalizability to new datasets. Medical use cases “must be held to a higher standard,” Hollon notes, because patients’ health is at stake.
The project between Synthetaic and Michigan Medicine might not have been feasible if Hollon were not both a surgeon and AI practitioner, says Jaskolski. “We did not have to explain the state of the art [to him]… so we could move quickly,” and Hollon could help head off fears that the predictive AI would be tested on more synthetic data rather than on real-world brain tumor data from multiple collaborating institutions.
While interpretability of an AI-based decision support tool is important to acceptance by clinicians, adds Hollon, “I have yet to meet a physician who is going to argue with great results. If it performs well consistently, I think interpretability becomes less of an issue.”
Moreover, pathologists are probably going to be less interested in the “exact way” a neural network is classifying a glioblastoma versus an astrocytoma than when the calculation involves how common clinical variables like body mass index and blood oxygen level play into predictions about an ICU stay, Hollon says. “In many ways, I think trust and interpretability [of AI] get played up a little bit more than happens in the real world with actual clinicians.”
In addition to helping ensure the best surgery gets done for individual patients, MLiNS has also been collaborating with pathologists at Michigan Medicine to create multiple other decision support tools to increase diagnostic speed and accuracy using machine learning and computer vision, Hollon says. The goal is better care everywhere, including community hospitals where a pathologist may be unavailable during the time of surgery to help interpret slides.
The brain tumor classification system is but the first intended application of the AI-based decision support tool, Hollon adds, since the same system is applicable across histopathology—basically any time a quicker diagnosis is needed, potentially from suboptimal tissue samples. This would include spinal tumors as well as prostate and lung cancers.
“It’s a general technique that could be applied to any of the biomedical imaging modalities, including MRI, CT scan, [and] chest X-ray… at the interface between surgery and pathology,” says Hollon. Intraoperative brain tumor diagnosis was an ideal starting point both because it is in his clinical wheelhouse and is an area ripe for improvement.