By Paul Nicolaus
February 21, 2017 | Over the course of six months, software engineer Ken Jennings morphed from mere mortal into quiz show machine. He sifted through his vast database of knowledge, buzzed in with lightning speed, and dazzled “Jeopardy!” audiences with one correct response after another.
Along the way, he strung together a jaw-dropping 74 consecutive victories and crushed the previous record. In terms of staying power, the feat is likely on par with DiMaggio’s legendary hit streak.
Like Jennings, Brad Rutter made himself known as one of the show’s finest competitors by reeling in $3.25 million in prize money and staking claim to the title of “biggest money winner” in the process.
With these credentials, it sure seemed like the duo had a fighting chance when they agreed to an interesting proposition back in 2011. Their task? Duke it out with an actual machine in an exhibition match.
It turns out that third opponent wouldn’t actually stand behind the podium. Made up of ten racks of Power 750 servers, IBM’s Watson was too large and—due to its cooling system—too noisy to join the others. In its place, an avatar sat in as the clues were delivered to the supercomputer in textual format and responses fed back into the studio.
Watching with bated breath, the world tuned in to see whether man or machine would come out on top as the contestants took on categories covering everything from familiar sayings and dialects to “Actors Who Direct” and The Beatles.
At the end of the first of three days of competition, Watson and Rutter were knotted up at $5,000 each with Jennings following with $2,000. It wasn’t until the second day that Watson hit its stride, securing a lead of over $20,000 over its nearest challenger.
By the time all was said and done, there was, indeed, a clear victor. Watson’s total of $77,147 was more than enough to take down Jennings and Rutter, who had accumulated $24,000 and $21,600, respectively.
The performance was, no doubt, a massive milestone for artificial intelligence (AI). In the years since, however, the supercomputer has shifted away from battling human competitors to make a name for itself in other ways, including working alongside the medical community to help solve health-related head scratchers.
Before working with Watson, Dr. William Kim's familiarity with the technology extended no further than the hyped-up quiz show battle, and as he and his co-workers first began conversations with IBM about the possibility of using the supercomputer for medical purposes, he wasn't a big believer. Not yet, anyway.
“Honestly, I was a little skeptical about its ability to help us with cancer genomics,” said Kim, a member of the University of North Carolina’s Lineberger Comprehensive Cancer Center and an associate professor of medicine and genetics at UNC-Chapel Hill.
Kim’s mindset soon began to shift, however, and now, more than two years into the collaboration, he speaks of Watson in much friendlier terms. “I almost look at Watson as a colleague that is super smart, super read up on literature, and available 24 hours a day,” he said.
Bouncing ideas off Watson is useful. So is the ability to gather a second opinion. But the assistance is especially appreciated because it is simply no longer humanly possible to keep tabs on all relevant information.
Kim recalled performing a literature review of cancer-related publications in 2015, for example. “It was an absurdly high number,” he said. About 175,000 scientific articles were published during that year alone.
Divide that by 365, he said, and it works out to about 480 articles that would need to be read per day to keep up. Even if you divide that by Kim’s 15 team members, that’s still 32 articles a day. And according to Kim, it’s a tall order just to read one article per day while keeping up with all other responsibilities.
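Kim’s back-of-the-envelope arithmetic is easy to reproduce. The short calculation below simply restates the figures quoted above:

```python
# Back-of-the-envelope reading load, using the figures quoted in the article.
articles_per_year = 175_000   # cancer-related papers published in 2015
team_size = 15                # members of Kim's team

per_day = articles_per_year / 365
per_person_per_day = per_day / team_size

print(f"{per_day:.0f} articles per day")            # roughly 480
print(f"{per_person_per_day:.0f} per team member")  # roughly 32
```

Even spread across an entire team, the per-person load is more than an order of magnitude beyond the one-article-a-day pace Kim describes as already difficult.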
These figures don’t even include the thousands of open cancer studies that exist at any given point in time. “The amount of information that’s coming out in literature and the number of clinical trials that are being opened and closed on a daily basis that have anything to do with cancer is incredibly high, and it’s incredibly difficult to curate manually or have the brain power to keep up with,” he said.
This flood of information and the inability to sift through it efficiently may very well contribute to communication breakdown, poor judgment, and diagnostic error. It’s a problem with serious consequences, too. One study published in The BMJ in 2016 found that medical error is the third leading cause of death in the U.S.
Couple that with the World Health Organization’s projection that the medical community will face a global shortfall of nearly 13 million health-care workers by 2035, and AI begins to seem like a solution that can at least help offset the need for more human brain power.
Making Doctors Better
At UNC Lineberger, Watson has become intimately integrated into the process of sequencing and interpreting data for patients as researchers and physicians continue to explore the possibilities of cancer genetics.
A protocol dubbed UNCseq analyzes tumor samples using next generation sequencing and compares them to normal tissue samples, allowing researchers to identify the genetic changes that could influence patient treatment.
About 800 genes are sequenced, and an independent panel of physicians called the Clinical Committee for Genomic Research (CCGR) develops a list of approximately 70 genes that would be important for a physician to know about and consider if altered at the genomic level. The Molecular Tumor Board (MTB) then examines the genomic data in terms of the actual variants and cross-references them against the CCGR’s list.
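At its core, that cross-referencing step amounts to checking each observed variant against a curated list of actionable genes. The sketch below is purely illustrative — the gene names, data layout, and helper function are invented, not UNC’s actual list or software:

```python
# Hypothetical sketch of the MTB cross-referencing step: flag any sequenced
# variant whose gene appears on the CCGR's actionable-gene list.
# Gene names and variants here are illustrative only.

actionable_genes = {"BRAF", "EGFR", "KRAS", "PIK3CA"}  # ~70 genes in practice

patient_variants = [
    {"gene": "EGFR", "change": "L858R"},
    {"gene": "TP53", "change": "R175H"},
]

def flag_actionable(variants, actionable):
    """Return the variants that fall on the actionable-gene list."""
    return [v for v in variants if v["gene"] in actionable]

print(flag_actionable(patient_variants, actionable_genes))
# [{'gene': 'EGFR', 'change': 'L858R'}]
```

The real pipeline is of course far richer — the committee weighs the specific alteration, not just the gene — but the shape of the workflow is a filter of this kind.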
Although Watson hasn’t been diagnosing diseases at UNC, it is apparent that it has become a powerful tool working in conjunction with the human team. One study, for example, took about 1,000 patients that had already gone through the typical human process and ran their genomic data through Watson to see what the computer had to say.
For about 25 or 30 of those patients, Watson recommended considering a gene as actionable when the human reviewers had not classified it as such. In each of these instances, Watson offered a rationale for its call, and about 85% of the time the human team agreed with the computer’s findings.
These abilities have been on display far beyond the UNC campus as well, and in some instances Watson has stepped up as more savior than tool. In one scenario reported by the International Business Times, a female leukemia patient in Japan baffled medical professionals after the initial treatment proved ineffective. Out of ideas, the team tapped Watson for assistance.
After studying the patient’s medical information and cross-referencing her condition against 20 million oncological records uploaded by doctors from the University of Tokyo’s Institute of Medical Science, Watson made a life-saving diagnosis as it determined the patient had a variant form of the disease. In mere minutes, it solved what had stumped humans for months.
Rise of AI
While IBM may still be considered the big kahuna thanks to the success of Watson, it is far from the only company pursuing the healthcare-related wonders of AI. According to technology trend predictor CB Insights, over 90 companies are currently applying machine learning algorithms and predictive analytics to take on everything from reducing drug discovery times to diagnosing ailments.
In addition to the scores of start-ups entering the fray, there are several other tech titans involved. Google, for example, announced in 2016 that its new and improved search function would better help those attempting to self-diagnose by offering more accurate and accessible medical information. Its database was built using feedback from doctors at Harvard Medical School and the Mayo Clinic.
On another front, Microsoft researchers Dr. Eric Horvitz and Dr. Ryen White, along with Columbia University graduate student John Paparrizos, revealed in their study in the Journal of Oncology Practice (DOI: 10.1200/JOP.2015.010504) that analyzing large samples of search engine queries could help identify internet users suffering from pancreatic cancer before they have received a diagnosis.
The trio sifted through 18 months of Bing search data to see if they could uncover patterns of symptoms before people were diagnosed with the disease. It turns out that in 5% to 15% of these cases they could, with false positive rates as low as 1 in 100,000.
To identify individuals to focus on, they zeroed in on first person searches such as “I was just diagnosed with pancreatic cancer” that indicated someone had the disease. They then worked backward through previous queries made by these same people to see if they had searched for symptoms associated with the illness.
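The backward-looking log analysis described above can be sketched in a few lines. Everything here — the query strings, the symptom terms, the log format — is invented for illustration; the actual study worked with 18 months of anonymized Bing logs and far richer signals:

```python
import re

# Toy query log: (user_id, timestamp, query).
log = [
    ("u1", 1, "persistent back pain"),
    ("u1", 2, "yellowing eyes causes"),
    ("u1", 3, "i was just diagnosed with pancreatic cancer"),
    ("u2", 1, "best pizza near me"),
]

DIAGNOSIS_PATTERN = re.compile(r"\bdiagnosed with pancreatic cancer\b")
SYMPTOM_TERMS = ("back pain", "yellowing eyes", "itchy")  # illustrative

def prior_symptom_queries(log):
    """For each user with a first-person diagnosis query, collect the
    symptom-related queries they issued *before* that point."""
    results = {}
    for user in {u for u, _, _ in log}:
        queries = sorted((t, q) for u, t, q in log if u == user)
        for i, (_, q) in enumerate(queries):
            if DIAGNOSIS_PATTERN.search(q):
                results[user] = [q2 for _, q2 in queries[:i]
                                 if any(s in q2 for s in SYMPTOM_TERMS)]
                break
    return results

print(prior_symptom_queries(log))
# {'u1': ['persistent back pain', 'yellowing eyes causes']}
```

The first-person “experiential” query serves as a proxy label for a diagnosis, and the earlier symptom searches become the evidence the researchers mined for predictive patterns.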
Their findings offer promise, at least in part, because the disease poses such a difficult challenge. It is the fourth leading cause of cancer death in the U.S. and is often diagnosed too late to be treated effectively. About 75% of patients with pancreatic cancer who are not candidates for surgery die within one year of diagnosis, and only 4% survive five years beyond their diagnosis.
Even more recently, the researchers have shown in a related study published in JAMA Oncology (doi:10.1001/jamaoncol.2016.4911) that lung cancer can be detected up to a full year prior to current methods of diagnosis by analyzing a patient’s internet searches for symptoms and demographic data that put them at a higher risk.
Like pancreatic cancer, the signs of lung carcinoma—the leading cause of cancer death in the U.S.—often present as nonspecific symptoms that appear and evolve over time and in many cases do not become prominent until the disease has metastasized. Many patients present with stage III or IV disease, which is rarely curable with current therapies, and five-year survival rates are low.
In other words, early detection is crucial, and search logs could serve as a new form of large-scale detector that gives patients or doctors enough reason to seek cancer screenings earlier. And this, in turn, could help improve upon treatment prospects.
“One of the most important conclusions that we can draw is that the technique actually appears to work,” explained White, CTO of Health Intelligence at Microsoft. “We can make the prediction accurately far in advance of the point of diagnosis.”
More Steps to Take
Their methods involve the use of machine learning techniques. “We basically take the long-term behavior and we convert that to features,” White said. “We then use those features combined with the presence or absence of an experiential query to train a machine learned model to be able to make a prediction.”
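The pipeline White outlines — behavior converted to features, experiential queries used as labels, a model trained to predict — can be sketched with a toy logistic-regression model. The features, data, and weights below are entirely invented; this is a minimal illustration of the approach, not the researchers’ actual model:

```python
import math

# Each user: a feature vector derived from long-term search behavior
# [searched_symptom_A, searched_symptom_B, count_of_health_queries],
# paired with a label: 1 if the user later issued a first-person
# ("experiential") diagnosis query, else 0. All values are invented.
users = [
    ([1, 1, 5], 1),
    ([1, 0, 2], 0),
    ([0, 1, 1], 0),
    ([1, 1, 4], 1),
    ([0, 0, 0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=500):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(users)
risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [1, 1, 5])) + b)
print(f"predicted risk: {risk:.2f}")  # high for a symptom-heavy history
```

The real system draws on far more features than a handful of symptom flags, but the core idea is the same: behavior in, probability of a future diagnosis query out.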
And there are a number of options for using that information, ranging from alerting users directly to engaging with their doctors and allowing them to make a determination. There is also a middle-ground possibility: encouraging users to mention their concerns to their doctors and engage in dialogue.
If users have searched for symptoms and a prediction can be made that there is a high likelihood that they have pancreatic cancer, for example, that prediction doesn’t necessarily have to be revealed.
“We could expose the fact that they’ve looked for back pain and itchy hands and yellow eyes and recommend that they mention those things the next time they speak to their doctor,” White said. “There we are not actually informing them of the outcome. We are just informing them of what we’ve observed.”
While these recent studies are revealing plenty of far-reaching possibilities, especially considering that similar methods could be applied to other diseases, the practical use of the technology is still somewhere on the horizon. There aren’t any immediate plans to integrate this directly into Microsoft products, and getting to that point would take time.
For one, there is a need to evaluate these models against stronger ground truth. As is, the studies make inferences based on the presence of queries that appear to point to a diagnosis. It will be important to work with patients who have actually been diagnosed, obtain their consent to examine their long-term search behavior before that point, and evaluate how effective the techniques really are.
There are also a whole host of related issues that would require plenty of care and attention, ranging from user consent and approval to avoiding any undue worry caused by false alarms or errors. “We do need to be incredibly careful about how this information is being used,” White said, noting that privacy needs to be respected at all times.
“The work is a very promising first step,” he added, “but I think there are a lot of steps to take to get something like this to production.”
Reimagining AI’s Role
In his final moments in the limelight, Jennings showed he was a good sport with a sense of humor, drawing a laugh from host Alex Trebek and the studio audience by including “I for one welcome our new computer overlords” beneath his Final Jeopardy response.
His quip, a nod to a line from a “Simpsons” episode, only adds to the long, ongoing conversation about AI, which has intrigued some of the greatest minds of our time. Cambridge physicist Stephen Hawking, for example, once noted that success in AI would be the biggest event in human history. Like Bill Gates and other thought leaders, however, he has tempered comments on the incredible promise of AI with concerns about the possible dangers and threats as well.
In some cases—think “The Terminator”—portrayals of AI have tapped into these sorts of fears. In others—think “Wall-E”—there’s a child-like charm involved. Still others, like “Her,” offer up a more complex dynamic as Theodore (Joaquin Phoenix) develops a friendship with and eventual love for an operating system named Samantha (voiced by Scarlett Johansson).
Whether the action plays out on a game show stage or the big screen, pop culture has been tapping into our imaginations as we entertain all the hair-raising fears and mind-boggling possibilities our human intellect can dream up about a different sort of smarts. And yet, the implications couldn’t be more real.
As this alternate form of intelligence progresses and impresses, the medical community will be faced with the challenge of reimagining the role of AI. When it comes to curing disease and working toward the betterment of humanity, will humans continue to take the lead? Will collaboration between man and machine become the norm? Will roles reverse over time?
AI seems to be raising far more what-ifs than answers, so at this stage of the game perhaps it’s appropriate to follow along with the format popularized by “Jeopardy!” that asks participants to provide responses to all clues in the form of a question.
What is a fascinating road ahead, Alex?
Paul Nicolaus is a freelance writer specializing in health and medicine. Learn more at www.nicolauswriting.com.