AI Framework for Differentiating Neurodegenerative Diseases

By Deborah Borfitz 

April 29, 2026 | A team of computational scientists at Lund University (Sweden) has built a deep, joint-learning proteomics model to improve diagnostic accuracy for a handful of dementia-related conditions, a task that remains challenging in primary care settings due to a shortage of informative biomarkers. Predicting these distinct but correlated pathologies together, from a single blood test, would enable a speedy and confident differential diagnosis, according to postdoctoral researcher and model creator Lijun An, Ph.D. 

For six related and often coexisting conditions—Alzheimer’s disease, Parkinson’s disease, ALS, frontotemporal dementia, previous stroke, and healthy aging controls—individuals today have about a 50-50 chance of being misdiagnosed or underdiagnosed by their general practitioner, he notes. This is due largely to shared biological profiles and symptoms, but also to the significant diversity in patient populations and the way functional domains get measured from one clinic to the next. 

The newly conceived model, called proteomics-based artificial intelligence for dementia diagnosis (ProtAIDe-Dx), is the first step toward a more precise diagnostic approach, built on the largest neuro-proteomics sample to date, that would enable earlier intervention and tailored treatments for neurodegenerative diseases whose prevalence is growing rapidly worldwide. Its potential was the subject of a study published recently in Nature Medicine (DOI: 10.1038/s41591-026-04303-y). 

The modeling exercise tapped the world’s largest neurodegenerative disease plasma proteomics dataset, assembled by the Global Neurodegenerative Proteomics Consortium (GNPC). A subset of over 17,000 memory clinic patients was selected for the study, based on the availability of SomaLogic 7k proteomics, sampled across 19 contributing sites. 

The SomaLogic platform identifies and measures thousands of proteins, the “functional effectors” in the bodies of living people, points out Jacob Vogel, Ph.D., An’s supervisor and an assistant professor who leads a research group as part of the government-supported MultiPark translational program at Lund University. Applying the most sophisticated AI models to the large proteomics cohort demonstrated that ProtAIDe-Dx could significantly improve biomarker-based differential diagnosis by identifying the proteins driving patient-level decisions. 

Task Learning  

The field has not been without progress. Over just the past few years, researchers have discovered blood-based biomarkers for Alzheimer’s disease (e.g., p-tau217 and amyloid ratios) that are expected to improve the diagnostic accuracy for this most common cause of dementia soon, says An. 

This is not the case with the other neurodegenerative diseases where a definitive diagnosis often happens only at autopsy. Patients, while still alive, generally get diagnosed based on clinical history, brief cognitive screening tools, and various lab tests to rule out reversible causes.  

A simple blood test that can sort out the molecular clues is the hope, An says. The starting point for addressing issues of misdiagnosis, confounded by age-related comorbidities, is the development of blood-based biomarkers to identify underlying neurodegenerative pathology with high specificity. 

Blood samples from diverse cohorts spanning the U.S. and Europe were analyzed with SomaLogic’s proteomics assay run on Illumina high-throughput DNA sequencing platforms. From an initial set of 7,595 proteins, several hundred of the most relevant ones were selected for the multi-task, joint-learning approach to allow the ProtAIDe-Dx model to signal disease co-pathology. 
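The article does not specify how the several hundred relevant proteins were chosen from the initial 7,595. One common first-pass approach is a univariate screen, sketched below on synthetic data; the standardized mean-difference score, the cutoff of 300 proteins, and all the synthetic numbers are assumptions for illustration, not the published selection procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 300 samples x 7,595 protein measurements, binary label.
n, p = 300, 7595
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)
X[y == 1, :100] += 1.0  # make the first 100 "proteins" carry real signal

# Univariate screen: score each protein by the absolute difference in
# group means, scaled by its overall spread, then keep the top few hundred.
mu1, mu0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
sd = X.std(axis=0) + 1e-9
score = np.abs(mu1 - mu0) / sd
keep = np.argsort(score)[::-1][:300]  # top 300 of 7,595

# Fraction of the truly informative synthetic proteins that survive the screen.
print(np.mean(keep < 100))
```

In practice a screen like this would only be a starting point; a joint-learning model can then weigh the surviving proteins together rather than one at a time.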

ProtAIDe-Dx succeeded in outperforming multiple machine learning and state-of-the-art deep learning models in terms of predictive accuracy. When generalized to multiple independent datasets, it also produced a better differential diagnosis relative to currently accessible clinical biomarkers, reports An. 

The deep learning architecture essentially learned one task (e.g., is it Alzheimer’s?) to help learn another one (e.g., is it Parkinson’s?), all at the same time, explains Vogel. “Learning them together gives you a net advantage on each individual task, because deep in the data there is information that is relevant to all [six] of them.” 
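The shared-trunk idea Vogel describes can be sketched as a network whose hidden representation is common to all six tasks, with one small output head per condition. Everything below (the layer sizes, the condition labels, the random weights) is illustrative and is not the published ProtAIDe-Dx architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

CONDITIONS = ["AD", "PD", "ALS", "FTD", "stroke", "healthy"]  # illustrative
N_PROTEINS = 200   # e.g., a few hundred selected proteins
HIDDEN = 32        # size of the shared representation (arbitrary)

# Shared trunk: one hidden layer whose weights are updated by ALL tasks
# during training, so signal useful for one diagnosis can help the others.
W_shared = rng.normal(0, 0.1, size=(N_PROTEINS, HIDDEN))
# One small head per condition, trained only against that task's label.
heads = {c: rng.normal(0, 0.1, size=HIDDEN) for c in CONDITIONS}

def forward(x):
    """Return a per-condition probability from one protein profile."""
    h = np.tanh(x @ W_shared)  # shared representation used by every head
    return {c: 1 / (1 + np.exp(-(h @ w))) for c, w in heads.items()}

profile = rng.normal(size=N_PROTEINS)  # one synthetic plasma profile
probs = forward(profile)
print({c: round(p, 3) for c, p in probs.items()})
```

Because the gradient from each head flows back into the same trunk, each task effectively regularizes and informs the others, which is the "net advantage on each individual task" Vogel describes.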

After a few thousand iterations, the model internally defines the “weight” assigned to the informative proteins, explains An. Feature importance techniques were also employed to rank, filter, and identify the most predictive variables while reducing noise. 
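The article does not name the feature importance technique used; permutation importance is one common, model-agnostic choice, sketched here with a toy stand-in "model" so the mechanics are visible. The data, the threshold rule, and the single informative column are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 500 "patients" x 50 "proteins"; only protein 3 carries signal.
X = rng.normal(size=(500, 50))
y = (X[:, 3] > 0).astype(int)

def accuracy(X):
    # Stand-in "model": a fixed rule that thresholds protein 3.
    return np.mean((X[:, 3] > 0).astype(int) == y)

baseline = accuracy(X)

# Permutation importance: shuffle one column at a time and measure how much
# predictive accuracy drops; a large drop marks an informative protein.
importance = np.empty(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = baseline - accuracy(Xp)

print(int(np.argmax(importance)))  # → 3, the only informative "protein"
```

Ranking by a score like this is what lets a model both filter noise and surface the proteins driving patient-level decisions.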

The multi-diagnosis prediction model was tested across a dozen different datasets, Vogel says, providing a close estimation of how it would perform in a real-world clinic. Ultimately, understanding how a model behaves “in the wild” is more important than selecting the best model on purely computational grounds. 

Diagnostic Labels 

It was only somewhat surprising that the proteomic profiles were found to predict cognitive decline better than the clinical diagnosis, says Vogel. The GNPC dataset is “reflective of real life ... [where] everyone is doing their diagnosis a little bit differently.” Not only are clinicians potentially reaching different conclusions, but confidence levels in those diagnoses are related to the availability of high-end technology such as PET and MRI scans and cerebrospinal fluid analysis. 

“In aggregate across all these different datasets, we don’t know how truly accurate the diagnostic labels are,” he says. “It’s very, very hard to diagnose a neurodegenerative disease without biomarkers and we don’t have precise biomarkers for most of them.” 

Another interesting finding from the study was that individuals with the same clinical diagnosis seemed to have different underlying biological subtypes, Vogel continues. The way the model grouped patients “is not always going to correspond perfectly to what the clinicians have assigned as their diagnosis, but it does tell us something ... [namely], that there are people that perhaps have a similar biological profile but might have symptoms that manifest differently.” 

It is known that family members can have the same genetic mutation where one person gets ALS while another develops frontotemporal dementia, he offers as an example. “The question becomes which is the relevant piece of information—the symptoms or the biological profile.” 

For choosing a treatment, clinicians would probably want to know a patient’s underlying biological profile. But if it’s a behavioral treatment, they may also want to know more about the symptoms. These are “disparate entities, and they are both relevant to our pursuit of therapeutics and patient management,” says Vogel. 

Missing Data 

In an ideal world, as imagined by Vogel, the diagnosis of multiple diseases will happen via a blood-based panel. “We’re not quite there yet, at least with this iteration of plasma proteomics.” 

While the ProtAIDe-Dx model is highly correlated with multiple aspects of many different diseases, it is not at the point where it can be used alone to make a definitive diagnosis. But adding the model’s output to information from Alzheimer’s biomarkers, imaging scans, and cognitive tests should improve the situation considerably. 

“If you use very strict thresholds with this proteomic data, you can also come up with a rather confident biological diagnosis,” he adds. The problem is that the diagnosis would be uncertain for all but 10% to 20% of individuals. 

What’s missing is the data, says Vogel, drawing a parallel with the multi-decade process to develop an Alzheimer’s disease blood test that advanced by fits and starts. SomaLogic’s affinity technology provides but one view of a protein. Other views are available from proximity extension assays (e.g., Olink Proteomics), which use next-generation sequencing to identify and quantify proteins, and mass spectrometry, which offers a significantly wider and more detailed array of peptides, fragments, and post-translational modifications. Brain-derived extracellular vesicles found in blood are also promising, non-invasive, early-detection biomarkers for neurodegenerative diseases. 

“The AI approaches we’re using are great because they can pull the needle out of the haystack and combine information in ways that a person with a spreadsheet simply would never be able to do,” he says. “It’s just a matter of continuing to generate data in large quantities and profiling it ... until we find what we want.” 

At that point, work will begin narrowing down the candidate proteins to the smallest possible panel with no loss of diagnostic value, says Vogel. That would make the test scalable by making it less expensive and easier to reproduce and maintain quality control. 

“If you have something that is incredibly accurate, people will use it no matter,” he says. If a test is in high demand, the price will eventually come down as the supply side of the equation adjusts. The goal would be to get the maximum amount of clinical utility with the minimum amount of overhead.  

Biomarker Panels 

A single blood test to diagnose multiple neurodegenerative diseases is the longer-term goal here. “It can’t be overstated how important biomarkers have been to our pursuit of therapies and patient management,” says Vogel. “When you have a disease there is a certain very strong relief that comes from someone saying I know exactly what you have ... it opens up a world of possibilities to patients that isn’t available to them if they don’t know.” 

For many years, one of the major holdups in finding treatments for Alzheimer’s disease was that many of the people enrolled in clinical trials didn’t in fact have the condition because there were no biomarkers for diagnosing them, he says. “Everything about the pursuit of dementia research and treatment has improved with the existence of biomarkers,” including tracking disease progression and response to medication.  

On a single-disease basis, biomarkers are increasingly being employed in clinical trials to improve diagnostic confidence. This notably includes Parkinson’s disease, where alpha synuclein is measured in cerebrospinal fluid, Vogel says. 

More recently, biomarkers have emerged for 4R tauopathies—a group of neurodegenerative diseases defined by the pathological accumulation of the four-repeat isoform of the tau protein in the brain—which often present as atypical parkinsonism. From proteomics analyses, studies are also finding proteins strongly associated with progression to cognitive impairment based on synaptic damage. 

This all points to the possibility of biomarker panels being assembled at some point. “Progress has been excellent in that domain over the last couple of years,” says Vogel, who hopes to have a multi-diagnosis test market-ready much sooner than the decade or more he once braced for.  

Work Ahead 

A major focus of the Dementia Multi-Omics and Neuroimaging lab, where Vogel and his team are doing their work, is partnering with colleagues to better understand the heterogeneity of neurodegenerative diseases and how that influences treatment response, he says. There might also be a role for the ProtAIDe-Dx model in the clinical trials being conducted by World-Wide FINGERS, a network of studies based on the landmark Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability, demonstrating that simultaneous lifestyle changes could improve cognition in at-risk older adults. 

The complexity of the data being analyzed is easy to underestimate, says An, who was introduced to clinical proteomics precision medicine projects in the Vogel lab. Busy, high-dimensional data requires advanced, specialized computational methods to produce interesting findings that can be validated to support precision medicine, he stresses. 

“There is a great amount of diversity in what we are looking at and, when you put all that together, it becomes a very difficult task to make anything out of it,” says Vogel. “[Researchers] all know if you add more data, you are more likely to be able to find the truth, but it would be easier ... if we had better harmonization and standardization. We probably just need to be working together more and ... pooling our data [beforehand].”
