Deep Learning Offers Promise In Understanding the Brain and Neurological Conditions

By Tré LaRosa
NeuLine Health

Interest — social, financial, and academic — in artificial intelligence has exploded in the last decade: Since 2016, an ETF focused on artificial intelligence and robotics has jumped 148%. A basic PubMed search for “artificial intelligence” shows a 46% jump in publications between 2009 and 2015; between 2016 and 2021, the difference is more than 24,000 publications, an astounding 360% increase. The public’s interest has matched these trends as well: Google searches for artificial intelligence rose 126% in 2021 compared to 2016.

It’s clear that artificial intelligence is already ingrained in society, but how is AI playing a role in neuroscience and neurology?

As it turns out, these fields are very intertwined. After all, the field of AI hopes to emulate the complexity of the world’s most incredible supercomputer: Our brain.

We’ve discussed the immense complexity of the brain before: the 86 billion interconnected neurons in our brain form trillions of connections, collectively known as the brain’s “connectome.” Our brain is exceptional in countless ways, such as its ability to quickly process a barrage of sensory information, transmit signals, and blend it all into a perspective of the world that allows us to remember previous experiences (something researchers still don’t fully understand), glean insights from those previous experiences, respond to stimuli nearly immediately, and then forecast future events. It’s not at all surprising that the wondrous capabilities of the brain inspire us to develop technology that can emulate it and even outperform it. In some domains, such as math, that outperformance already exists; in others, such as recognizing objects without extensive training, machines are far less successful.

So in a fascinating reversal, technology that was inspired by the brain is now helping us gain a better understanding of the brain, including how it came to process the world in the way it does. And with a better understanding of the brain and the nervous system comes a better understanding of any condition that affects them…

Artificial intelligence, machine learning, and neural nets

The way artificial intelligence has been defined over the years has shifted, but generally, it’s when we program computers to accomplish tasks that usually require human intelligence. It’s a field that has evolved about as fast as technology itself. Artificial intelligence is a way for us to harness technology’s computational power to advance our understanding of questions that require rapid thinking and large datasets — something that, despite our brain’s remarkable abilities, it can struggle with at times. AI is everywhere, including in the applications we use every day.

A branch of artificial intelligence called machine learning is when an AI program is “taught” using large datasets to do something specific, such as recognizing faces in images. A caveat of machine learning is that it requires human intervention to modify and assist the program in improving upon itself. Machine learning is also everywhere: The fields of data science and machine learning are fundamentally intertwined, and machine learning models are used in common, crucial applications like weather prediction, where they assist human experts.

Further down the rabbit hole of artificial intelligence is a subset of machine learning called “deep learning.” Deep learning differs from general machine learning in a key way: Deep learning requires what are called “neural nets,” which allow the program to optimize itself with less human intervention. Machine learning algorithms usually require humans to select and program their features; deep learning doesn’t, relying instead on the program’s ability to analyze large datasets and discern trends on its own.
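
As a rough illustration of that difference, here is a minimal sketch in Python (using NumPy; the “image,” the chosen features, and the sizes are made-up examples, not any particular system): in classic machine learning, a person decides in advance which measurements the model sees, while a deep learning model takes the raw data and learns its own internal features during training.

# Illustrative sketch only; the image and features are arbitrary examples.
import numpy as np

image = np.random.rand(32, 32)  # a stand-in for a small grayscale image

# Classic machine learning: a human chooses the features in advance,
# such as overall brightness and left/right contrast.
hand_crafted_features = np.array([
    image.mean(),
    image[:, :16].mean() - image[:, 16:].mean(),
])

# Deep learning: the raw pixels go in unchanged, and the network's layers
# learn their own internal features from large amounts of training data.
raw_input_for_deep_net = image.reshape(-1)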

Neural networks were developed to emulate the way the brain processes information. An input passes through layers where information is analyzed and transmitted from one layer to the next, concluding with a final output. Some tasks could be accomplished using either machine learning or deep learning, but how those tasks are achieved is likely to be different, at least initially. We don’t fully know how the human brain processes increasingly complex subjects, but we do know the processing happens through relationships between neurons and the networks they form. Artificial neural networks use these same concepts, backed by computational power. And different tasks are optimized in different ways: The way the human brain processes visual stimuli is very different from the way it processes auditory stimuli, which in turn differs from how it processes scents.
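
To make the “input, layers, output” idea concrete, here is a bare-bones sketch in Python with NumPy. The layer sizes and random numbers are arbitrary, and real networks learn their weights from data rather than using random values, but the structure is the point: each layer transforms the previous layer’s output until a final output emerges.

# A bare-bones sketch of the layered flow described above (illustrative only;
# real networks learn these weights from data instead of using random values).
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights):
    # Each artificial "neuron" sums its weighted inputs and applies a simple
    # non-linearity, loosely analogous to a neuron firing or staying quiet.
    return np.maximum(0, inputs @ weights)

x = rng.random(8)                                # the input, e.g. a handful of pixel values
hidden_1 = layer(x, rng.random((8, 16)))         # first layer: simple patterns
hidden_2 = layer(hidden_1, rng.random((16, 16))) # second layer: combinations of those patterns
output = hidden_2 @ rng.random((16, 3))          # final output, e.g. scores for three categories
print(output)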

So while artificial neural networks were invented based on our understanding of how the brain processes the world, directly investigating the brain’s circuitry is far from simple, which gives neural networks a valuable role as a (basic) computational model of the brain.

How are they helping us understand the brain?

Neural networks were inspired by an existing understanding of the brain and how it is organized to process the world. Perhaps the most intriguing aspect of the brain’s processing is the multi-layered, hierarchical way it translates stimuli into a more holistic view of the world.

Take, for example, how the brain processes visual stimuli. It’s not our eyes that make sense of the scenes we observe; it’s our brain. But how do we “know” what we’re seeing, especially with scenes as crowded and stimulating as the ones we encounter every day?

Well, the brain, as we’ve noted many times before, is incredibly complex and well-organized. Evolution has a powerful way of optimizing the way organisms develop. In our case, we process visual stimuli through multiple layers of brain processing — which in the brain exist as groups of neurons with specialized functions. Each layer adds more context to the previous one: the first layers pick out the edges and contours of a scene, and subsequent layers analyze more complex details. Finally, the brain combines all of these signals into a single output, which is what we actually perceive. Fascinatingly, we don’t need an extensive prior dataset to “learn” what something is, nor do we need exact specifics. Machine learning, on the other hand, does require more extensive training to improve its accuracy.
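
The deep learning analogue of that edge-to-detail hierarchy is the convolutional network. Below is a minimal sketch using PyTorch, one common deep learning library; the layer sizes, the tiny 28-by-28 “scene,” and the five categories are arbitrary illustrative choices, not a model of any real brain area. Early layers tend to learn edge-like detectors, deeper layers combine them into more complex patterns, and the final layer produces a single set of scores.

# Illustrative only: a tiny convolutional network whose layered structure loosely
# mirrors the hierarchy described above. It is not a model of the visual cortex.
import torch
import torch.nn as nn

vision_sketch = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # early layer: edge- and contour-like detectors
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # deeper layer: combinations of simpler patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # summarize what was detected across the image
    nn.Flatten(),
    nn.Linear(16, 5),                            # final output: scores for five example categories
)

image = torch.randn(1, 1, 28, 28)  # one small, made-up grayscale "scene"
scores = vision_sketch(image)      # a single output: one score per category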

The way humans make sense of what we see is relatively well understood, which makes it a fruitful area to mine when exploring how neural nets can deepen our understanding of the brain. In research, it’s important to first validate models before they can be applied to outstanding questions. To do this, researchers used a neural net to “see” and discern between objects and faces, which we know are processed differently in the brain. If their neural net mimicked the way the brain processes visual stimuli, it might be able to predict the way the brain would respond to an image it had never seen before. The neural net succeeded on both counts. Researchers have had similar successes with auditory and olfactory pathways as well.

It would be a mistake to say that neural nets give us a true analogue of our brain, but the promise of neural nets — and artificial intelligence — is not to act as a replacement for the human brain, but as a complement. The fact that neural nets form hierarchies similar to our brain’s demonstrates that there is some degree of concordance, at least in how processes are optimized in highly complex computational systems. They might not give us discrete answers to how the brain functions, but they provide us with a way to test hypotheses.

Deep learning is not a far-away fantasy, either — it’s already being used to improve care for neurological conditions.

Deep learning’s applications in neurological diseases

New discoveries offer hope in medicine in multiple ways: They might help improve the prevention and diagnosis of a condition, influence how therapeutics are developed, or improve the understanding of the typical progression of a given condition. Deep learning is already offering promise in all these respects. 

Properly diagnosing Alzheimer’s disease (AD) and finding sufficient biomarkers remain major challenges in neurology for a variety of reasons. Alzheimer’s is a condition with many known risk factors and proposed mechanisms, which means there are many possible applications for deep learning that are already being investigated. One such problem is clinical trial enrollment and patient stratification. With deep learning improving the ability to diagnose Alzheimer’s — and to predict if and when somebody will advance from mild cognitive impairment (MCI) to AD — clinical trial enrollment and completion could improve significantly. This attempt at improved diagnosis and prognostication has already shown promise in peer-reviewed papers.

Consider how profound this is from multiple angles: Improved diagnosis means people can begin receiving treatment and care before symptoms and pathology have progressed; improved sensitivity means patients who don’t have the full panel of AD symptoms but show a higher likelihood of developing the disease can take stronger steps to prevent or delay its onset, and might therefore experience life extension by delaying an AD diagnosis; and finally, with improved clinical trial enrollment and completion, a deeper understanding of the condition can be gained through clinical trials, patients will experience less disappointment because fewer will be excluded, and the cost-effectiveness of clinical trials will improve, which will likely result in more clinical trials.

When a promising technique is applied at every phase in the field, there can be a cascade effect of compounding improvements, which underscores how important it is that these applications are guided by patient-centered bioethics. Artificial intelligence can be scary for some; the idea that a computer could be guiding medical decisions sounds like dystopian science fiction. But technology is always at its best when it complements humans: While imaging tools like MRIs and CTs are not quite artificial intelligence, it would be a mistake to dismiss their value out of a fear of technology. AI should be considered similarly: So long as we center patients in research, these concerns can be avoided or at least ameliorated.

Deep learning shows promise not only in Alzheimer’s research but in other neurological conditions as well.

Researchers have used machine learning to classify subtypes of multiple sclerosis (MS) from MRIs: They found that their results could likely be used to predict disease progression and response to treatment, and even to stratify patients in interventional trials.

The compound effect is evident here: If these MS subtypes were validated and implemented in the clinic as part of the diagnostic process, patients would have a better understanding of their likely progression and would experience less uncertainty when their clinicians prescribe therapies. Further, clinical trials would probably produce more nuanced insights, since interventional trials could be divided by subtype. These benefits are synergistic: They improve the overall experience for patients, reduce uncertainty, provide more ways to investigate the condition, and deepen the clinical and scientific understanding of both the patient community and the mechanisms of the condition. And they would improve drug development!

Conclusion

As it always goes with new discoveries and technological advancements, there’s a wave of promise, disappointment, fear, and uncertainty. Reasonably so, especially in the medical field. But as long as researchers are guided by bioethics that center the patient, these advancements could have a profound effect on anybody affected by a neurological condition. To best research a condition, researchers focus not only on improving therapies; they also focus on improving the understanding of the brain, of how the condition arises and progresses, and of the overarching research framework. Deep learning offers promise by using technological advancements to analyze the massive datasets that are becoming more and more common. The promise is there: The next step is validation in the clinic.
