Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic
Programmed Deep Kernels
- URL: http://arxiv.org/abs/2009.07738v3
- Date: Tue, 12 Jan 2021 15:54:14 GMT
- Title: Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic
Programmed Deep Kernels
- Authors: Alexander Lavin
- Abstract summary: We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a probabilistic programmed deep kernel learning approach to
personalized, predictive modeling of neurodegenerative diseases. Our analysis
considers a spectrum of neural and symbolic machine learning approaches, which
we assess for predictive performance and important medical AI properties such
as interpretability, uncertainty reasoning, data-efficiency, and leveraging
domain knowledge. Our Bayesian approach combines the flexibility of Gaussian
processes with the structural power of neural networks to model biomarker
progressions, without needing clinical labels for training. We run evaluations
on the problem of Alzheimer's disease prediction, yielding results that surpass
deep learning in both accuracy and timeliness of predicting neurodegeneration,
and with the practical advantages of Bayesian nonparametrics and probabilistic
programming.
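The core idea the abstract describes, combining the flexibility of Gaussian processes with the structural power of neural networks, can be sketched as a kernel evaluated on neural-network features, used inside standard GP regression. Below is a minimal NumPy sketch with a fixed (untrained) one-layer feature map and made-up toy "biomarker" data; all names, weights, and data are illustrative assumptions, not the paper's actual model or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "biomarker progression" data: a smooth monotone curve over time.
X = np.linspace(0, 5, 40)[:, None]                      # years since baseline
y = np.tanh(X[:, 0] - 2.5) + 0.05 * rng.standard_normal(40)

# Hypothetical neural feature map: one tanh layer with fixed random weights.
W = 0.5 * rng.standard_normal((1, 8))
b = rng.standard_normal(8)

def phi(x):
    """Map inputs through the (fixed) neural feature extractor."""
    return np.tanh(x @ W + b)

def deep_rbf(a, c, lengthscale=1.0):
    """Deep kernel: an RBF kernel evaluated on the learned features phi(.)."""
    fa, fc = phi(a), phi(c)
    d2 = ((fa[:, None, :] - fc[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Standard GP regression posterior mean: K_* (K + sigma^2 I)^{-1} y
sigma2 = 0.05**2
K = deep_rbf(X, X) + sigma2 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

X_test = np.linspace(0, 5, 9)[:, None]
mean = deep_rbf(X_test, X) @ alpha                      # predicted progression
```

In a real deep kernel learning setup the feature-map weights and kernel hyperparameters would be fit jointly by maximizing the GP marginal likelihood; the sketch keeps them fixed only to show how the pieces compose.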
Related papers
- Enhancing learning in artificial neural networks through cellular heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Physics-informed Neural Network Estimation of Material Properties in Soft Tissue Nonlinear Biomechanical Models [2.8763745263714005]
We propose a new approach which relies on the combination of physics-informed neural networks (PINNs) with three-dimensional soft tissue nonlinear biomechanical models.
The proposed learning algorithm encodes information from a limited amount of displacement and, in some cases, strain data, that can be routinely acquired in the clinical setting.
Several benchmarks are presented to show the accuracy and robustness of the proposed method.
arXiv Detail & Related papers (2023-12-15T13:41:20Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted simulated neuronal circuit activity and intrinsically inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Neural Networks from Biological to Artificial and Vice Versa [6.85316573653194]
A key contribution of this paper is the investigation of the impact of a dead neuron on the performance of artificial neural networks (ANNs).
The aim is to assess the potential application of the findings in the biological domain; the expected results may have significant implications for the development of effective treatment strategies for neurological disorders.
arXiv Detail & Related papers (2023-06-05T17:30:07Z)
- Promises and pitfalls of deep neural networks in neuroimaging-based psychiatric research [0.9449650062296824]
Deep neural networks, and in particular convolutional neural networks, have become a powerful tool in medical imaging.
Here, we first give an introduction into methodological key concepts and resulting methodological promises.
After reviewing recent applications within neuroimaging-based psychiatric research, we discuss current challenges.
arXiv Detail & Related papers (2023-01-20T12:05:59Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
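The key ingredient behind pi-VAE's identifiability is a latent prior conditioned on a task variable u (e.g. position or movement condition), so that each condition indexes its own Gaussian over the latents. The closed-form KL term this adds to the ELBO can be illustrated as follows; the two-condition prior and all numbers here are made-up illustrations, not the paper's actual model:

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, diag var_q) || N(mu_p, diag var_p) ) in closed form."""
    return 0.5 * np.sum(var_q / var_p + (mu_p - mu_q) ** 2 / var_p
                        - 1.0 + np.log(var_p / var_q))

# Hypothetical label-conditioned prior: task condition u indexes a Gaussian
# over the latent z, which is what renders the latents identifiable.
prior = {0: (np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         1: (np.array([2.0, -1.0]), np.array([0.5, 0.5]))}

# Encoder posterior for one observation (illustrative numbers).
mu_q, var_q = np.array([1.8, -0.9]), np.array([0.4, 0.6])

# KL regularizer against each condition's prior; training would use the
# prior matching the observation's actual condition label.
kl_per_condition = {u: gaussian_kl(mu_q, var_q, m, v)
                    for u, (m, v) in prior.items()}
```

Here the posterior sits close to condition 1's prior, so its KL penalty under that condition is much smaller than under condition 0, which is the mechanism that ties latent coordinates to task variables.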
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.