LERD: Latent Event-Relational Dynamics for Neurodegenerative Classification
- URL: http://arxiv.org/abs/2602.18195v1
- Date: Fri, 20 Feb 2026 13:03:40 GMT
- Title: LERD: Latent Event-Relational Dynamics for Neurodegenerative Classification
- Authors: Hairong Chen, Yicheng Feng, Ziyu Jia, Samir Bhatt, Hengguan Huang
- Abstract summary: Alzheimer's disease (AD) alters brain electrophysiology and disrupts multichannel EEG dynamics. We propose LERD, an end-to-end electrophysiological neural dynamical system that infers latent neural events and their relational structure directly from multichannel EEG.
- Score: 19.992574981355247
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Alzheimer's disease (AD) alters brain electrophysiology and disrupts multichannel EEG dynamics, making accurate and clinically useful EEG-based diagnosis increasingly important for screening and disease monitoring. However, many existing approaches rely on black-box classifiers and do not explicitly model the underlying dynamics that generate observed signals. To address these limitations, we propose LERD, an end-to-end Bayesian electrophysiological neural dynamical system that infers latent neural events and their relational structure directly from multichannel EEG without event or interaction annotations. LERD combines a continuous-time event inference module with a stochastic event-generation process to capture flexible temporal patterns, while incorporating an electrophysiology-inspired dynamical prior to guide learning in a principled way. We further provide theoretical analysis that yields a tractable bound for training and stability guarantees for the inferred relational dynamics. Extensive experiments on synthetic benchmarks and two real-world AD EEG cohorts demonstrate that LERD consistently outperforms strong baselines and yields physiology-aligned latent summaries that help characterize group-level dynamical differences.
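The abstract states a stability guarantee for the inferred relational dynamics only informally. For a linear relational system, one common form such a guarantee takes is a spectral-radius condition; the sketch below illustrates that check only (the matrices and the criterion are illustrative assumptions, not the paper's actual theorem):

```python
import numpy as np

def is_stable(A):
    # A linear relational system z_{t+1} = A @ z_t is asymptotically stable
    # iff the spectral radius of A (largest |eigenvalue|) is below 1.
    return float(np.max(np.abs(np.linalg.eigvals(A)))) < 1.0

# Hypothetical inferred interaction matrices between two latent event channels.
A_stable = np.array([[0.5, 0.2],
                     [0.1, 0.4]])    # eigenvalues 0.6 and 0.3
A_unstable = np.array([[1.2, 0.0],
                       [0.0, 0.3]])  # eigenvalue 1.2 escapes the unit circle
print(is_stable(A_stable), is_stable(A_unstable))
```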
Related papers
- Stringology-Based Motif Discovery from EEG Signals: an ADHD Case Study [0.0]
We propose a novel computational framework for analyzing electroencephalography (EEG) time series using methods from stringology. The framework adapts order-preserving matching (OPM) and Cartesian tree matching (CTM) to detect temporal motifs. These findings suggest that ADHD-related EEG alterations involve systematic differences in the structure, stability, and hierarchical organization of recurrent temporal patterns.
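Order-preserving matching declares two windows equivalent when their samples share the same relative ranking, which makes motif counting amplitude-invariant. A minimal sketch of such a scan, assuming a toy signal and window length (the paper's actual OPM/CTM algorithms are more elaborate):

```python
import numpy as np

def order_pattern(window):
    # Double argsort yields each sample's rank; two windows are
    # order-isomorphic iff their rank tuples are identical.
    return tuple(np.argsort(np.argsort(window)))

def opm_motif_counts(signal, m):
    # Slide a length-m window and count each order pattern's occurrences.
    counts = {}
    for i in range(len(signal) - m + 1):
        p = order_pattern(signal[i:i + m])
        counts[p] = counts.get(p, 0) + 1
    return counts

sig = np.array([0.1, 0.5, 0.3, 0.7, 0.2, 0.9, 0.4])
counts = opm_motif_counts(sig, 3)
print(counts)  # the "low-high-mid" pattern (0, 2, 1) occurs twice here
```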
arXiv Detail & Related papers (2026-03-03T19:44:55Z)
- ODEBrain: Continuous-Time EEG Graph for Modeling Dynamic Brain Networks [33.66198565629555]
ODEBrain improves significantly over existing methods in forecasting EEG dynamics, with enhanced generalization. Our design ensures that latent representations can capture variations of complex brain states at any given time point.
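Querying the latent state "at any given time point" is the defining property of continuous-time latent models: the state is defined by an ODE rather than a fixed sampling grid. A minimal Euler-integration sketch under a toy linear dynamics (the vector field and dimensions are illustrative, not ODEBrain's model):

```python
import numpy as np

def euler_odeint(f, z0, t):
    # Integrate dz/dt = f(z, t) with explicit Euler over arbitrary time
    # stamps, so the latent state can be evaluated on irregular grids.
    z = np.asarray(z0, dtype=float)
    out = [z.copy()]
    for t0, t1 in zip(t[:-1], t[1:]):
        z = z + (t1 - t0) * f(z, t0)
        out.append(z.copy())
    return np.array(out)

# Toy oscillatory latent dynamics: a 2-D rotation.
W = np.array([[0.0, -1.0],
              [1.0,  0.0]])
t = np.linspace(0.0, 2.0 * np.pi, 2000)
traj = euler_odeint(lambda z, _: W @ z, [1.0, 0.0], t)
```

After one full period the state returns close to its start, up to Euler discretization error.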
arXiv Detail & Related papers (2026-02-26T17:59:10Z)
- Learning Alzheimer's Disease Signatures by bridging EEG with Spiking Neural Networks and Biophysical Simulations [42.091774598477706]
Conventional deep learning approaches for EEG-based Alzheimer's disease detection are computationally intensive and mechanistically opaque. We propose a neuro-bridge framework that links data-driven learning with biophysically grounded simulations.
arXiv Detail & Related papers (2026-01-30T21:54:16Z)
- Adaptive Temporal Dynamics for Personalized Emotion Recognition: A Liquid Neural Network Approach [0.0]
This work presents, to the best of our knowledge, the first comprehensive application of liquid neural networks to EEG-based emotion recognition. The proposed framework combines convolutional feature extraction, liquid neural networks with learnable time constants, and attention-guided fusion. Subject-dependent experiments on the PhyMER dataset across seven emotional classes achieve an accuracy of 95.45%, surpassing previously reported results.
arXiv Detail & Related papers (2026-01-28T12:14:45Z)
- Neuronal Group Communication for Efficient Neural representation [85.36421257648294]
This paper addresses the question of how to build large neural systems that learn efficient, modular, and interpretable representations. We propose Neuronal Group Communication (NGC), a theory-driven framework that reimagines a neural network as a dynamical system of interacting neuronal groups. NGC treats weights as transient interactions between embedding-like neuronal states, with neural computation unfolding through iterative communication among groups of neurons.
arXiv Detail & Related papers (2025-10-19T14:23:35Z)
- WaveMind: Towards a Conversational EEG Foundation Model Aligned to Textual and Visual Modalities [55.00677513249723]
EEG signals simultaneously encode both cognitive processes and intrinsic neural states. We map EEG signals and their corresponding modalities into a unified semantic space to achieve generalized interpretation. The resulting model demonstrates robust classification accuracy while supporting flexible, open-ended conversations.
arXiv Detail & Related papers (2025-09-26T06:21:51Z)
- A Brain-Inspired Gating Mechanism Unlocks Robust Computation in Spiking Neural Networks [5.647576619206974]
We introduce the Dynamic Gated Neuron (DGN), a novel spiking unit in which membrane conductance evolves in response to neuronal activity. Our results highlight, for the first time, biologically plausible dynamic gating as a key mechanism for robust spike-based computation.
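As a rough illustration of activity-dependent conductance, here is a toy leaky integrate-and-fire neuron whose leak grows after each spike and relaxes back, producing spike-frequency adaptation. All constants and the update rule are illustrative stand-ins, not the DGN's actual equations:

```python
import numpy as np

def gated_lif(I, g0=0.1, alpha=0.05, tau_g=20.0, v_th=1.0, dt=1.0):
    # Membrane: dv/dt = I - g*v ; conductance g relaxes toward baseline g0
    # and is bumped by alpha on every spike (the toy "gating" mechanism).
    v, g = 0.0, g0
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (i_t - g * v)
        g += dt * (-(g - g0) / tau_g)
        if v >= v_th:
            spikes.append(t)
            v = 0.0       # reset after spike
            g += alpha    # spiking raises the leak conductance
    return spikes

spikes = gated_lif(np.full(200, 0.15))
```

With constant input, inter-spike intervals lengthen after the first spike because the elevated conductance slows integration.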
arXiv Detail & Related papers (2025-09-03T13:00:49Z)
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder in which the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
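The underdamped Langevin equation couples position and velocity with damping and noise. A minimal Euler-Maruyama sketch, assuming a quadratic potential (the paper learns the potential; all names and constants here are illustrative):

```python
import numpy as np

def underdamped_langevin(x0, v0, grad_U, gamma=0.5, temp=1.0,
                         dt=0.01, steps=1000, seed=0):
    # dx = v dt ; dv = (-grad_U(x) - gamma*v) dt + sqrt(2*gamma*temp*dt)*eps
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        noise = rng.normal(size=x.shape) * np.sqrt(2.0 * gamma * temp * dt)
        v = v + dt * (-grad_U(x) - gamma * v) + noise
        x = x + dt * v
        traj.append(x.copy())
    return np.array(traj)

# Quadratic potential U(x) = 0.5 * x**2, so grad_U(x) = x: inertia plus
# damping pull the latent state toward a noisy equilibrium around zero.
traj = underdamped_langevin([2.0], [0.0], grad_U=lambda x: x)
```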
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- CodeBrain: Towards Decoupled Interpretability and Multi-Scale Architecture for EEG Foundation Model [52.466542039411515]
EEG foundation models (EFMs) have emerged to address the scalability issues of task-specific models. We present CodeBrain, a two-stage EFM designed to fill this gap. In the first stage, we introduce the TFDual-Tokenizer, which decouples heterogeneous temporal and frequency EEG signals into discrete tokens. In the second stage, we propose the multi-scale EEGSSM architecture, which combines structured global convolution with sliding-window attention.
arXiv Detail & Related papers (2025-06-10T17:20:39Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that ''binding'' between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network. We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms. We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning.
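The Kuramoto model underlying these oscillatory neurons drives coupled phases toward synchrony. A minimal simulation, assuming a fully connected coupling graph and the standard scalar order parameter as the synchrony measure (all constants are illustrative):

```python
import numpy as np

def kuramoto_step(theta, omega, K, A, dt=0.05):
    # d(theta_i)/dt = omega_i + (K/deg_i) * sum_j A_ij * sin(theta_j - theta_i)
    diff = theta[None, :] - theta[:, None]
    coupling = (A * np.sin(diff)).sum(axis=1)
    return theta + dt * (omega + K * coupling / A.sum(axis=1))

def order_parameter(theta):
    # |r| = 1 means full phase synchrony, |r| ~ 0 means incoherence.
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 32
theta = rng.uniform(0.0, 2.0 * np.pi, n)
omega = rng.normal(0.0, 0.1, n)     # nearly identical natural frequencies
A = np.ones((n, n)) - np.eye(n)     # fully connected coupling graph
r0 = order_parameter(theta)
for _ in range(400):
    theta = kuramoto_step(theta, omega, K=2.0, A=A)
print(r0, order_parameter(theta))   # synchrony increases under strong coupling
```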
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Attention for Causal Relationship Discovery from Biological Neural Dynamics [9.097847269529202]
This paper explores the potential of transformer models for learning Granger causality in networks with complex nonlinear dynamics at every node. We show that the cross-attention module effectively captures causal relationships among neurons, with accuracy equal to or better than that of the most popular Granger causality analysis method.
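The Granger-causality baseline mentioned here boils down to a residual-variance comparison: x Granger-causes y if x's past improves prediction of y beyond y's own past. A toy linear version on simulated data (the lag order and the simulated coupling are illustrative):

```python
import numpy as np

def granger_ratio(x, y, lag=2):
    # Ratio var(restricted) / var(full); values well above 1 suggest that
    # the past of x helps predict y, i.e. x Granger-causes y.
    rows_res, rows_full, target = [], [], []
    for t in range(lag, len(y)):
        rows_res.append(y[t - lag:t])
        rows_full.append(np.concatenate([y[t - lag:t], x[t - lag:t]]))
        target.append(y[t])
    target = np.array(target)

    def resid_var(rows):
        X = np.column_stack([np.ones(len(rows)), np.array(rows)])
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.var(target - X @ beta)

    return resid_var(rows_res) / resid_var(rows_full)

# Simulate a system where x drives y but not vice versa.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_ratio(x, y), granger_ratio(y, x))  # large forward, ~1 reverse
```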
arXiv Detail & Related papers (2023-11-12T18:59:42Z)
- fMRI from EEG is only Deep Learning away: the use of interpretable DL to unravel EEG-fMRI relationships [68.8204255655161]
We present an interpretable domain grounded solution to recover the activity of several subcortical regions from multichannel EEG data.
We recover individual spatial and time-frequency patterns of scalp EEG predictive of the hemodynamic signal in the subcortical nuclei.
arXiv Detail & Related papers (2022-10-23T15:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.