Contrastive and Multi-Task Learning on Noisy Brain Signals with Nonlinear Dynamical Signatures
- URL: http://arxiv.org/abs/2601.08549v2
- Date: Thu, 22 Jan 2026 10:10:04 GMT
- Title: Contrastive and Multi-Task Learning on Noisy Brain Signals with Nonlinear Dynamical Signatures
- Authors: Sucheta Ghosh, Zahra Monfared, Felix Dietrich
- Abstract summary: We introduce a two-stage multitask learning framework for analyzing EEG signals. In the first stage, a denoising autoencoder is trained to suppress artifacts and stabilize temporal dynamics. In the second stage, a multitask architecture processes these denoised signals to achieve three objectives.
- Score: 5.37454752035459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a two-stage multitask learning framework for analyzing Electroencephalography (EEG) signals that integrates denoising, dynamical modeling, and representation learning. In the first stage, a denoising autoencoder is trained to suppress artifacts and stabilize temporal dynamics, providing robust signal representations. In the second stage, a multitask architecture processes these denoised signals to achieve three objectives: motor imagery classification, chaotic versus non-chaotic regime discrimination using Lyapunov exponent-based labels, and self-supervised contrastive representation learning with NT-Xent loss. A convolutional backbone combined with a Transformer encoder captures spatial-temporal structure, while the dynamical task encourages sensitivity to nonlinear brain dynamics. This staged design mitigates interference between reconstruction and discriminative goals, improves stability across datasets, and supports reproducible training by clearly separating noise reduction from higher-level feature learning. Empirical studies show that our framework not only enhances robustness and generalization but also surpasses strong baselines and recent state-of-the-art methods in EEG decoding, highlighting the effectiveness of combining denoising, dynamical features, and self-supervised learning.
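The abstract's third objective, self-supervised contrastive learning with the NT-Xent (normalized temperature-scaled cross-entropy) loss, can be illustrated with a short NumPy sketch. This is a generic implementation of the standard SimCLR-style loss, not the authors' code; the batch layout (rows i and i+N holding the two augmented views of the same EEG segment) and the temperature value are assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over a batch of 2N embeddings.

    z: array of shape (2N, d), where rows i and i+N are the two
    augmented views of the same underlying EEG segment (assumed layout).
    """
    n = z.shape[0] // 2
    # L2-normalize embeddings so dot products become cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # (2N, 2N) scaled similarity matrix
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity from softmax
    # Index of each row's positive partner: i pairs with i+n and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of the positive entry against all non-self entries,
    # averaged over all 2N rows: mean_i [logsumexp(sim_i) - sim_i[pos_i]].
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])
```

The loss is minimized when each segment's two views are more similar to each other than to every other embedding in the batch, which is what pushes the encoder toward augmentation-invariant representations.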
Related papers
- Brain-Semantoks: Learning Semantic Tokens of Brain Dynamics with a Self-Distilled Foundation Model [0.27528170226206433]
We introduce Brain-Semantoks, a self-supervised framework to learn abstract representations of brain dynamics. Its architecture is built on two core innovations, including a semantic tokenizer that aggregates noisy regional signals into robust tokens representing functional networks. We show that the learned representations enable strong performance on a variety of downstream tasks even when only using a linear probe.
arXiv Detail & Related papers (2025-12-12T14:11:20Z) - Rethinking the Role of Dynamic Sparse Training for Scalable Deep Reinforcement Learning [58.533203990515034]
Scaling neural networks has driven breakthrough advances in machine learning, yet this paradigm fails in deep reinforcement learning (DRL). We show that dynamic sparse training strategies provide module-specific benefits that complement the primary scalability foundation established by architectural improvements. Finally, we distill these insights into Module-Specific Training (MST), a practical framework that exploits the benefits of architectural improvements and demonstrates substantial scalability gains across diverse RL algorithms without algorithmic modifications.
arXiv Detail & Related papers (2025-10-14T03:03:08Z) - Temporal-Aware Iterative Speech Model for Dementia Detection [0.0]
Current methods for automated dementia detection using speech rely on static, time-agnostic features or aggregated linguistic content. We introduce TAI-Speech, a Temporal Aware Iterative framework that dynamically models spontaneous speech for dementia detection. Our work provides a more flexible and robust solution for automated cognitive assessment, operating directly on the dynamics of raw audio.
arXiv Detail & Related papers (2025-09-26T01:56:07Z) - DynaMind: Reconstructing Dynamic Visual Scenes from EEG by Aligning Temporal Dynamics and Multimodal Semantics to Guided Diffusion [10.936858717759156]
We introduce DynaMind, a novel framework that reconstructs video by jointly modeling neural dynamics and semantic features. On the SEED-DV dataset, DynaMind sets a new state-of-the-art (SOTA), boosting reconstructed video accuracies by 12.5 and 10.3 percentage points. This marks a critical advancement, bridging the gap between neural dynamics and high-fidelity visual semantics.
arXiv Detail & Related papers (2025-09-01T06:52:08Z) - Spatial-Temporal Transformer with Curriculum Learning for EEG-Based Emotion Recognition [2.847161275680418]
SST-CL is a novel framework integrating spatial-temporal transformers with curriculum learning. An intensity-aware curriculum learning strategy guides training from high-intensity to low-intensity emotional states. Experiments on three benchmark datasets demonstrate state-of-the-art performance across various emotional intensity levels.
arXiv Detail & Related papers (2025-07-19T17:23:38Z) - Reinforced Interactive Continual Learning via Real-time Noisy Human Feedback [59.768119380109084]
This paper introduces an interactive continual learning paradigm where AI models dynamically learn new skills from real-time human feedback. We propose RiCL, a Reinforced interactive Continual Learning framework leveraging Large Language Models (LLMs). Our RiCL approach substantially outperforms existing combinations of state-of-the-art online continual learning and noisy-label learning methods.
arXiv Detail & Related papers (2025-05-15T03:22:03Z) - Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks [50.2033914945157]
We present a representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data.
The framework consists of three stages, including estimating pairwise intensity functions and learning a projection that minimises a notion of intensity reconstruction error. Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses.
arXiv Detail & Related papers (2023-06-09T15:38:25Z) - Self-Supervised Generative-Contrastive Learning of Multi-Modal Euclidean Input for 3D Shape Latent Representations: A Dynamic Switching Approach [53.376029341079054]
We propose a combined generative and contrastive neural architecture for learning latent representations of 3D shapes. The architecture uses two encoder branches for voxel grids and multi-view images from the same underlying shape.
arXiv Detail & Related papers (2023-01-11T18:14:24Z) - Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z) - Dynamic Dual-Attentive Aggregation Learning for Visible-Infrared Person Re-Identification [208.1227090864602]
Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality pedestrian retrieval problem.
Existing VI-ReID methods tend to learn global representations, which have limited discriminability and weak robustness to noisy images.
We propose a novel dynamic dual-attentive aggregation (DDAG) learning method by mining both intra-modality part-level and cross-modality graph-level contextual cues for VI-ReID.
arXiv Detail & Related papers (2020-07-18T03:08:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.