Affect and Effect: Limitations of regularisation-based continual learning in EEG-based emotion classification
- URL: http://arxiv.org/abs/2601.07858v1
- Date: Fri, 09 Jan 2026 17:09:54 GMT
- Title: Affect and Effect: Limitations of regularisation-based continual learning in EEG-based emotion classification
- Authors: Nina Peire, Yupei Li, Björn Schuller
- Abstract summary: Generalisation to unseen subjects in EEG-based emotion classification remains a challenge due to high inter- and intra-subject variability. Regularisation-based continual learning approaches are commonly used as baselines in EEG-based CL studies. This study theoretically and empirically finds that regularisation-based CL methods show limited performance for EEG-based emotion classification.
- Score: 0.38961828230212814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generalisation to unseen subjects in EEG-based emotion classification remains a challenge due to high inter- and intra-subject variability. Continual learning (CL) offers a promising solution by learning from a sequence of tasks while mitigating catastrophic forgetting. Regularisation-based CL approaches, such as Elastic Weight Consolidation (EWC), Synaptic Intelligence (SI), and Memory Aware Synapses (MAS), are commonly used as baselines in EEG-based CL studies, yet their suitability for this problem remains underexplored. This study shows, theoretically and empirically, that regularisation-based CL methods achieve limited performance for EEG-based emotion classification on the DREAMER and SEED datasets. We identify a fundamental misalignment in the stability-plasticity trade-off: regularisation-based methods prioritise mitigating catastrophic forgetting (backward transfer) over adapting to new subjects (forward transfer). We investigate this limitation under subject-incremental sequences and observe that: (1) the heuristics for estimating parameter importance become less reliable under noisy data and covariate shift, (2) gradients on parameters deemed important by these heuristics often interfere with the gradient updates required for new subjects, moving optimisation away from the minimum, (3) importance values accumulated across tasks over-constrain the model, and (4) performance is sensitive to subject order. Forward transfer showed no statistically significant improvement over sequential fine-tuning (p > 0.05 across approaches and datasets). The high variability of EEG signals means past subjects provide limited value for future subjects. Regularisation-based continual learning approaches are therefore of limited use for robust generalisation to unseen subjects in EEG-based emotion classification.
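To make the baselines concrete: EWC-style methods add a quadratic penalty that pulls parameters deemed important for earlier subjects back towards their old values, with importance estimated from a diagonal (empirical) Fisher approximation. The following is a minimal PyTorch sketch, not the authors' code; variable names and the lambda value are illustrative. Note how accumulating importance across subjects reproduces the over-constraining behaviour described in point (3) of the abstract.

    import torch

    def estimate_fisher(model, loader, criterion):
        # Diagonal empirical Fisher: average squared gradient of the task loss.
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
        for x, y in loader:
            model.zero_grad()
            criterion(model(x), y).backward()
            for n, p in model.named_parameters():
                if n in fisher and p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        return {n: f / max(len(loader), 1) for n, f in fisher.items()}

    def ewc_penalty(model, fisher, anchor, lam=1000.0):
        # (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2, anchored at the
        # previous subject's weights theta* (anchor: dict of detached copies).
        loss = 0.0
        for n, p in model.named_parameters():
            if n in fisher:
                loss = loss + (fisher[n] * (p - anchor[n]) ** 2).sum()
        return 0.5 * lam * loss

    # Per subject t: total_loss = task_loss + ewc_penalty(model, fisher, anchor)
    # Summing Fisher terms across subjects (fisher[n] += new_fisher[n]) tightens
    # the constraint after every task, which is the over-regularisation failure
    # mode the paper reports under subject-incremental sequences.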
Related papers
- What Makes Value Learning Efficient in Residual Reinforcement Learning? [57.635661297706065]
Residual reinforcement learning (RL) enables stable online refinement of expressive pretrained policies by freezing the base and learning only bounded corrections. In this work, we identify two key bottlenecks: cold start pathology, where the critic lacks knowledge of the value landscape around the base policy, and structural scale mismatch. We propose DAWN, a minimal approach targeting efficient value learning in residual RL.
arXiv Detail & Related papers (2026-02-11T05:25:39Z)
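For orientation on the residual-RL setup described in the entry above: the base policy is frozen and a small network adds a bounded correction to its actions. A hedged sketch of the generic setup only; DAWN's value-learning changes are not shown, and all names here are illustrative.

    import torch
    import torch.nn as nn

    class ResidualPolicy(nn.Module):
        # Frozen pretrained base policy plus a small, bounded learned correction.
        def __init__(self, base: nn.Module, obs_dim: int, act_dim: int, scale: float = 0.1):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # freeze the base policy
            self.residual = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
            )
            self.scale = scale  # bound on the correction magnitude

        def forward(self, obs):
            with torch.no_grad():
                a_base = self.base(obs)
            # tanh keeps the correction within [-scale, scale] per action dim
            return a_base + self.scale * torch.tanh(self.residual(obs))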
- ERIS: An Energy-Guided Feature Disentanglement Framework for Out-of-Distribution Time Series Classification [51.07970070817353]
An ideal time series classification (TSC) model should be able to capture invariant representations. Current methods are largely unguided, lacking the semantic direction required to isolate truly universal features. We propose an end-to-end Energy-Regularized Information for Shift-Robustness (ERIS) framework to enable guided and reliable feature disentanglement.
arXiv Detail & Related papers (2025-08-19T12:13:41Z)
- Decomposing the Entropy-Performance Exchange: The Missing Keys to Unlocking Effective Reinforcement Learning [106.68304931854038]
Reinforcement learning with verifiable rewards (RLVR) has been widely used for enhancing the reasoning abilities of large language models (LLMs). We conduct a systematic empirical analysis of the entropy-performance exchange mechanism of RLVR across different levels of granularity. Our analysis reveals that, in the rising stage, entropy reduction in negative samples facilitates the learning of effective reasoning patterns. In the plateau stage, learning efficiency strongly correlates with high-entropy tokens present in low-perplexity samples and those located at the end of sequences.
arXiv Detail & Related papers (2025-08-04T10:08:10Z)
- Commuting Distance Regularization for Timescale-Dependent Label Inconsistency in EEG Emotion Recognition [1.4499463058550683]
We address the often-overlooked issue of Timescale-Dependent Label Inconsistency (TsDLI) in training neural network models for EEG-based human emotion recognition. We propose two novel regularization strategies: Local Variation Loss (LVL) and Local-Global Consistency Loss (LGCL). Results consistently show that our proposed methods outperform state-of-the-art baselines.
arXiv Detail & Related papers (2025-07-15T01:22:14Z)
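For orientation only: one plausible reading of a "local variation" regulariser is a smoothness penalty on predictions over neighbouring EEG time windows, assuming emotion labels change slowly at short timescales. This is an assumption, not the paper's definition of LVL or LGCL; a minimal sketch:

    import torch

    def local_variation_loss(probs: torch.Tensor) -> torch.Tensor:
        # probs: (batch, time, classes) per-window class probabilities.
        # Penalise large jumps between neighbouring windows (assumed form).
        return (probs[:, 1:, :] - probs[:, :-1, :]).abs().sum(dim=-1).mean()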
- Zero-Shot EEG-to-Gait Decoding via Phase-Aware Representation Learning [9.49131859415923]
We propose NeuroDyGait, a domain-generalizable EEG-to-motion decoding framework. It uses structured contrastive representation learning and relational domain modeling to achieve semantic alignment between EEG and motion embeddings. It achieves zero-shot motion prediction for unseen individuals without requiring adaptation, as well as superior performance in cross-subject gait decoding on benchmark datasets.
arXiv Detail & Related papers (2025-06-24T06:03:49Z)
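The cross-modal alignment described in the entry above can be illustrated with a standard symmetric InfoNCE objective between paired EEG and motion embeddings. This is a generic sketch under that assumption; NeuroDyGait's actual losses may differ.

    import torch
    import torch.nn.functional as F

    def symmetric_info_nce(eeg_emb, motion_emb, tau=0.07):
        # Paired batches: row i of eeg_emb corresponds to row i of motion_emb.
        eeg = F.normalize(eeg_emb, dim=-1)
        mot = F.normalize(motion_emb, dim=-1)
        logits = eeg @ mot.t() / tau
        targets = torch.arange(eeg.size(0), device=logits.device)
        # Contrast in both directions: EEG-to-motion and motion-to-EEG.
        return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))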
- EKPC: Elastic Knowledge Preservation and Compensation for Class-Incremental Learning [53.88000987041739]
Class-Incremental Learning (CIL) aims to enable AI models to continuously learn from sequentially arriving data of different classes over time. We propose the Elastic Knowledge Preservation and Compensation (EKPC) method, integrating Importance-aware Parameter Regularization (IPR) and Trainable Semantic Drift Compensation (TSDC) for CIL.
arXiv Detail & Related papers (2025-06-14T05:19:58Z)
- NDCG-Consistent Softmax Approximation with Accelerated Convergence [67.10365329542365]
We propose novel loss formulations that align directly with ranking metrics. We integrate the proposed RG losses with the highly efficient Alternating Least Squares (ALS) optimization method. Empirical evaluations on real-world datasets demonstrate that our approach achieves comparable or superior ranking performance.
arXiv Detail & Related papers (2025-06-11T06:59:17Z)
- LEL: A Novel Lipschitz Continuity-constrained Ensemble Learning Model for EEG-based Emotion Recognition [6.9292405290420005]
We introduce LEL (Lipschitz continuity-constrained Ensemble Learning), a novel framework that enhances EEG-based emotion recognition. Experimental results on three public benchmark datasets demonstrate LEL's state-of-the-art performance.
arXiv Detail & Related papers (2025-04-12T09:41:23Z)
- Temporal-Difference Variational Continual Learning [77.92320830700797]
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations. Our approach effectively mitigates catastrophic forgetting, outperforming strong variational CL methods.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
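For context, standard variational continual learning regularises the task-t posterior towards the task-(t-1) posterior; the temporal-difference variant above reportedly integrates several previous posteriors rather than only the most recent one. A sketch of the standard single-predecessor objective with mean-field Gaussians (illustrative names, not the paper's code):

    import torch
    from torch.distributions import Normal, kl_divergence

    def vcl_objective(expected_log_lik, q_mu, q_sigma, prev_mu, prev_sigma):
        # ELBO for task t: E_q[log p(D_t | theta)] - KL(q_t || q_{t-1}).
        # Sigmas must be positive (e.g. produced via softplus).
        kl = kl_divergence(Normal(q_mu, q_sigma), Normal(prev_mu, prev_sigma)).sum()
        return expected_log_lik - kl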
- Continual Human Pose Estimation for Incremental Integration of Keypoints and Pose Variations [12.042768320132694]
This paper reformulates cross-dataset human pose estimation as a continual learning task. We benchmark this formulation against established regularization-based methods for mitigating catastrophic forgetting. We show that our approach outperforms existing regularization-based continual learning strategies.
arXiv Detail & Related papers (2024-09-30T16:29:30Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- Latent Spectral Regularization for Continual Learning [21.445600749028923]
We study catastrophic forgetting by investigating the geometric characteristics of the learner's latent space. We propose a geometric regularizer that enforces weak requirements on the Laplacian spectrum of the latent space.
arXiv Detail & Related papers (2023-01-09T13:56:59Z)
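One possible instantiation of the spectral regulariser in the entry above, sketched under two assumptions that are not taken from the paper: the latent graph is built from a Gaussian kernel over batch features, and "weak requirements" means keeping the smallest non-trivial Laplacian eigenvalues bounded away from zero.

    import torch

    def laplacian_spectrum_penalty(z: torch.Tensor, sigma: float = 1.0,
                                   k: int = 5, margin: float = 0.1) -> torch.Tensor:
        # z: (batch, dim) latent features; batch size must exceed k.
        d2 = torch.cdist(z, z) ** 2
        W = torch.exp(-d2 / (2 * sigma ** 2))   # dense Gaussian-kernel affinity
        L = torch.diag(W.sum(dim=1)) - W        # unnormalised graph Laplacian
        eigs = torch.linalg.eigvalsh(L)         # ascending; eigs[0] is ~0
        # Assumed weak constraint: push the next k eigenvalues above a margin,
        # i.e. keep the latent graph well connected.
        return torch.relu(margin - eigs[1:k + 1]).sum()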