Meta-cognitive Multi-scale Hierarchical Reasoning for Motor Imagery Decoding
- URL: http://arxiv.org/abs/2511.07884v1
- Date: Wed, 12 Nov 2025 01:26:19 GMT
- Title: Meta-cognitive Multi-scale Hierarchical Reasoning for Motor Imagery Decoding
- Authors: Si-Hyun Kim, Heon-Gyu Kwak, Byoung-Hee Kwon, Seong-Whan Lee
- Abstract summary: This work investigates a hierarchical and meta-cognitive decoding framework for four-class motor imagery (MI) classification from electroencephalogram (EEG) signals. We introduce a multi-scale hierarchical signal processing module that reorganizes backbone features into temporal multi-scale representations. We instantiate this framework on three standard EEG backbones and evaluate four-class MI decoding using the BCI Competition IV-2a dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Brain-computer interface (BCI) aims to decode motor intent from noninvasive neural signals to enable control of external devices, but practical deployment remains limited by noise and variability in motor imagery (MI)-based electroencephalogram (EEG) signals. This work investigates a hierarchical and meta-cognitive decoding framework for four-class MI classification. We introduce a multi-scale hierarchical signal processing module that reorganizes backbone features into temporal multi-scale representations, together with an introspective uncertainty estimation module that assigns per-cycle reliability scores and guides iterative refinement. We instantiate this framework on three standard EEG backbones (EEGNet, ShallowConvNet, and DeepConvNet) and evaluate four-class MI decoding using the BCI Competition IV-2a dataset under a subject-independent setting. Across all backbones, the proposed components improve average classification accuracy and reduce inter-subject variance compared to the corresponding baselines, indicating increased robustness to subject heterogeneity and noisy trials. These results suggest that combining hierarchical multi-scale processing with introspective confidence estimation can enhance the reliability of MI-based BCI systems.
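The two proposed components, temporal multi-scale reorganization of backbone features and per-cycle reliability scoring that guides refinement, can be sketched in miniature. The NumPy sketch below is an illustration under stated assumptions, not the authors' implementation: the pooling scales, the entropy-based reliability score, and all function names are hypothetical.

```python
import numpy as np

def multiscale_features(x, scales=(1, 2, 4)):
    """Reorganize a (channels, time) backbone feature map into temporal
    multi-scale representations: average-pool over windows of each scale,
    then stretch each pooled copy back toward the original length."""
    c, t = x.shape
    reps = []
    for s in scales:
        usable = t - (t % s)                       # trim so t divides evenly
        pooled = x[:, :usable].reshape(c, usable // s, s).mean(axis=2)
        reps.append(np.repeat(pooled, s, axis=1))  # upsample to ~full length
    return reps

def refine_with_reliability(logits_per_cycle):
    """Fuse per-cycle class logits, weighting each refinement cycle by an
    introspective reliability score (here: exp(-entropy) of its softmax)."""
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()
    probs = [softmax(l) for l in logits_per_cycle]
    entropy = np.array([-(p * np.log(p + 1e-12)).sum() for p in probs])
    weights = np.exp(-entropy)         # confident (low-entropy) cycles dominate
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, probs))
```

In this toy version a confident refinement cycle contributes more to the fused four-class prediction than a near-uniform one, which mirrors the idea of per-cycle reliability scores guiding iterative refinement.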
Related papers
- Component-Aware Pruning Framework for Neural Network Controllers via Gradient-Based Importance Estimation [0.34410212782758043]
This paper introduces a component-aware pruning framework that utilizes gradient information to compute three distinct importance metrics during training. Experimental results with an autoencoder and a TDMPC agent demonstrate that the proposed framework reveals critical structural dependencies and dynamic shifts in importance.
arXiv Detail & Related papers (2026-01-27T16:53:19Z) - MS-ISSM: Objective Quality Assessment of Point Clouds Using Multi-scale Implicit Structural Similarity [65.85858856481131]
The unstructured and irregular nature of point clouds poses a significant challenge for objective point cloud quality assessment (PCQA). We propose the Multi-scale Implicit Structural Similarity Measurement (MS-ISSM).
arXiv Detail & Related papers (2026-01-03T14:58:52Z) - GCMCG: A Clustering-Aware Graph Attention and Expert Fusion Network for Multi-Paradigm, Multi-task, and Cross-Subject EEG Decoding [0.7871262900865523]
Brain-Computer Interfaces (BCIs) based on Motor Imagery (MI) electroencephalogram (EEG) signals offer a direct pathway for human-machine interaction. This paper proposes Graph-guided Clustering Mixture-of-Experts CNNGRUG, a novel unified framework for MI-ME EEG decoding.
arXiv Detail & Related papers (2025-11-29T18:05:33Z) - Deep Learning Architectures for Code-Modulated Visual Evoked Potentials Detection [0.40822165794627957]
Non-invasive brain-computer interfaces (BCIs) based on Code-Modulated Visual Evoked Potentials (C-VEPs) require highly robust decoding methods to address temporal variability and session-dependent noise in EEG signals. This study proposes and evaluates several deep learning architectures, including convolutional neural networks (CNNs) for 63-bit m-sequence reconstruction and classification, and Siamese networks for similarity-based decoding, alongside canonical correlation analysis (CCA) baselines. The proposed deep models significantly outperformed traditional approaches, with distance-based decoding using Earth Mover's Distance (EMD) showing greater robustness.
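Distance-based decoding with Earth Mover's Distance, as named in the summary above, can be illustrated with the closed form of 1-D EMD. This is a hedged sketch, not the study's pipeline: the normalization step, the template-matching setup, and the function names are assumptions.

```python
import numpy as np

def emd_1d(p, q):
    """1-D Earth Mover's Distance between two nonnegative sequences on the
    same support: after normalizing each to unit mass, it equals the L1 norm
    of the difference of the cumulative sums."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

def decode_by_emd(reconstructed, templates):
    """Return the index of the template code nearest (in EMD) to the
    network's reconstructed sequence."""
    return int(np.argmin([emd_1d(reconstructed, t) for t in templates]))
```

The sketch assumes 0/1 codes; bipolar m-sequences would first be shifted to be nonnegative so the mass interpretation holds.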
arXiv Detail & Related papers (2025-11-26T22:02:22Z) - MultiDiffNet: A Multi-Objective Diffusion Framework for Generalizable Brain Decoding [1.6528632644902828]
We introduce MultiDiffNet, a diffusion-based framework that bypasses generative augmentation entirely by learning a compact latent space optimized for multiple objectives. We decode directly from this space and achieve state-of-the-art generalization across various neural decoding tasks using subject- and session-disjoint evaluation.
arXiv Detail & Related papers (2025-11-23T05:22:27Z) - Source-Free Object Detection with Detection Transformer [59.33653163035064]
Source-Free Object Detection (SFOD) enables knowledge transfer from a source domain to an unsupervised target domain for object detection without access to source data. Most existing SFOD approaches are either confined to conventional object detection (OD) models like Faster R-CNN or designed as general solutions without tailored adaptations for novel OD architectures, especially Detection Transformer (DETR). In this paper, we introduce the Feature Reweighting ANd Contrastive Learning NetworK (FRANCK), a novel SFOD framework specifically designed to perform query-centric feature enhancement for DETRs.
arXiv Detail & Related papers (2025-10-13T07:35:04Z) - When Brain Foundation Model Meets Cauchy-Schwarz Divergence: A New Framework for Cross-Subject Motor Imagery Decoding [21.816266585365042]
MI-EEG decoding remains challenging due to substantial inter-subject variability and limited labeled target data. Many existing multi-source domain adaptation (MSDA) methods indiscriminately incorporate all available source domains. We propose a novel MSDA framework that leverages a pretrained large Brain Foundation Model (BFM) for dynamic and informed source-subject selection.
arXiv Detail & Related papers (2025-07-28T17:55:26Z) - Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts [79.18608192761512]
Self-Explainable Models (SEMs) rely on Prototypical Concept Learning (PCL) to make their visual recognition processes more interpretable. We propose a Few-Shot Prototypical Concept Classification framework that mitigates two key challenges under low-data regimes: parametric imbalance and representation misalignment. Our approach consistently outperforms existing SEMs by a notable margin, with 4.2%-8.7% relative gains in 5-way 5-shot classification.
arXiv Detail & Related papers (2025-06-05T06:39:43Z) - MoCA: Multi-modal Cross-masked Autoencoder for Digital Health Measurements [2.8493802389913694]
We propose the Multi-modal Cross-masked Autoencoder (MoCA), a self-supervised learning framework that combines a transformer architecture with masked autoencoder (MAE) methodology. MoCA demonstrates strong performance boosts across reconstruction and downstream classification tasks on diverse benchmark datasets. Our approach offers a novel solution for leveraging unlabeled multi-modal wearable data while handling missing modalities, with broad applications across digital health domains.
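The cross-masking idea above can be sketched as a mask generator for multi-modal patch sequences. This is a minimal NumPy sketch under assumptions: the mask ratio, the rule that every patch stays visible in at least one modality, and the function name are all hypothetical, not MoCA's actual scheme.

```python
import numpy as np

def cross_mask(n_modalities, n_patches, mask_ratio=0.5, rng=None):
    """Build a boolean mask of shape (n_modalities, n_patches), True = masked.
    Patches are masked independently per modality, but every patch is kept
    visible in at least one modality so cross-modal context always exists."""
    rng = np.random.default_rng(rng)
    mask = rng.random((n_modalities, n_patches)) < mask_ratio
    fully_hidden = mask.all(axis=0)
    if fully_hidden.any():
        # Re-reveal one randomly chosen modality for patches hidden everywhere.
        reveal = rng.integers(0, n_modalities, size=fully_hidden.sum())
        mask[reveal, np.where(fully_hidden)[0]] = False
    return mask
```

A reconstruction objective would then ask the model to predict the masked patches of each modality from the visible patches of the others, which is one plausible reading of "cross-masked".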
arXiv Detail & Related papers (2025-06-02T21:07:25Z) - EEGEncoder: Advancing BCI with Transformer-Based Motor Imagery Classification [11.687193535939798]
Brain-computer interfaces (BCIs) harness electroencephalographic signals for direct neural control of devices.
Traditional machine learning methods for EEG-based motor imagery (MI) classification encounter challenges such as manual feature extraction and susceptibility to noise.
This paper introduces EEGEncoder, a deep learning framework that employs modified transformers and temporal convolutional networks (TCNs) to surmount these limitations.
arXiv Detail & Related papers (2024-04-23T09:51:24Z) - You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment [45.62136459502005]
We propose a single network that performs both full-reference (FR) and no-reference (NR) image quality assessment (IQA).
We first employ an encoder to extract multi-level features from input images.
A Hierarchical Attention (HA) module is proposed as a universal adapter for both FR and NR inputs.
A Semantic Distortion Aware (SDA) module is proposed to examine feature correlations between shallow and deep layers of the encoder.
arXiv Detail & Related papers (2023-10-14T11:03:04Z) - RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation [3.57686754209902]
Quantification of retinal fluids is necessary for OCT-guided treatment management.
New convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation.
Model benefits from hierarchical representation learning of textural, contextual, and edge features.
arXiv Detail & Related papers (2022-09-26T07:18:00Z) - Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z) - Modal-Adaptive Gated Recoding Network for RGB-D Salient Object Detection [2.9153096940947796]
We propose a novel gated recoding network (GRNet) to evaluate the information validity of the two modalities.
A perception encoder is adopted to extract multi-level single-modal features.
A modal-adaptive gate unit is proposed to suppress the invalid information and transfer the effective modal features to the recoding mixer and the hybrid branch decoder.
arXiv Detail & Related papers (2021-08-13T15:08:21Z) - Deep Learning-based Implicit CSI Feedback in Massive MIMO [68.81204537021821]
We propose a DL-based implicit feedback architecture to inherit the low-overhead characteristic, which uses neural networks (NNs) to replace the precoding matrix indicator (PMI) encoding and decoding modules.
For a single resource block (RB), the proposed architecture can save 25.0% and 40.0% of overhead compared with Type I codebook under two antenna configurations.
arXiv Detail & Related papers (2021-05-21T02:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.