RatioWaveNet: A Learnable RDWT Front-End for Robust and Interpretable EEG Motor-Imagery Classification
- URL: http://arxiv.org/abs/2510.21841v1
- Date: Wed, 22 Oct 2025 14:04:03 GMT
- Title: RatioWaveNet: A Learnable RDWT Front-End for Robust and Interpretable EEG Motor-Imagery Classification
- Authors: Marco Siino, Giuseppe Bonomo, Rosario Sorbello, Ilenia Tinnirello
- Abstract summary: We present RatioWaveNet, which augments a strong temporal CNN-Transformer backbone with a trainable, Rationally-Dilated Wavelet Transform front end.
Our goal is to test whether this principled wavelet front end improves robustness precisely where BCIs typically fail.
- Score: 1.4069478981641936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Brain-computer interfaces (BCIs) based on motor imagery (MI) translate covert movement intentions into actionable commands, yet reliable decoding from non-invasive EEG remains challenging due to nonstationarity, low SNR, and subject variability. We present RatioWaveNet, which augments a strong temporal CNN-Transformer backbone (TCFormer) with a trainable, Rationally-Dilated Wavelet Transform (RDWT) front end. The RDWT performs an undecimated, multi-resolution subband decomposition that preserves temporal length and shift-invariance, enhancing sensorimotor rhythms while mitigating jitter and mild artifacts; subbands are fused via lightweight grouped 1-D convolutions and passed to a multi-kernel CNN for local temporal-spatial feature extraction, a grouped-query attention encoder for long-range context, and a compact TCN head for causal temporal integration. Our goal is to test whether this principled wavelet front end improves robustness precisely where BCIs typically fail - on the hardest subjects - and whether such gains persist on average across seeds under both intra- and inter-subject protocols. On BCI-IV-2a and BCI-IV-2b, across five seeds, RatioWaveNet improves worst-subject accuracy over the Transformer backbone by +0.17 / +0.42 percentage points (Sub-Dependent / LOSO) on 2a and by +1.07 / +2.54 percentage points on 2b, with consistent average-case gains and modest computational overhead. These results indicate that a simple, trainable wavelet front end is an effective plug-in to strengthen Transformer-based BCIs, improving worst-case reliability without sacrificing efficiency.
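To make the pipeline concrete, the sketch below shows one plausible reading of the front end: an undecimated (a trous) multi-level decomposition that preserves temporal length, with the resulting subbands fused per channel by a lightweight grouped 1-D convolution. It is a minimal PyTorch illustration, not the authors' code; the 3-level Haar filterbank, the layer names, and the fusion width are all assumptions, since the paper's RDWT uses rationally-dilated, trainable filters.

```python
# Hedged sketch: undecimated wavelet-style front end + grouped subband fusion.
# The Haar filters, level count, and all names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UndecimatedWaveletFrontEnd(nn.Module):
    """Shift-invariant multi-level decomposition with grouped 1-D conv fusion."""

    def __init__(self, channels: int, levels: int = 3):
        super().__init__()
        self.levels = levels
        # Haar analysis pair, made trainable so the decomposition can adapt.
        self.lo = nn.Parameter(torch.tensor([0.7071, 0.7071]).repeat(channels, 1, 1))
        self.hi = nn.Parameter(torch.tensor([0.7071, -0.7071]).repeat(channels, 1, 1))
        # Lightweight grouped 1-D conv fuses the (levels + 1) subbands per channel.
        self.fuse = nn.Conv1d(channels * (levels + 1), channels,
                              kernel_size=1, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); temporal length is preserved throughout.
        B, C, T = x.shape
        subbands, approx = [], x
        for lvl in range(self.levels):
            d = 2 ** lvl                       # a trous dilation, no decimation
            padded = F.pad(approx, (d, 0))     # causal left pad keeps length == T
            subbands.append(F.conv1d(padded, self.hi, dilation=d, groups=C))
            approx = F.conv1d(padded, self.lo, dilation=d, groups=C)
        subbands.append(approx)                # final low-pass band
        # Interleave so each grouped-conv group sees one channel's subbands.
        stacked = torch.stack(subbands, dim=2).reshape(B, C * (self.levels + 1), T)
        return self.fuse(stacked)              # (batch, channels, time)

x = torch.randn(8, 22, 1000)                   # e.g., 22-channel EEG epochs
print(UndecimatedWaveletFrontEnd(channels=22)(x).shape)  # torch.Size([8, 22, 1000])
```

Keeping the decomposition undecimated is what buys the shift-invariance the abstract emphasizes: every subband stays time-aligned with the input, so mild jitter shifts all bands together rather than aliasing across decimated scales.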
Related papers
- HCFT: Hierarchical Convolutional Fusion Transformer for EEG Decoding [9.572621097681646]
We propose a lightweight decoding framework named Hierarchical Convolutional Fusion Transformer (HCFT).
HCFT combines dual-branch encoders and hierarchical Transformer blocks for multi-scale representation.
Results show that HCFT achieves 80.83% average accuracy and a Cohen's kappa of 0.6165 on BCI IV-2b, as well as 99.10% sensitivity, 0.0236 false positives per hour, and 98.82% specificity on CHB-MIT.
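The summary names only the pattern, so the following is a generic, hedged sketch of a dual-branch encoder feeding hierarchical Transformer stages; the branch kernel sizes, depths, and all identifiers are illustrative assumptions, not HCFT's actual design.

```python
# Generic dual-branch + hierarchical Transformer pattern; all names assumed.
import torch
import torch.nn as nn

class DualBranchHierarchicalEncoder(nn.Module):
    def __init__(self, channels: int, d_model: int = 32):
        super().__init__()
        # Two parallel temporal branches with different receptive fields.
        self.fine = nn.Conv1d(channels, d_model // 2, kernel_size=7, padding=3)
        self.coarse = nn.Conv1d(channels, d_model // 2, kernel_size=31, padding=15)
        mk_layer = lambda: nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.stage1 = nn.TransformerEncoder(mk_layer(), num_layers=1)
        self.pool = nn.AvgPool1d(kernel_size=2)   # coarsen between stages
        self.stage2 = nn.TransformerEncoder(mk_layer(), num_layers=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        h = torch.cat([self.fine(x), self.coarse(x)], dim=1)  # fuse branches
        h = self.stage1(h.transpose(1, 2))                    # (B, T, d_model)
        h = self.pool(h.transpose(1, 2)).transpose(1, 2)      # halve T
        return self.stage2(h)                                 # multi-scale context

enc = DualBranchHierarchicalEncoder(channels=3)
print(enc(torch.randn(2, 3, 256)).shape)   # torch.Size([2, 128, 32])
```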
arXiv Detail & Related papers (2026-01-18T06:36:30Z)
- Wavelet-Guided Water-Level Estimation for ISAC [28.187510402999376]
Real-time water-level monitoring is vital for flood response, infrastructure management, and environmental forecasting.
We propose a passive, low-cost water-level tracking scheme that uses only LTE downlink power metrics reported by commodity receivers.
arXiv Detail & Related papers (2025-11-26T00:01:00Z)
- FTT-GRU: A Hybrid Fast Temporal Transformer with GRU for Remaining Useful Life Prediction [0.6421270655703623]
We propose a hybrid model, FTT-GRU, which combines a Fast Temporal Transformer (FTT) with a gated recurrent unit (GRU) layer for sequential modeling.
On NASA CMAPSS FD001, FTT-GRU attains RMSE 30.76, MAE 18.97, and $R^2 = 0.45$, with 1.12 ms CPU latency at batch=1.
These results demonstrate that a compact Transformer-RNN hybrid delivers accurate and efficient RUL predictions on CMAPSS.
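As a rough illustration of the stated hybrid, the sketch below puts a small Transformer encoder (standing in for the Fast Temporal Transformer, whose internals the summary does not specify) in front of a GRU and a scalar regression head for remaining-useful-life estimation; all dimensions and names are assumptions.

```python
# Hedged Transformer-then-GRU regressor in the spirit of FTT-GRU.
import torch
import torch.nn as nn

class TransformerGRURegressor(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, 1)     # scalar RUL estimate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, sensors), e.g., CMAPSS multivariate sequences
        h = self.encoder(self.proj(x))        # global attention context
        _, last = self.gru(h)                 # sequential integration
        return self.head(last[-1]).squeeze(-1)

model = TransformerGRURegressor(n_features=24)
print(model(torch.randn(4, 30, 24)).shape)    # torch.Size([4])
```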
arXiv Detail & Related papers (2025-11-01T14:02:03Z)
- Bidirectional Time-Frequency Pyramid Network for Enhanced Robust EEG Classification [2.512406961007489]
BITE (Bidirectional Time-Frequency Pyramid Network) is an end-to-end unified architecture featuring robust multistream synergy, pyramid time-frequency attention (PTFA), and bidirectional adaptive convolutions.
As a unified architecture, it combines robust performance across both MI and SSVEP tasks with exceptional computational efficiency.
Our work validates that paradigm-aligned spectral-temporal processing is essential for reliable BCI systems.
arXiv Detail & Related papers (2025-10-11T04:14:48Z)
- Dual-TSST: A Dual-Branch Temporal-Spectral-Spatial Transformer Model for EEG Decoding [2.0721229324537833]
We propose a novel decoding architecture network with a dual-branch temporal-spectral-spatial transformer (Dual-TSST)
Our proposed Dual-TSST performs superiorly in various tasks, which achieves the promising EEG classification performance of average accuracy of 80.67%.
This study provides a new approach to high-performance EEG decoding, and has great potential for future CNN-Transformer based applications.
arXiv Detail & Related papers (2024-09-05T05:08:43Z)
- RIMformer: An End-to-End Transformer for FMCW Radar Interference Mitigation [1.8063750621475454]
A novel FMCW radar interference mitigation method, termed as RIMformer, is proposed by using an end-to-end Transformer-based structure.
The architecture is designed to process time-domain IF signals in an end-to-end manner.
The results show that the proposed RIMformer can effectively mitigate interference and restore the target signals.
arXiv Detail & Related papers (2024-07-16T07:51:20Z)
- OFDM-Standard Compatible SC-NOFS Waveforms for Low-Latency and Jitter-Tolerance Industrial IoT Communications [53.398544571833135]
This work proposes a spectrally efficient irregular Sinc (irSinc) shaping technique, revisiting the traditional Sinc pulse, which dates back to 1924.
irSinc yields a signal with increased spectral efficiency without sacrificing error performance.
Our signal achieves faster data transmission within the same spectral bandwidth through 5G standard signal configuration.
arXiv Detail & Related papers (2024-06-07T09:20:30Z)
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WEC) must handle multiple simultaneous waves arriving from different directions, known as spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z)
- Efficient Decoder-free Object Detection with Transformers [75.00499377197475]
Vision transformers (ViTs) are changing the landscape of object detection approaches.
We propose a decoder-free fully transformer-based (DFFT) object detector.
DFFT_SMALL achieves high efficiency in both training and inference stages.
arXiv Detail & Related papers (2022-06-14T13:22:19Z)
- EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference [82.1584439276834]
Transformer-based language models such as BERT provide significant accuracy improvement for a multitude of natural language processing (NLP) tasks.
We present EdgeBERT, an in-depth algorithm-hardware co-design for latency-aware energy optimization for multi-task NLP.
arXiv Detail & Related papers (2020-11-28T19:21:47Z)
- Non-Autoregressive Transformer ASR with CTC-Enhanced Decoder Input [54.82369261350497]
We propose a CTC-enhanced NAR transformer, which generates target sequence by refining predictions of the CTC module.
Experimental results show that our method outperforms all previous NAR counterparts and achieves 50x faster decoding than a strong AR baseline, with only 0.0 to 0.3 absolute CER degradation on the Aishell-1 and Aishell-2 datasets.
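The refinement idea rests on standard CTC decoding: collapse frame-level predictions into a draft token sequence, then let the non-autoregressive decoder polish that draft in one parallel pass. Below is a minimal sketch of the collapse step only; the blank id and tensor shapes are assumptions for illustration.

```python
# Standard CTC greedy collapse used to seed a NAR decoder with draft tokens.
import torch

def ctc_greedy_collapse(log_probs: torch.Tensor, blank: int = 0) -> list[list[int]]:
    # log_probs: (batch, time, vocab) frame-level CTC posteriors
    best = log_probs.argmax(dim=-1)               # (batch, time)
    seqs = []
    for frames in best.tolist():
        out, prev = [], None
        for tok in frames:
            if tok != prev and tok != blank:      # dedupe repeats, drop blanks
                out.append(tok)
            prev = tok
        seqs.append(out)
    return seqs

# A NAR decoder would take these drafts as input and refine them in a
# single parallel pass rather than decoding token by token.
draft = ctc_greedy_collapse(torch.randn(2, 50, 100).log_softmax(-1))
print([len(s) for s in draft])
```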
arXiv Detail & Related papers (2020-10-28T15:00:09Z)
- Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging [48.99717153937717]
We present WAGMA-SGD, a wait-avoiding group-averaging SGD variant that reduces global communication via subgroup weight exchange.
We train ResNet-50 on ImageNet, a Transformer for machine translation, and deep reinforcement learning for navigation at scale.
Compared with state-of-the-art decentralized SGD variants, WAGMA-SGD significantly improves training throughput.
arXiv Detail & Related papers (2020-04-30T22:11:53Z)
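The core idea is to replace the global allreduce of synchronous SGD with averaging inside small worker subgroups. The toy simulation below illustrates just that communication pattern on in-process tensors; the group size, schedule, and function names are assumptions, and the real WAGMA-SGD performs this with wait-avoiding collectives.

```python
# Toy simulation of subgroup averaging: workers average only within small
# disjoint groups each step instead of across all workers globally.
import torch

def group_average(weights: list[torch.Tensor], group_size: int) -> None:
    # Average parameters in place within disjoint subgroups of workers.
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        mean = torch.stack(group).mean(dim=0)
        for w in group:
            w.copy_(mean)

workers = [torch.randn(10) for _ in range(8)]     # 8 simulated workers
group_average(workers, group_size=4)              # 2 subgroups of 4
print(torch.allclose(workers[0], workers[3]))     # True: same subgroup
print(torch.allclose(workers[0], workers[4]))     # False (almost surely)
```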
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.