Time-space-frequency feature Fusion for 3-channel motor imagery
classification
- URL: http://arxiv.org/abs/2304.01461v1
- Date: Tue, 4 Apr 2023 02:01:48 GMT
- Title: Time-space-frequency feature Fusion for 3-channel motor imagery
classification
- Authors: Zhengqing Miao and Meirong Zhao
- Abstract summary: This study introduces TSFF-Net, a novel network architecture that integrates time-space-frequency features.
TSFF-Net comprises four main components: time-frequency representation, time-frequency feature extraction, time-space feature extraction, and feature fusion and classification.
Experimental results demonstrate that TSFF-Net not only compensates for the shortcomings of single-mode feature extraction networks in EEG decoding, but also outperforms other state-of-the-art methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-channel EEG devices are crucial for portable and entertainment
applications. However, the low spatial resolution of EEG presents challenges in
decoding low-channel motor imagery. This study introduces TSFF-Net, a novel
network architecture that integrates time-space-frequency features, effectively
compensating for the limitations of single-mode feature extraction networks
based on time-series or time-frequency modalities. TSFF-Net comprises four main
components: time-frequency representation, time-frequency feature extraction,
time-space feature extraction, and feature fusion and classification.
Time-frequency representation and feature extraction transform raw EEG signals
into time-frequency spectrograms and extract relevant features. The time-space
network processes time-series EEG trials as input and extracts temporal-spatial
features. Feature fusion employs MMD loss to constrain the distribution of
time-frequency and time-space features in the Reproducing Kernel Hilbert Space,
subsequently combining these features using a weighted fusion approach to
obtain effective time-space-frequency features. Moreover, few studies have
explored the decoding of three-channel motor imagery based on time-frequency
spectrograms. This study proposes a shallow, lightweight decoding architecture
(TSFF-img) based on time-frequency spectrograms and compares its classification
performance in low-channel motor imagery with other methods using two publicly
available datasets. Experimental results demonstrate that TSFF-Net not only
compensates for the shortcomings of single-mode feature extraction networks in
EEG decoding, but also outperforms other state-of-the-art methods. Overall,
TSFF-Net offers considerable advantages in decoding low-channel motor imagery
and provides valuable insights for algorithmically enhancing low-channel EEG
decoding.
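Below is a minimal sketch, not the authors' released implementation, of the fusion step described in the abstract: an RBF-kernel MMD term aligns the time-frequency and time-space feature distributions in a Reproducing Kernel Hilbert Space, and a learnable weight combines the two branches before classification. The feature dimensions, kernel bandwidth, and branch outputs are illustrative assumptions; the upstream spectrogram and time-series feature extractors are assumed to already produce fixed-length feature vectors.

```python
# Sketch of MMD-constrained weighted feature fusion (assumed shapes and names).
import torch
import torch.nn as nn


def rbf_mmd(x, y, sigma=1.0):
    """Biased MMD^2 estimate between feature batches x, y of shape (N, D)."""
    def gram(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()


class WeightedFusionHead(nn.Module):
    """Combine time-frequency and time-space features with a learned weight."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # fusion weight
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, f_tf, f_ts):
        w = torch.sigmoid(self.alpha)          # keep the weight in (0, 1)
        fused = w * f_tf + (1 - w) * f_ts      # time-space-frequency feature
        return self.classifier(fused), fused


# Example training-step loss: classification plus MMD alignment of the branches.
if __name__ == "__main__":
    f_tf = torch.randn(16, 64)   # features from the time-frequency branch
    f_ts = torch.randn(16, 64)   # features from the time-space branch
    labels = torch.randint(0, 2, (16,))
    head = WeightedFusionHead(64, n_classes=2)
    logits, _ = head(f_tf, f_ts)
    loss = nn.functional.cross_entropy(logits, labels) + 0.1 * rbf_mmd(f_tf, f_ts)
    loss.backward()
```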
Related papers
- Efficient Spatio-Temporal Signal Recognition on Edge Devices Using PointLCA-Net [0.45609532372046985]
This paper presents an approach that combines PointNet's feature extraction with the in-memory computing capabilities and energy efficiency of neuromorphic systems for temporal signal recognition.
The resulting approach achieves high accuracy and a significantly lower energy burden during both inference and training than comparable approaches.
arXiv Detail & Related papers (2024-11-21T20:48:40Z) - Investigation of Time-Frequency Feature Combinations with Histogram Layer Time Delay Neural Networks [37.95478822883338]
Methods by which audio signals are converted into time-frequency representations can significantly impact performance.
This work demonstrates the performance impact of using different combinations of time-frequency features in a histogram layer time delay neural network.
arXiv Detail & Related papers (2024-09-20T20:22:24Z) - Frequency-Aware Deepfake Detection: Improving Generalizability through
Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z) - RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual Speech Separation [18.93255531121519]
We present a novel time-frequency domain audio-visual speech separation method.
RTFS-Net applies its algorithms on the complex time-frequency bins yielded by the Short-Time Fourier Transform.
This is the first time-frequency domain audio-visual speech separation method to outperform all contemporary time-domain counterparts.
arXiv Detail & Related papers (2023-09-29T12:38:00Z) - Complementary Frequency-Varying Awareness Network for Open-Set
Fine-Grained Image Recognition [14.450381668547259]
Open-set image recognition is a challenging topic in computer vision.
We propose a Complementary Frequency-varying Awareness Network that could better capture both high-frequency and low-frequency information.
Based on CFAN, we propose an open-set fine-grained image recognition method, called CFAN-OSFGR.
arXiv Detail & Related papers (2023-07-14T08:15:36Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Fourier Disentangled Space-Time Attention for Aerial Video Recognition [54.80846279175762]
We present an algorithm, Fourier Activity Recognition (FAR), for UAV video activity recognition.
Our formulation uses a novel Fourier object disentanglement method to innately separate out the human agent from the background.
We have evaluated our approach on multiple UAV datasets including UAV Human RGB, UAV Human Night, Drone Action, and NEC Drone.
arXiv Detail & Related papers (2022-03-21T01:24:53Z) - Energy-Efficient Model Compression and Splitting for Collaborative
Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emission compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part will be processed using expensive operations and the lower-frequency part is assigned with cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z) - Time and Frequency Network for Human Action Detection in Videos [6.78349879472022]
We propose an end-to-end network that considers the time and frequency features simultaneously, named TFNet.
To obtain the action patterns, these two features are deeply fused under the attention mechanism.
arXiv Detail & Related papers (2021-03-08T11:42:05Z) - Multi-Temporal Convolutions for Human Action Recognition in Videos [83.43682368129072]
We present a novel multi-temporal convolution block capable of extracting features at multiple temporal resolutions.
The proposed blocks are lightweight and can be integrated into any 3D-CNN architecture.
arXiv Detail & Related papers (2020-11-08T10:40:26Z)