EEGDnet: Fusing Non-Local and Local Self-Similarity for 1-D EEG Signal
Denoising with 2-D Transformer
- URL: http://arxiv.org/abs/2109.04235v1
- Date: Thu, 9 Sep 2021 12:55:19 GMT
- Title: EEGDnet: Fusing Non-Local and Local Self-Similarity for 1-D EEG Signal
Denoising with 2-D Transformer
- Authors: Peng Yi, Kecheng Chen, Zhaoqi Ma, Di Zhao, Xiaorong Pu and Yazhou Ren
- Abstract summary: We propose a novel 1-D EEG signal denoising network with 2-D transformer, EEGDnet.
We take into account the non-local and local self-similarity of the EEG signal through the transformer module.
EEGDnet achieves much better performance in terms of both quantitative and qualitative metrics.
- Score: 8.295946712221845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The electroencephalogram (EEG) has proven to be a useful approach for
producing a brain-computer interface (BCI). However, one-dimensional (1-D) EEG
signals are easily disturbed by certain artifacts (a.k.a. noise) due to their high
temporal resolution. Thus, it is crucial to remove the noise from the received EEG signal.
Recently, deep learning-based EEG signal denoising approaches have achieved
impressive performance compared with traditional ones. It is well known that
the characteristics of self-similarity (including non-local and local ones) of
data (e.g., natural images and time-domain signals) are widely leveraged for
denoising. However, existing deep learning-based EEG signal denoising methods
ignore either the non-local self-similarity (e.g., 1-D convolutional neural
networks) or the local one (e.g., fully connected networks and recurrent neural
networks). To address this issue, we propose a novel 1-D EEG signal denoising
network with 2-D transformer, namely EEGDnet. Specifically, we comprehensively
take into account the non-local and local self-similarity of the EEG signal through
the transformer module. By fusing non-local self-similarity in the self-attention
blocks and local self-similarity in the feed-forward blocks, the negative impact
caused by noise and outliers can be reduced significantly. Extensive
experiments show that, compared with other state-of-the-art models, EEGDnet
achieves much better performance in terms of both quantitative and qualitative
metrics.
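The fusion described in the abstract can be pictured with a standard transformer encoder applied to a 1-D signal reshaped into a 2-D matrix of segments: self-attention relates distant segments (non-local self-similarity), while the position-wise feed-forward sub-layer refines each segment on its own (local self-similarity). The following PyTorch code is a minimal sketch of that idea only; the segment length, embedding width, and layer count are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class EEGDenoiserSketch(nn.Module):
    """Illustrative 2-D transformer denoiser for a 1-D EEG epoch.

    A noisy epoch of length n_segments * segment_len is reshaped into a
    (n_segments, segment_len) matrix. Self-attention mixes information
    across segments (non-local self-similarity), while the feed-forward
    sub-layer refines each segment on its own (local self-similarity).
    """

    def __init__(self, segment_len=64, n_segments=8, d_model=64,
                 n_heads=4, n_layers=2):
        super().__init__()
        self.segment_len = segment_len
        self.n_segments = n_segments
        self.embed = nn.Linear(segment_len, d_model)        # per-segment embedding
        self.pos = nn.Parameter(torch.zeros(1, n_segments, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, segment_len)         # back to samples

    def forward(self, x):
        # x: (batch, n_segments * segment_len) noisy 1-D EEG epochs
        b = x.shape[0]
        seg = x.view(b, self.n_segments, self.segment_len)  # 1-D -> 2-D
        h = self.embed(seg) + self.pos
        h = self.encoder(h)                                  # attention + FFN
        return self.head(h).reshape(b, -1)                   # 2-D -> 1-D

# Toy usage: denoise a batch of 512-sample epochs (random dummy data).
model = EEGDenoiserSketch()
noisy = torch.randn(4, 8 * 64)
clean_estimate = model(noisy)                                # (4, 512)
loss = nn.functional.mse_loss(clean_estimate, torch.randn_like(noisy))
```

In practice such a model would be trained on pairs of contaminated and clean EEG epochs with a reconstruction loss, as the dummy call above suggests.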
Related papers
- EEGDiR: Electroencephalogram denoising network for temporal information storage and global modeling through Retentive Network [11.491355463353731]
We introduce Retnet from natural language processing to EEG denoising.
Direct application of Retnet to EEG denoising is infeasible due to the one-dimensional nature of EEG signals.
We propose a signal embedding method, transforming one-dimensional EEG signals into two dimensions for use as network inputs.
arXiv Detail & Related papers (2024-03-20T15:04:21Z)
- DGSD: Dynamical Graph Self-Distillation for EEG-Based Auditory Spatial Attention Detection [49.196182908826565]
Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment.
Current approaches primarily rely on traditional convolutional neural networks designed for processing Euclidean data such as images.
This paper proposes a dynamical graph self-distillation (DGSD) approach for AAD, which does not require speech stimuli as input.
arXiv Detail & Related papers (2023-09-07T13:43:46Z)
- Degradation-Noise-Aware Deep Unfolding Transformer for Hyperspectral Image Denoising [9.119226249676501]
Hyperspectral images (HSIs) are often quite noisy because of narrow-band spectral filtering.
To reduce the noise in HSI data cubes, both model-driven and learning-based denoising algorithms have been proposed.
This paper proposes a Degradation-Noise-Aware Unfolding Network (DNA-Net) that addresses these issues.
arXiv Detail & Related papers (2023-05-06T13:28:20Z)
- Data-Driven Blind Synchronization and Interference Rejection for Digital Communication Signals [98.95383921866096]
We study the potential of data-driven deep learning methods for separation of two communication signals from an observation of their mixture.
We show that capturing high-resolution temporal structures (nonstationarities) leads to substantial performance gains.
We propose a domain-informed neural network (NN) design that is able to improve upon both "off-the-shelf" NNs and classical detection and interference rejection methods.
arXiv Detail & Related papers (2022-09-11T14:10:37Z)
- Exploiting Cross Domain Acoustic-to-articulatory Inverted Features For Disordered Speech Recognition [57.15942628305797]
Articulatory features are invariant to acoustic signal distortion and have been successfully incorporated into automatic speech recognition systems for normal speech.
This paper presents a cross-domain acoustic-to-articulatory (A2A) inversion approach that utilizes the parallel acoustic-articulatory data of the 15-hour TORGO corpus in model training.
The model is then cross-domain adapted to the 102.7-hour UASpeech corpus to produce articulatory features.
arXiv Detail & Related papers (2022-03-19T08:47:18Z)
- Electroencephalogram Signal Processing with Independent Component Analysis and Cognitive Stress Classification using Convolutional Neural Networks [0.0]
This paper proposes using Independent Component Analysis (ICA) along with cross-correlation to de-noise EEG signals; a generic sketch of this ICA-plus-correlation recipe appears after this list.
Results on the recorded data show that this algorithm can eliminate the EOG artifact with little loss of EEG data.
arXiv Detail & Related papers (2021-08-22T18:38:12Z)
- Robust learning from corrupted EEG with dynamic spatial filtering [68.82260713085522]
Building machine learning models using EEG recorded outside of the laboratory requires methods that are robust to noisy data and randomly missing channels.
We propose dynamic spatial filtering (DSF), a multi-head attention module that can be plugged in before the first layer of a neural network.
We tested DSF on public EEG data encompassing 4,000 recordings with simulated channel corruption and on a private dataset of 100 at-home recordings of mobile EEG with natural corruption.
arXiv Detail & Related papers (2021-05-27T02:33:16Z)
- Orthogonal Features Based EEG Signals Denoising Using Fractional and Compressed One-Dimensional CNN AutoEncoder [3.8580784887142774]
This paper presents a fractional one-dimensional convolutional neural network (CNN) autoencoder for denoising electroencephalogram (EEG) signals.
EEG signals often get contaminated with noise during the recording process, mostly due to muscle artifacts (MA).
arXiv Detail & Related papers (2021-04-16T13:58:05Z)
- EEG-Inception: An Accurate and Robust End-to-End Neural Network for EEG-based Motor Imagery Classification [123.93460670568554]
This paper proposes a novel convolutional neural network (CNN) architecture for accurate and robust EEG-based motor imagery (MI) classification.
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network.
The proposed network performs end-to-end classification, as it takes the raw EEG signals as input and does not require complex EEG signal preprocessing.
arXiv Detail & Related papers (2021-01-24T19:03:10Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
The denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as an additional loss for better performance (see the sub-sampling sketch after this list).
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
- Deep learning denoising for EOG artifacts removal from EEG signals [0.5243460995467893]
One of the most challenging issues in EEG denoising processes is removing the ocular artifacts.
In this paper, we build and train a deep learning model to deal with this challenge and remove the ocular artifacts effectively.
We propose three different schemes and train our U-NET-based models to purify contaminated EEG signals.
arXiv Detail & Related papers (2020-09-12T23:28:12Z)
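For the Independent Component Analysis entry above, the de-noising recipe it summarizes (decompose the multi-channel recording with ICA, flag components that correlate strongly with an ocular reference, suppress them, and reconstruct) can be illustrated as follows. This is a generic sketch, not the cited paper's exact pipeline: the zero-lag correlation criterion, the 0.7 threshold, and the availability of an EOG reference channel are assumptions made here for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_eog_ica(eeg, eog_ref, corr_threshold=0.7, random_state=0):
    """Suppress ocular artifacts in multi-channel EEG via ICA + correlation.

    eeg     : (n_samples, n_channels) recording
    eog_ref : (n_samples,) EOG reference signal (assumed available)
    Returns the EEG reconstructed with EOG-correlated components zeroed.
    """
    ica = FastICA(n_components=eeg.shape[1], random_state=random_state)
    sources = ica.fit_transform(eeg)            # (n_samples, n_components)

    # Zero every independent component whose correlation with the EOG
    # reference exceeds the (assumed) threshold.
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], eog_ref)[0, 1]
        if abs(r) > corr_threshold:
            sources[:, k] = 0.0

    return ica.inverse_transform(sources)       # back to channel space

# Toy usage: 10 s of 4-channel noise at 250 Hz plus a blink-like artifact.
rng = np.random.default_rng(0)
t = np.arange(2500) / 250.0
blink = np.exp(-((t % 2.0) - 1.0) ** 2 / 0.01)  # one "blink" every 2 s
eeg = rng.normal(size=(2500, 4)) + np.outer(blink, [1.0, 0.8, 0.3, 0.1])
cleaned = remove_eog_ica(eeg, blink)            # (2500, 4)
```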
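For the Neighbor2Neighbor entry above, the core sub-sampling step (draw two different pixels from every 2x2 cell of the same noisy image to form an input/target training pair) can be sketched as below. The 2x2 cell size follows the usual description of the method, but the training loop and the proposed regularizer are omitted, and the implementation details are assumptions rather than the authors' code.

```python
import numpy as np

def neighbor_subsample(noisy, rng=None):
    """Build a self-supervised (input, target) pair from one noisy image.

    In every non-overlapping 2x2 cell, two different pixels are drawn at
    random: one goes into the input sub-image, the other into the target.
    Both sub-images have half the original height and width.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = noisy.shape[0] // 2 * 2, noisy.shape[1] // 2 * 2
    cells = noisy[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    cells = cells.reshape(h // 2, w // 2, 4)     # 4 pixels per 2x2 cell

    # Pick two distinct positions (0..3) inside each cell.
    first = rng.integers(0, 4, size=(h // 2, w // 2))
    second = (first + rng.integers(1, 4, size=(h // 2, w // 2))) % 4

    rows = np.arange(h // 2)[:, None]
    cols = np.arange(w // 2)[None, :]
    return cells[rows, cols, first], cells[rows, cols, second]

# Toy usage: one noisy grayscale image yields one training pair.
noisy = np.random.rand(64, 64).astype(np.float32)
net_input, net_target = neighbor_subsample(noisy)   # both (32, 32)
```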
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.