ConvConcatNet: a deep convolutional neural network to reconstruct mel
spectrogram from the EEG
- URL: http://arxiv.org/abs/2401.04965v1
- Date: Wed, 10 Jan 2024 07:15:45 GMT
- Title: ConvConcatNet: a deep convolutional neural network to reconstruct mel
spectrogram from the EEG
- Authors: Xiran Xu, Bo Wang, Yujie Yan, Haolin Zhu, Zechen Zhang, Xihong Wu,
Jing Chen
- Abstract summary: This work presents a novel method, ConvConcatNet, to reconstruct mel-spectrograms from EEG.
With our ConvConcatNet model, the Pearson correlation between the reconstructed and the target mel-spectrogram can achieve 0.0420.
- Score: 10.564488010303988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To investigate the processing of speech in the brain, simple linear models
are commonly used to establish a relationship between brain signals and speech
features. However, these linear models are ill-equipped to model a highly
dynamic and complex non-linear system like the brain. Although non-linear
methods with neural networks have been developed recently, reconstructing
unseen stimuli from unseen subjects' EEG is still a highly challenging task.
This work presents a novel method, ConvConcatNet, to reconstruct mel-spectrograms
from EEG, in which a deep convolutional neural network is combined with extensive
concatenation operations. With our ConvConcatNet model, the Pearson correlation
between the reconstructed and the target mel-spectrograms reaches 0.0420, which
ranked No. 1 in Task 2 of the Auditory EEG Challenge. The code and models
implementing our work will be available on GitHub:
https://github.com/xuxiran/ConvConcatNet
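The evaluation metric reported above, the Pearson correlation between the reconstructed and the target mel-spectrogram, can be sketched as follows. This is a minimal NumPy illustration; the function name, the flattening over all time-frequency bins, and the example shapes are assumptions for illustration, not taken from the paper's released code.

```python
import numpy as np

def pearson_correlation(reconstructed: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation between two mel-spectrograms of equal shape.

    Both inputs are flattened, so the score reflects overall agreement
    across all time-frequency bins.
    """
    x = reconstructed.ravel().astype(np.float64)
    y = target.ravel().astype(np.float64)
    x = x - x.mean()  # center both signals before correlating
    y = y - y.mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
    if denom == 0.0:
        return 0.0  # degenerate case: one input is constant
    return float((x * y).sum() / denom)

# Example with random data shaped like a (mel_bins, frames) spectrogram:
rng = np.random.default_rng(0)
target = rng.standard_normal((10, 100))
identical = pearson_correlation(target, target)  # close to 1.0
unrelated = pearson_correlation(rng.standard_normal((10, 100)), target)  # near 0
```

A perfect reconstruction scores 1.0 and an unrelated one scores near 0, which puts the reported 0.0420 in context: EEG-to-spectrogram reconstruction for unseen subjects and stimuli remains very hard, and even small positive correlations are competitive.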
Related papers
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Deep Learning for real-time neural decoding of grasp [0.0]
We present a Deep Learning-based approach to the decoding of neural signals for grasp type classification.
The main goal of the presented approach is to improve over state-of-the-art decoding accuracy without relying on any prior neuroscience knowledge.
arXiv Detail & Related papers (2023-11-02T08:26:29Z)
- Pathfinding Neural Cellular Automata [23.831530224401575]
Pathfinding is an important sub-component of a broad range of complex AI tasks, such as robot path planning, transport routing, and game playing.
We hand-code and learn models for Breadth-First Search (BFS), i.e., shortest-path finding.
We present a neural implementation of Depth-First Search (DFS), and outline how it can be combined with neural BFS to produce an NCA for computing diameter of a graph.
We experiment with architectural modifications inspired by these hand-coded NCAs, training networks from scratch to solve the diameter problem on grid mazes while exhibiting strong generalization ability.
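The shortest-path computation that these NCAs are trained to emulate is classical breadth-first search on a grid. A minimal conventional sketch (the maze encoding and function name here are assumptions for illustration, not the paper's setup):

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    """Length of the shortest 4-connected path through a grid maze.

    grid: list of strings, '#' marks a wall, anything else is free.
    Returns the number of steps from start to goal, or -1 if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

maze = ["....",
        ".##.",
        "...."]
steps = bfs_shortest_path(maze, (0, 0), (2, 3))  # 5 steps around the wall
```

The graph diameter the paper targets is simply the largest such shortest-path length over all pairs of free cells.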
arXiv Detail & Related papers (2023-01-17T11:45:51Z)
- Go Beyond Multiple Instance Neural Networks: Deep-learning Models based on Local Pattern Aggregation [0.0]
Convolutional neural networks (CNNs) have brought breakthroughs in processing clinical electrocardiograms (ECGs) and speaker-independent speech.
In this paper, we propose local pattern aggregation-based deep-learning models to effectively deal with both problems.
The novel network structure, called LPANet, has cropping and aggregation operations embedded into it.
arXiv Detail & Related papers (2022-05-28T13:18:18Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- EEG-ITNet: An Explainable Inception Temporal Convolutional Network for Motor Imagery Classification [0.5616884466478884]
We propose an end-to-end deep learning architecture called EEG-ITNet.
Our model can extract rich spectral, spatial, and temporal information from multi-channel EEG signals.
EEG-ITNet shows up to 5.9% improvement in the classification accuracy in different scenarios.
arXiv Detail & Related papers (2022-04-14T13:18:43Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.