RRCNN$^{+}$: An Enhanced Residual Recursive Convolutional Neural Network
for Non-stationary Signal Decomposition
- URL: http://arxiv.org/abs/2309.04782v1
- Date: Sat, 9 Sep 2023 13:00:30 GMT
- Authors: Feng Zhou, Antonio Cicone, Haomin Zhou
- Abstract summary: We propose a novel method to decompose a non-stationary signal into quasi-stationary components.
In this study, we aim to further improve RRCNN with the help of several nimble techniques from deep learning and optimization.
- Score: 9.736778471284712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time-frequency analysis is an important and challenging task in many
applications. Fourier and wavelet analysis are two classic methods that have
achieved remarkable success in many fields. However, they exhibit limitations when applied to nonlinear and non-stationary signals. To address this challenge, a series of nonlinear and adaptive methods, pioneered by the empirical mode decomposition (EMD) method, have been proposed. Their aim is to decompose a
non-stationary signal into quasi-stationary components which reveal better
features in the time-frequency analysis. Recently, inspired by deep learning,
we proposed a novel method called residual recursive convolutional neural
network (RRCNN). Not only can RRCNN achieve a more stable decomposition than existing methods while batch-processing large-scale signals at low computational cost, but deep learning also provides a unique perspective on non-stationary signal decomposition. In this study, we further improve RRCNN with several lightweight techniques from deep learning and optimization, overcoming some of the limitations of the original method.
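The EMD-style pipeline the abstract describes, repeatedly subtracting a local average to peel off quasi-stationary components, can be sketched as follows. This is a minimal illustration that assumes a plain moving average as the local-average operator; in RRCNN that operator is a learned convolutional network, and all function names here are ours, not the authors'.

```python
import math

def local_average(x, width=21):
    # Stand-in for RRCNN's learned local-average operator: a plain
    # moving average with a shrinking window at the boundaries.
    half = width // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def decompose(x, n_components=2, width=21):
    # EMD-style loop: each quasi-stationary component is the signal
    # minus its local average; the average becomes the residual
    # passed to the next round.
    components, residual = [], list(x)
    for _ in range(n_components):
        mean = local_average(residual, width)
        components.append([r - m for r, m in zip(residual, mean)])
        residual = mean
    return components, residual

# A 50 Hz oscillation riding on a 2 Hz trend: the first component
# recovers the fast part, the final residual tracks the slow part.
t = [i / 1000 for i in range(1000)]
fast = [math.sin(2 * math.pi * 50 * ti) for ti in t]
slow = [math.sin(2 * math.pi * 2 * ti) for ti in t]
comps, trend = decompose([f + s for f, s in zip(fast, slow)])
```

With a learned nonlinear operator in place of the moving average, a loop of this shape is what RRCNN unrolls into a trainable network.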
Related papers
- An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem [12.676629870617337]
Full-waveform inversion (FWI) is a powerful geophysical imaging technique that infers high-resolution subsurface physical parameters.
Due to limitations in observation, limited shots or receivers, and random noise, conventional inversion methods are confronted with numerous challenges.
We provide an unsupervised learning approach aimed at accurately reconstructing physical velocity parameters.
arXiv Detail & Related papers (2023-11-08T08:39:33Z)
- RRCNN: A novel signal decomposition approach based on recurrent residue convolutional neural network [7.5123109191537205]
We propose a new non-stationary signal decomposition method under the framework of deep learning.
We use a convolutional neural network, a residual structure, and a nonlinear activation function to compute the local average of the signal in a novel way.
In the experiments, we evaluate the performance of the proposed model from two points of view: the calculation of the local average and the signal decomposition.
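The building block described above, a convolution and nonlinear activation wrapped in a residual connection, can be sketched in plain Python. This is a hypothetical fixed-kernel illustration, not the paper's code: in RRCNN the kernel weights are learned, and the function names are ours.

```python
def conv1d(x, kernel):
    # 'same' 1-D convolution (cross-correlation, in the deep-learning
    # sense) with zero padding at the boundaries.
    half = len(kernel) // 2
    padded = [0.0] * half + list(x) + [0.0] * half
    return [sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
            for i in range(len(x))]

def relu(v):
    return [max(0.0, u) for u in v]

def residual_block(x, kernel):
    # Residual structure: the conv/activation path learns a correction
    # to the identity map, output = x + F(x), which eases the training
    # of deep stacks of such blocks.
    correction = conv1d(relu(conv1d(x, kernel)), kernel)
    return [a + b for a, b in zip(x, correction)]
```

Stacking such blocks and training the kernels against a local-average target is the rough shape of the approach; the exact architecture is in the paper.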
arXiv Detail & Related papers (2023-07-04T13:53:01Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
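The stability gain of implicit updates can be seen on a one-dimensional quadratic loss L(theta) = a * theta**2 / 2, a toy stand-in for a stiff PINN loss (the setup and names here are ours, not the paper's). The explicit update multiplies theta by (1 - lr * a) and diverges once lr * a > 2, while the implicit update solves theta_new = theta - lr * grad(theta_new), which here gives theta / (1 + lr * a) and contracts for every positive learning rate.

```python
def explicit_sgd(theta, a, lr, steps):
    # Explicit update: theta <- theta - lr * grad(theta)
    #                        = theta * (1 - lr * a).
    for _ in range(steps):
        theta -= lr * a * theta
    return theta

def implicit_sgd(theta, a, lr, steps):
    # Implicit update solves theta_new = theta - lr * a * theta_new,
    # which for this quadratic gives theta_new = theta / (1 + lr * a).
    for _ in range(steps):
        theta /= 1 + lr * a
    return theta

# Stiff curvature a = 100 with lr = 0.05: the explicit iterate is
# multiplied by -4 each step, the implicit one shrinks by 1/6.
```

For a general loss the implicit equation has no closed form and must be solved approximately at each step; that inner solve is where the paper's method does its work.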
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Deep unfolding as iterative regularization for imaging inverse problems [6.485466095579992]
Deep unfolding methods guide the design of deep neural networks (DNNs) through iterative algorithms.
We prove that the unfolded DNN converges stably to the solution of the underlying iterative regularization scheme.
We demonstrate with an example of MRI reconstruction that the proposed method outperforms conventional unfolding methods.
arXiv Detail & Related papers (2022-11-24T07:38:47Z)
- Deep learning based sferics recognition for AMT data processing in the dead band [5.683853455697258]
In the audio magnetotellurics (AMT) sounding data processing, the absence of sferic signals in some time ranges typically results in a lack of energy in the AMT dead band.
We propose a deep convolutional neural network (CNN) to automatically recognize sferic signals from redundantly recorded data in a long time range.
Our method can significantly improve the S/N and effectively solve the problem of missing energy in the dead band.
arXiv Detail & Related papers (2022-09-22T02:31:28Z)
- Overcoming the Spectral Bias of Neural Value Approximation [17.546011419043644]
Value approximation using deep neural networks is often the primary module that provides learning signals to the rest of the algorithm.
Recent works in neural kernel regression suggest the presence of a spectral bias, where fitting high-frequency components of the value function requires exponentially more gradient update steps than the low-frequency ones.
We re-examine off-policy reinforcement learning through the lens of kernel regression and propose to overcome such bias via a composite neural kernel.
arXiv Detail & Related papers (2022-06-09T17:59:57Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
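The core idea, approximating the sign function by a combination of sine functions, is the Fourier series of the square wave: sign(x) is approximately (4/pi) * sum over k of sin((2k+1)x) / (2k+1) for x in (-pi, pi). Differentiating the truncated series term by term yields a smooth surrogate gradient, unlike the true derivative of sign, which is zero almost everywhere. A minimal sketch with our own function names, not the paper's code:

```python
import math

def sign_series(x, n_terms=100):
    # Truncated sine series of the square wave: approximates sign(x)
    # for x in (-pi, pi).
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms))

def sign_series_grad(x, n_terms=100):
    # Term-by-term derivative of the truncated series: a smooth,
    # nonzero gradient estimator usable in backpropagation, where
    # the true derivative of sign would block all gradient flow.
    return (4 / math.pi) * sum(
        math.cos((2 * k + 1) * x) for k in range(n_terms))
```

In a BNN the forward pass keeps the hard sign; only the backward pass substitutes the series derivative.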
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
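Dynamically choosing how many layers to execute can be sketched with a toy "unrolled" network: each layer refines the current estimate, and execution halts once the estimate stops changing, so easy inputs exit after few layers. The halting rule below is a hand-written threshold, a simple stand-in for the learned rule in the paper, and all names are ours.

```python
def run_adaptive(x, layer, tol=1e-3, max_layers=50):
    # Execute layers until the output stops changing; return the
    # result and the depth actually used for this input.
    depth = 0
    for _ in range(max_layers):
        new = layer(x)
        depth += 1
        done = abs(new - x) < tol
        x = new
        if done:
            break
    return x, depth

def newton_layer(x):
    # Toy "layer": one Newton step toward sqrt(2). A near-converged
    # input needs few layers; a far-off one needs more.
    return 0.5 * (x + 2.0 / x)

y_easy, d_easy = run_adaptive(1.5, newton_layer)
y_hard, d_hard = run_adaptive(100.0, newton_layer)
```

Both runs reach the same answer, but the depth adapts to the difficulty of the input, which is the property the paper exploits at inference time.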
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- RNN Training along Locally Optimal Trajectories via Frank-Wolfe Algorithm [50.76576946099215]
We propose a novel and efficient training method for RNNs that iteratively seeks a local minimum of the loss surface within a small region.
Surprisingly, even with this additional per-step cost, the overall training cost of the method is empirically observed to be lower than that of back-propagation.
arXiv Detail & Related papers (2020-10-12T01:59:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.