Convolutional Sparse Coding Fast Approximation with Application to
Seismic Reflectivity Estimation
- URL: http://arxiv.org/abs/2106.15296v1
- Date: Tue, 29 Jun 2021 12:19:07 GMT
- Title: Convolutional Sparse Coding Fast Approximation with Application to
Seismic Reflectivity Estimation
- Authors: Deborah Pereg, Israel Cohen, and Anthony A. Vassiliou
- Abstract summary: We propose an accelerated version of the classic iterative thresholding algorithm that produces a good approximation of the convolutional sparse code within 2-5 iterations.
The performance of the proposed solution is demonstrated via the seismic inversion problem in both synthetic and real data scenarios.
- Score: 9.005280130480308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In sparse coding, we attempt to extract features of input vectors, assuming
that the data is inherently structured as a sparse superposition of basic
building blocks. Similarly, neural networks perform a given task by learning
features of the training data set. Recently both data-driven and model-driven
feature extracting methods have become extremely popular and have achieved
remarkable results. Nevertheless, practical implementations are often too slow
to be employed in real-life scenarios, especially for real-time applications.
We propose an accelerated version of the classic iterative thresholding
algorithm that produces a good approximation of the convolutional sparse code
within 2-5 iterations. The speed advantage is gained mostly from the
observation that most solvers are slowed down by inefficient global
thresholding. The main idea is to normalize each data point by the local
receptive field energy, before applying a threshold. This way, the natural
inclination towards strong feature expressions is suppressed, so that one can
rely on a global threshold that can be easily approximated, or learned during
training. The proposed algorithm can be employed with a known predetermined
dictionary, or with a trained dictionary. The trained version is implemented as
a neural net designed as the unfolding of the proposed solver. The performance
of the proposed solution is demonstrated via the seismic inversion problem in
both synthetic and real data scenarios. We also provide theoretical guarantees
for a stable support recovery. Namely, we prove that under certain conditions
the true support is perfectly recovered within the first iteration.
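The core trick described above, correlating the trace with the wavelet and then dividing each sample by the local receptive-field energy so that a single global threshold can be applied, can be sketched in a few lines of NumPy. The sketch below is an illustrative approximation under stated assumptions, not the authors' implementation: it performs only the first thresholding pass (which, per the abstract, can already recover the true support under certain conditions) and adds a simple least-squares amplitude refit on the detected support. The Ricker-like wavelet, window size, threshold value tau, and all function names are hypothetical choices for the toy example, and the learned-dictionary/unfolded-network variants are not shown.

```python
import numpy as np

def local_energy(y, win):
    """Sliding-window RMS energy of the trace: the local receptive-field energy."""
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(y ** 2, kernel, mode="same")) + 1e-12

def detect_support(y, d, tau=4.0):
    """One thresholding pass: correlate the trace with the unit-norm wavelet,
    divide by the local receptive-field energy so a single global threshold tau
    applies regardless of local amplitude, then keep local maxima above tau."""
    d = d / np.linalg.norm(d)
    c = np.convolve(y, d[::-1], mode="same") / local_energy(y, len(d))
    mag = np.abs(c)
    is_peak = np.r_[False, mag[1:-1] >= np.maximum(mag[:-2], mag[2:]), False]
    return np.flatnonzero(is_peak & (mag > tau))

def refit_amplitudes(y, d, support):
    """Illustrative debiasing step (our addition, not from the paper): least-squares
    fit of the spike amplitudes on the detected support."""
    x = np.zeros_like(y)
    if support.size == 0:
        return x
    cols = []
    for k in support:
        spike = np.zeros_like(y)
        spike[k] = 1.0
        cols.append(np.convolve(spike, d, mode="same"))  # shifted copy of the wavelet
    A = np.stack(cols, axis=1)
    x[support] = np.linalg.lstsq(A, y, rcond=None)[0]
    return x

# Toy usage: three reflectivity spikes convolved with a Ricker-like wavelet plus noise.
rng = np.random.default_rng(0)
t = np.arange(-20, 21)
wavelet = (1 - 2 * (0.1 * t) ** 2) * np.exp(-(0.1 * t) ** 2)
reflectivity = np.zeros(300)
reflectivity[[50, 120, 200]] = [1.0, -0.7, 0.5]
trace = np.convolve(reflectivity, wavelet, mode="same") + 0.01 * rng.standard_normal(300)
support = detect_support(trace, wavelet, tau=4.0)      # estimated spike locations
estimate = refit_amplitudes(trace, wavelet, support)   # estimated reflectivity
```

The role of the normalization is visible in the toy example: the normalized correlation at a true spike is roughly the same regardless of the spike's amplitude, so one global threshold separates spikes from noise without per-trace tuning.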
Related papers
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive
Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show how simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Refining neural network predictions using background knowledge [68.35246878394702]
We show that logical background knowledge can be used in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z) - Learning Non-Vacuous Generalization Bounds from Optimization [8.294831479902658]
We present a simple yet non-vacuous generalization bound from the optimization perspective.
We achieve this goal by leveraging that the hypothesis set accessed by gradient algorithms is essentially fractal-like.
Numerical studies demonstrate that our approach is able to yield plausible generalization guarantees for modern neural networks.
arXiv Detail & Related papers (2022-06-09T08:59:46Z) - Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm
Regularization [44.54772242784423]
We develop an efficient nonconvex regularization algorithm for low-rank tensor learning.
The proposed algorithm avoids expensive tensor folding/unfolding operations.
Experiments show that the proposed algorithm is more efficient and uses less memory than the existing state-of-the-art.
arXiv Detail & Related papers (2022-05-06T07:47:10Z) - Data-driven Weight Initialization with Sylvester Solvers [72.11163104763071]
We propose a data-driven scheme to initialize the parameters of a deep neural network.
We show that our proposed method is especially effective in few-shot and fine-tuning settings.
arXiv Detail & Related papers (2021-05-02T07:33:16Z) - Efficient Sparse Coding using Hierarchical Riemannian Pursuit [2.4087148947930634]
Sparse coding is a class of unsupervised methods for learning a representation of the input data in the form of a linear combination of a dictionary and a code.
We propose an efficient hierarchical Riemannian pursuit scheme for sparse coding tasks with a complete dictionary.
arXiv Detail & Related papers (2021-04-21T02:16:44Z) - Activation Relaxation: A Local Dynamical Approximation to
Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z) - A Partial Regularization Method for Network Compression [0.0]
We propose a partial regularization approach, rather than penalizing all parameters (full regularization), in order to perform model compression at higher speed.
Experimental results show that, as expected, the computational cost is reduced, with shorter running times observed in almost all situations.
Surprisingly, it also improves important metrics such as regression fit and classification accuracy in both the training and test phases on multiple datasets.
arXiv Detail & Related papers (2020-09-03T00:38:27Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z)