DeepMP for Non-Negative Sparse Decomposition
- URL: http://arxiv.org/abs/2007.14281v1
- Date: Tue, 28 Jul 2020 14:52:06 GMT
- Title: DeepMP for Non-Negative Sparse Decomposition
- Authors: Konstantinos A. Voulgaris, Mike E. Davies, Mehrdad Yaghoobi
- Abstract summary: Non-negative signals form an important class of sparse signals.
Greedy and convex relaxed algorithms are among the most popular methods.
One such modification has been proposed for Matching Pursuit (MP) based algorithms.
- Score: 14.790515227906257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-negative signals form an important class of sparse signals. Many
algorithms have already been proposed to recover such non-negative
representations, where greedy and convex relaxed algorithms are among the most
popular methods. The greedy techniques are low computational cost algorithms,
which have also been modified to incorporate the non-negativity of the
representations. One such modification has been proposed for Matching Pursuit
(MP) based algorithms, which first chooses positive coefficients and uses a
non-negative optimisation technique that guarantees the non-negativity of the
coefficients. The performance of greedy algorithms, like that of all
non-exhaustive search methods, suffers when the linear generative model,
called the dictionary, is highly coherent. We here first reformulate the non-negative matching
pursuit algorithm in the form of a deep neural network. We then show that the
proposed model after training yields a significant improvement in terms of
exact recovery performance, compared to other non-trained greedy algorithms,
while keeping the complexity low.
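The non-negative matching pursuit scheme the abstract builds on can be sketched as follows. This is an illustrative baseline only: the function name, the greedy stopping rule, and the simple additive coefficient update are assumptions, not the paper's trained DeepMP network or its exact non-negative optimisation step.

```python
import numpy as np

def nn_matching_pursuit(D, y, n_iter=50):
    """Sketch of non-negative matching pursuit: at each step, pick the
    atom with the largest *positive* correlation with the residual, so
    every coefficient added stays non-negative."""
    x = np.zeros(D.shape[1])
    r = y.copy()
    for _ in range(n_iter):
        c = D.T @ r                 # correlations with the current residual
        k = int(np.argmax(c))       # most positively-correlated atom
        if c[k] <= 0:               # no atom can reduce the residual
            break
        x[k] += c[k]                # coefficient update stays non-negative
        r = r - c[k] * D[:, k]      # subtract the chosen atom's contribution
    return x

# toy example: random dictionary of unit-norm atoms, 2-sparse signal
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, 0.5]
y = D @ x_true
x_hat = nn_matching_pursuit(D, y)
```

With unit-norm atoms, each accepted step strictly decreases the residual norm, which is why the greedy loop makes progress even without the learned weights the paper adds.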
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
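The classical sketch matrix the CountSketch abstract refers to has exactly one non-zero entry, a random sign, in each column; the cited work learns the locations instead of drawing them at random. A minimal sketch of the classical construction (names and dimensions are illustrative, not from the paper):

```python
import numpy as np

def count_sketch_matrix(m, n, rng):
    """Build an m x n CountSketch matrix: one non-zero (+1 or -1) per
    column, at a random row. Multiplying by it compresses n-dimensional
    data down to m dimensions."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)        # non-zero row per column
    signs = rng.choice([-1.0, 1.0], size=n)  # random sign per column
    S[rows, np.arange(n)] = signs
    return S

rng = np.random.default_rng(1)
S = count_sketch_matrix(8, 100, rng)
A = rng.standard_normal((100, 5))
SA = S @ A   # compressed data: 8 x 5 instead of 100 x 5
```

Because each column has a single non-zero, applying the sketch costs only one add per input entry, which is what makes learning the non-zero positions (rather than the whole matrix) an attractive target.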
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - A fast Multiplicative Updates algorithm for Non-negative Matrix Factorization [2.646309221150203]
We propose to improve the Multiplicative Updates algorithm by crafting a tighter upper bound of the Hessian matrix for each alternate subproblem.
Convergence is still ensured and we observe in practice on both synthetic and real world dataset that the proposed fastMU algorithm is often several orders of magnitude faster than the regular Multiplicative Updates algorithm.
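For context, the baseline that fastMU improves on is the classical Lee–Seung multiplicative-updates scheme for the Frobenius loss, sketched below. This is only the standard algorithm, not the paper's tightened-bound variant; the function name and initialisation are assumptions.

```python
import numpy as np

def multiplicative_updates(V, r, n_iter=200, eps=1e-9, seed=0):
    """Classical multiplicative updates for V ~ W @ H with W, H >= 0
    (Frobenius loss). Element-wise multiply/divide keeps both factors
    non-negative at every iteration."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W
    return W, H

V = np.abs(np.random.default_rng(2).standard_normal((30, 20)))
W, H = multiplicative_updates(V, r=5)
```

Each update is derived by majorizing the loss with an upper bound on the Hessian; the cited fastMU work crafts a tighter bound per subproblem, which is where its speed-up comes from.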
arXiv Detail & Related papers (2023-03-31T12:09:36Z) - SymBa: Symmetric Backpropagation-Free Contrastive Learning with
Forward-Forward Algorithm for Optimizing Convergence [1.6244541005112747]
The paper proposes a new algorithm called SymBa that aims to achieve more biologically plausible learning.
It is based on the Forward-Forward (FF) algorithm, which is a BP-free method for training neural networks.
The proposed algorithm has the potential to improve our understanding of how the brain learns and processes information.
arXiv Detail & Related papers (2023-03-15T07:39:23Z) - HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection [75.84584400866254]
We propose a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses.
HARRIS' decisions are based on a forest model whose trees are built by optimizing a hybrid ranking and regression loss function.
arXiv Detail & Related papers (2022-10-31T14:06:11Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Learning Sparse Graphs via Majorization-Minimization for Smooth Node
Signals [8.140698535149042]
We propose an algorithm for learning a sparse weighted graph by estimating its adjacency matrix.
We show that the proposed algorithm converges faster, in terms of the average number of iterations, than several existing methods in the literature.
arXiv Detail & Related papers (2022-02-06T17:06:13Z) - Gaussian Elimination versus Greedy Methods for the Synthesis of Linear
Reversible Circuits [0.0]
Linear reversible circuits represent a subclass of reversible circuits with many applications in quantum computing.
We propose a new algorithm for any linear reversible operator by using an optimized version of the Gaussian elimination algorithm and a tuned LU factorization.
Overall, our algorithms improve the state-of-the-art methods for specific ranges of problem sizes.
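The link between Gaussian elimination and circuit synthesis in this abstract is concrete: a linear reversible operator is an invertible binary matrix over GF(2), and each elimination row operation corresponds to one CNOT gate. A minimal sketch of the plain (un-optimized) version, with illustrative names; the paper's contribution is a tuned elimination order and LU factorization on top of this idea:

```python
import numpy as np

def synthesize_cnots(M):
    """Synthesize an invertible binary matrix M (over GF(2)) as a list
    of CNOT gates via plain Gaussian elimination. Each row operation
    row[t] ^= row[c] corresponds to CNOT(control=c, target=t)."""
    A = M.copy() % 2
    n = A.shape[0]
    gates = []
    for col in range(n):
        if A[col, col] == 0:  # ensure a 1 on the diagonal
            pivot = next(r for r in range(col + 1, n) if A[r, col])
            A[col] ^= A[pivot]
            gates.append((pivot, col))
        for r in range(n):    # clear the rest of the column
            if r != col and A[r, col]:
                A[r] ^= A[col]
                gates.append((col, r))
    return gates

M = np.array([[1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=np.uint8)
gates = synthesize_cnots(M)
```

Replaying the gate list in reverse on the identity matrix reconstructs M, since each row-XOR is its own inverse; the gate count of this naive scheme is what the optimized elimination order reduces.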
arXiv Detail & Related papers (2022-01-17T16:31:42Z) - Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z) - Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.