Reconstructing Sparse Signals via Greedy Monte-Carlo Search
- URL: http://arxiv.org/abs/2008.03175v3
- Date: Fri, 29 Jan 2021 09:05:13 GMT
- Title: Reconstructing Sparse Signals via Greedy Monte-Carlo Search
- Authors: Kao Hayashi, Tomoyuki Obuchi, Yoshiyuki Kabashima
- Abstract summary: We propose a Monte-Carlo-based method for reconstructing sparse signals in a high-dimensional setting.
The proposed algorithm is called the greedy Monte-Carlo (GMC) search algorithm.
- Score: 6.660458629649825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Monte-Carlo-based method for reconstructing sparse signals in
the formulation of sparse linear regression in a high-dimensional setting. The
basic idea of this algorithm is to explicitly select variables or covariates to
represent a given data vector or responses and accept randomly generated
updates of that selection if and only if the energy or cost function decreases.
This algorithm is called the greedy Monte-Carlo (GMC) search algorithm. Its
performance is examined via numerical experiments, which suggest that in the
noiseless case, GMC can achieve perfect reconstruction at reasonable levels of
undersampling: it can outperform the $\ell_1$ relaxation but
does not reach the algorithmic limit of MC-based methods theoretically
clarified by an earlier analysis. The necessary computational time is also
examined and compared with that of an algorithm using simulated annealing.
Additionally, experiments on the noisy case are conducted on synthetic datasets
and on a real-world dataset, supporting the practicality of GMC.
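The abstract describes the mechanism concretely enough to sketch: maintain an explicit selection of columns, propose random updates of that selection, and accept a proposal if and only if the energy decreases. Below is a minimal NumPy sketch of that loop; the single-swap proposal, least-squares energy, and stopping threshold are illustrative assumptions filled in for this sketch, not details taken from the paper.

```python
import numpy as np

def gmc_search(A, y, k, n_steps=100_000, tol=1e-12, seed=0):
    """Minimal sketch of a greedy Monte-Carlo (GMC) style search for
    sparse linear regression: keep an explicit selection of k columns
    of A, propose random single-column swaps, and accept a proposal
    if and only if the least-squares energy decreases.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]

    def energy(support):
        # Energy = residual sum of squares of the best least-squares
        # fit restricted to the currently selected columns.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return np.sum((y - A[:, support] @ coef) ** 2)

    support = rng.choice(n, size=k, replace=False)
    e = energy(support)
    for _ in range(n_steps):
        proposal = support.copy()
        proposal[rng.integers(k)] = rng.choice(
            np.setdiff1d(np.arange(n), support)  # swap in an unused column
        )
        e_new = energy(proposal)
        if e_new < e:   # greedy acceptance: only energy-decreasing moves
            support, e = proposal, e_new
        if e < tol:     # perfect reconstruction in the noiseless case
            break

    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, np.sort(support), e
```

Because only strictly energy-decreasing moves are accepted, this is a greedy search rather than a finite-temperature Metropolis sampler, which is the relevant contrast with the simulated-annealing algorithm the abstract compares against.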
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
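As a generic illustration of the gradient-free idea in the entry above (not the paper's accelerated method, whose use of high-order smoothness is more involved), a two-point zeroth-order estimator replaces the gradient with a finite difference along a random direction:

```python
import numpy as np

def zo_gd(f, x0, step=0.05, h=1e-4, n_iters=2000, seed=0):
    """Generic two-point zeroth-order gradient descent.

    Estimates a directional derivative from two function evaluations
    along a random unit direction and steps against it. Textbook
    gradient-free estimator, not the paper's accelerated variant.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(n_iters):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform random direction on the sphere
        # Two function evaluations stand in for one gradient evaluation.
        g = d * (f(x + h * u) - f(x - h * u)) / (2.0 * h) * u
        x -= step * g
    return x
```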
- Making RL with Preference-based Feedback Efficient via Randomization [11.019088464664696]
Reinforcement Learning algorithms that learn from human feedback need to be efficient in terms of statistical complexity, computational complexity, and query complexity.
We present an algorithm that is sample efficient (i.e. has near-optimal worst-case regret bounds) and has polynomial running time (i.e. computational complexity is polynomial with respect to relevant parameters).
To extend the results to more general nonlinear function approximation, we design a model-based randomized algorithm inspired by the idea of Thompson sampling.
arXiv Detail & Related papers (2023-10-23T04:19:35Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions, largely because of the covariance matrices it must form.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Greedy versus Map-based Optimized Adaptive Algorithms for random-telegraph-noise mitigation by spectator qubits [6.305016513788048]
In a scenario where data-storage qubits are kept in isolation as far as possible, noise mitigation can still be done using additional noise probes.
We construct a theoretical model assuming projective measurements on the qubits, and derive the performance of different measurement and control strategies.
We show, analytically and numerically, that MOAAAR outperforms the Greedy algorithm, especially in the regime of high noise sensitivity of the spectator qubit (SQ).
arXiv Detail & Related papers (2022-05-25T08:25:10Z)
- Learning Sparse Graphs via Majorization-Minimization for Smooth Node Signals [8.140698535149042]
We propose an algorithm for learning a sparse weighted graph by estimating its adjacency matrix.
We show that the proposed algorithm converges faster, in terms of the average number of iterations, than several existing methods in the literature.
arXiv Detail & Related papers (2022-02-06T17:06:13Z)
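For reference, the majorization-minimization scheme underlying the graph-learning entry above follows the standard MM template (generic, not the paper's specific surrogate). Writing $f$ for the objective, MM constructs at each iterate $x^{(t)}$ a surrogate $g(\cdot \mid x^{(t)})$ with $g(x \mid x^{(t)}) \ge f(x)$ for all $x$ and $g(x^{(t)} \mid x^{(t)}) = f(x^{(t)})$, then sets $x^{(t+1)} = \arg\min_x g(x \mid x^{(t)})$. This guarantees monotone descent: $f(x^{(t+1)}) \le g(x^{(t+1)} \mid x^{(t)}) \le g(x^{(t)} \mid x^{(t)}) = f(x^{(t)})$.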
- Matrix Reordering for Noisy Disordered Matrices: Optimality and Computationally Efficient Algorithms [9.245687221460654]
Motivated by applications in single-cell biology and metagenomics, we investigate the problem of matrix reordering based on a noisy monotone Toeplitz matrix model.
We establish the fundamental statistical limit for this problem in a decision-theoretic framework and demonstrate that a constrained least squares estimator achieves the optimal rate.
To address this, we propose a novel polynomial-time adaptive sorting algorithm with guaranteed performance improvement.
arXiv Detail & Related papers (2022-01-17T14:53:52Z)
- Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
arXiv Detail & Related papers (2021-05-23T19:21:55Z)
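The leverage scores estimated in the entry above have a standard exact definition: the $i$-th score is the squared norm of the $i$-th row of an orthonormal basis for the column space of the matrix. A small exact-computation sketch, useful as a baseline against which randomized estimators can be checked (the rank-tolerance rule is an illustrative choice):

```python
import numpy as np

def leverage_scores(A):
    """Exact row leverage scores of A, valid for arbitrary rank.

    The i-th leverage score is the squared norm of the i-th row of U_r,
    where U_r holds the left singular vectors for the nonzero singular
    values of A. Randomized methods approximate these without a full SVD.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    # Numerical rank: keep singular values above a standard tolerance.
    tol = s.max() * max(A.shape) * np.finfo(float).eps
    r = int(np.sum(s > tol))
    return np.sum(U[:, :r] ** 2, axis=1)
```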
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
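One common way to combine a reconstruction error with an $l_{2,p}$ regularizer, as in the entry above, is $\min_{W^\top W = I} \|X - X W W^\top\|_F^2 + \lambda \|W\|_{2,p}^p$, where rows of $W$ with small $l_2$ norm mark features that can be discarded. This form is an assumption for illustration; the paper's exact formulation may differ.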
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
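For context, the AMP recursion that L-GM-AMP builds on alternates a denoising step with an Onsager-corrected residual. A generic sketch with a soft-threshold denoiser follows; the fixed threshold and iteration count are simplifying assumptions, and the paper's learned Gaussian-mixture denoiser is not reproduced here.

```python
import numpy as np

def amp_soft_threshold(A, y, lam=0.1, n_iters=30):
    """Generic AMP recursion with a soft-threshold denoiser.

    L-GM-AMP replaces the fixed soft threshold used here with a
    learned Gaussian-mixture denoiser.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.astype(float).copy()
    for _ in range(n_iters):
        pseudo = x + A.T @ z  # effective noisy observation of x
        x_new = np.sign(pseudo) * np.maximum(np.abs(pseudo) - lam, 0.0)
        # Onsager correction: (n/m) * z * average denoiser derivative,
        # which for soft thresholding is the fraction above threshold.
        z = y - A @ x_new + (z / m) * np.sum(np.abs(pseudo) > lam)
        x = x_new
    return x
```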
- Nearly Linear Row Sampling Algorithm for Quantile Regression [54.75919082407094]
We give a row sampling algorithm for the quantile loss function with sample complexity nearly linear in the dimensionality of the data.
Based upon our row sampling algorithm, we give the fastest known algorithm for quantile regression and a graph sparsification algorithm for balanced directed graphs.
arXiv Detail & Related papers (2020-06-15T13:40:07Z)
- Semi-analytic approximate stability selection for correlated data in generalized linear models [3.42658286826597]
We propose a novel approximate inference algorithm that can conduct Stability Selection without the repeated fitting.
The algorithm is based on the replica method of statistical mechanics and vector approximate message passing of information theory.
Numerical experiments indicate that the algorithm exhibits fast convergence and high approximation accuracy for both synthetic and real-world data.
arXiv Detail & Related papers (2020-03-19T10:43:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.