Covariance-Free Sparse Bayesian Learning
- URL: http://arxiv.org/abs/2105.10439v1
- Date: Fri, 21 May 2021 16:20:07 GMT
- Title: Covariance-Free Sparse Bayesian Learning
- Authors: Alexander Lin, Andrew H. Song, Berkin Bilgic, and Demba Ba
- Abstract summary: We introduce a new SBL inference algorithm that avoids explicit inversions of the covariance matrix.
Our method can be up to thousands of times faster than existing baselines.
We showcase how our new algorithm enables SBL to tractably tackle high-dimensional signal recovery problems.
- Score: 62.24008859844098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse Bayesian learning (SBL) is a powerful framework for tackling the
sparse coding problem while also providing uncertainty quantification. However,
the most popular inference algorithms for SBL become too expensive for
high-dimensional problems due to the need to maintain a large covariance
matrix. To resolve this issue, we introduce a new SBL inference algorithm that
avoids explicit computation of the covariance matrix, thereby saving
significant time and space. Instead of performing costly matrix inversions, our
covariance-free method solves multiple linear systems to obtain provably
unbiased estimates of the posterior statistics needed by SBL. These systems can
be solved in parallel, enabling further acceleration of the algorithm via
graphics processing units. In practice, our method can be up to thousands of
times faster than existing baselines, reducing hours of computation time to
seconds. We showcase how our new algorithm enables SBL to tractably tackle
high-dimensional signal recovery problems, such as deconvolution of calcium
imaging data and multi-contrast reconstruction of magnetic resonance images.
Finally, we open-source a toolbox containing all of our implementations to
drive future research in SBL.
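To make the idea concrete, below is a minimal sketch of a covariance-free SBL iteration in the spirit of the abstract; it is not the authors' released toolbox. It assumes a Gaussian likelihood with known noise variance, EM-style hyperparameter updates, and a Rademacher probe-vector estimator for the posterior covariance diagonal; the function name `sbl_covfree` and all parameter defaults are illustrative.

```python
# A minimal sketch of covariance-free SBL, assuming a Gaussian likelihood
# with known noise variance `sigma2` and EM-style hyperparameter updates.
# All names and defaults are illustrative, not from the paper's toolbox.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def sbl_covfree(Phi, y, sigma2=1e-2, n_probes=20, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = Phi.shape
    gamma = np.ones(d)                      # prior variances: x_i ~ N(0, gamma_i)
    for _ in range(n_iters):
        # Posterior precision A = Phi^T Phi / sigma2 + diag(1/gamma), applied
        # matrix-free so the covariance is never formed or inverted explicitly.
        Av = lambda v: Phi.T @ (Phi @ v) / sigma2 + v / gamma
        A = LinearOperator((d, d), matvec=Av)
        # Posterior mean mu = A^{-1} Phi^T y / sigma2 via conjugate gradient.
        mu, _ = cg(A, Phi.T @ y / sigma2)
        # Unbiased diagonal estimate: E[p * (A^{-1} p)] = diag(A^{-1}) for
        # Rademacher probes p; each probe is an independent linear solve.
        diag_est = np.zeros(d)
        for _ in range(n_probes):
            p = rng.choice([-1.0, 1.0], size=d)
            z, _ = cg(A, p)
            diag_est += p * z
        diag_est /= n_probes
        # EM update gamma_i = mu_i^2 + Sigma_ii (clamp rare negative estimates).
        gamma = mu**2 + np.maximum(diag_est, 1e-12)
    return mu, gamma
```

Since the per-probe solves are independent, they can be batched into a single multi-right-hand-side solve and dispatched to a GPU, which is the parallelism the abstract credits for its speedups.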
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions, in part because it must maintain large covariance matrices.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Sublinear Time Algorithms for Several Geometric Optimization (With Outliers) Problems In Machine Learning [8.794896028549323]
We revisit the Minimum Enclosing Ball (MEB) problem in Euclidean space $\mathbb{R}^d$.
We introduce the notion of stability for MEB, which is natural and easy to understand.
Under the stability assumption, we present two sampling algorithms for computing radius-approximate MEB.
arXiv Detail & Related papers (2023-01-07T15:03:45Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - COSMIC: fast closed-form identification from large-scale data for LTV systems [4.10464681051471]
We introduce a closed-form method for identifying discrete-time linear time-variant systems from data.
We develop an algorithm with guarantees of optimality and with a complexity that increases linearly with the number of instants considered per trajectory.
Our algorithm was applied to both a Low Fidelity Simulator and a Functional Engineering Simulator for the Comet Interceptor mission.
arXiv Detail & Related papers (2021-12-08T16:07:59Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that for some training points the labels are known, for some they are unknown, and for others they are uncertain due to random noise or other causes such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction [43.736296034673124]
Current bundle adjustment solvers such as the Levenberg-Marquardt (LM) algorithm are limited by the bottleneck of solving the Reduced Camera System (RCS), whose dimension is proportional to the number of cameras.
We propose a bundle adjustment algorithm that seeks to decompose the RCS approximately inside the LM iterations to improve efficiency and scalability.
arXiv Detail & Related papers (2020-08-02T10:26:09Z) - Multiscale Non-stationary Stochastic Bandits [83.48992319018147]
We propose a novel multiscale changepoint detection method for non-stationary linear bandit problems, called Multiscale-LinUCB.
Experimental results show that our proposed Multiscale-LinUCB algorithm outperforms other state-of-the-art algorithms in non-stationary contextual environments.
arXiv Detail & Related papers (2020-02-13T00:24:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.