High-Dimensional Sparse Bayesian Learning without Covariance Matrices
- URL: http://arxiv.org/abs/2202.12808v1
- Date: Fri, 25 Feb 2022 16:35:26 GMT
- Title: High-Dimensional Sparse Bayesian Learning without Covariance Matrices
- Authors: Alexander Lin, Andrew H. Song, Berkin Bilgic, Demba Ba
- Abstract summary: We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
- Score: 66.60078365202867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse Bayesian learning (SBL) is a powerful framework for tackling the
sparse coding problem. However, the most popular inference algorithms for SBL
become too expensive for high-dimensional settings, due to the need to store
and compute a large covariance matrix. We introduce a new inference scheme that
avoids explicit construction of the covariance matrix by solving multiple
linear systems in parallel to obtain the posterior moments for SBL. Our
approach couples a little-known diagonal estimation result from numerical
linear algebra with the conjugate gradient algorithm. On several simulations,
our method scales better than existing approaches in computation time and
memory, especially for structured dictionaries capable of fast matrix-vector
multiplication.
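The mechanism is easy to sketch: draw random sign (Rademacher) probe vectors, solve one linear system per probe with conjugate gradient (CG), and combine the solutions into a ratio estimate of the diagonal of the inverse. The Python sketch below is illustrative only, not the authors' code; the function names and defaults are ours, and it assumes the matrix is symmetric positive definite and available only through a matrix-vector product routine.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=500):
    """Solve A x = b for symmetric positive-definite A, given only x -> A x."""
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def estimate_diag_inverse(matvec, n, num_probes=30, rng=None):
    """Stochastic estimate of diag(A^{-1}) from Rademacher probes:
    diag(A^{-1}) ~= (sum_s v_s * A^{-1} v_s) / (sum_s v_s * v_s)."""
    rng = np.random.default_rng(rng)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # random sign probe
        x = conjugate_gradient(matvec, v)    # A^{-1} v via CG; A is never formed
        num += v * x
        den += v * v
    return num / den
```

Because the probes are independent, the CG solves can run in parallel, and matvec only needs a fast matrix-vector product (e.g., an FFT-based operator for a structured dictionary), so neither the covariance matrix nor its inverse is ever stored.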
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the model's covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Recovering Simultaneously Structured Data via Non-Convex Iteratively Reweighted Least Squares [0.8702432681310401]
We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method performs favorably in recovering such simultaneously structured data from few measurements.
arXiv Detail & Related papers (2023-06-08T06:35:47Z)
- Sublinear Time Approximation of Text Similarity Matrices [50.73398637380375]
We introduce a generalization of the popular Nyström method to the indefinite setting.
Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix.
We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices.
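As background for this entry, the classical Nyström method approximates a positive semi-definite similarity matrix from a small set of sampled columns; the paper's contribution is extending this idea to indefinite matrices. Below is a minimal sketch of the standard (PSD) version only, with illustrative names and uniform column sampling.

```python
import numpy as np

def nystrom(K_column, n, landmarks, rng=None):
    """Standard Nystrom sketch: approximate K ~= C @ pinv(W) @ C.T using only
    `landmarks` sampled columns. K_column(j) returns column j of K, so the
    full n x n matrix is never formed (the key to sublinear behavior)."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(n, size=landmarks, replace=False)
    C = np.column_stack([K_column(j) for j in idx])  # n x m sampled columns
    W = C[idx, :]                                    # m x m intersection block
    return C, np.linalg.pinv(W)

# Applying the approximation to a vector costs O(n * m) instead of O(n^2):
#   C, W_pinv = nystrom(K_column, n, 50)
#   y = C @ (W_pinv @ (C.T @ x))
```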
arXiv Detail & Related papers (2021-12-17T17:04:34Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear-algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
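The unfolding idea itself is generic and worth a tiny sketch: each iteration of a model-based algorithm becomes one "layer", and per-iteration constants become parameters one could learn. The code below unrolls plain ISTA for a sparse least-squares problem, not the paper's GDPA-based method, purely to show the iteration-as-layer structure; all names are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_layer(x, y, W, step, tau):
    """One unrolled 'layer' = one ISTA iteration for
    min 0.5 * ||y - W x||^2 + lam * ||x||_1.
    In a learned unrolled network, step and tau would be trainable per layer."""
    grad = W.T @ (W @ x - y)
    return soft_threshold(x - step * grad, tau)

def unrolled_ista(y, W, num_layers=10, lam=0.1):
    step = 1.0 / np.linalg.norm(W, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(W.shape[1])
    for _ in range(num_layers):             # a fixed, small number of layers
        x = ista_layer(x, y, W, step, lam * step)
    return x
```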
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Covariance-Free Sparse Bayesian Learning [62.24008859844098]
We introduce a new SBL inference algorithm that avoids explicit inversions of the covariance matrix.
Our method can be up to thousands of times faster than existing baselines.
We showcase how our new algorithm enables SBL to tractably tackle high-dimensional signal recovery problems.
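This entry is the predecessor of the headline paper, so the pieces fit together naturally. Below is a hedged sketch of one way they can combine, reusing conjugate_gradient and estimate_diag_inverse from the first code block above; it is a generic EM-style SBL loop under an assumed Gaussian likelihood, not the authors' exact algorithm.

```python
import numpy as np

def sbl_covariance_free(Phi, y, beta=1.0, num_iters=20, num_probes=20, seed=0):
    """EM-style sparse Bayesian learning where the posterior precision
    A = beta * Phi^T Phi + diag(alpha) is applied implicitly, never stored."""
    n, d = Phi.shape
    alpha = np.ones(d)  # prior precisions; large alpha_i prunes coefficient i
    for _ in range(num_iters):
        matvec = lambda x, a=alpha: beta * (Phi.T @ (Phi @ x)) + a * x
        mu = conjugate_gradient(matvec, beta * (Phi.T @ y))  # posterior mean
        # stochastic estimate of diag(Sigma) = diag(A^{-1}); a fixed seed keeps
        # the probe vectors consistent across EM iterations
        sigma_diag = estimate_diag_inverse(matvec, d, num_probes, rng=seed)
        alpha = 1.0 / (mu ** 2 + sigma_diag)  # EM update of the precisions
    return mu, alpha
```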
arXiv Detail & Related papers (2021-05-21T16:20:07Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of a linear combination of both the first-order and high-order base Laplacian matrices.
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
- Solution Path Algorithm for Twin Multi-class Support Vector Machine [6.97711662470035]
The paper is devoted to a fast regularization-parameter tuning algorithm for the twin multi-class support vector machine.
A new sample dataset division method is adopted, and the Lagrangian multipliers are proved to be piecewise linear.
The proposed method achieves good classification performance while reducing the computational cost of grid search from exponential to constant level.
arXiv Detail & Related papers (2020-05-30T14:05:46Z)
- Efficient Alternating Least Squares Algorithms for Low Multilinear Rank Approximation of Tensors [6.308492837096872]
We propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors.
ALS-based approaches eliminate the redundant computation of singular vectors of intermediate matrices and are therefore free of the data-explosion problem.
Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that ALS-based methods substantially reduce the total cost of the original algorithms and are highly scalable for parallel computing.
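For orientation, a low multilinear rank (Tucker) approximation can be computed with a plain ALS/HOOI loop; the sketch below is the textbook version in NumPy, not the paper's accelerated variants, and all names are ours.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hooi(T, ranks, num_iters=10):
    """ALS-style higher-order orthogonal iteration for a Tucker approximation:
    T ~= G x_1 U[0] x_2 U[1] ... with orthonormal factor matrices U[n]."""
    N = T.ndim
    # initialize factors with truncated HOSVD (SVDs of the raw unfoldings)
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :ranks[n]]
         for n in range(N)]
    for _ in range(num_iters):
        for n in range(N):
            Y = T
            for m in range(N):        # project onto all factors except mode n
                if m != n:
                    Y = mode_multiply(Y, U[m].T, m)
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    G = T
    for m in range(N):                # core tensor from the final factors
        G = mode_multiply(G, U[m].T, m)
    return G, U
```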
arXiv Detail & Related papers (2020-04-06T11:58:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.