Block Sparse Bayesian Learning: A Diversified Scheme
- URL: http://arxiv.org/abs/2402.04646v2
- Date: Wed, 30 Oct 2024 10:13:21 GMT
- Title: Block Sparse Bayesian Learning: A Diversified Scheme
- Authors: Yanhao Zhang, Zhihan Zhu, Yong Xia
- Abstract summary: We introduce a novel prior, the Diversified Block Sparse Prior, to characterize the widespread block sparsity phenomenon in real-world data.
By allowing diversification of the intra-block variances and inter-block correlation matrices, we effectively address the sensitivity of existing block sparse learning methods to pre-defined block information.
- Score: 16.61484758008309
- Abstract: This paper introduces a novel prior, the Diversified Block Sparse Prior, to characterize the widespread block sparsity phenomenon in real-world data. By allowing diversification of the intra-block variances and inter-block correlation matrices, we effectively address the sensitivity of existing block sparse learning methods to pre-defined block information, which enables adaptive block estimation while mitigating the risk of overfitting. Based on this prior, a diversified block sparse Bayesian learning method (DivSBL) is proposed, using the EM algorithm and a dual ascent method for hyperparameter estimation. Moreover, we establish global and local optimality theory for our model. Experiments validate the advantages of DivSBL over existing algorithms.
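As a rough, non-authoritative illustration of the pipeline the abstract describes, the numpy sketch below runs EM-style updates for a block-sparse Gaussian prior whose per-block covariance combines element-wise (diversified) variances with a block correlation matrix. The function name, the fixed noise variance, and the simple moment-matching M-step are our assumptions; the paper's dual-ascent update for the correlation hyperparameters is not reproduced.

```python
import numpy as np

def divsbl_sketch(Phi, y, block_sizes, noise_var=1e-2, n_iter=50):
    """EM-flavored block-sparse recovery with diversified per-element variances."""
    n, m = Phi.shape
    starts = np.cumsum([0] + list(block_sizes[:-1]))
    gammas = [np.ones(b) for b in block_sizes]   # diversified intra-block variances
    Bs = [np.eye(b) for b in block_sizes]        # intra-block correlation matrices
    for _ in range(n_iter):
        # Prior covariance: block-diagonal with D^{1/2} B D^{1/2} per block
        Sigma0 = np.zeros((m, m))
        for s, b, g, B in zip(starts, block_sizes, gammas, Bs):
            D = np.diag(np.sqrt(g))
            Sigma0[s:s + b, s:s + b] = D @ B @ D
        # E-step: posterior moments from the standard Gaussian identities
        Sigma_y = noise_var * np.eye(n) + Phi @ Sigma0 @ Phi.T
        K = np.linalg.solve(Sigma_y, Phi @ Sigma0).T   # = Sigma0 Phi^T Sigma_y^{-1}
        mu = K @ y
        Sigma = Sigma0 - K @ Phi @ Sigma0
        # Simplified M-step: refit variances/correlations from posterior moments
        # (the paper instead couples this with a dual ascent step)
        for i, (s, b) in enumerate(zip(starts, block_sizes)):
            C = Sigma[s:s + b, s:s + b] + np.outer(mu[s:s + b], mu[s:s + b])
            gammas[i] = np.maximum(np.diag(C), 1e-10)
            Dinv = np.diag(1.0 / np.sqrt(gammas[i]))
            Bs[i] = Dinv @ C @ Dinv
    return mu

# Tiny demo on a synthetic block-sparse signal
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 60))
x = np.zeros(60)
x[8:12], x[32:36] = rng.standard_normal(4), rng.standard_normal(4)
y = Phi @ x + 0.01 * rng.standard_normal(40)
x_hat = divsbl_sketch(Phi, y, [4] * 15)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```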
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
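The entry above names a coding-rate-based loss; as context, here is the standard coding-rate expression from the MCR^2 literature that such a loss plausibly builds on. The exact objective in the paper may differ, and eps and the normalization below are illustrative choices.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(N*eps^2) * Z @ Z.T) for d x N embeddings."""
    d, N = Z.shape
    M = np.eye(d) + (d / (N * eps ** 2)) * (Z @ Z.T)
    return 0.5 * np.linalg.slogdet(M)[1]

Z = np.random.default_rng(0).standard_normal((8, 64))
Z /= np.linalg.norm(Z, axis=0, keepdims=True)  # unit-norm embeddings
print(coding_rate(Z))  # larger = more spread out, i.e. less collapsed
```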
- Towards Universal Dense Blocking for Entity Resolution [49.06313308481536]
We propose UniBlocker, a dense blocker that is pre-trained on a domain-independent, easily obtainable corpus.
By conducting domain-independent pre-training, UniBlocker can be adapted to various downstream blocking scenarios without requiring domain-specific fine-tuning.
Our experiments show that the proposed UniBlocker, without any domain-specific learning, significantly outperforms previous self- and unsupervised dense blocking methods.
arXiv Detail & Related papers (2024-04-23T08:39:29Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs a label distribution at each cascaded block and does not require the generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
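A toy reading of the CaFo entry above, assuming only what the summary states: each cascaded block gets its own head that outputs a label distribution and is fit independently, with no end-to-end backpropagation. The random feature blocks and ridge-regression heads are stand-ins for the paper's trained blocks and objective.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k, n_blocks = 500, 32, 10, 3
X = rng.standard_normal((N, d))
y = np.argmax(X @ rng.standard_normal((d, k)), axis=1)  # synthetic labels
Y = np.eye(k)[y]                                        # one-hot label distributions

feats = X
for b in range(n_blocks):
    # Each cascaded block: a fixed random feature map here, standing in for a
    # block trained by the CaFo objective
    W = rng.standard_normal((feats.shape[1], feats.shape[1])) / np.sqrt(feats.shape[1])
    feats = np.maximum(feats @ W, 0.0)
    # Independent per-block head: ridge regression onto the label distribution,
    # trained without backpropagating through earlier blocks
    H = np.linalg.solve(feats.T @ feats + 1e-2 * np.eye(feats.shape[1]), feats.T @ Y)
    acc = (np.argmax(feats @ H, axis=1) == y).mean()
    print(f"block {b}: head accuracy {acc:.2f}")
```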
- Semi-Blind Source Separation with Learned Constraints [1.2891210250935146]
Blind source separation (BSS) algorithms are unsupervised methods for hyperspectral data analysis.
In this article, we investigate a semi-supervised source separation approach in which we combine a projected alternating least-square algorithm with a learning-based regularization scheme.
We show that this allows for an innovative BSS algorithm, with improved accuracy, which provides physically interpretable solutions.
arXiv Detail & Related papers (2022-09-27T17:58:23Z)
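For the semi-blind source separation entry above, a minimal projected alternating-least-squares loop for X ≈ AS, with a nonnegativity projection standing in where the article plugs in its learning-based regularization of the sources. The rank, iteration count, and projection are illustrative assumptions, not the article's method.

```python
import numpy as np

def projected_als(X, r, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = np.abs(rng.standard_normal((m, r)))
    S = np.abs(rng.standard_normal((r, n)))
    for _ in range(n_iter):
        # Least-squares update of the mixing matrix, then project
        A = np.maximum(X @ np.linalg.pinv(S), 0)
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12  # fix scale ambiguity
        # Least-squares update of the sources; a learned regularizer/denoiser
        # would be applied here in a plug-and-play fashion
        S = np.maximum(np.linalg.pinv(A) @ X, 0)
    return A, S

rng = np.random.default_rng(1)
X = np.abs(rng.standard_normal((16, 5))) @ np.abs(rng.standard_normal((5, 300)))
A, S = projected_als(X, r=5)
print("relative fit error:", np.linalg.norm(X - A @ S) / np.linalg.norm(X))
```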
- Approach of variable clustering and compression for learning large Bayesian networks [0.0]
This paper describes a new approach for learning structures of large Bayesian networks based on blocks resulting from feature space clustering.
The advantages of the approach are evaluated in terms of both runtime and the accuracy of the recovered structures.
arXiv Detail & Related papers (2022-08-29T13:55:32Z)
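For the variable-clustering entry above, a hedged sketch of the block-construction step we infer from the summary: group variables by correlation before handing each block to a standard structure learner. The greedy threshold rule (tau) is our assumption; the paper's clustering and compression scheme may differ.

```python
import numpy as np

def correlation_blocks(data, tau=0.3):
    """Greedy grouping of columns whose absolute correlation exceeds tau."""
    C = np.abs(np.corrcoef(data, rowvar=False))
    unassigned, blocks = set(range(C.shape[0])), []
    while unassigned:
        seed = unassigned.pop()
        block = [seed]
        for j in list(unassigned):
            if C[seed, j] > tau:
                block.append(j)
                unassigned.remove(j)
        blocks.append(sorted(block))
    return blocks  # each block is then handed to a standard structure learner

rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 3))
data = np.column_stack([z[:, 0], z[:, 0] + 0.1 * rng.standard_normal(1000),
                        z[:, 1], z[:, 2]])
print(correlation_blocks(data))  # expect the first two columns grouped
```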
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
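To make the MCLLP setting concrete, the sketch below implements the naive bag-wise supervision the summary says most existing methods rely on: each bag's label proportion serves as a soft label for its instances. This illustrates the problem setup only; it is not the paper's risk-consistent estimator.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
d, k, n_bags, bag_size = 16, 3, 50, 20
W_true = rng.standard_normal((d, k))

bags, props = [], []
for _ in range(n_bags):
    Xb = rng.standard_normal((bag_size, d))
    yb = np.argmax(Xb @ W_true, axis=1)                     # instance labels (unobserved)
    bags.append(Xb)
    props.append(np.bincount(yb, minlength=k) / bag_size)   # observed bag proportions

W, lr = np.zeros((d, k)), 0.5
for _ in range(300):
    grad = np.zeros_like(W)
    for Xb, p in zip(bags, props):
        P = softmax(Xb @ W)
        # Bag-wise supervision: the bag proportion is a soft label for every
        # instance (softmax cross-entropy gradient is P - p per row)
        grad += Xb.T @ (P - p) / bag_size
    W -= lr * grad / n_bags

X_test = rng.standard_normal((1000, d))
y_test = np.argmax(X_test @ W_true, axis=1)
print("instance accuracy:", (np.argmax(X_test @ W, axis=1) == y_test).mean())
```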
- Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning Approach [73.62265030773652]
We present BRIEE, an algorithm for efficient reinforcement learning in Markov Decision Processes with block-structured dynamics.
BRIEE interleaves latent-state discovery, exploration, and exploitation, and can provably learn a near-optimal policy.
We show that BRIEE is more sample-efficient than the state-of-the-art Block MDP algorithm HOMER and other empirical baselines on challenging rich-observation combination-lock problems.
arXiv Detail & Related papers (2022-01-31T19:47:55Z)
- Stochastic Block-ADMM for Training Deep Networks [16.369102155752824]
We propose Block-ADMM as an approach to train deep neural networks in batch and online settings.
Our method works by splitting neural networks into an arbitrary number of blocks and utilizing auxiliary variables to connect these blocks.
We prove the convergence of our proposed method and justify its capabilities through experiments in supervised and weakly-supervised settings.
arXiv Detail & Related papers (2021-05-01T19:56:13Z)
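A toy sketch of the block-splitting idea in the entry above: a two-block *linear* network with an auxiliary variable tying the blocks together, so every ADMM subproblem has a closed form. Nonlinear blocks, minibatching, and the online variant from the paper are omitted; rho and the ridge terms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, k, N = 20, 10, 5, 200
X = rng.standard_normal((d, N))
Y = rng.standard_normal((k, N))
W1 = rng.standard_normal((h, d))
W2 = rng.standard_normal((k, h))
A = W1 @ X                     # auxiliary variable tied to block 1's output
U = np.zeros_like(A)           # scaled dual variable
rho = 1.0

for _ in range(100):
    # Block 2: least-squares fit of Y from the auxiliary activations
    W2 = Y @ A.T @ np.linalg.inv(A @ A.T + 1e-6 * np.eye(h))
    # Auxiliary update: trade off data fit against agreement with block 1
    lhs = 2 * W2.T @ W2 + rho * np.eye(h)
    A = np.linalg.solve(lhs, 2 * W2.T @ Y + rho * (W1 @ X - U))
    # Block 1: least-squares fit of the dual-shifted auxiliary targets
    W1 = (A + U) @ X.T @ np.linalg.inv(X @ X.T + 1e-6 * np.eye(d))
    # Dual ascent on the consensus constraint A = W1 @ X
    U += A - W1 @ X

print("fit error:", np.linalg.norm(W2 @ A - Y) / np.linalg.norm(Y))
```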