Variational Inference: Posterior Threshold Improves Network Clustering Accuracy in Sparse Regimes
- URL: http://arxiv.org/abs/2301.04771v3
- Date: Tue, 21 May 2024 06:07:11 GMT
- Title: Variational Inference: Posterior Threshold Improves Network Clustering Accuracy in Sparse Regimes
- Authors: Xuezhen Li, Can M. Le
- Abstract summary: This paper proposes a simple way to improve the variational inference method by hard thresholding the posterior of the community assignment after each iteration.
We show that the proposed method converges and can accurately recover the true community labels, even when the average node degree of the network is bounded.
- Score: 2.5782420501870296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational inference has been widely used in the machine learning literature to fit various Bayesian models. In network analysis, this method has been successfully applied to solve community detection problems. Although these results are promising, their theoretical support holds only for relatively dense networks, an assumption that may not hold for real networks. In addition, it has been shown recently that the variational loss surface has many saddle points, which may severely affect its performance, especially when applied to sparse networks. This paper proposes a simple way to improve the variational inference method by hard thresholding the posterior of the community assignment after each iteration. Using a random initialization that correlates with the true community assignment, we show that the proposed method converges and can accurately recover the true community labels, even when the average node degree of the network is bounded. An extensive numerical study further confirms the advantage of the proposed method over classical variational inference and another state-of-the-art algorithm.
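To make the thresholding step concrete, here is a minimal sketch of batch mean-field variational inference for a stochastic block model with the posterior hard-thresholded after every iteration. This is an illustration under simplifying assumptions (the block probability matrix B and community proportions pi are taken as known, and the initialization is purely random rather than correlated with the truth, unlike in the paper); it is not the authors' implementation.

```python
import numpy as np

def vi_sbm_hard_threshold(A, B, pi, n_iter=50, seed=0):
    """Mean-field VI for an SBM, hard thresholding the posterior of the
    community assignment after each iteration (sketch, not the paper's code).

    A  : (n, n) symmetric 0/1 adjacency matrix with zero diagonal
    B  : (K, K) block edge probabilities, entries in (0, 1), assumed known
    pi : (K,)  community proportions, assumed known
    """
    n, K = A.shape[0], B.shape[0]
    rng = np.random.default_rng(seed)
    tau = rng.dirichlet(np.ones(K), size=n)   # soft assignments q(z_i = k)

    logB, log1mB = np.log(B), np.log(1.0 - B)
    for _ in range(n_iter):
        # Mean-field update:
        # log tau_{ik} ∝ log pi_k + sum_{j != i} sum_l tau_{jl}
        #               [A_ij log B_kl + (1 - A_ij) log(1 - B_kl)]
        S = A @ tau @ logB.T + (1.0 - A) @ tau @ log1mB.T
        S -= tau @ log1mB.T                       # drop the j = i term (A_ii = 0)
        logits = np.log(pi) + S
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        tau = np.exp(logits)
        tau /= tau.sum(axis=1, keepdims=True)
        # Hard threshold: replace each row with a one-hot vector at its argmax,
        # i.e., project the posterior onto a vertex of the probability simplex.
        hard = np.zeros_like(tau)
        hard[np.arange(n), tau.argmax(axis=1)] = 1.0
        tau = hard
    return tau.argmax(axis=1)   # estimated community labels
```

The thresholding lines are the paper's one-line modification of classical variational inference: rounding each posterior row to a vertex of the simplex before the next update, which the abstract credits with avoiding the saddle points that degrade performance on sparse networks.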
Related papers
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width; a minimal sketch of the soft- vs. hard-thresholding operators follows this entry.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
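The entry above unfolds ISTA/ADMM iterations built around soft-thresholding, whereas the main paper thresholds hard; for reference, here are both operators (a generic sketch, not code from either paper).

```python
import numpy as np

def soft_threshold(x, lam):
    # Shrinks every entry toward zero by lam; the proximal operator of the
    # l1 norm, applied in each layer of unfolded ISTA/ADMM networks.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # Keeps entries with magnitude above lam unchanged and zeroes the rest.
    return np.where(np.abs(x) > lam, x, 0.0)
```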
- Efficient Estimation for Longitudinal Networks via Adaptive Merging [21.62069959992736]
We propose an efficient estimation framework for longitudinal networks, leveraging the strengths of adaptive network merging, tensor decomposition, and point processes.
It merges neighboring sparse networks so as to enlarge the number of observed edges and reduce estimation variance.
A projected gradient descent algorithm is proposed to facilitate estimation, and an upper bound on the estimation error at each iteration is established.
arXiv Detail & Related papers (2022-11-15T03:17:11Z)
- On Optimizing Back-Substitution Methods for Neural Network Verification [1.4394939014120451]
We present an approach for making back-substitution produce tighter bounds.
Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
arXiv Detail & Related papers (2022-08-16T11:16:44Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, showing better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee [20.294908538266867]
Sparse deep learning aims to address the challenge of huge storage consumption by deep neural networks.
In this paper, we train sparse deep neural networks with a fully Bayesian treatment under spike-and-slab priors.
We develop a set of computationally efficient variational inference procedures via a continuous relaxation of the Bernoulli distribution; a generic sketch of such a relaxation follows this entry.
arXiv Detail & Related papers (2020-11-15T03:27:54Z)
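For the continuous relaxation of the Bernoulli distribution mentioned above, one standard construction is the binary Concrete (Gumbel-softmax) relaxation; the sketch below shows that generic construction and is not necessarily the exact relaxation used in the paper.

```python
import numpy as np

def relaxed_bernoulli(logit, temperature, rng=None):
    """Binary Concrete relaxation of Bernoulli(sigmoid(logit)).

    Returns a value in (0, 1) that is differentiable in `logit`; as
    `temperature` -> 0 the draw approaches a hard {0, 1} sample, which lets
    spike-and-slab inclusion indicators be trained by gradient-based
    variational inference.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=np.shape(logit))        # U(0, 1) noise
    logistic_noise = np.log(u) - np.log1p(-u)    # Logistic(0, 1) sample
    return 1.0 / (1.0 + np.exp(-(logit + logistic_noise) / temperature))
```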
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks; a generic iterative-pruning sketch follows this entry.
Our algorithm represents a hybrid approach between single-shot network pruning methods and Lottery-Ticket-type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
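The iterative mask discovery mentioned above can be illustrated by the simplest member of that family, iterative magnitude pruning. ESPN's actual mask objective differs, so the functions below are only a hedged sketch of the iterative-mask idea; the geometric schedule and defaults are illustrative.

```python
import numpy as np

def magnitude_mask(weights, keep_ratio):
    # Keep the top `keep_ratio` fraction of entries by absolute value.
    k = max(1, int(round(keep_ratio * weights.size)))
    thresh = np.partition(np.abs(weights).ravel(), -k)[-k]
    return (np.abs(weights) >= thresh).astype(weights.dtype)

def iterative_magnitude_mask(weights, target_keep=0.01, n_rounds=5):
    # Tighten the mask geometrically over several rounds; in real pipelines
    # the network is retrained between rounds so surviving weights adapt.
    mask = np.ones_like(weights)
    for r in range(1, n_rounds + 1):
        mask *= magnitude_mask(weights * mask, target_keep ** (r / n_rounds))
    return mask
```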
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian"; a sketch of the resulting moderated prediction follows this entry.
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
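Being "a bit Bayesian" in the sense above typically means placing a Laplace (Gaussian) approximation over the last-layer weights. Below is a minimal sketch of the resulting moderated binary prediction using MacKay's standard probit approximation; the function and its arguments are illustrative, not the paper's code.

```python
import numpy as np

def laplace_moderated_probability(x, w_map, H_inv):
    """Predictive probability under a Laplace approximation N(w_map, H_inv)
    over the last-layer weights of a binary classifier.

    x     : (d,) feature vector from the penultimate layer
    w_map : (d,) MAP weights
    H_inv : (d, d) inverse Hessian of the negative log posterior at w_map
    """
    mean = x @ w_map                 # MAP logit
    var = x @ H_inv @ x              # predictive variance of the logit
    # Probit approximation: the variance shrinks the logit toward 0,
    # pulling confident predictions back toward 0.5 far from the data.
    kappa = 1.0 / np.sqrt(1.0 + np.pi * var / 8.0)
    return 1.0 / (1.0 + np.exp(-kappa * mean))
```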