Variational Learning for the Inverted Beta-Liouville Mixture Model and
Its Application to Text Categorization
- URL: http://arxiv.org/abs/2112.14375v1
- Date: Wed, 29 Dec 2021 03:03:44 GMT
- Title: Variational Learning for the Inverted Beta-Liouville Mixture Model and
Its Application to Text Categorization
- Authors: Yongfa Ling, Wenbo Guan, Qiang Ruan, Heping Song, Yuping Lai
- Abstract summary: The finite inverted Beta-Liouville mixture model (IBLMM) has recently gained some attention due to its capability for modeling positive data.
A new function is proposed to replace the original variational objective function in order to avoid intractable moment computation.
- Score: 1.4174475093445236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The finite inverted Beta-Liouville mixture model (IBLMM) has
recently gained some attention due to its capability for modeling positive
data. Under the conventional variational inference (VI) framework, an
analytically tractable solution to the optimization of the variational
posterior distribution cannot be obtained, since the variational objective
function involves evaluation of intractable moments. With the recently
proposed extended variational inference (EVI) framework, a new function is
proposed to replace the original variational objective function in order to
avoid intractable moment computation, so that an analytically tractable
solution for the IBLMM can be derived in an elegant way. The good performance
of the proposed approach is demonstrated by experiments with both synthesized
data and a real-world application, namely text categorization.
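To make the EVI substitution concrete, here is a generic sketch of the bound it optimizes (the exact auxiliary function used for the IBLMM is derived in the paper; the notation below is generic):

```latex
% Standard ELBO; maximizing it is intractable here because
% E_q[ln p(X|Theta)] involves moments with no closed form.
\mathcal{L}(q) = \mathbb{E}_q[\ln p(\mathcal{X}\mid\Theta)]
               + \mathbb{E}_q[\ln p(\Theta)] - \mathbb{E}_q[\ln q(\Theta)]

% EVI: choose an auxiliary function \tilde{p} satisfying
% \ln p(\mathcal{X}\mid\Theta) \ge \tilde{p}(\mathcal{X},\Theta),
% then maximize the looser but analytically tractable surrogate:
\mathcal{L}(q) \ge \tilde{\mathcal{L}}(q)
  = \mathbb{E}_q[\tilde{p}(\mathcal{X},\Theta)]
  + \mathbb{E}_q[\ln p(\Theta)] - \mathbb{E}_q[\ln q(\Theta)]
```

Maximizing the surrogate recovers closed-form coordinate updates for each variational factor, which is what makes the IBLMM solution analytically tractable.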
Related papers
- Hyperspectral Unmixing Under Endmember Variability: A Variational Inference Framework [22.114121550108344]
This work proposes a variational inference framework for hyperspectral unmixing in the presence of endmember variability (HU-EV).
An EV-accounted noisy linear mixture model (LMM) is considered, and the presence of outliers is also incorporated into the model.
The effectiveness of the proposed framework is demonstrated through synthetic, semi-real, and real-data experiments.
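As a rough illustration of the model class (not the paper's estimator), the sketch below simulates an EV-accounted noisy LMM: every pixel mixes its own perturbed copy of the endmember signatures under simplex-constrained abundances. All names and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
B, K, N = 50, 3, 100                     # bands, endmembers, pixels (arbitrary)

M = rng.uniform(0.0, 1.0, (B, K))        # nominal endmember signatures
A = rng.dirichlet(np.ones(K), size=N).T  # abundances; columns on the simplex

Y = np.empty((B, N))
for n in range(N):
    # Endmember variability: each pixel sees a perturbed copy of M.
    M_n = M + 0.05 * rng.standard_normal((B, K))
    Y[:, n] = M_n @ A[:, n] + 0.01 * rng.standard_normal(B)  # additive noise
```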
arXiv Detail & Related papers (2024-07-20T15:16:14Z)
- A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing [2.600709013150986]
We propose a new differentiable POGLM that enables the pathwise gradient estimator, which outperforms the score-function gradient estimator used in existing works.
Our new method yields more interpretable parameters, underscoring its significance in neuroscience.
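For context on the two estimators being contrasted, here is a minimal, self-contained comparison on the toy objective E_{z~N(mu,1)}[z^2]; it is unrelated to the POGLM itself, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, S = 1.5, 100_000
z = mu + rng.standard_normal(S)      # samples z ~ N(mu, 1)

# Score-function (REINFORCE) estimator:
#   d/dmu E[z^2] = E[z^2 * d/dmu log N(z; mu, 1)] = E[z^2 * (z - mu)]
grad_score = np.mean(z**2 * (z - mu))

# Pathwise (reparameterization) estimator with z = mu + eps:
#   d/dmu E[(mu + eps)^2] = E[2 * (mu + eps)] = E[2 * z]
grad_path = np.mean(2 * z)

# Both estimate the true gradient 2*mu = 3.0, but the pathwise
# estimator has far lower variance -- the point of the summary above.
print(grad_score, grad_path)
```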
arXiv Detail & Related papers (2024-02-02T09:34:49Z)
- Joint State Estimation and Noise Identification Based on Variational Optimization [8.536356569523127]
A novel adaptive Kalman filter method based on conjugate-computation variational inference, referred to as CVIAKF, is proposed.
The effectiveness of CVIAKF is validated through synthetic and real-world datasets of maneuvering target tracking.
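CVIAKF adapts the noise statistics online; for orientation, the sketch below shows only the standard Kalman predict/update recursion that such adaptive filters build on. The 1-D constant-velocity model and all matrices are invented for the example.

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])        # state transition (position, velocity)
H = np.array([[1.0, 0.0]])             # position-only measurement
Q = 0.01 * np.eye(2)                   # process noise (what CVIAKF adapts)
R = np.array([[0.25]])                 # measurement noise (likewise adapted)

x, P = np.zeros(2), np.eye(2)
for z in [0.1, 0.18, 0.33, 0.41]:      # fake measurements
    x, P = F @ x, F @ P @ F.T + Q      # predict
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()  # measurement update
    P = (np.eye(2) - K @ H) @ P
```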
arXiv Detail & Related papers (2023-12-15T07:47:03Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
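The Laplace step at the heart of this family of models can be shown in isolation: approximate a distribution by a Gaussian centered at its mode, with covariance taken from the inverse curvature there. The toy 1-D target below is invented; nothing here is the VLAE architecture itself.

```python
import numpy as np

# Toy unnormalized log density; a VLAE applies the same idea in the
# latent space of a deep generative model, per data point.
log_p = lambda z: -0.5 * (z - 2.0) ** 2 - 0.1 * z ** 4

zs = np.linspace(-5.0, 5.0, 10001)
z_map = zs[np.argmax(log_p(zs))]       # crude mode search

# Curvature at the mode via finite differences; the Laplace approximation
# is N(z_map, sigma2) with sigma2 = -1 / log_p''(z_map).
h = 1e-4
d2 = (log_p(z_map + h) - 2 * log_p(z_map) + log_p(z_map - h)) / h**2
sigma2 = -1.0 / d2
print(z_map, sigma2)
```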
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Quasi Black-Box Variational Inference with Natural Gradients for Bayesian Learning [84.90242084523565]
We develop an optimization algorithm suitable for Bayesian learning in complex models.
Our approach relies on natural gradient updates within a general black-box framework for efficient training with limited model-specific derivations.
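As a reminder of what a natural-gradient step looks like in the simplest case (a generic textbook identity, not the paper's algorithm): for q = N(mu, sigma^2) parameterized by (mu, sigma), the Fisher matrix is diag(1/sigma^2, 2/sigma^2), so preconditioning by its inverse just rescales each coordinate of the plain gradient.

```python
def natural_gradient_step(mu, sigma, grad_mu, grad_sigma, lr=0.1):
    """One natural-gradient ascent step for q = N(mu, sigma^2).

    F = diag(1/sigma^2, 2/sigma^2) for parameters (mu, sigma), so
    F^{-1} @ grad is a per-coordinate rescaling of the plain gradient.
    """
    nat_mu = sigma**2 * grad_mu
    nat_sigma = 0.5 * sigma**2 * grad_sigma
    return mu + lr * nat_mu, sigma + lr * nat_sigma
```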
arXiv Detail & Related papers (2022-05-23T18:54:27Z)
- Generalised Gaussian Process Latent Variable Models (GPLVM) with Stochastic Variational Inference [9.468270453795409]
We study the doubly stochastic formulation of the Bayesian GPLVM model, amenable to minibatch training.
We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models.
We demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions.
arXiv Detail & Related papers (2022-02-25T21:21:51Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
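A MAP-style sketch of the alternating idea (the paper's variational version updates full posterior factors rather than point estimates, and its exact updates differ): for y = A x + noise with x_i ~ N(0, theta_i) and theta_i ~ Gamma(alpha, beta), alternate a weighted ridge step in x with a closed-form step in theta.

```python
import numpy as np

def alternating_map_sketch(A, y, alpha=2.0, beta=1.0, s2=0.01, iters=20):
    """Hedged illustration only: MAP coordinate updates for the
    hierarchical model y = A x + N(0, s2 I), x_i ~ N(0, theta_i),
    theta_i ~ Gamma(alpha, scale=beta)."""
    theta = np.ones(A.shape[1])
    for _ in range(iters):
        # x-step: ridge regression with per-coordinate weights 1/theta_i.
        x = np.linalg.solve(A.T @ A / s2 + np.diag(1.0 / theta),
                            A.T @ y / s2)
        # theta-step: closed-form MAP of each theta_i given x_i.
        eta = alpha - 1.5
        theta = beta * (eta / 2 + np.sqrt(eta**2 / 4 + x**2 / (2 * beta)))
    return x, theta
```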
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Loss function based second-order Jensen inequality and its application to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
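For readers unfamiliar with the repulsion mechanism, Stein variational gradient descent (SVGD) is a representative PVI method: each particle follows a kernel-smoothed posterior gradient plus a kernel-gradient repulsion term that keeps the ensemble diverse. This is generic SVGD, not this paper's bound or update.

```python
import numpy as np

def svgd_step(X, grad_log_p, lr=0.1, h=1.0):
    """One SVGD update on particles X of shape (n, d); grad_log_p maps
    (n, d) particle positions to their score gradients."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    K = np.exp(-sq / (2 * h))                            # RBF kernel, (n, n)
    attract = K @ grad_log_p(X)                          # kernel-weighted scores
    repulse = (K.sum(1)[:, None] * X - K @ X) / h        # grad of the kernel
    return X + lr * (attract + repulse) / n
```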
arXiv Detail & Related papers (2021-06-09T12:13:51Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
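The amortized ratio trick behind this class of methods fits in a few lines: train a classifier to tell jointly sampled (theta, x) pairs from pairs with theta shuffled; its odds then estimate the likelihood-to-evidence ratio. The Gaussian toy simulator and feature map below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
theta = rng.uniform(-3, 3, n)
x = theta + rng.standard_normal(n)       # toy simulator: x ~ N(theta, 1)

# Class 1: jointly sampled (theta, x); class 0: theta shuffled (marginals).
t_all = np.r_[theta, rng.permutation(theta)]
x_all = np.r_[x, x]
y = np.r_[np.ones(n), np.zeros(n)]
feats = lambda t, x: np.column_stack([t, x, t * x, t**2, x**2])

clf = LogisticRegression(max_iter=1000).fit(feats(t_all, x_all), y)

# The classifier's odds d/(1-d) estimate p(x|theta) / p(x).
d = clf.predict_proba(feats(np.array([1.0]), np.array([1.2])))[0, 1]
print(d / (1 - d))
```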
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Posterior-Aided Regularization for Likelihood-Free Inference [23.708122045184698]
Posterior-Aided Regularization (PAR) is applicable to learning the density estimator, regardless of the model structure.
We provide a unified estimation method for PAR that estimates both the reverse KL term and the mutual information term with a single neural network.
arXiv Detail & Related papers (2021-02-15T16:59:30Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method optimizes a rigorous lower bound on the evidence, yielding lower-variance gradient estimates.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.