Learning Unnormalized Statistical Models via Compositional Optimization
- URL: http://arxiv.org/abs/2306.07485v1
- Date: Tue, 13 Jun 2023 01:18:16 GMT
- Title: Learning Unnormalized Statistical Models via Compositional Optimization
- Authors: Wei Jiang, Jiayu Qin, Lingyu Wu, Changyou Chen, Tianbao Yang, Lijun
Zhang
- Abstract summary: Noise-contrastive estimation (NCE) has been proposed, formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
- Score: 73.30514599338407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning unnormalized statistical models (e.g., energy-based models) is
computationally challenging due to the complexity of handling the partition
function. To eschew this complexity, noise-contrastive estimation (NCE) has
been proposed by formulating the objective as the logistic loss of the real
data and the artificial noise. However, as found in previous works, NCE may
perform poorly in many tasks due to its flat loss landscape and slow
convergence. In this paper, we study a direct approach for optimizing the
negative log-likelihood of unnormalized models from the perspective of
compositional optimization. To tackle the partition function, a noise
distribution is introduced such that the log partition function can be written
as a compositional function whose inner function can be estimated with
stochastic samples. Hence, the objective can be optimized by stochastic
compositional optimization algorithms. Despite being a simple method, we
demonstrate that it is more favorable than NCE by (1) establishing a fast
convergence rate and quantifying its dependence on the noise distribution
through the variance of stochastic estimators; (2) developing better results
for one-dimensional Gaussian mean estimation by showing our objective has a
much more favorable loss landscape and hence our method enjoys faster convergence;
(3) demonstrating better performance on multiple applications, including
density estimation, out-of-distribution detection, and real image generation.
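To make the compositional structure concrete, here is a minimal sketch (illustrative PyTorch code, not the authors' implementation): with an unnormalized model p̃_θ(x) = exp(f_θ(x)) and a noise distribution q, the partition function satisfies Z(θ) = E_{x~q}[exp(f_θ(x))/q(x)], so log Z(θ) is an outer log composed with an inner expectation that can be estimated from noise samples; an SCGD-style moving average u tracks that inner expectation.

```python
import torch

def nll_compositional_step(f, optimizer, data_batch, noise_batch,
                           noise_logq, u, beta=0.9):
    """One SCGD-style step on the NLL of the unnormalized model exp(f(x))/Z.

    f: callable returning per-sample f_theta(x); noise_logq: log q(x) for
    each noise sample; u: running scalar estimate of E_q[exp(f(x))/q(x)].
    (All names here are illustrative assumptions, not the paper's code.)
    """
    # Inner function: Monte Carlo estimate of E_q[exp(f(x)) / q(x)].
    inner = torch.exp(f(noise_batch) - noise_logq).mean()
    # Moving average tracking the inner expectation, to control the bias
    # the outer log would introduce if estimated from a single minibatch.
    u = beta * u + (1 - beta) * inner.detach()
    # d/dtheta log E_q[.] = (d/dtheta E_q[.]) / E_q[.]; with u detached,
    # differentiating inner / u yields exactly this gradient estimator.
    loss = -f(data_batch).mean() + inner / u
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return u
```

The variance of the inner estimator, and hence the convergence rate, depends on how well q covers the model distribution, matching point (1) of the abstract.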
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
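For reference, gradient-free methods of this kind are typically built on a two-point zeroth-order gradient estimator; a generic sketch follows (the paper's accelerated algorithm and its high-order-smoothness machinery are not reproduced here):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=np.random.default_rng(0)):
    """Two-point zeroth-order estimate of grad f(x) (generic construction)."""
    u = rng.standard_normal(x.shape)              # random search direction
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def zo_sgd(f, x0, lr=0.1, steps=1000):
    """Plain gradient-free SGD using the two-point estimator."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x -= lr * zo_gradient(f, x)
    return x
```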
- Likelihood approximations via Gaussian approximate inference [3.4991031406102238]
We propose efficient schemes to approximate the effects of non-Gaussian likelihoods by Gaussian densities.
Our results attain good approximation quality for binary and multiclass classification in large-scale point-estimate and distributional inferential settings.
As a by-product, we show that the proposed approximate log-likelihoods are a superior alternative to least-squares on raw labels for neural network classification.
arXiv Detail & Related papers (2024-10-28T05:39:26Z)
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Probabilistic Iterative Hard Thresholding for Sparse Learning [2.5782973781085383]
We present an approach towards solving expectation objective optimization problems with cardinality constraints.
We prove convergence of the underlying process, and demonstrate the performance on two Machine Learning problems.
arXiv Detail & Related papers (2024-09-02T18:14:45Z)
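As background for the cardinality constraint, here is a generic iterative hard thresholding loop (the classical deterministic version; the paper's probabilistic, expectation-objective variant differs):

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argpartition(np.abs(x), -k)[-k:]
    out[keep] = x[keep]
    return out

def iht(grad, x0, k, lr=0.01, steps=1000):
    """Classical IHT for min f(x) subject to ||x||_0 <= k."""
    x = hard_threshold(np.asarray(x0, dtype=float), k)
    for _ in range(steps):
        x = hard_threshold(x - lr * grad(x), k)
    return x
```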
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Provable benefits of score matching [30.317535687908755]
We give the first example of a natural exponential family of distributions such that the score matching loss is computationally efficient to optimize.
We show that designing a zeroth-order or first-order oracle for optimizing the likelihood loss is NP-hard.
Minimizing the score matching loss is both computationally and statistically efficient, with complexity polynomial in the ambient dimension.
arXiv Detail & Related papers (2023-06-03T03:42:30Z)
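The score matching loss in question is Hyvärinen's objective, E[½ s_θ(x)² + s_θ′(x)] in its integration-by-parts form (writing s_θ = ∇_x log p_θ); a minimal 1-D autograd sketch, with illustrative names:

```python
import torch

def score_matching_loss(log_p, x):
    """Hyvarinen score matching for a 1-D model: E[0.5*s(x)^2 + s'(x)]."""
    x = x.detach().requires_grad_(True)
    # s(x) = d/dx log p_theta(x), computed per sample via autograd.
    score = torch.autograd.grad(log_p(x).sum(), x, create_graph=True)[0]
    # s'(x), the derivative of the score.
    score_grad = torch.autograd.grad(score.sum(), x, create_graph=True)[0]
    return (0.5 * score ** 2 + score_grad).mean()
```

Note that ∇_x log p_θ does not involve the partition function, which is the source of the computational benefit the entry describes.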
- A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number, and frequency, of required communication rounds compared to existing methods without hurting performance.
arXiv Detail & Related papers (2021-10-07T17:51:10Z)
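A generic damped Newton step with averaged local estimates, just to fix the distributed setting (the paper's communication-saving scheme is its actual contribution and is not shown):

```python
import numpy as np

def distributed_newton_step(x, local_grads, local_hessians, damping=1e-6):
    """Average per-machine gradient/Hessian estimates; take a damped Newton step."""
    g = np.mean([grad(x) for grad in local_grads], axis=0)
    H = np.mean([hess(x) for hess in local_hessians], axis=0)
    return x - np.linalg.solve(H + damping * np.eye(x.size), g)
```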
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
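Gradient clipping, which the proposed stepsize rules are built around, replaces the raw gradient with a norm-capped version; a generic sketch (the paper's stepsize schedules are not reproduced):

```python
import numpy as np

def clip(g, clip_level):
    """Scale g down so its norm does not exceed clip_level."""
    norm = np.linalg.norm(g)
    return g if norm <= clip_level else g * (clip_level / norm)

def clipped_sgd_step(x, g, lr, clip_level):
    """One SGD step with a clipped stochastic gradient."""
    return x - lr * clip(g, clip_level)
```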
- A Nonconvex Framework for Structured Dynamic Covariance Recovery [24.471814126358556]
We propose a flexible yet interpretable model for high-dimensional data with time-varying second order statistics.
Motivated by the literature, we combine a factorized structure with temporal smoothness.
We show that our approach outperforms existing baselines.
arXiv Detail & Related papers (2020-11-11T07:09:44Z)