Telescoping Density-Ratio Estimation
- URL: http://arxiv.org/abs/2006.12204v2
- Date: Tue, 24 Nov 2020 13:13:20 GMT
- Title: Telescoping Density-Ratio Estimation
- Authors: Benjamin Rhodes, Kai Xu and Michael U. Gutmann
- Abstract summary: We introduce a new framework, telescoping density-ratio estimation (TRE).
TRE enables the estimation of ratios between highly dissimilar densities in high-dimensional spaces.
Our experiments demonstrate that TRE can yield substantial improvements over existing single-ratio methods.
- Score: 21.514983459970903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Density-ratio estimation via classification is a cornerstone of unsupervised
learning. It has provided the foundation for state-of-the-art methods in
representation learning and generative modelling, with the number of use-cases
continuing to proliferate. However, it suffers from a critical limitation: it
fails to accurately estimate ratios p/q for which the two densities differ
significantly. Empirically, we find this occurs whenever the KL divergence
between p and q exceeds tens of nats. To resolve this limitation, we introduce
a new framework, telescoping density-ratio estimation (TRE), that enables the
estimation of ratios between highly dissimilar densities in high-dimensional
spaces. Our experiments demonstrate that TRE can yield substantial improvements
over existing single-ratio methods for mutual information estimation,
representation learning and energy-based modelling.
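To make the telescoping construction concrete, the sketch below decomposes the hard ratio p/q into a chain of easier ratios between bridge distributions p = p_0, p_1, ..., p_m = q, each estimated by its own logistic classifier. This is a minimal PyTorch illustration, not the authors' implementation: the bridge construction (pairwise linear combinations of p- and q-samples), network sizes, and training loop are all placeholder assumptions of this sketch.

```python
import torch
import torch.nn as nn

# Minimal telescoping density-ratio estimation (TRE) sketch.
# log p(x)/q(x) = sum_k log p_k(x)/p_{k+1}(x) along a chain of bridge
# distributions; each consecutive pair is separated by its own logistic
# classifier, whose logit estimates that single log-ratio.

def make_bridges(x_p, x_q, alphas):
    # One simple bridge construction (an assumption of this sketch):
    # sqrt(1 - a^2) * x_p + a * x_q, requiring equal sample counts.
    return [((1 - a ** 2) ** 0.5) * x_p + a * x_q for a in alphas]

class RatioNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # logit ~ log p_k(x) / p_{k+1}(x)

def train_tre(x_p, x_q, n_ratios=4, steps=500):
    alphas = torch.linspace(0.0, 1.0, n_ratios + 1)
    bridges = make_bridges(x_p, x_q, alphas)   # bridges[0] ~ p, bridges[-1] ~ q
    nets = [RatioNet(x_p.shape[1]) for _ in range(n_ratios)]
    labels = torch.cat([torch.ones(len(x_p)), torch.zeros(len(x_q))])
    for k, net in enumerate(nets):             # classify bridge k vs bridge k+1
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(steps):
            logits = torch.cat([net(bridges[k]), net(bridges[k + 1])])
            loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return nets

def log_ratio(nets, x):
    # Telescoping sum: the per-bridge logits add up to an estimate of log p/q.
    return sum(net(x) for net in nets)
```

Because each classifier only has to separate neighbouring bridges, no single classification task is as hard as separating p from q directly, which is what keeps the summed logits accurate even when the KL divergence between p and q is large.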
Related papers
- Overcoming Saturation in Density Ratio Estimation by Iterated Regularization [11.244546184962996]
We show that a class of kernel methods for density ratio estimation suffers from error saturation.
We introduce iterated regularization in density ratio estimation to achieve fast error rates.
arXiv Detail & Related papers (2024-02-21T16:02:14Z)
- Adaptive learning of density ratios in RKHS [3.047411947074805]
Estimating the ratio of two probability densities from finitely many observations is a central problem in machine learning and statistics.
We analyze a large class of density ratio estimation methods that minimize a regularized Bregman divergence between the true density ratio and a model in a reproducing kernel Hilbert space. (A minimal squared-loss member of this family is sketched after this list.)
arXiv Detail & Related papers (2023-07-30T08:18:39Z)
- Disentangling Learning Representations with Density Estimation [9.244163477446799]
We present a method which achieves reliable disentanglement via flexible density estimation of the latent space.
GCAE achieves highly competitive and reliable disentanglement scores compared with state-of-the-art baselines.
arXiv Detail & Related papers (2023-02-08T22:37:33Z)
- Mutual Wasserstein Discrepancy Minimization for Sequential Recommendation [82.0801585843835]
We propose a novel self-supervised learning framework based on Mutual WasserStein discrepancy minimization (MStein) for sequential recommendation.
We also propose a novel contrastive learning loss based on Wasserstein Discrepancy Measurement.
arXiv Detail & Related papers (2023-01-28T13:38:48Z)
- Adaptive Multi-stage Density Ratio Estimation for Learning Latent Space Energy-based Model [11.401637217963511]
This paper studies the fundamental problem of learning an energy-based model (EBM) in the latent space of the generator model.
We propose to use noise contrastive estimation (NCE) to discriminatively learn the EBM through density ratio estimation. (A minimal single-stage NCE sketch appears after this list.)
Our experiments demonstrate strong performance in image generation and reconstruction as well as anomaly detection.
arXiv Detail & Related papers (2022-09-19T03:20:15Z)
- Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning [78.30395044401321]
We develop a novel model-based approach to reinforcement learning (MBRL).
It relaxes the assumptions on the target transition model, requiring only that it belong to a generic family of mixture models.
It can achieve up to 50 percent reduction in wall-clock time in some continuous control environments.
arXiv Detail & Related papers (2022-06-02T17:27:49Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach that reduces density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
- Featurized Density Ratio Estimation [82.40706152910292]
In our work, we propose to leverage an invertible generative model to map the two distributions into a common feature space prior to estimation.
This featurization brings the densities closer together in latent space, sidestepping pathological scenarios where the learned density ratios in input space can be arbitrarily inaccurate.
At the same time, the invertibility of our feature map guarantees that the ratios computed in feature space are equivalent to those in input space. (The underlying change-of-variables identity is shown after this list.)
arXiv Detail & Related papers (2021-07-05T18:30:26Z)
- Meta-Learning for Relative Density-Ratio Estimation [59.75321498170363]
Existing methods for (relative) density-ratio estimation (DRE) require many instances from both densities.
We propose a meta-learning method for relative DRE, which estimates the relative density-ratio from a few instances by using knowledge in related datasets.
We empirically demonstrate the effectiveness of the proposed method by using three problems: relative DRE, dataset comparison, and outlier detection.
arXiv Detail & Related papers (2021-07-02T02:13:45Z)
- Imitation with Neural Density Models [98.34503611309256]
We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Imitation Occupancy Entropy Reinforcement Learning (RL) using the density as a reward.
Our approach maximizes a non-adversarial model-free RL objective that provably lower bounds reverse Kullback-Leibler divergence between occupancy measures of the expert and imitator.
arXiv Detail & Related papers (2020-10-19T19:38:36Z)
- Improving Nonparametric Density Estimation with Tensor Decompositions [14.917420021212912]
Nonparametric density estimators often perform well on low-dimensional data, but suffer when applied to higher-dimensional data.
This paper investigates whether the improvements obtained from simplified dependence assumptions can be extended beyond independence.
We prove that restricting estimation to low-rank nonnegative PARAFAC or Tucker decompositions removes the dimensionality exponent on bin-width rates for multidimensional histograms.
arXiv Detail & Related papers (2020-10-06T01:39:09Z)
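Sketch for "Adaptive learning of density ratios in RKHS" above: the regularized-Bregman family includes, as its squared-loss member, kernel uLSIF, which admits a closed-form solution. This is a minimal NumPy illustration, not the paper's adaptive procedure; in particular, the bandwidth sigma and regularizer lam, which the paper is concerned with choosing adaptively, are fixed by hand here.

```python
import numpy as np

# Kernel uLSIF: the squared-loss member of the regularized-Bregman family of
# density-ratio estimators. Model: r(x) = sum_l theta_l * k(x, c_l) ~ p(x)/q(x).

def rbf(X, C, sigma):
    # Gaussian kernel matrix between rows of X (n, d) and centers C (L, d).
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def ulsif_fit(x_p, x_q, centers, sigma=1.0, lam=1e-3):
    Kq = rbf(x_q, centers, sigma)             # (n_q, L)
    Kp = rbf(x_p, centers, sigma)             # (n_p, L)
    H = Kq.T @ Kq / len(x_q)                  # empirical E_q[k(x) k(x)^T]
    h = Kp.mean(axis=0)                       # empirical E_p[k(x)]
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    return lambda x: rbf(x, centers, sigma) @ theta   # estimated p(x)/q(x)
```

In practice the centers are often a random subset of the p-samples; sigma and lam are what a data-driven (adaptive) method would select.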
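Sketch for "Adaptive Multi-stage Density Ratio Estimation for Learning Latent Space Energy-based Model" above: a minimal single-stage NCE step in a generic data space (the paper works multi-stage in a generator's latent space). The energy network, the learned log-normalizer, and the training-loop details are assumptions of this sketch.

```python
import torch
import torch.nn as nn

# Noise-contrastive estimation (NCE), minimal sketch: an energy-based model is
# learned by classifying data against samples from a known noise density; the
# optimal classifier's logit recovers log p_data(x) / p_noise(x).

class EnergyModel(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.log_z = nn.Parameter(torch.zeros(()))   # learned log-normalizer

    def log_prob(self, x):
        return -self.net(x).squeeze(-1) - self.log_z

def nce_step(model, noise_dist, x_data, opt):
    x_noise = noise_dist.sample((len(x_data),))
    # Classifier logit = log p_model(x) - log p_noise(x); labels: data=1, noise=0.
    logit_d = model.log_prob(x_data) - noise_dist.log_prob(x_data)
    logit_n = model.log_prob(x_noise) - noise_dist.log_prob(x_noise)
    loss = nn.functional.binary_cross_entropy_with_logits(
        torch.cat([logit_d, logit_n]),
        torch.cat([torch.ones_like(logit_d), torch.zeros_like(logit_n)]))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Here noise_dist can be any distribution with tractable sampling and log-density, e.g. torch.distributions.MultivariateNormal(torch.zeros(d), torch.eye(d)).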
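Identity for "Featurized Density Ratio Estimation" above: the claim that feature-space ratios equal input-space ratios follows from the change of variables under an invertible map f, since both densities pick up the same Jacobian factor:

```latex
\[
\frac{p_x(x)}{q_x(x)}
  = \frac{p_z(f(x))\,\lvert \det J_f(x) \rvert}{q_z(f(x))\,\lvert \det J_f(x) \rvert}
  = \frac{p_z(f(x))}{q_z(f(x))}
\]
```

where p_z and q_z are the pushforwards of p_x and q_x under f. The Jacobian factors are identical and cancel, so a ratio estimator trained on featurized samples recovers exactly the input-space ratio.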
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.