DEMI: Discriminative Estimator of Mutual Information
- URL: http://arxiv.org/abs/2010.01766v2
- Date: Mon, 30 Nov 2020 02:59:02 GMT
- Title: DEMI: Discriminative Estimator of Mutual Information
- Authors: Ruizhi Liao, Daniel Moyer, Polina Golland, William M. Wells
- Abstract summary: Estimating mutual information between continuous random variables is often intractable and challenging for high-dimensional data.
Recent progress has leveraged neural networks to optimize variational lower bounds on mutual information.
Our approach is based on training a classifier that provides the probability that a data sample pair is drawn from the joint distribution.
- Score: 5.248805627195347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating mutual information between continuous random variables is often
intractable and extremely challenging for high-dimensional data. Recent
progress has leveraged neural networks to optimize variational lower bounds on
mutual information. Although showing promise for this difficult problem, the
variational methods have been theoretically and empirically proven to have
serious statistical limitations: 1) many methods struggle to produce accurate
estimates when the underlying mutual information is either low or high; 2) the
resulting estimators may suffer from high variance. Our approach is based on
training a classifier that provides the probability that a data sample pair is
drawn from the joint distribution rather than from the product of its marginal
distributions. Moreover, we establish a direct connection between mutual
information and the average log odds estimate produced by the classifier on a
test set, leading to a simple and accurate estimator of mutual information. We
show theoretically that our method and other variational approaches are
equivalent when they achieve their optimum, while our method sidesteps the
variational bound. Empirical results demonstrate high accuracy of our approach
and the advantages of our estimator in the context of representation learning.
Our demo is available at https://github.com/RayRuizhiLiao/demi_mi_estimator.
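A minimal sketch of the described estimator (ours, not the authors' implementation; their demo is at the repository linked above), assuming a scikit-learn MLP as the classifier and correlated Gaussians with known ground-truth MI. With balanced classes, the optimal classifier's log odds on a pair (x, y) equals log p(x, y) / (p(x) p(y)), so its average over held-out joint pairs estimates I(X; Y):

```python
# Sketch of the DEMI idea: classify joint pairs vs. product-of-marginals
# pairs, then average the classifier's log odds over held-out joint pairs.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, rho = 20000, 0.8
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

joint = np.column_stack([x, y])                   # label 1: (x, y) ~ p(x, y)
prod = np.column_stack([x, rng.permutation(y)])   # label 0: (x, y') ~ p(x)p(y)
X = np.vstack([joint, prod])
t = np.r_[np.ones(n), np.zeros(n)]

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
clf.fit(X, t)

# Fresh joint test pairs; mean log odds -> I(X; Y) at the optimum.
x_t = rng.normal(size=5000)
y_t = rho * x_t + np.sqrt(1 - rho**2) * rng.normal(size=5000)
p = np.clip(clf.predict_proba(np.column_stack([x_t, y_t]))[:, 1], 1e-6, 1 - 1e-6)
print("estimated MI:", np.mean(np.log(p / (1 - p))))
print("true MI:     ", -0.5 * np.log(1 - rho**2))   # Gaussian ground truth
```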
Related papers
- Mutual Information Multinomial Estimation [53.58005108981247]
Estimating mutual information (MI) is a fundamental yet challenging task in data science and machine learning.
Our main discovery is that a preliminary estimate of the data distribution can dramatically improve the estimate of mutual information.
Experiments on diverse tasks including non-Gaussian synthetic problems with known ground-truth and real-world applications demonstrate the advantages of our method.
arXiv Detail & Related papers (2024-08-18T06:27:30Z)
- Discriminative Estimation of Total Variation Distance: A Fidelity Auditor for Generative Data [10.678533056953784]
We propose a discriminative approach to estimate the total variation (TV) distance between two distributions.
Our method quantitatively characterizes the relation between the Bayes risk in classifying two distributions and their TV distance.
We demonstrate that, with a specific choice of hypothesis class in classification, a fast convergence rate in estimating the TV distance can be achieved.
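The identity underlying such discriminative estimates is standard: with balanced classes, the Bayes risk R* satisfies TV(P, Q) = 1 - 2R*, so a strong classifier's held-out accuracy a yields the plug-in estimate TV ≈ 2a - 1. A minimal sketch (our illustration, not the paper's estimator), assuming scikit-learn and two unit-variance Gaussians:

```python
# Estimate TV(P, Q) from a classifier's held-out accuracy: TV ~= 2 * acc - 1.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50000
P = rng.normal(0.0, 1.0, size=(n, 1))   # samples from P = N(0, 1)
Q = rng.normal(1.0, 1.0, size=(n, 1))   # samples from Q = N(1, 1)

X = np.vstack([P, Q])
t = np.r_[np.zeros(n), np.ones(n)]
clf = LogisticRegression().fit(X[::2], t[::2])   # train on half
acc = clf.score(X[1::2], t[1::2])                # test on the other half
print("estimated TV:", 2 * acc - 1)

# Closed form for two unit-variance Gaussians one mean apart.
print("true TV:     ", norm.cdf(0.5) - norm.cdf(-0.5))
```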
arXiv Detail & Related papers (2024-05-24T08:18:09Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
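For context, the single-dataset inverse propensity score (IPS) estimator that such collaborative methods extend reweights observed outcomes by the inverse of the estimated treatment probability. A minimal sketch, with the data-generating process and model choices entirely ours:

```python
# Standard (non-collaborative) IPS estimate of the average treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000
Xc = rng.normal(size=(n, 3))                     # confounders
p_treat = 1 / (1 + np.exp(-Xc[:, 0]))            # true propensity e(x)
T = rng.binomial(1, p_treat)                     # treatment assignment
Y = 2.0 * T + Xc[:, 0] + rng.normal(size=n)      # outcome; true ATE = 2

e_hat = LogisticRegression().fit(Xc, T).predict_proba(Xc)[:, 1]
ate_ips = np.mean(T * Y / e_hat - (1 - T) * Y / (1 - e_hat))
print("IPS estimate of ATE:", ate_ips)           # should be near 2.0
```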
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- On the Effectiveness of Hybrid Mutual Information Estimation [1.0312968200748118]
Estimating mutual information from samples of a joint distribution is a challenging problem in science and engineering.
In this work, we realize a variational bound that generalizes both discriminative and generative approaches.
We propose Predictive Quantization (PQ): a simple generative method that can be easily combined with discriminative estimators for minimal computational overhead.
arXiv Detail & Related papers (2023-06-01T12:26:07Z)
- Mutual Information Estimation via $f$-Divergence and Data Derangements [6.43826005042477]
We propose a novel class of discriminative mutual information estimators based on the variational representation of the $f$-divergence.
The proposed estimator is flexible and exhibits an excellent bias/variance trade-off.
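A derangement is a permutation with no fixed points; shuffling the y samples with one guarantees that no "negative" pair accidentally coincides with its positive counterpart. A small illustrative helper (ours, not the paper's code):

```python
# Derangement-based negative sampling: pair each x_i with y_{sigma(i)},
# where sigma(i) != i for all i, so no negative pair reproduces a joint sample.
import numpy as np

def random_derangement(n, rng):
    """Rejection-sample a uniform random derangement of range(n)."""
    while True:
        sigma = rng.permutation(n)
        if not np.any(sigma == np.arange(n)):
            return sigma

rng = np.random.default_rng(0)
y = rng.normal(size=8)
sigma = random_derangement(len(y), rng)
y_neg = y[sigma]             # marginal samples: y_neg[i] != y[i] for all i
assert not np.any(y_neg == y)
```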
arXiv Detail & Related papers (2023-05-31T16:54:25Z)
- Learn from Unpaired Data for Image Restoration: A Variational Bayes Approach [18.007258270845107]
We propose LUD-VAE, a deep generative method to learn the joint probability density function from data sampled from marginal distributions.
We apply our method to real-world image denoising and super-resolution tasks and train the models using the synthetic data generated by the LUD-VAE.
arXiv Detail & Related papers (2022-04-21T13:27:17Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
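A minimal sketch of the thresholding step as described (ours, not the authors' code), with synthetic confidence scores standing in for a real model's outputs:

```python
# ATC sketch: pick threshold t so that the fraction of source-validation
# confidences above t matches source accuracy, then predict target accuracy
# as the fraction of unlabeled target confidences above t.
import numpy as np

def atc_predict(src_conf, src_correct, tgt_conf):
    # mean(src_conf > t) ~= mean(src_correct)  =>  t is the (1 - acc)-quantile.
    t = np.quantile(src_conf, 1.0 - src_correct.mean())
    return np.mean(tgt_conf > t)

# Toy usage with assumed arrays: max-softmax confidences and 0/1 correctness.
rng = np.random.default_rng(0)
src_conf = rng.beta(5, 2, size=10000)            # source validation confidences
src_correct = rng.random(10000) < src_conf       # correctness tracks confidence
tgt_conf = rng.beta(4, 3, size=10000)            # shifted target confidences
print("predicted target accuracy:", atc_predict(src_conf, src_correct, tgt_conf))
```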
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach that reduces density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
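The divide-and-conquer idea can be illustrated with a finite chain of bridges (DRE-infty itself uses a continuum): telescope log p_0/p_K = sum_k log p_k/p_{k+1} and estimate each adjacent, easier ratio from a classifier's log odds. A sketch with Gaussian bridges, all modeling choices ours:

```python
# Chain of classifiers between bridge distributions p_k = N(mu_k, 1);
# summing their log odds telescopes to log p_0(x) / p_K(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K, n = 8, 20000
mus = np.linspace(0.0, 4.0, K + 1)               # bridge means from 0 to 4

def log_ratio(x_query):
    total = np.zeros(len(x_query))
    for k in range(K):
        a = rng.normal(mus[k], 1.0, size=(n, 1))       # samples from p_k
        b = rng.normal(mus[k + 1], 1.0, size=(n, 1))   # samples from p_{k+1}
        clf = LogisticRegression().fit(
            np.vstack([a, b]), np.r_[np.ones(n), np.zeros(n)])
        total += clf.decision_function(x_query.reshape(-1, 1))
    return total

xq = np.array([0.0, 2.0])
print("estimated log p_0/p_K:", log_ratio(xq))
print("closed form:          ", 8 - 4 * xq)      # log N(x;0,1)/N(x;4,1) = 8 - 4x
```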
arXiv Detail & Related papers (2021-11-22T06:26:29Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)