R-divergence for Estimating Model-oriented Distribution Discrepancy
- URL: http://arxiv.org/abs/2310.01109v1
- Date: Mon, 2 Oct 2023 11:30:49 GMT
- Title: R-divergence for Estimating Model-oriented Distribution Discrepancy
- Authors: Zhilin Zhao and Longbing Cao
- Abstract summary: We introduce R-divergence, designed to assess model-oriented distribution discrepancies.
R-divergence learns a minimum hypothesis on the mixed data and then gauges the difference in empirical risk between the two datasets.
We evaluate the test power across various unsupervised and supervised tasks and find that R-divergence achieves state-of-the-art performance.
- Score: 37.939239477868796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-life data are often non-IID due to complex distributions and
interactions, and the sensitivity to the distribution of samples can differ
among learning models. Accordingly, a key question for any supervised or
unsupervised model is whether the probability distributions of two given
datasets can be considered identical. To address this question, we introduce
R-divergence, designed to assess model-oriented distribution discrepancies. The
core insight is that two distributions are likely identical if their optimal
hypothesis yields the same expected risk for each distribution. To estimate the
distribution discrepancy between two datasets, R-divergence learns a minimum
hypothesis on the mixed data and then gauges the empirical risk difference
between them. We evaluate the test power across various unsupervised and
supervised tasks and find that R-divergence achieves state-of-the-art
performance. To demonstrate the practicality of R-divergence, we employ
R-divergence to train robust neural networks on samples with noisy labels.
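As a rough sketch of the procedure described above, the snippet below learns a single hypothesis on the pooled data and compares its empirical risk on the two datasets. The hypothesis class (logistic regression) and the log-loss risk are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the R-divergence idea: fit one hypothesis on the mixed
# data, then compare its empirical risk on each dataset. Logistic regression
# and log loss are stand-ins chosen for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def r_divergence(X1, y1, X2, y2):
    """Estimate the model-oriented discrepancy between datasets (X1, y1) and (X2, y2)."""
    # Step 1: learn a minimum (empirical-risk-minimizing) hypothesis on the mixed data.
    X_mix = np.vstack([X1, X2])
    y_mix = np.concatenate([y1, y2])
    h = LogisticRegression(max_iter=1000).fit(X_mix, y_mix)

    # Step 2: gauge the difference in empirical risk of that hypothesis on each dataset.
    risk1 = log_loss(y1, h.predict_proba(X1), labels=h.classes_)
    risk2 = log_loss(y2, h.predict_proba(X2), labels=h.classes_)
    return abs(risk1 - risk2)
```

Under this reading, a value near zero suggests the two datasets look alike to the chosen hypothesis class, while a large value indicates a model-relevant discrepancy.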
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Improving Distribution Alignment with Diversity-based Sampling [0.0]
Domain shifts are ubiquitous in machine learning, and can substantially degrade a model's performance when deployed to real-world data.
This paper proposes to improve minibatch estimates of the distribution discrepancy by inducing diversity in each sampled minibatch.
It simultaneously balances the data and reduces the variance of the gradients, thereby enhancing the model's generalisation ability.
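The summary above does not spell out the sampler, so the following is only a stand-in: one simple way to induce minibatch diversity is to stratify draws over k-means clusters of the input features (the clustering step and its parameters are assumptions for illustration).

```python
# Illustrative diversity-inducing minibatch sampler: spread the draws across
# k-means clusters so each batch covers more of the feature space. This is a
# generic stand-in, not the paper's specific sampling scheme.
import numpy as np
from sklearn.cluster import KMeans

def diverse_minibatch(X, batch_size, n_clusters=8, seed=None):
    """Draw a minibatch whose samples are spread across feature-space clusters."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    per_cluster = batch_size // n_clusters
    idx = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        take = min(per_cluster, len(members))
        idx.extend(rng.choice(members, size=take, replace=False))
    while len(idx) < batch_size:          # top up if some clusters were too small
        idx.append(int(rng.integers(len(X))))
    return X[np.asarray(idx[:batch_size])]
```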
arXiv Detail & Related papers (2024-10-05T17:26:03Z) - Discriminative Estimation of Total Variation Distance: A Fidelity Auditor for Generative Data [10.678533056953784]
We propose a discriminative approach to estimate the total variation (TV) distance between two distributions.
Our method quantitatively characterizes the relation between the Bayes risk in classifying two distributions and their TV distance.
We demonstrate that, with a specific choice of hypothesis class in classification, a fast convergence rate in estimating the TV distance can be achieved.
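The specific hypothesis class used in the paper is not given in this summary; the sketch below only illustrates the general discriminative recipe, with a gradient-boosting classifier as a placeholder. With equal priors, the Bayes risk r* of telling the two samples apart satisfies TV = 1 - 2*r*, so a held-out classification error yields a plug-in estimate.

```python
# Hedged sketch of discriminative TV estimation: train a classifier to
# separate samples of P from samples of Q and convert its held-out error
# (a proxy for the Bayes risk) into a TV estimate via TV = 1 - 2 * error.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def tv_estimate(X_p, X_q):
    """Estimate TV(P, Q) from equally sized samples X_p ~ P and X_q ~ Q."""
    X = np.vstack([X_p, X_q])
    y = np.concatenate([np.zeros(len(X_p)), np.ones(len(X_q))])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    error = np.mean(clf.predict(X_te) != y_te)
    return max(0.0, 1.0 - 2.0 * error)
```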
arXiv Detail & Related papers (2024-05-24T08:18:09Z) - Probabilistic Matching of Real and Generated Data Statistics in Generative Adversarial Networks [0.6906005491572401]
We propose a method to ensure that the distributions of certain generated data statistics coincide with the respective distributions of the real data.
We evaluate the method on a synthetic dataset and a real-world dataset and demonstrate improved performance of our approach.
arXiv Detail & Related papers (2023-06-19T14:03:27Z) - The Representation Jensen-Shannon Divergence [0.0]
Quantifying the difference between probability distributions is crucial in machine learning.
This work proposes the representation Jensen-Shannon divergence (RJSD), a novel measure inspired by the traditional Jensen-Shannon divergence.
Our results demonstrate RJSD's superiority in two-sample testing, distribution shift detection, and unsupervised domain adaptation.
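RJSD itself operates on learned data representations, which the snippet below does not attempt to reproduce; for orientation only, here is the classical Jensen-Shannon divergence that inspires it.

```python
# Classical Jensen-Shannon divergence between two discrete distributions
# (base 2, so the value lies in [0, 1]); RJSD generalizes this idea to
# representations of samples rather than explicit probability vectors.
import numpy as np
from scipy.stats import entropy

def jensen_shannon(p, q):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p / p.sum() + q / q.sum())
    return 0.5 * entropy(p, m, base=2) + 0.5 * entropy(q, m, base=2)
```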
arXiv Detail & Related papers (2023-05-25T19:44:36Z) - Policy Evaluation in Distributional LQR [70.63903506291383]
We provide a closed-form expression of the distribution of the random return.
We show that this distribution can be approximated by a finite number of random variables.
Using the approximate return distribution, we propose a zeroth-order policy gradient algorithm for risk-averse LQR.
arXiv Detail & Related papers (2023-03-23T20:27:40Z) - Personalized Trajectory Prediction via Distribution Discrimination [78.69458579657189]
Trajectory prediction is confronted with the dilemma of capturing the multi-modal nature of future dynamics.
We present a distribution discrimination (DisDis) method to predict personalized motion patterns.
Our method can be integrated with existing multi-modal predictive models as a plug-and-play module.
arXiv Detail & Related papers (2021-07-29T17:42:12Z) - Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference.
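A minimal sketch of the DoC statistic as described above, assuming the classifier exposes class-probability outputs; any additional regression or calibration step used in the paper is omitted.

```python
# Difference of confidences (DoC): the drop in average top-class probability
# between a reference set and a shifted set, used to predict the accuracy drop.
import numpy as np

def difference_of_confidences(probs_reference, probs_shifted):
    conf_ref = np.max(probs_reference, axis=1).mean()
    conf_shift = np.max(probs_shifted, axis=1).mean()
    return conf_ref - conf_shift
```

Under this simple reading, a classifier at 92% accuracy on the reference data with a DoC of 0.07 would be predicted to score roughly 85% on the shifted data.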
arXiv Detail & Related papers (2021-07-07T15:50:18Z) - Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
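As a hedged sketch of the proxy idea above, assuming a domain-invariant proxy model is already trained, the target risk of a model can be approximated by its disagreement with that proxy on unlabeled target data.

```python
# Proxy-label risk estimate: treat the domain-invariant proxy's predictions as
# stand-in labels and score the model of interest by its disagreement rate.
import numpy as np

def proxy_target_risk(model, proxy, X_target):
    return float(np.mean(model.predict(X_target) != proxy.predict(X_target)))
```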
arXiv Detail & Related papers (2020-07-06T17:21:24Z) - Distributional Random Forests: Heterogeneity Adjustment and Multivariate Distributional Regression [0.8574682463936005]
We propose a novel forest construction for multivariate responses based on their joint conditional distribution.
The code is available as the drf Python and R packages.
arXiv Detail & Related papers (2020-05-29T09:05:00Z)