Probabilistic Precision and Recall Towards Reliable Evaluation of
Generative Models
- URL: http://arxiv.org/abs/2309.01590v1
- Date: Mon, 4 Sep 2023 13:19:17 GMT
- Title: Probabilistic Precision and Recall Towards Reliable Evaluation of
Generative Models
- Authors: Dogyun Park, Suhyun Kim
- Abstract summary: We propose P-precision and P-recall (PP&PR), based on a probabilistic approach that addresses these problems.
We show that our PP&PR provide more reliable estimates for comparing fidelity and diversity than the existing metrics.
- Score: 7.770029179741429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assessing the fidelity and diversity of generative models is a difficult
but important issue for technological advancement. Recent papers have therefore
introduced k-Nearest Neighbor ($k$NN) based precision-recall metrics to break
down the statistical distance into fidelity and diversity. While these metrics
provide an intuitive method, we thoroughly analyze them and identify
oversimplified assumptions and undesirable properties of $k$NN that result in
unreliable evaluation, such as susceptibility to outliers and insensitivity to
distributional changes. Thus, we propose novel metrics, P-precision and
P-recall (PP\&PR), based on a probabilistic approach that addresses these problems.
Through extensive investigations on toy experiments and state-of-the-art
generative models, we show that our PP\&PR provide more reliable estimates for
comparing fidelity and diversity than existing metrics. The code is
available at \url{https://github.com/kdst-team/Probablistic_precision_recall}.
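For concreteness, the $k$NN-based family the abstract critiques can be sketched as follows. This is a minimal NumPy illustration of generic $k$NN precision/recall (support estimated by $k$-th-nearest-neighbor balls), not the authors' PP\&PR implementation; all function names here are illustrative.

```python
import numpy as np

def knn_radii(X, k):
    """Distance from each point of X to its k-th nearest neighbour within X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)              # column 0 is the zero self-distance
    return d[:, k]

def covered(Y, X, radii):
    """For each y in Y: does y fall inside at least one kNN-ball around X?"""
    d = np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=-1)
    return (d <= radii[None, :]).any(axis=1)

def knn_precision_recall(real, fake, k=3):
    precision = covered(fake, real, knn_radii(real, k)).mean()  # fidelity
    recall = covered(real, fake, knn_radii(fake, k)).mean()     # diversity
    return precision, recall

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))
fake = rng.normal(0.5, 1.0, size=(500, 2))   # shifted model distribution
print(knn_precision_recall(real, fake))
```

Note how a single far-away outlier in `real` enlarges its own $k$NN ball and can cover much of the space, spuriously inflating precision; this is exactly the outlier sensitivity the abstract points to.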
Related papers
- Continuous Bayesian Model Selection for Multivariate Causal Discovery [22.945274948173182]
Current causal discovery approaches require restrictive model assumptions or assume access to interventional data to ensure structure identifiability.
Recent work has shown that Bayesian model selection can greatly improve accuracy by exchanging restrictive modelling for more flexible assumptions.
We demonstrate the competitiveness of our approach on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-11-15T12:55:05Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with surprisingly simple formations and without requiring extra modules or multiple inferences, can provide uncertainty estimations with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study [14.041071717005362]
Modern deep models for summarization attain impressive benchmark performance, but they are prone to generating miscalibrated predictive uncertainty.
This means that they assign high confidence to low-quality predictions, leading to compromised reliability and trustworthiness in real-world applications.
Probabilistic deep learning methods are common solutions to the miscalibration problem, but their relative effectiveness in complex autoregressive summarization tasks is not well understood.
arXiv Detail & Related papers (2023-04-17T23:06:28Z)
- How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [46.05502630457458]
We propose a benchmark of 8 image-based regression datasets with different types of challenging distribution shifts.
We find that while methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets.
arXiv Detail & Related papers (2023-02-07T18:54:39Z)
- Do Bayesian Variational Autoencoders Know What They Don't Know? [0.6091702876917279]
The problem of detecting Out-of-Distribution (OoD) inputs is of paramount importance for Deep Neural Networks.
It has been previously shown that even Deep Generative Models that allow estimating the density of the inputs may not be reliable.
This paper investigates three approaches to inference: Markov chain Monte Carlo, Bayes by Backpropagation, and Stochastic Weight Averaging-Gaussian (SWAG).
arXiv Detail & Related papers (2022-12-29T11:48:01Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present the Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Robust Out-of-Distribution Detection on Deep Probabilistic Generative Models [0.06372261626436676]
Out-of-distribution (OOD) detection is an important task in machine learning systems.
Deep probabilistic generative models facilitate OOD detection by estimating the likelihood of a data sample.
We propose a new detection metric that operates without outlier exposure.
arXiv Detail & Related papers (2021-06-15T06:36:10Z)
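The likelihood-based mechanism this summary refers to can be illustrated with a toy density model. The sketch below uses a full-covariance Gaussian as a stand-in for a deep generative model and flags samples whose log-likelihood falls below a threshold; the paper's proposed metric is different and not reproduced here, and all names are illustrative.

```python
import numpy as np

def fit_gaussian(X):
    """Stand-in density model: a full-covariance Gaussian fit to training data."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_likelihood(X, mu, cov):
    """Per-sample Gaussian log-density."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum('ij,jk,ik->i', diff, inv, diff)
                   + logdet + d * np.log(2 * np.pi))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, size=(1000, 4))   # in-distribution data
mu, cov = fit_gaussian(train)
tau = np.quantile(log_likelihood(train, mu, cov), 0.05)  # 5% rejection threshold
ood = rng.normal(4, 1, size=(100, 4))
print((log_likelihood(ood, mu, cov) < tau).mean())  # fraction flagged as OOD
```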
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and implemented via the pool-adjacent-violators (PAV) algorithm; a minimal PAV sketch follows below.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
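As a concrete illustration of the pool-adjacent-violators step named above, here is a minimal, self-contained PAV implementation. The actual CORP procedure (with its consistency and reproducibility guarantees) lives in the paper; this sketch uses illustrative names only.

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: non-decreasing isotonic fit to y (unit weights)."""
    blocks = []                      # each block: [mean, size]
    for v in y:
        blocks.append([float(v), 1.0])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([np.full(int(w), v) for v, w in blocks])

# Reliability-diagram style use: sort by predicted probability and
# isotonic-fit the binary outcomes to get conditional event probabilities.
rng = np.random.default_rng(0)
p = rng.uniform(size=200)                            # predicted probabilities
y = (rng.uniform(size=200) < p ** 2).astype(float)   # miscalibrated outcomes
order = np.argsort(p)
calibrated = pav(y[order])   # plot against p[order] for a reliability diagram
```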
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
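The prototype-update mechanic described in this last entry can be sketched as follows. The paper meta-learns a per-query confidence; the sketch below substitutes a fixed softmax temperature for that learned confidence just to show the transductive update, so every name and constant here is an assumption rather than the authors' method.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def refine_prototypes(protos, queries, temperature=1.0, steps=5):
    """Transductive refinement: soft-assign each unlabeled query to a class
    prototype by negative squared distance, then pull every prototype toward
    the confidence-weighted mean of its queries."""
    for _ in range(steps):
        d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # [Q, C]
        conf = softmax(-d / temperature, axis=1)     # per-query soft confidence
        weighted = conf.T @ queries / (conf.sum(0)[:, None] + 1e-8)    # [C, D]
        protos = 0.5 * protos + 0.5 * weighted  # blend support and query evidence
    return protos

rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 16))    # class prototypes from the support set
queries = rng.normal(size=(75, 16))  # unlabeled query embeddings
protos = refine_prototypes(protos, queries)
```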
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.