Distributional Evaluation of Generative Models via Relative Density Ratio
- URL: http://arxiv.org/abs/2510.25507v1
- Date: Wed, 29 Oct 2025 13:31:35 GMT
- Title: Distributional Evaluation of Generative Models via Relative Density Ratio
- Authors: Yuliang Xu, Yun Wei, Li Ma
- Abstract summary: We propose a functional evaluation metric for generative models based on the relative density ratio (RDR). We show that the RDR, as a functional summary of the generative model's goodness-of-fit, possesses several desirable theoretical properties. We also show that the estimated RDR not only allows for an effective comparison of the overall performance of competing generative models, but also offers a convenient means of revealing the nature of the underlying goodness-of-fit.
- Score: 12.663086000741872
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a functional evaluation metric for generative models based on the relative density ratio (RDR), designed to characterize distributional differences between real and generated samples. We show that the RDR, as a functional summary of the generative model's goodness-of-fit, possesses several desirable theoretical properties. It preserves $\phi$-divergence between two distributions, enables sample-level evaluation that facilitates downstream investigations of feature-specific distributional differences, and has a bounded range that affords clear interpretability and numerical stability. Functional estimation of the RDR is achieved efficiently through convex optimization on the variational form of the $\phi$-divergence. We provide theoretical convergence rate guarantees for general estimators based on M-estimator theory, as well as convergence rates for neural network-based estimators when the true ratio lies in an anisotropic Besov space. We demonstrate the power of the proposed RDR-based evaluation through numerical experiments on MNIST, CelebA64, and the American Gut Project microbiome data. We show that the estimated RDR not only allows for an effective comparison of the overall performance of competing generative models, but also offers a convenient means of revealing the nature of the underlying goodness-of-fit. This enables one to assess support overlap, coverage, and fidelity while pinpointing regions of the sample space where generators concentrate and revealing the features that drive the most salient distributional differences.
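For concreteness, the standard formulation of the relative density ratio (e.g., Yamada et al., 2013) is sketched below; it is consistent with the bounded range noted in the abstract, though the paper's exact parameterization may differ. Here $p$ is the real-data density, $q$ the generator density, and $\alpha \in (0,1)$ a mixture weight; the second display is the standard variational (Fenchel-dual) form of a $\phi$-divergence that underlies the convex estimation program mentioned above.

```latex
% Assumed standard form of the relative density ratio; the paper's exact
% parameterization may differ. Boundedness follows from
% alpha*p <= alpha*p + (1-alpha)*q.
r_\alpha(x) = \frac{p(x)}{\alpha\, p(x) + (1-\alpha)\, q(x)},
\qquad 0 \le r_\alpha(x) \le \frac{1}{\alpha},
\qquad r_\alpha \equiv 1 \iff p = q.
% Variational form of a phi-divergence, with phi^* the convex conjugate of
% phi; the supremum over functions g is what makes estimation a convex problem.
D_\phi(P \,\|\, M) = \sup_{g} \Big\{ \mathbb{E}_{P}\big[g(X)\big] - \mathbb{E}_{M}\big[\phi^{*}(g(X))\big] \Big\}.
```

A minimal runnable sketch of ratio-based evaluation follows, using the simplest classical estimator (classifier odds) as a stand-in for the paper's variational estimator; the helper name `relative_density_ratio` and the choice $\alpha = 1/2$ are illustrative.

```python
# Sketch: relative density ratio from classifier odds. A stand-in for the
# paper's convex variational phi-divergence estimator, not a reproduction of it.
import numpy as np
from sklearn.linear_model import LogisticRegression

def relative_density_ratio(real, generated, alpha=0.5):  # hypothetical helper
    """Estimate r_alpha(x) = p(x) / (alpha*p(x) + (1-alpha)*q(x)) at the real samples."""
    X = np.vstack([real, generated])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(generated))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    proba = clf.predict_proba(real)
    # With equally sized samples, p(x)/q(x) ~= P(y=1|x) / P(y=0|x).
    ratio = proba[:, 1] / np.clip(proba[:, 0], 1e-12, None)
    # Map the plain ratio into the bounded relative ratio in [0, 1/alpha].
    return ratio / (alpha * ratio + (1.0 - alpha))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 2))
fake = rng.normal(0.4, 1.0, size=(1000, 2))  # generator with a mean shift
print(relative_density_ratio(real, fake).mean())  # 1 for a perfect fit; > 1 here
```

Per-sample values near $1/\alpha$ flag real-data regions the generator misses, while values near 0 flag regions where the generator piles up mass the real data lacks, which is the sample-level diagnostic use the abstract describes.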
Related papers
- GDR-learners: Orthogonal Learning of Generative Models for Potential Outcomes [50.228749840286895]
We introduce a general suite of generative Neyman-orthogonal learners that estimate conditional distributions of potential outcomes. Our proposed GDR-learners are flexible and can be instantiated with many state-of-the-art deep generative models. Unlike existing methods, our GDR-learners possess the properties of quasi-oracle efficiency and double robustness.
arXiv Detail & Related papers (2025-09-26T21:35:28Z) - The Lie of the Average: How Class Incremental Learning Evaluation Deceives You? [48.83567710215299]
Class Incremental Learning (CIL) requires models to continuously learn new classes without forgetting previously learned ones. We argue that a robust CIL evaluation protocol should accurately characterize and estimate the entire performance distribution. We propose EDGE, an evaluation protocol that adaptively identifies and samples extreme class sequences using inter-task similarity.
arXiv Detail & Related papers (2025-09-26T17:00:15Z) - A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [8.862614615192578]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models. Our results lead to the convergence rate of a sieve maximum likelihood estimator for estimating the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z) - Distributionally Robust Optimization as a Scalable Framework to Characterize Extreme Value Distributions [22.765095010254118]
The goal of this paper is to develop distributionally robust optimization (DRO) estimators, specifically for multidimensional Extreme Value Theory (EVT) statistics.
In order to mitigate over-conservative estimates while enhancing out-of-sample performance, we study DRO estimators informed by semi-parametric max-stable constraints in the space of point processes.
Both approaches are validated using synthetically generated data, recovering prescribed characteristics, and verifying the efficacy of the proposed techniques.
arXiv Detail & Related papers (2024-07-31T19:45:27Z) - Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
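As background for this entry (standard material, not drawn from the paper): random Fourier features approximate a shift-invariant kernel $k(x, y)$ by an inner product of randomized feature maps, and the i.i.d. frequency sampling in the sketch below is precisely what variance-reducing couplings replace with jointly drawn frequencies. A minimal sketch for the unit-bandwidth Gaussian kernel:

```python
# Classic random Fourier features for k(x, y) = exp(-||x - y||^2 / 2).
# Background illustration only: the paper studies couplings between the
# frequency rows of W, whereas this baseline draws them independently.
import numpy as np

def rff(X, num_features, rng):
    W = rng.normal(size=(num_features, X.shape[1]))  # i.i.d. Gaussian frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, num_features)  # random phases
    return np.sqrt(2.0 / num_features) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff(X, 4096, rng)
approx = Z @ Z.T                                      # approximate kernel matrix
exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(approx - exact).max())                   # shrinks as num_features grows
```

Known couplings such as orthogonal random features draw the rows of W jointly rather than independently to reduce the variance of this Monte Carlo estimate, which is the design space the paper explores.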
arXiv Detail & Related papers (2024-05-26T12:25:09Z) - R-divergence for Estimating Model-oriented Distribution Discrepancy [37.939239477868796]
We introduce R-divergence, designed to assess model-oriented distribution discrepancies.
R-divergence learns a minimum hypothesis on the mixed data and then gauges the empirical risk difference between the two datasets.
We evaluate the test power across various unsupervised and supervised tasks and find that R-divergence achieves state-of-the-art performance.
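Read literally, the one-line procedure above admits a very rough sketch: fit a single hypothesis on the pooled samples and compare its empirical risks on the two sets. The Gaussian-MLE hypothesis class and negative log-likelihood risk below are illustrative assumptions, not the paper's construction.

```python
# Rough sketch of the summary above: one "minimum hypothesis" fit on mixed
# data, then the empirical risk gap between the two sample sets. Hypothesis
# class (Gaussian MLE) and risk (NLL) are placeholder choices.
import numpy as np
from scipy.stats import multivariate_normal

def r_divergence_sketch(X1, X2):
    pooled = np.vstack([X1, X2])
    mu, cov = pooled.mean(axis=0), np.cov(pooled.T)   # fit on the mixture
    risk = lambda X: -multivariate_normal(mu, cov).logpdf(X).mean()
    return abs(risk(X1) - risk(X2))                   # empirical risk difference

rng = np.random.default_rng(0)
same = r_divergence_sketch(rng.normal(size=(400, 2)), rng.normal(size=(400, 2)))
shifted = r_divergence_sketch(rng.normal(size=(400, 2)),
                              rng.normal(1.0, 1.0, size=(400, 2)))
print(same, shifted)  # the gap grows when the two samples differ in distribution
```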
arXiv Detail & Related papers (2023-10-02T11:30:49Z) - Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z) - A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
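For context on the Bregman view (background, not taken from the paper): each Bregman generator yields a binary DRE objective, and the squared-loss generator recovers the classical least-squares fit known as uLSIF (Kanamori et al., 2009). A minimal sketch, where the RBF basis, bandwidth, and ridge regularizer are illustrative choices:

```python
# uLSIF-style least-squares density ratio estimation: one classical instance
# of Bregman-divergence-minimization DRE (generator t -> (t - 1)^2 / 2).
import numpy as np

def ulsif_ratio(p_samples, q_samples, centers, width=1.0, lam=1e-3):
    phi = lambda X: np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1)
                           / (2.0 * width ** 2))      # RBF basis functions
    Phi_q = phi(q_samples)
    H = Phi_q.T @ Phi_q / len(q_samples)              # ~ E_q[phi(X) phi(X)^T]
    h = phi(p_samples).mean(axis=0)                   # ~ E_p[phi(X)]
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)  # ridge-regularized solve
    return lambda X: np.clip(phi(X) @ theta, 0.0, None)   # nonnegative ratio estimate

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, size=(500, 1))
q = rng.normal(0.5, 1.0, size=(500, 1))
r = ulsif_ratio(p, q, centers=p[:50])
print(r(p).mean())  # approximates 1 + chi-square(p || q), hence at least 1
```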
arXiv Detail & Related papers (2021-12-07T01:23:20Z) - Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible.
The implementation employs a single neural network driven by an orthonormal input to a single white noise source adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves upon MINE estimation in terms of data efficiency and variance, and compares favorably with conventional and variational Gaussian mixture models as well as with training adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z) - Comparing Probability Distributions with Conditional Transport [63.11403041984197]
We propose conditional transport (CT) as a new divergence and approximate it with the amortized CT (ACT) cost.
ACT amortizes the computation of its conditional transport plans and comes with unbiased sample gradients that are straightforward to compute.
On a wide variety of benchmark datasets for generative modeling, substituting the default statistical distance of an existing generative adversarial network with ACT is shown to consistently improve performance.
arXiv Detail & Related papers (2020-12-28T05:14:22Z) - Generalization Properties of Optimal Transport GANs with Latent Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.