Toward a Generalization Metric for Deep Generative Models
- URL: http://arxiv.org/abs/2011.00754v3
- Date: Mon, 24 May 2021 12:36:45 GMT
- Title: Toward a Generalization Metric for Deep Generative Models
- Authors: Hoang Thanh-Tung, Truyen Tran
- Abstract summary: The generalization capacity of Deep Generative Models (DGMs) is difficult to measure.
We introduce a framework for comparing the robustness of evaluation metrics.
We develop an efficient method for estimating the complexity of Generative Latent Variable Models (GLVMs).
- Score: 18.941388632914666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Measuring the generalization capacity of Deep Generative Models (DGMs) is
difficult because of the curse of dimensionality. Evaluation metrics for DGMs
such as Inception Score, Fréchet Inception Distance, Precision-Recall, and
Neural Net Divergence try to estimate the distance between the generated
distribution and the target distribution using a polynomial number of samples.
These metrics are the targets that researchers optimize when designing new models. Despite the claims, it remains unclear how well they measure the generalization capacity of a generative model. In this paper, we investigate how well these metrics measure generalization. We introduce a
framework for comparing the robustness of evaluation metrics. We show that
better scores in these metrics do not imply better generalization. They can be
fooled easily by a generator that memorizes a small subset of the training set.
We propose a fix to the NND metric to make it more robust to noise in the
generated data. Toward building a robust metric for generalization, we propose
to apply the Minimum Description Length principle to the problem of evaluating
DGMs. We develop an efficient method for estimating the complexity of
Generative Latent Variable Models (GLVMs). Experimental results show that our
metric can effectively detect training set memorization and distinguish GLVMs
of different generalization capacities. Source code is available at
https://github.com/htt210/GeneralizationMetricGAN.
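As a concrete illustration of this class of sample-based metrics, here is a minimal sketch of the Fréchet Inception Distance, assuming `feats_real` and `feats_fake` are precomputed Inception feature matrices (the feature-extraction step is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """FID between two sets of feature vectors of shape (n, d): fit a
    Gaussian to each set and return
    ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^{1/2})."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; discard the tiny
    # imaginary parts introduced by numerical error.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)
```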
Related papers
- Compute-Optimal LLMs Provably Generalize Better With Scale [102.29926217670926]
We develop generalization bounds on the pretraining objective of large language models (LLMs) in the compute-optimal regime.
We introduce a novel, fully empirical Freedman-type martingale concentration inequality that tightens existing bounds by accounting for the variance of the loss function.
We produce a scaling law for the generalization gap, with bounds that become predictably stronger with scale.
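As a purely illustrative sketch of fitting such a scaling law (the power-law form and the numbers below are our assumptions, not the paper's bounds):

```python
import numpy as np

# Hypothetical compute budgets (FLOPs) and observed generalization
# gaps; the power-law form gap ~ a * C^b is an illustrative
# assumption, not the bound derived in the paper.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
gap = np.array([0.20, 0.13, 0.085, 0.055, 0.036])

# Fit log(gap) = intercept + slope * log(C).
slope, intercept = np.polyfit(np.log(compute), np.log(gap), 1)
print(f"gap ~ {np.exp(intercept):.3g} * C^({slope:.3g})")
```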
arXiv Detail & Related papers (2025-04-21T16:26:56Z)
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? [83.83230167222852]
We find that a model's generalization behavior can be effectively characterized by a training metric we call pre-memorization train accuracy.
By connecting a model's learning behavior to its generalization, pre-memorization train accuracy can guide targeted improvements to training strategies.
arXiv Detail & Related papers (2024-11-12T09:52:40Z)
- Forgetting Curve: A Reliable Method for Evaluating Memorization Capability for Long-context Models [58.6172667880028]
We propose a new method called forgetting curve to measure the memorization capability of long-context models.
We show that forgetting curve has the advantage of being robust to the tested corpus and the experimental settings.
Our measurement provides empirical evidence for the effectiveness of transformer extension techniques while raising questions about the effective length of RNN/SSM-based models.
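A minimal sketch of a forgetting-curve-style probe, assuming a Hugging Face causal LM; the duplicated-passage design below is our simplification, not the paper's exact protocol:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Simplified forgetting-curve probe: duplicate a passage, insert
# increasingly long filler between the two copies, and measure how
# accurately the model predicts the second copy token by token.
name = "gpt2"  # stand-in; the paper evaluates long-context models
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

passage = tok.encode("The quick brown fox jumps over the lazy dog. " * 4)
filler = tok.encode("meanwhile, unrelated text continues. ")

for n_fill in [0, 4, 16, 64]:
    ids = torch.tensor([passage + filler * n_fill + passage])
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1, so the predictions for
    # the second copy start one position earlier.
    start = len(passage) + len(filler) * n_fill
    preds = logits[0, start - 1 : start - 1 + len(passage)].argmax(-1)
    acc = (preds == torch.tensor(passage)).float().mean().item()
    print(f"filler={n_fill:3d}  copy accuracy={acc:.2f}")
```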
arXiv Detail & Related papers (2024-10-07T03:38:27Z)
- Understanding Deep Generative Models with Generalized Empirical Likelihoods [3.7978679293562587]
We show how to combine techniques from Maximum Mean Discrepancy and Generalized Empirical Likelihood to create distribution tests that retain per-sample interpretability.
We find that such tests predict the degree of mode dropping and mode imbalance up to 60% better than metrics such as improved precision/recall.
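A minimal sketch of the Maximum Mean Discrepancy building block (the unbiased estimator with an RBF kernel); the generalized empirical likelihood machinery from the paper is not reproduced here:

```python
import numpy as np

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between samples x (n, d) and
    y (m, d) under an RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    n, m = len(x), len(y)
    # Drop diagonal terms for the unbiased within-sample averages.
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()
```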
arXiv Detail & Related papers (2023-06-16T11:33:47Z)
- Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples [25.657798631897908]
Feature Likelihood Divergence (FLD) provides a comprehensive trichotomic evaluation of generative models, accounting for the fidelity, diversity, and novelty of generated samples.
We empirically demonstrate the ability of FLD to identify overfitting problem cases, even when previously proposed metrics fail.
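A heavily simplified sketch of the idea behind FLD, assuming precomputed feature arrays: score train and test features under a fixed-bandwidth Gaussian mixture centered on generated samples, and read a large train-test gap as memorization (FLD itself fits the mixture more carefully):

```python
import numpy as np
from scipy.special import logsumexp

def mean_log_density(queries, centers, sigma=0.5):
    """Mean log-likelihood of `queries` under an isotropic Gaussian
    mixture with one component per generated sample in `centers`."""
    d = centers.shape[1]
    d2 = ((queries[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    log_k = -d2 / (2 * sigma**2) - 0.5 * d * np.log(2 * np.pi * sigma**2)
    return (logsumexp(log_k, axis=1) - np.log(len(centers))).mean()

# A generator that memorizes training features scores much higher on
# train than on test under its own sample density; a large gap signals
# overfitting rather than generalization (feature arrays assumed):
# gap = mean_log_density(train_feats, gen_feats) - \
#       mean_log_density(test_feats, gen_feats)
```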
arXiv Detail & Related papers (2023-02-09T04:57:27Z)
- A Study on the Evaluation of Generative Models [19.18642459565609]
Implicit generative models, which do not return likelihood values, have become prevalent in recent years.
In this work, we study the evaluation metrics of generative models by generating a high-quality synthetic dataset.
Our study shows that while FID and IS do correlate to several f-divergences, their ranking of close models can vary considerably.
arXiv Detail & Related papers (2022-06-22T09:27:31Z)
- Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data [66.11139091362078]
We provide the first model selection results on large pretrained Transformers from Huggingface using generalization metrics.
Despite their niche status, we find that metrics derived from the heavy-tail (HT) perspective are particularly useful in NLP tasks.
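One metric in this family fits a power-law exponent to a layer's weight-matrix eigenvalue spectrum; the Hill estimator below is our choice of estimator, not necessarily the paper's:

```python
import numpy as np

def hill_alpha(weight, k=50):
    """Power-law exponent of the eigenvalue spectrum of W^T W,
    estimated from the k largest eigenvalues via the Hill estimator.
    Heavy-tail theory tends to associate smaller alpha (a heavier
    tail) with better generalization of the trained layer."""
    eigs = np.linalg.eigvalsh(weight.T @ weight)
    top = np.sort(eigs)[-k:]
    return 1.0 + k / np.sum(np.log(top / top[0]))

# Usage on a random (untrained) matrix for comparison:
rng = np.random.default_rng(0)
print(hill_alpha(rng.standard_normal((512, 256))))
```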
arXiv Detail & Related papers (2022-02-06T20:07:35Z)
- On Evaluation Metrics for Graph Generative Models [17.594098458581694]
We study existing graph generative models (GGMs) and neural-network-based metrics for evaluating GGMs.
Motivated by the power of certain Graph Neural Networks (GNNs) to extract meaningful graph representations without any training, we introduce several metrics based on the features extracted by an untrained random GNN.
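A minimal sketch of the idea, assuming graphs come as dense adjacency matrices; the untrained random message-passing network below is our own toy design:

```python
import numpy as np

def random_gnn_features(adj, dims=(8, 16, 16), seed=0):
    """Graph-level features from an untrained message-passing net:
    propagate node states along edges, apply a fixed random linear
    map + ReLU, then sum-pool into a permutation-invariant vector."""
    rng = np.random.default_rng(seed)  # same random net for every graph
    n = adj.shape[0]
    h = np.ones((n, dims[0]))          # constant initial node states
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        w = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        h = np.maximum(0.0, (adj + np.eye(n)) @ h @ w)  # aggregate + transform
    return h.sum(axis=0)

def feature_gap(graphs_a, graphs_b):
    """Compare two sets of graphs by the distance between their mean
    random-GNN features (an MMD on these features is a natural upgrade)."""
    fa = np.stack([random_gnn_features(a) for a in graphs_a])
    fb = np.stack([random_gnn_features(b) for b in graphs_b])
    return np.linalg.norm(fa.mean(0) - fb.mean(0))
```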
arXiv Detail & Related papers (2022-01-24T18:49:27Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that Gaussian Mixture Replay (GMR) achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
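A minimal sketch of GMM pseudo-rehearsal with scikit-learn; the component counts and mixing ratios here are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def continual_fit(gmm, new_data, n_pseudo=None, n_components=10):
    """Refit a GMM on new-task data mixed with pseudo-samples drawn
    from the current model, rehearsing earlier tasks without storing
    their data."""
    if gmm is None:  # first task: nothing to rehearse yet
        return GaussianMixture(n_components).fit(new_data)
    n_pseudo = n_pseudo or len(new_data)
    pseudo, _ = gmm.sample(n_pseudo)  # generator role of the GMM
    mixed = np.vstack([pseudo, new_data])
    return GaussianMixture(n_components).fit(mixed)

# Two sequential "tasks" with shifted data distributions:
rng = np.random.default_rng(0)
gmm = continual_fit(None, rng.normal(0.0, 1.0, size=(500, 2)))
gmm = continual_fit(gmm, rng.normal(5.0, 1.0, size=(500, 2)))
```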
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
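The analytic tractability comes from the Gaussian identity ∫N(x; m1, C1) N(x; m2, C2) dx = N(m1; m2, C1 + C2); a minimal sketch for GMMs, each represented here as a (weights, means, covariances) triple:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gmm_cross_term(w1, mus1, covs1, w2, mus2, covs2):
    """Closed-form integral of the product of two GMM densities, via
    int N(x; m1, C1) N(x; m2, C2) dx = N(m1; m2, C1 + C2)."""
    return sum(
        a * b * mvn.pdf(m1, mean=m2, cov=c1 + c2)
        for a, m1, c1 in zip(w1, mus1, covs1)
        for b, m2, c2 in zip(w2, mus2, covs2)
    )

def cs_divergence(p, q):
    """D_CS(p, q) = -log( int p q / sqrt(int p^2 * int q^2) ), with
    every integral evaluated analytically for GMMs p and q."""
    pq = gmm_cross_term(*p, *q)
    pp = gmm_cross_term(*p, *p)
    qq = gmm_cross_term(*q, *q)
    return -np.log(pq / np.sqrt(pp * qq))
```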
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Towards GAN Benchmarks Which Require Generalization [48.075521136623564]
We argue that, for an evaluation metric to resist being gamed by memorization, estimating it must require a large sample from the model.
We turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions.
The resulting benchmarks cannot be "won" by training set memorization, while still being perceptually correlated and computable only from samples.
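A minimal sketch of the NND idea, with scikit-learn's MLPClassifier standing in for the critic network (the actual benchmarks train the critic with specific divergence objectives):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def nnd_score(real, fake, test_size=0.3, seed=0):
    """Train a small net to tell real from generated samples and
    report held-out accuracy: ~0.5 means the critic cannot separate
    the distributions, while a generator that memorizes a small
    training subset leaves the rest of the real data easy to
    classify, so the score stays high."""
    rng = np.random.default_rng(seed)
    x = np.vstack([real, fake])
    y = np.r_[np.ones(len(real)), np.zeros(len(fake))]
    idx = rng.permutation(len(x))
    n_test = int(test_size * len(x))
    test, train = idx[:n_test], idx[n_test:]
    clf = MLPClassifier((64, 64), max_iter=500, random_state=seed)
    clf.fit(x[train], y[train])
    return clf.score(x[test], y[test])
```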
arXiv Detail & Related papers (2020-01-10T20:18:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.