Rethinking FID: Towards a Better Evaluation Metric for Image Generation
- URL: http://arxiv.org/abs/2401.09603v2
- Date: Thu, 25 Jan 2024 22:22:14 GMT
- Title: Rethinking FID: Towards a Better Evaluation Metric for Image Generation
- Authors: Sadeep Jayasumana, Srikumar Ramalingam, Andreas Veit, Daniel Glasner,
Ayan Chakrabarti, Sanjiv Kumar
- Abstract summary: The Fréchet Inception Distance (FID) estimates the distance between the distribution of Inception-v3 features of real images and that of images generated by the algorithm.
We highlight important drawbacks of FID: Inception's poor representation of the rich and varied content generated by modern text-to-image models, incorrect normality assumptions, and poor sample complexity.
We propose an alternative new metric, CMMD, based on richer CLIP embeddings and the maximum mean discrepancy distance with the Gaussian RBF kernel.
- Score: 43.66036053597747
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As with many machine learning problems, the progress of image generation
methods hinges on good evaluation metrics. One of the most popular is the
Fréchet Inception Distance (FID). FID estimates the distance between a
distribution of Inception-v3 features of real images, and those of images
generated by the algorithm. We highlight important drawbacks of FID:
Inception's poor representation of the rich and varied content generated by
modern text-to-image models, incorrect normality assumptions, and poor sample
complexity. We call for a reevaluation of FID's use as the primary quality
metric for generated images. We empirically demonstrate that FID contradicts
human raters, that it does not reflect the gradual improvement of iterative
text-to-image models, that it does not capture distortion levels, and that it
produces inconsistent results when the sample size is varied. We also propose an
alternative metric, CMMD, based on richer CLIP embeddings and the maximum
mean discrepancy distance with the Gaussian RBF kernel. It is an unbiased
estimator that does not make any assumptions on the probability distribution of
the embeddings and is sample efficient. Through extensive experiments and
analysis, we demonstrate that FID-based evaluations of text-to-image models may
be unreliable, and that CMMD offers a more robust and reliable assessment of
image quality.
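For reference, FID is the Fréchet (2-Wasserstein) distance between two Gaussians fitted to the Inception-v3 features of the real and generated image sets, which is where the normality assumption criticized above enters:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\bigl( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \bigr)
```

where (\mu_r, \Sigma_r) and (\mu_g, \Sigma_g) are the sample mean and covariance of the real and generated feature sets. CMMD instead applies an unbiased squared-MMD estimator with a Gaussian RBF kernel to CLIP image embeddings. The sketch below is illustrative only, assuming NumPy arrays of pre-computed embeddings; the function names and the bandwidth `sigma` are placeholders, not the paper's official implementation or hyperparameters.

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    """Gaussian RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = (
        np.sum(x ** 2, axis=1)[:, None]
        + np.sum(y ** 2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2_unbiased(real_emb, gen_emb, sigma=10.0):
    """Unbiased estimate of squared MMD between two embedding sets.

    real_emb: (n, d) array of embeddings of real images (e.g. CLIP features).
    gen_emb:  (m, d) array of embeddings of generated images.
    sigma:    RBF bandwidth; a placeholder value, not the paper's setting.
    """
    n, m = len(real_emb), len(gen_emb)
    k_xx = rbf_kernel(real_emb, real_emb, sigma)
    k_yy = rbf_kernel(gen_emb, gen_emb, sigma)
    k_xy = rbf_kernel(real_emb, gen_emb, sigma)
    # Drop the diagonal (self-kernel) terms so the within-set means are unbiased.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (n * (n - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (m * (m - 1))
    term_xy = k_xy.mean()
    return term_xx + term_yy - 2.0 * term_xy

# Example usage with random stand-ins for CLIP embeddings.
rng = np.random.default_rng(0)
real = rng.normal(size=(512, 768))   # n=512 "real" embeddings, d=768
fake = rng.normal(size=(512, 768))   # m=512 "generated" embeddings
print(mmd2_unbiased(real, fake))
```

Unlike FID, this estimator fits no Gaussian to the embeddings: excluding the diagonal terms makes the within-set averages unbiased, which is the distribution-free, sample-efficient behaviour the abstract attributes to CMMD.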
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- Uncertainty Quantification via Neural Posterior Principal Components [26.26693707762823]
Uncertainty quantification is crucial for the deployment of image restoration models in safety-critical domains.
We present a method for predicting the PCs of the posterior distribution for any input image, in a single forward pass of a neural network.
Our method reliably conveys instance-adaptive uncertainty directions, achieving uncertainty quantification comparable with posterior samplers.
arXiv Detail & Related papers (2023-09-27T09:51:29Z)
- On quantifying and improving realism of images generated with diffusion [50.37578424163951]
We propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image.
IRS is easily usable as a measure to classify a given image as real or fake.
We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN.
Our efforts have also led to Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models.
arXiv Detail & Related papers (2023-09-26T08:32:55Z)
- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting [133.55037976429088]
We investigate the adversarial robustness of vision transformers equipped with BERT pretraining (e.g., BEiT, MAE).
A surprising observation is that MAE has significantly worse adversarial robustness than other BERT pretraining methods.
We propose a simple yet effective way to boost the adversarial robustness of MAE.
arXiv Detail & Related papers (2023-08-20T16:27:17Z)
- Learning from Multi-Perception Features for Real-Word Image Super-resolution [87.71135803794519]
We propose a novel SR method called MPF-Net that leverages multiple perceptual features of input images.
Our method incorporates a Multi-Perception Feature Extraction (MPFE) module to extract diverse perceptual information.
We also introduce a contrastive regularization term (CR) that improves the model's learning capability.
arXiv Detail & Related papers (2023-05-26T07:35:49Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Robustness via Uncertainty-aware Cycle Consistency [44.34422859532988]
Unpaired image-to-image translation refers to learning inter-image-domain mapping without corresponding image pairs.
Existing methods learn deterministic mappings without explicitly modelling the robustness to outliers or predictive uncertainty.
We propose a novel probabilistic method based on Uncertainty-aware Generalized Adaptive Cycle Consistency (UGAC).
arXiv Detail & Related papers (2021-10-24T15:33:21Z)
- Compound Frechet Inception Distance for Quality Assessment of GAN Created Images [7.628527132779575]
One notable application of GANs is developing fake human faces, also known as "deep fakes".
Measuring the quality of the generated images is inherently subjective but attempts to objectify quality using standardized metrics have been made.
We propose to improve the robustness of the evaluation process by integrating lower-level features to cover a wider array of visual defects.
arXiv Detail & Related papers (2021-06-16T06:53:27Z)
- Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows [24.734388664558708]
We propose DifferNet: It leverages the descriptiveness of features extracted by convolutional neural networks to estimate their density.
Based on these likelihoods we develop a scoring function that indicates defects.
We demonstrate the superior performance over existing approaches on the challenging and newly proposed MVTec AD and Magnetic Tile Defects datasets.
arXiv Detail & Related papers (2020-08-28T10:49:28Z)
- Reliable Fidelity and Diversity Metrics for Generative Models [30.941563781926202]
The most widely used metric for measuring the similarity between real and generated images has been the Fréchet Inception Distance (FID) score.
We show that even the latest versions of the precision and recall metrics are not reliable yet.
We propose density and coverage metrics that solve the above issues.
arXiv Detail & Related papers (2020-02-23T00:50:01Z)