FLD+: Data-efficient Evaluation Metric for Generative Models
- URL: http://arxiv.org/abs/2411.15584v1
- Date: Sat, 23 Nov 2024 15:12:57 GMT
- Title: FLD+: Data-efficient Evaluation Metric for Generative Models
- Authors: Pranav Jeevan, Neeraj Nixon, Amit Sethi
- Abstract summary: We introduce a new metric to assess the quality of generated images that is more reliable, data-efficient, compute-efficient, and adaptable to new domains.
The proposed metric is based on normalizing flows, which allows for the computation of density (exact log-likelihood) of images from any domain.
- Score: 4.093503153499691
- Abstract: We introduce a new metric to assess the quality of generated images that is more reliable, data-efficient, compute-efficient, and adaptable to new domains than previous metrics, such as the Fréchet Inception Distance (FID). The proposed metric is based on normalizing flows, which allow for the computation of the density (exact log-likelihood) of images from any domain. Thus, unlike FID, the proposed Flow-based Likelihood Distance Plus (FLD+) metric exhibits strongly monotonic behavior with respect to different types of image degradations, including noise, occlusion, diffusion steps, and generative model size. Additionally, because normalizing flows can be trained stably and efficiently, FLD+ achieves stable results with two orders of magnitude fewer images than FID (which requires many images to reliably compute the Fréchet distance between features of large samples of real and generated images). We made FLD+ computationally even more efficient by applying normalizing flows to features extracted in a lower-dimensional latent space instead of using a pre-trained network. We also show that FLD+ can easily be retrained on new domains, such as medical images, unlike the networks behind previous metrics, such as InceptionNetV3 pre-trained on ImageNet.
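As a rough illustration of the idea behind a flow-based likelihood distance (a minimal 1-D sketch, not the FLD+ implementation — the affine map, its parameters, and the sample sizes below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "normalizing flow": an invertible affine map z = (x - mu) / sigma
# with a standard-normal base density. Change of variables gives the exact
# log-likelihood: log p(x) = log N(z; 0, 1) + log |dz/dx|.
mu, sigma = 2.0, 0.5  # hypothetical parameters fitted on "real" data

def flow_log_likelihood(x):
    z = (x - mu) / sigma
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))
    log_det = -np.log(sigma)  # log |dz/dx| = log (1 / sigma)
    return log_base + log_det

# "Real" samples follow the modeled density; "generated" samples are degraded
# (noisier here), so the flow assigns them lower likelihood.
real = rng.normal(mu, sigma, size=500)
generated = rng.normal(mu, sigma * 3.0, size=500)

# A likelihood-distance score in the spirit of FLD+: the gap between the mean
# log-likelihoods of real and generated samples (larger gap = worse samples).
score = flow_log_likelihood(real).mean() - flow_log_likelihood(generated).mean()
print(float(score))
```

Because the flow evaluates each image's likelihood exactly, the mean over even a few hundred samples is already stable, which is the intuition behind the metric's data efficiency.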
Related papers
- Normalizing Flow-Based Metric for Image Generation [4.093503153499691]
We propose two new evaluation metrics to assess realness of generated images based on normalizing flows.
Because normalizing flows can be used to compute the exact likelihood, the proposed metrics assess how closely generated images align with the distribution of real images from a given domain.
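The exact likelihood these metrics rely on comes from the standard change-of-variables formula: for an invertible flow $f$ with base density $p_Z$,

```latex
\log p_X(x) = \log p_Z\!\left(f(x)\right) + \log\left|\det \frac{\partial f(x)}{\partial x}\right|
```

Both terms are computable in closed form for flow architectures with tractable Jacobians, which is what makes per-image likelihood evaluation exact rather than approximate.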
arXiv Detail & Related papers (2024-10-02T20:09:58Z) - WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z) - Bring Metric Functions into Diffusion Models [145.71911023514252]
We introduce a Cascaded Diffusion Model (Cas-DM) that improves a Denoising Diffusion Probabilistic Model (DDPM).
Experiment results show that the proposed diffusion model backbone enables the effective use of the LPIPS loss, leading to state-of-the-art image quality (FID, sFID, IS).
arXiv Detail & Related papers (2024-01-04T18:55:01Z) - Recovering high-quality FODs from a reduced number of diffusion-weighted images using a model-driven deep learning architecture [0.0]
We propose a model-driven deep learning FOD reconstruction architecture.
It ensures intermediate and output FODs produced by the network are consistent with the input DWI signals.
Our results show that the model-based deep learning architecture achieves competitive performance compared to a state-of-the-art FOD super-resolution network, FOD-Net.
arXiv Detail & Related papers (2023-07-28T02:47:34Z) - CaloFlow: Fast and Accurate Generation of Calorimeter Showers with Normalizing Flows [0.0]
We introduce CaloFlow, a fast detector simulation framework based on normalizing flows.
For the first time, we demonstrate that normalizing flows can reproduce many-channel calorimeter showers with extremely high fidelity.
arXiv Detail & Related papers (2021-06-09T18:00:02Z) - Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and that accurate flow estimation can be achieved with only a fraction of its elements.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
arXiv Detail & Related papers (2021-04-05T21:44:00Z) - FlowReg: Fast Deformable Unsupervised Medical Image Registration using Optical Flow [0.09167082845109438]
FlowReg is a framework for unsupervised image registration for neuroimaging applications.
FlowReg is able to obtain high intensity and spatial similarity while maintaining the shape and structure of anatomy and pathology.
arXiv Detail & Related papers (2021-01-24T03:51:34Z) - Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows [24.734388664558708]
We propose DifferNet, which leverages the descriptiveness of features extracted by convolutional neural networks to estimate their density.
Based on these likelihoods, we develop a scoring function that indicates defects.
We demonstrate the superior performance over existing approaches on the challenging and newly proposed MVTec AD and Magnetic Tile Defects datasets.
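A minimal sketch of likelihood-based defect scoring in this spirit (a whitening map stands in for the trained flow; the feature dimensions, threshold quantile, and sample data are invented for illustration, not DifferNet's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fit a density model on features of defect-free samples, then score test
# samples by negative log-likelihood; high scores indicate defects.
normal_feats = rng.normal(0.0, 1.0, size=(1000, 4))  # stand-in for CNN features

# A per-dimension whitening affine map plays the role of the trained flow here.
mean = normal_feats.mean(axis=0)
std = normal_feats.std(axis=0)

def anomaly_score(x):
    z = (x - mean) / std
    # Negative log-likelihood under the standard-normal base, up to a constant
    # log-det term that is identical for every sample and so cancels in ranking.
    return 0.5 * np.sum(z ** 2, axis=-1)

# Set the defect threshold at the 99th percentile of defect-free scores.
threshold = np.quantile(anomaly_score(normal_feats), 0.99)

good = rng.normal(0.0, 1.0, size=(1, 4))
defect = rng.normal(5.0, 1.0, size=(1, 4))  # shifted features = anomalous
print(float(anomaly_score(good)[0]), float(anomaly_score(defect)[0]), float(threshold))
```

Thresholding on likelihood is what lets the approach run semi-supervised: only defect-free samples are needed to fit the density.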
arXiv Detail & Related papers (2020-08-28T10:49:28Z) - Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z) - Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
arXiv Detail & Related papers (2020-06-15T17:00:01Z) - Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
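The latent Gaussian-mixture classification step can be sketched as follows (an identity map stands in for the flow; the component means and test points are invented for illustration, not FlowGMM's trained model):

```python
import numpy as np

# In FlowGMM-style models, a flow maps inputs to a latent space where each
# class corresponds to one Gaussian mixture component; an identity "flow"
# keeps this example short.
means = np.array([[-3.0, 0.0], [3.0, 0.0]])  # one latent Gaussian per class

def class_log_likelihoods(z):
    # log N(z; mean_k, I) for each class k, with constants dropped
    # (they are the same for all k and do not affect the argmax).
    return -0.5 * np.sum((z[:, None, :] - means[None, :, :]) ** 2, axis=-1)

def predict(z):
    # Bayes rule with equal priors: pick the class of highest likelihood.
    return np.argmax(class_log_likelihoods(z), axis=-1)

# Labeled points contribute a supervised likelihood term and unlabeled points
# the marginal mixture likelihood; here we show only the classification step.
z = np.array([[-2.5, 0.4], [2.8, -0.2]])
print(predict(z))  # -> [0 1]
```

Because the same latent density handles both labeled and unlabeled data, classification and density estimation share one model.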
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.