Metrics to Quantify Global Consistency in Synthetic Medical Images
- URL: http://arxiv.org/abs/2308.00402v1
- Date: Tue, 1 Aug 2023 09:29:39 GMT
- Title: Metrics to Quantify Global Consistency in Synthetic Medical Images
- Authors: Daniel Scholz, Benedikt Wiestler, Daniel Rueckert, Martin J. Menten
- Abstract summary: We introduce two metrics that can measure the global consistency of synthetic images on a per-image basis.
We quantify global consistency by predicting and comparing explicit attributes of images on patches using supervised-trained neural networks.
Our results demonstrate that predicting explicit attributes of synthetic images on patches can distinguish globally consistent from inconsistent images.
- Score: 6.863780677964219
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Image synthesis is increasingly being adopted in medical image processing,
for example for data augmentation or inter-modality image translation. In these
critical applications, the generated images must fulfill a high standard of
biological correctness. A particular requirement for these images is global
consistency, i.e., an image being overall coherent and structured so that all
parts of the image fit together in a realistic and meaningful way. Yet,
established image quality metrics do not explicitly quantify this property of
synthetic images. In this work, we introduce two metrics that can measure the
global consistency of synthetic images on a per-image basis. To measure the
global consistency, we presume that a realistic image exhibits consistent
properties, e.g., a person's body fat in a whole-body MRI, throughout the
depicted object or scene. Hence, we quantify global consistency by predicting
and comparing explicit attributes of images on patches using supervised-trained
neural networks. Next, we adapt this strategy to an unlabeled setting by
measuring the similarity of implicit image features predicted by a
self-supervised network. Our results demonstrate that predicting
explicit attributes of synthetic images on patches can distinguish globally
consistent from inconsistent images. Implicit representations of images are
less sensitive for assessing global consistency but remain serviceable when
labeled data is unavailable. Compared to established metrics, such as the FID,
our method can explicitly measure global consistency on a per-image basis,
enabling a dedicated analysis of the biological plausibility of single
synthetic images.
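The patch-based strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attribute predictor here is a stand-in (mean patch intensity) for the supervised network mentioned in the abstract, and the per-image consistency score is taken as the standard deviation of the per-patch predictions, so that low values indicate a globally consistent image.

```python
import numpy as np

def patch_consistency_score(image, predict_attribute, patch_size=32):
    """Split an image into non-overlapping patches, predict an explicit
    attribute (e.g. body-fat fraction) for each patch, and return the
    standard deviation of the predictions as an inconsistency score."""
    h, w = image.shape[:2]
    preds = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            preds.append(predict_attribute(patch))
    return float(np.std(preds))

# Stand-in predictor: mean intensity as a proxy attribute.
mean_intensity = lambda p: float(p.mean())

consistent = np.full((64, 64), 0.5)            # uniform "attribute" everywhere
inconsistent = np.vstack([np.zeros((32, 64)),  # top and bottom halves disagree
                          np.ones((32, 64))])

assert patch_consistency_score(consistent, mean_intensity) < \
       patch_consistency_score(inconsistent, mean_intensity)
```

The unlabeled variant from the abstract is analogous: replace the attribute predictor with a self-supervised feature extractor and score consistency via the similarity of the per-patch feature vectors.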
Related papers
- Global-Local Image Perceptual Score (GLIPS): Evaluating Photorealistic Quality of AI-Generated Images [0.7499722271664147]
The Global-Local Image Perceptual Score (GLIPS) is an image metric designed to assess the photorealistic image quality of AI-generated images.
Comprehensive tests across various generative models demonstrate that GLIPS consistently outperforms existing metrics like FID, SSIM, and MS-SSIM in terms of correlation with human scores.
arXiv Detail & Related papers (2024-05-15T15:19:23Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - You Don't Have to Be Perfect to Be Amazing: Unveil the Utility of
Synthetic Images [2.0790547421662064]
We have established a comprehensive set of evaluators for synthetic images, including fidelity, variety, privacy, and utility.
By analyzing more than 100k chest X-ray images and their synthetic copies, we have demonstrated that there is an inevitable trade-off between synthetic image fidelity, variety, and privacy.
arXiv Detail & Related papers (2023-05-25T13:47:04Z) - Unsupervised Synthetic Image Refinement via Contrastive Learning and
Consistent Semantic-Structural Constraints [32.07631215590755]
Contrastive learning (CL) has been successfully used to pull correlated patches together and push uncorrelated ones apart.
In this work, we exploit semantic and structural consistency between synthetic and refined images and adopt CL to reduce the semantic distortion.
arXiv Detail & Related papers (2023-04-25T05:55:28Z) - SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ
Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
The four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
arXiv Detail & Related papers (2022-09-02T16:45:46Z) - Evaluating the Quality and Diversity of DCGAN-based Generatively
Synthesized Diabetic Retinopathy Imagery [0.07499722271664144]
Publicly available diabetic retinopathy (DR) datasets are imbalanced, containing limited numbers of images with DR.
The imbalance can be addressed using Generative Adversarial Networks (GANs) to augment the datasets with synthetic images.
To evaluate the quality and diversity of synthetic images, several evaluation metrics, such as Multi-Scale Structural Similarity Index (MS-SSIM), Cosine Distance (CD), and Fréchet Inception Distance (FID) are used.
arXiv Detail & Related papers (2022-08-10T23:50:01Z) - Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z) - Image Synthesis via Semantic Composition [74.68191130898805]
We present a novel approach to synthesize realistic images based on their semantic layouts.
It hypothesizes that objects with similar appearance share similar representations.
Our method establishes dependencies between regions according to their appearance correlation, yielding both spatially variant and associated representations.
arXiv Detail & Related papers (2021-09-15T02:26:07Z) - Common Limitations of Image Processing Metrics: A Picture Story [58.83274952067888]
This document focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks.
The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.
arXiv Detail & Related papers (2021-04-12T17:03:42Z) - Synthetic Sample Selection via Reinforcement Learning [8.099072894865802]
We propose a reinforcement learning based synthetic sample selection method that learns to choose synthetic images containing reliable and informative features.
In experiments on a cervical dataset and a lymph node dataset, the image classification performance is improved by 8.1% and 2.3%, respectively.
arXiv Detail & Related papers (2020-08-26T01:34:19Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
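Several of the listed papers evaluate synthetic images by comparing feature vectors with distance metrics such as Cosine Distance (CD). As a minimal reference sketch (a generic NumPy implementation, not any specific paper's code):

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance between two feature vectors: 0 for identical
    directions, up to 2 for opposite directions."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Identical directions -> distance 0; orthogonal directions -> distance 1.
assert abs(cosine_distance([1.0, 0.0], [2.0, 0.0])) < 1e-9
assert abs(cosine_distance([1.0, 0.0], [0.0, 1.0]) - 1.0) < 1e-9
```

In the evaluation setting above, `u` and `v` would be feature embeddings of a real and a synthetic image; FID differs in that it compares the distributions of such embeddings over whole image sets rather than individual pairs.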
This list is automatically generated from the titles and abstracts of the papers in this site.