Decoupling Shape and Density for Liver Lesion Synthesis Using
Conditional Generative Adversarial Networks
- URL: http://arxiv.org/abs/2106.00629v1
- Date: Tue, 1 Jun 2021 16:45:19 GMT
- Title: Decoupling Shape and Density for Liver Lesion Synthesis Using
Conditional Generative Adversarial Networks
- Authors: Dario Augusto Borges Oliveira
- Abstract summary: The quality and diversity of synthesized data are highly dependent on the annotated data used to train the models.
This paper presents a method for decoupling shape and density in liver lesion synthesis, creating a framework that allows the synthesis to be driven straightforwardly.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lesion synthesis has received much attention with the rise of efficient
generative models for augmenting training data, drawing lesion evolution
scenarios, or aiding expert training. The quality and diversity of synthesized
data are highly dependent on the annotated data used to train the models, which
often struggle to produce samples that are realistic yet substantially different
from the training examples. This adds an inherent bias to lesion segmentation
algorithms and limits the efficient synthesis of lesion evolution scenarios. This
paper presents a method for decoupling shape and density in liver lesion
synthesis, creating a framework in which the synthesis can be driven
straightforwardly. We offer qualitative results showing that the synthesis is
controlled by modifying shape and density individually, and quantitative results
demonstrating that embedding density information in the generator model improves
lesion segmentation performance compared to using shape alone.
Related papers
- CAFusion: Controllable Anatomical Synthesis of Perirectal Lymph Nodes via SDF-guided Diffusion [8.311453061101899]
We introduce CAFusion, a novel approach for synthesizing perirectal lymph nodes.
By leveraging Signed Distance Functions (SDF), CAFusion generates highly realistic 3D anatomical structures.
Experimental results demonstrate that our synthetic data substantially improve segmentation performance.
arXiv Detail & Related papers (2025-03-10T04:59:54Z)
- Latent Drifting in Diffusion Models for Counterfactual Medical Image Synthesis [55.959002385347645]
Latent Drifting enables diffusion models to be conditioned for medical images fitted for the complex task of counterfactual image generation.
We evaluate our method on three public longitudinal benchmark datasets of brain MRI and chest X-rays for counterfactual image generation.
arXiv Detail & Related papers (2024-12-30T01:59:34Z)
- Synthetic Image Learning: Preserving Performance and Preventing Membership Inference Attacks [5.0243930429558885]
This paper introduces Knowledge Recycling (KR), a pipeline designed to optimise the generation and use of synthetic data for training downstream classifiers.
At the heart of this pipeline is Generative Knowledge Distillation (GKD), the proposed technique that significantly improves the quality and usefulness of the information.
The results show a significant reduction in the performance gap between models trained on real and synthetic data, with models based on synthetic data outperforming those trained on real data in some cases.
arXiv Detail & Related papers (2024-07-22T10:31:07Z)
- TSynD: Targeted Synthetic Data Generation for Enhanced Medical Image Classification [0.011037620731410175]
This work aims to guide the generative model to synthesize data with high uncertainty.
We alter the feature space of the autoencoder through an optimization process.
We improve the robustness against test-time data augmentations and adversarial attacks on several classification tasks.
arXiv Detail & Related papers (2024-06-25T11:38:46Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Derm-T2IM: Harnessing Synthetic Skin Lesion Data via Stable Diffusion Models for Enhanced Skin Disease Classification using ViT and CNN [1.0499611180329804]
We aim to incorporate enhanced data transformation techniques by extending the recent success of few-shot learning.
We investigate the impact of incorporating newly generated synthetic data into the training pipeline of state-of-art machine learning models.
arXiv Detail & Related papers (2024-01-10T13:46:03Z)
- Improving Adversarial Robustness by Contrastive Guided Diffusion Process [19.972628281993487]
We propose Contrastive-Guided Diffusion Process (Contrastive-DP) to guide the diffusion model in data generation.
We show that enhancing the distinguishability among the generated data is critical for improving adversarial robustness.
arXiv Detail & Related papers (2022-10-18T07:20:53Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- A Scaling Law for Synthetic-to-Real Transfer: A Measure of Pre-Training [52.93808218720784]
Synthetic-to-real transfer learning is a framework in which we pre-train models with synthetically generated images and ground-truth annotations for real tasks.
Although synthetic images overcome the data scarcity issue, it remains unclear how the fine-tuning performance scales with pre-trained models.
We observe a simple and general scaling law that consistently describes learning curves in various tasks, models, and complexities of synthesized pre-training data.
arXiv Detail & Related papers (2021-08-25T02:29:28Z)
- METGAN: Generative Tumour Inpainting and Modality Synthesis in Light Sheet Microscopy [4.872960046536882]
We introduce a novel generative method which leverages real anatomical information to generate realistic image-label pairs of tumours.
We construct a dual-pathway generator, for the anatomical image and label, trained in a cycle-consistent setup, constrained by an independent, pretrained segmentor.
The generated images yield significant quantitative improvement compared to existing methods.
arXiv Detail & Related papers (2021-04-22T11:18:17Z)
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns disentangled representation using VAE-based models, and distills the learned representation with an additional nuisance variable to the separate GAN-based generator for high-fidelity synthesis.
Despite the simplicity, we show that the proposed method is highly effective, achieving comparable image generation quality to the state-of-the-art methods using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
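For the ID-GAN entry above, the distillation idea can be sketched as a VAE encoder producing a disentangled code that is then passed, together with an extra nuisance noise vector, to a separate GAN generator. The module names, layer sizes, and MLP layout below are illustrative assumptions, not the paper's actual implementation.

```python
# Rough sketch of distilling a VAE's disentangled code into a separate
# GAN generator (illustrative assumptions throughout).
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

class DistilledGenerator(nn.Module):
    """GAN generator that consumes the VAE code plus a nuisance noise vector."""
    def __init__(self, latent_dim=10, nuisance_dim=16, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + nuisance_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, nuisance):
        return self.net(torch.cat([z, nuisance], dim=1))

if __name__ == "__main__":
    enc, gen = VAEEncoder(), DistilledGenerator()
    x = torch.rand(4, 784)
    z, _, _ = enc(x)
    fake = gen(z.detach(), torch.randn(4, 16))  # VAE code is frozen for the generator
    print(fake.shape)
```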
This list is automatically generated from the titles and abstracts of the papers on this site.