Robustness-Guided Image Synthesis for Data-Free Quantization
- URL: http://arxiv.org/abs/2310.03661v3
- Date: Wed, 21 Feb 2024 04:09:38 GMT
- Title: Robustness-Guided Image Synthesis for Data-Free Quantization
- Authors: Jianhong Bai, Yuchen Yang, Huanpeng Chu, Hualiang Wang, Zuozhu Liu,
Ruizhe Chen, Xiaoxuan He, Lianrui Mu, Chengfei Cai, Haoji Hu
- Abstract summary: We propose Robustness-Guided Image Synthesis (RIS), a simple but effective method to enrich the semantics of synthetic images and improve image diversity.
RIS achieves state-of-the-art performance across various data-free quantization settings and can be extended to other data-free compression tasks.
- Score: 15.91924736452861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantization has emerged as a promising direction for model compression.
Recently, data-free quantization has been widely studied as a promising method
to avoid privacy concerns, which synthesizes images as an alternative to real
training data. Existing methods use classification loss to ensure the
reliability of the synthesized images. Unfortunately, even if these images are
well-classified by the pre-trained model, they still suffer from low semantics
and homogenization issues. Intuitively, these low-semantic images are sensitive
to perturbations, and the pre-trained model tends to produce inconsistent
outputs when the generator synthesizes an image with poor semantics. To this
end, we
propose Robustness-Guided Image Synthesis (RIS), a simple but effective method
to enrich the semantics of synthetic images and improve image diversity,
further boosting the performance of downstream data-free compression tasks.
Concretely, we first introduce perturbations on the input and the model
weights, then define inconsistency metrics at the feature and prediction
levels before and after the perturbations. Based on the inconsistency at
these two levels, we design a robustness optimization objective to enhance
the semantics of synthetic images. Moreover, we make our approach
diversity-aware by forcing the generator to synthesize images with small
correlations in the label space. With RIS, we achieve state-of-the-art
performance across various data-free quantization settings, and the method
can be extended to other data-free compression tasks.
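For concreteness, the PyTorch-style sketch below illustrates one way such a robustness-guided objective could be written. It is an interpretation of the description above, not the authors' implementation: the `return_features=True` flag on the pre-trained classifier, the Gaussian perturbation magnitudes, the MSE/KL distance choices, and the loss weights are all illustrative assumptions.

```python
import copy

import torch
import torch.nn.functional as F


def robustness_guided_loss(model, images, noise_std=0.05, weight_std=0.01,
                           w_feat=1.0, w_pred=1.0, w_div=0.1):
    """Sketch of a robustness-guided objective for generator training.

    Penalizes feature- and prediction-level inconsistency of a frozen,
    pre-trained classifier under input and weight perturbations, plus a
    label-space diversity term. All hyperparameters are assumptions.
    """
    # Clean pass. We assume (hypothetically) that the classifier can also
    # return an intermediate feature map via `return_features=True`.
    logits_clean, feats_clean = model(images, return_features=True)

    # (1) Input perturbation: additive Gaussian noise on the synthetic batch.
    noisy = images + noise_std * torch.randn_like(images)
    logits_in, feats_in = model(noisy, return_features=True)

    # (2) Weight perturbation: Gaussian noise on a throwaway copy of the model.
    pert_model = copy.deepcopy(model)
    with torch.no_grad():
        for p in pert_model.parameters():
            p.add_(weight_std * torch.randn_like(p))
    logits_w, feats_w = pert_model(images, return_features=True)

    # Feature-level inconsistency: distance between clean and perturbed features.
    l_feat = F.mse_loss(feats_in, feats_clean) + F.mse_loss(feats_w, feats_clean)

    # Prediction-level inconsistency: KL divergence between output distributions.
    log_p_clean = F.log_softmax(logits_clean, dim=1)
    l_pred = (F.kl_div(F.log_softmax(logits_in, dim=1), log_p_clean,
                       reduction="batchmean", log_target=True)
              + F.kl_div(F.log_softmax(logits_w, dim=1), log_p_clean,
                         reduction="batchmean", log_target=True))

    # Diversity: penalize pairwise correlation of predictions in label space so
    # that images in a batch do not collapse onto the same classes.
    probs = F.softmax(logits_clean, dim=1)          # (B, C)
    corr = probs @ probs.t()                        # (B, B) label-space similarity
    off_diag = corr - torch.diag(torch.diag(corr))  # zero out the diagonal
    l_div = off_diag.abs().mean()

    return w_feat * l_feat + w_pred * l_pred + w_div * l_div
```

In a data-free quantization pipeline, the generator would minimize this loss alongside the usual objectives (e.g., the classification loss mentioned above), so that the frozen full-precision model responds consistently to the synthesized images under perturbation while the batch stays decorrelated in label space.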
Related papers
- FairDiff: Fair Segmentation with Point-Image Diffusion [15.490776421216689]
Our research adopts a data-driven strategy, enhancing data balance by integrating synthetic images.
We formulate the problem in a joint optimization manner, in which three networks are optimized towards the goal of empirical risk and fairness.
Our model achieves superior fairness segmentation performance compared to the state-of-the-art fairness learning models.
arXiv Detail & Related papers (2024-07-08T17:59:58Z)
- TSynD: Targeted Synthetic Data Generation for Enhanced Medical Image Classification [0.011037620731410175]
This work aims to guide the generative model to synthesize data with high uncertainty.
We alter the feature space of the autoencoder through an optimization process.
We improve the robustness against test-time data augmentations and adversarial attacks on several classification tasks.
arXiv Detail & Related papers (2024-06-25T11:38:46Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- Unsupervised Synthetic Image Refinement via Contrastive Learning and Consistent Semantic-Structural Constraints [32.07631215590755]
Contrastive learning (CL) has been successfully used to pull correlated patches together and push uncorrelated ones apart.
In this work, we exploit semantic and structural consistency between synthetic and refined images and adopt CL to reduce the semantic distortion.
arXiv Detail & Related papers (2023-04-25T05:55:28Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto approaches based on Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Retrieval-based Spatially Adaptive Normalization for Semantic Image Synthesis [68.1281982092765]
We propose a novel normalization module, termed REtrieval-based Spatially AdaptIve normaLization (RESAIL).
RESAIL provides pixel-level, fine-grained guidance to the normalization architecture.
Experiments on several challenging datasets show that our RESAIL performs favorably against state-of-the-art methods in terms of quantitative metrics, visual quality, and subjective evaluation.
arXiv Detail & Related papers (2022-04-06T14:21:39Z)