RepMix: Representation Mixing for Robust Attribution of Synthesized
Images
- URL: http://arxiv.org/abs/2207.02063v1
- Date: Tue, 5 Jul 2022 14:14:06 GMT
- Title: RepMix: Representation Mixing for Robust Attribution of Synthesized
Images
- Authors: Tu Bui, Ning Yu and John Collomosse
- Abstract summary: We present a solution capable of matching images invariant to their semantic content.
We then propose RepMix, our GAN fingerprinting technique based on representation mixing and a novel loss.
We show our approach improves significantly over existing GAN fingerprinting works on both semantic generalization and robustness.
- Score: 15.698564265127432
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rapid advances in Generative Adversarial Networks (GANs) raise new challenges
for image attribution: detecting whether an image is synthetic and, if so,
determining which GAN architecture created it. Uniquely, we present a solution
to this task capable of 1) matching images invariant to their semantic content
and 2) remaining robust to benign transformations (changes in quality, resolution, shape,
etc.) commonly encountered as images are re-shared online. In order to
formalize our research, a challenging benchmark, Attribution88, is collected
for robust and practical image attribution. We then propose RepMix, our GAN
fingerprinting technique based on representation mixing and a novel loss. We
validate its capability of tracing the provenance of GAN-generated images
invariant to the semantic content of the image and also robust to
perturbations. We show our approach improves significantly over existing GAN
fingerprinting works on both semantic generalization and robustness. Data and
code are available at https://github.com/TuBui/image_attribution.
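At its core, representation mixing applies a mixup-style interpolation to intermediate network features rather than to pixels, so the attribution head is pushed to rely on GAN fingerprints instead of semantic content. The following is a minimal sketch of this idea; the ResNet-50 backbone, layer split, Beta-sampled mixing ratio, and mixed-label cross-entropy are illustrative assumptions, not the authors' released implementation (see the repository above for that).

```python
# Minimal sketch of feature-level mixup for GAN attribution, in the spirit
# of RepMix. Backbone choice, layer split, and loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class RepMixClassifier(nn.Module):
    def __init__(self, num_sources: int):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Early layers extract per-image features; later layers classify
        # the mixed representation into one of the candidate GAN sources.
        self.encoder = nn.Sequential(*list(resnet.children())[:-3])
        self.head = nn.Sequential(
            *list(resnet.children())[-3:-1], nn.Flatten(),
            nn.Linear(resnet.fc.in_features, num_sources)
        )

    def forward(self, x1, x2, lam):
        # Mix intermediate representations of two images instead of pixels.
        feats = lam * self.encoder(x1) + (1.0 - lam) * self.encoder(x2)
        return self.head(feats)


def repmix_step(model, x1, y1, x2, y2, alpha=1.0):
    # Sample the mixing ratio from a Beta distribution, as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    logits = model(x1, x2, lam)
    # Mixed-label cross-entropy: the prediction should reflect both sources.
    return lam * F.cross_entropy(logits, y1) + (1.0 - lam) * F.cross_entropy(logits, y2)
```

Mixing in feature space rather than pixel space is what lets two images with very different semantic content be combined without producing unnatural pixel blends, which is the property the abstract refers to as semantic invariance.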
Related papers
- Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z)
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that generates highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution [16.815468458589635]
We present a semantics-aware approach to better preserve the semantic fidelity of generative real-world image super-resolution.
First, we train a degradation-aware prompt extractor, which can generate accurate soft and hard semantic prompts even under strong degradation.
The experiments show that our method can reproduce more realistic image details and better preserve the semantics.
arXiv Detail & Related papers (2023-11-27T18:11:19Z)
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and a domain-regularized optimizer to regularize the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z)
- Towards Better Text-Image Consistency in Text-to-Image Generation [15.735515302139335]
We develop a novel CLIP-based metric termed Semantic Similarity Distance (SSD).
We further design the Parallel Deep Fusion Generative Adversarial Networks (PDF-GAN), which can fuse semantic information at different granularities.
Our PDF-GAN can lead to significantly better text-image consistency while maintaining decent image quality on the CUB and COCO datasets.
arXiv Detail & Related papers (2022-10-27T07:47:47Z)
- High-Fidelity Image Inpainting with GAN Inversion [23.49170140410603]
In this paper, we propose a novel GAN inversion model for image inpainting, dubbed InvertFill.
Within the encoder, the pre-modulation network leverages multi-scale structures to encode more discriminative semantics into style vectors.
To reconstruct faithful and photorealistic images, a simple yet effective Soft-update Mean Latent module is designed to capture more diverse in-domain patterns that synthesize high-fidelity textures for large corruptions.
arXiv Detail & Related papers (2022-08-25T03:39:24Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Linear Semantics in Generative Adversarial Networks [26.123252503846942]
We aim to better understand the semantic representation of GANs, and enable semantic control in GAN's generation process.
We find that a well-trained GAN encodes image semantics in its internal feature maps in a surprisingly simple way.
We propose two few-shot image editing approaches, namely Semantic-Conditional Sampling and Semantic Image Editing.
arXiv Detail & Related papers (2021-04-01T14:18:48Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative adversarial network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- In-Domain GAN Inversion for Real Image Editing [56.924323432048304]
A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code.
Existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space.
We propose an in-domain GAN inversion approach, which faithfully reconstructs the input image and ensures the inverted code to be semantically meaningful for editing.
arXiv Detail & Related papers (2020-03-31T18:20:18Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.