Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs
- URL: http://arxiv.org/abs/2011.14107v3
- Date: Wed, 23 Oct 2024 20:55:35 GMT
- Title: Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs
- Authors: Hui-Po Wang, Ning Yu, Mario Fritz
- Abstract summary: We show that state-of-the-art GAN models can be used for a range of applications beyond unconditional image generation.
We achieve this with an iterative scheme that also provides control over the image generation process.
- Score: 57.90008929377144
- Abstract: While Generative Adversarial Networks (GANs) show ever-increasing performance, with outputs becoming indistinguishable from natural images, this progress comes with high demands on data and computation. We show that state-of-the-art GAN models, as publicly released by researchers and industry, can be used for a range of applications beyond unconditional image generation. We achieve this with an iterative scheme that gains control over the image generation process despite the highly non-linear latent spaces of the latest GAN models. We demonstrate that this opens up the possibility to re-use state-of-the-art, difficult-to-train, pre-trained GANs with a high level of control even if only black-box access is granted. Our work also raises concerns and awareness that the use cases of a published GAN model may well reach beyond the creators' intention, which needs to be taken into account before a full public release. Code is available at https://github.com/hui-po-wang/hijackgan.
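As a rough, hypothetical illustration of what iterative black-box latent-space control can look like, the sketch below re-estimates an attribute direction at every step using zeroth-order probing. Here `generate` stands in for the GAN's black-box sampling API and `attribute_score` for an off-the-shelf attribute classifier; this is not the paper's actual estimator, only a minimal stand-in under those assumptions.

```python
import numpy as np

def estimate_direction(z, generate, attribute_score, eps=0.5, n_probes=16):
    """Zeroth-order estimate of an attribute-increasing direction at z,
    querying the black-box generator only through its sampling API."""
    base = attribute_score(generate(z))
    grad = np.zeros_like(z)
    for _ in range(n_probes):
        u = np.random.randn(*z.shape)
        u /= np.linalg.norm(u)
        grad += (attribute_score(generate(z + eps * u)) - base) * u
    return grad / (n_probes * eps)

def traverse(z0, generate, attribute_score, step=0.1, n_steps=20):
    """Iterative traversal: re-estimate the direction at every step instead of
    following one fixed linear direction through the non-linear latent space."""
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        d = estimate_direction(z, generate, attribute_score)
        z = z + step * d / (np.linalg.norm(d) + 1e-8)
    return z
```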
Related papers
- U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation [48.40120035775506]
Kolmogorov-Arnold Networks (KANs) reshape neural network learning via stacks of non-linear, learnable activation functions.
We investigate, modify, and re-design the established U-Net pipeline by integrating dedicated KAN layers on the tokenized intermediate representation, termed U-KAN.
We further delve into the potential of U-KAN as an alternative U-Net noise predictor in diffusion models, demonstrating its applicability in generating task-oriented model architectures.
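A minimal sketch of the KAN idea of learnable per-edge activation functions, using a fixed Gaussian basis for simplicity rather than the B-splines typically used; this is illustrative only and not U-KAN's actual layer.

```python
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """KAN-style layer: each input->output edge applies its own learnable 1-D
    function, here a weighted sum of fixed Gaussian bumps."""
    def __init__(self, in_dim, out_dim, n_basis=8, grid=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(grid[0], grid[1], n_basis))
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x):                                        # x: (batch, in_dim)
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)  # (batch, in_dim, n_basis)
        return torch.einsum("bik,oik->bo", phi, self.coef)       # sum the per-edge functions
```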
arXiv Detail & Related papers (2024-06-05T04:13:03Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
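A hedged sketch of the general plug-and-play guidance idea: differentiate a task-specific loss through the frozen, unconditionally trained denoiser and let the sampler mix the resulting gradient into its update. `denoiser` and `guidance_loss` are placeholders, and the paper's exact update rule may differ.

```python
import torch

def guidance_gradient(x_t, t, denoiser, guidance_loss):
    """Gradient of a task loss (masked L2 for inpainting, a colorization or
    text-matching score for semantic edits, ...) taken through the frozen
    unconditional denoiser; a sampler can mix a scaled version of this
    into its reverse-diffusion update."""
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)          # unconditional estimate of the clean image
    loss = guidance_loss(x0_pred)       # assumed to return a scalar
    return torch.autograd.grad(loss, x_t)[0]
```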
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- Spatial Steerability of GANs via Self-Supervision from Discriminator [123.27117057804732]
We propose a self-supervised approach to improve the spatial steerability of GANs without searching for steerable directions in the latent space.
Specifically, we design randomly sampled Gaussian heatmaps to be encoded into the intermediate layers of generative models as a spatial inductive bias.
During inference, users can interact with the spatial heatmaps in an intuitive manner, enabling them to edit the output image by adjusting the scene layout, moving, or removing objects.
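As a rough stand-in for the mechanism described above, the sketch below projects a user-editable Gaussian heatmap into an intermediate feature map; the actual method additionally trains this encoding with self-supervision from the discriminator, which is omitted here.

```python
import torch
import torch.nn as nn

def gaussian_heatmap(h, w, cy, cx, sigma=0.1):
    """A single user-placed Gaussian blob on an (h, w) grid, coordinates in [0, 1]."""
    ys = torch.linspace(0, 1, h).view(h, 1)
    xs = torch.linspace(0, 1, w).view(1, w)
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

class HeatmapModulation(nn.Module):
    """Project an editable spatial heatmap into an intermediate feature map."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, feat, heatmap):
        # feat: (B, C, H, W); heatmap: (B, 1, H, W) built from Gaussian blobs
        return feat + self.proj(heatmap)
```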
arXiv Detail & Related papers (2023-01-20T07:36:29Z)
- GLIGEN: Open-Set Grounded Text-to-Image Generation [97.72536364118024]
Grounded-Language-to-Image Generation is a novel approach that builds upon and extends the functionality of existing text-to-image diffusion models.
Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs.
GLIGEN's zero-shot performance on COCO and LVIS outperforms that of existing supervised layout-to-image baselines by a large margin.
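A hedged sketch of one way grounding inputs (caption phrases plus bounding boxes) can be injected into a frozen backbone: grounding tokens attend jointly with the visual tokens through an added attention layer whose output is scaled by a zero-initialised gate. The token construction and layer details here are assumptions, not GLIGEN's exact architecture.

```python
import torch
import torch.nn as nn

class GatedGroundingAttention(nn.Module):
    """Inject grounding tokens into a frozen backbone: visual and grounding
    tokens attend jointly, and the result is added back through a
    zero-initialised gate so training starts from the identity mapping."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, visual_tokens, grounding_tokens):
        x = torch.cat([visual_tokens, grounding_tokens], dim=1)
        out, _ = self.attn(x, x, x)
        out = out[:, : visual_tokens.size(1)]      # keep only the visual positions
        return visual_tokens + torch.tanh(self.gate) * out
```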
arXiv Detail & Related papers (2023-01-17T18:58:58Z)
- GIU-GANs: Global Information Utilization for Generative Adversarial Networks [3.3945834638760948]
In this paper, we propose a new GAN called Involution Generative Adversarial Networks (GIU-GANs).
GIU-GANs leverage a new module, the Global Information Utilization (GIU) module, which integrates Squeeze-and-Excitation Networks (SENet) and involution.
Batch Normalization (BN) inevitably ignores the representation differences among the noise vectors sampled by the generator, and thus degrades the generated image quality.
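For illustration, here is a standard Squeeze-and-Excitation block, the SENet half of the described GIU module; the involution half and the composition into the full module are omitted.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global average pooling followed by a learned
    per-channel gate that re-weights the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                     # squeeze
        w = self.fc(w).view(x.size(0), -1, 1, 1)   # excite
        return x * w
```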
arXiv Detail & Related papers (2022-01-25T17:17:15Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
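InvGAN itself learns an encoder for this embedding; purely to illustrate the goal, the sketch below instead inverts an image by directly optimising a latent code against a frozen generator, which is a generic inversion recipe rather than the paper's approach.

```python
import torch
import torch.nn.functional as F

def invert(generator, target, z_dim=512, steps=500, lr=0.05):
    """Optimisation-based inversion: find a latent whose generation
    reconstructs a given real image (shapes assumed to match)."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach()
```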
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- TinyGAN: Distilling BigGAN for Conditional Image Generation [2.8072597424460466]
BigGAN has significantly improved the quality of image generation on ImageNet, but it requires a huge model, making it hard to deploy on resource-constrained devices.
We propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process.
Given BigGAN as the teacher network, we manage to train a much smaller student network to mimic its functionality, achieving competitive performance on Inception and FID scores with the generator having $16\times$ fewer parameters.
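A minimal sketch of one black-box distillation step under simplifying assumptions: the teacher is queried only through a sampling function `teacher_sample`, and the student mimics it with a plain pixel loss; TinyGAN's full objective also uses adversarial and feature-level terms, which are omitted here.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher_sample, optimizer, z_dim=128, n_classes=1000, batch=16):
    """One black-box distillation step: the teacher is queried only for
    (z, class) -> image pairs, and the student learns to reproduce them."""
    z = torch.randn(batch, z_dim)
    y = torch.randint(0, n_classes, (batch,))
    with torch.no_grad():
        target = teacher_sample(z, y)        # black-box query to the large teacher
    loss = F.l1_loss(student(z, y), target)  # pixel-level mimic loss only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```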
arXiv Detail & Related papers (2020-09-29T07:33:49Z)
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
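As a hedged sketch of the transfer-learning recipe, the helper below freezes a pretrained generator except for parameters under hypothetically named prefixes and returns an optimiser over just that small subset; the paper's actual fine-tuning strategy may differ in detail.

```python
import torch

def prepare_finetune(generator, trainable_prefixes=("output_layer",), lr=1e-4):
    """Freeze a pretrained generator except for parameters whose names start
    with one of the given (hypothetical) prefixes, and return an optimiser
    over just that small trainable subset."""
    trainable = []
    for name, p in generator.named_parameters():
        p.requires_grad = any(name.startswith(pref) for pref in trainable_prefixes)
        if p.requires_grad:
            trainable.append(p)
    return torch.optim.Adam(trainable, lr=lr)
```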
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.