DiffGAR: Model-Agnostic Restoration from Generative Artifacts Using
Image-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2210.08573v1
- Date: Sun, 16 Oct 2022 16:08:47 GMT
- Authors: Yueqin Yin, Lianghua Huang, Yu Liu, Kaiqi Huang
- Abstract summary: This work aims to develop a plugin post-processing module for diverse generative models.
Unlike traditional degradation patterns, generative artifacts are non-linear and the transformation function is highly complex.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent generative models show impressive results in photo-realistic image
generation. However, artifacts inevitably appear in the generated
results, leading to downgraded user experience and reduced performance in
downstream tasks. This work aims to develop a plugin post-processing module for
diverse generative models, which can faithfully restore images from diverse
generative artifacts. This is challenging because: (1) Unlike traditional
degradation patterns, generative artifacts are non-linear and the
transformation function is highly complex. (2) There are no readily available
artifact-image pairs. (3) Different from model-specific anti-artifact methods,
a model-agnostic framework views the generator as a black-box machine and has
no access to the architecture details. In this work, we first design a group of
mechanisms to simulate generative artifacts of popular generators (i.e., GANs,
autoregressive models, and diffusion models), given real images. Second, we
implement the model-agnostic anti-artifact framework as an image-to-image
diffusion model, due to its advantages in generation quality and capacity.
Finally, we design a conditioning scheme for the diffusion model to enable both
blind and non-blind image restoration. A guidance parameter is also introduced
to allow for a trade-off between restoration accuracy and image quality.
Extensive experiments show that our method significantly outperforms previous
approaches on the proposed datasets and real-world artifact images.
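The abstract leaves the exact conditioning and guidance scheme to the paper itself, but the described accuracy/quality trade-off matches the familiar classifier-free-guidance pattern: blend a conditional and an unconditional noise estimate with a scalar weight. The PyTorch sketch below illustrates that pattern only; the `denoiser` interface, the DDIM-style update, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a guidance parameter trading off restoration fidelity
# against image quality in an image-to-image diffusion model.
# NOT the paper's released code: `denoiser`, its conditioning interface,
# and the toy schedule are assumptions in the classifier-free-guidance style.
import torch

def guided_denoise_step(denoiser, x_t, t, artifact_img, guidance_w,
                        alphas_cumprod):
    """One reverse-diffusion step with a guidance weight.

    guidance_w = 0.0 -> unconditional (learned clean-image prior dominates)
    guidance_w = 1.0 -> fully conditioned on the artifact image
    guidance_w > 1.0 -> over-emphasizes fidelity to the conditioning input
    """
    # Conditional prediction: noise estimate given the degraded input.
    eps_cond = denoiser(x_t, t, cond=artifact_img)
    # Unconditional prediction: conditioning dropped (e.g., zeroed out).
    eps_uncond = denoiser(x_t, t, cond=None)
    # Interpolate/extrapolate between the two noise estimates.
    eps = eps_uncond + guidance_w * (eps_cond - eps_uncond)

    # Standard deterministic (DDIM-style, eta = 0) update using the
    # blended noise estimate.
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    x0_pred = (x_t - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)
    x0_pred = x0_pred.clamp(-1, 1)
    x_prev = torch.sqrt(a_prev) * x0_pred + torch.sqrt(1 - a_prev) * eps
    return x_prev

if __name__ == "__main__":
    # Smoke test with a dummy denoiser that ignores its conditioning.
    dummy = lambda x, t, cond=None: torch.zeros_like(x)
    x = torch.randn(1, 3, 64, 64)
    acp = torch.linspace(0.9999, 0.01, 1000)  # toy cumulative-alpha schedule
    out = guided_denoise_step(dummy, x, 500, artifact_img=None,
                              guidance_w=1.5, alphas_cumprod=acp)
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Under this reading, sweeping `guidance_w` upward pulls samples toward the conditioning input (restoration accuracy) at the cost of the unconditional prior's realism, which is the trade-off the abstract describes.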
Related papers
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating (2025-01-21)
  We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts.
  We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process.
  The learned artifact detector is then involved in the second stage to tune the diffusion model by assigning a per-pixel confidence map to each image.
- Refine-by-Align: Reference-Guided Artifacts Refinement through Semantic Alignment (2024-11-30)
  We present Refine-by-Align, a first-of-its-kind model that employs a diffusion-based framework to refine artifacts through reference-guided semantic alignment.
  We show that our pipeline greatly pushes the boundary of fine details in image synthesis models.
- How to Trace Latent Generative Model Generated Images without Artificial Watermark? (2024-05-22)
  Concerns have arisen regarding potential misuse of images generated by latent generative models.
  We propose a latent-inversion-based method called LatentTracer to trace the generated images of an inspected model.
  Our experiments show that the method can distinguish images generated by the inspected model from other images with high accuracy and efficiency.
- Active Generation for Image Classification (2024-03-11)
  We propose to address the efficiency of image generation by focusing on the specific needs and characteristics of the model being trained.
  With a central tenet of active learning, our method, named ActGen, takes a training-aware approach to image generation.
- Class-Prototype Conditional Diffusion Model with Gradient Projection for Continual Learning (2023-12-10)
  Mitigating catastrophic forgetting is a key hurdle in continual learning, and a major issue is the deterioration in the quality of generated data compared to the originals.
  We propose a generative-replay-based approach for continual learning that enhances the image quality of the generator.
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis (2023-09-30)
  Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
  We present experiments using steered diffusion on several tasks, including inpainting, colorization, text-guided semantic editing, and image super-resolution.
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey (2023-08-18)
  We present a comprehensive review of recent diffusion-model-based methods for image restoration (IR).
  We classify and emphasize the innovative designs that use diffusion models for both IR and blind/real-world IR.
  We propose five potential and challenging directions for future research on diffusion-model-based IR.
- Alteration-free and Model-agnostic Origin Attribution of Generated Images (2023-05-29)
  Concerns have emerged regarding potential misuse of image generation models.
  It is necessary to analyze the origin of an image by inferring whether it was generated by a particular model.
- InvGAN: Invertible GANs (2021-12-08)
  InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
  This allows us to perform image inpainting, merging, and online data augmentation.