HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color
Histograms
- URL: http://arxiv.org/abs/2011.11731v2
- Date: Sat, 27 Mar 2021 02:23:03 GMT
- Authors: Mahmoud Afifi, Marcus A. Brubaker, Michael S. Brown
- Abstract summary: HistoGAN is a color histogram-based method for controlling GAN-generated images' colors.
We show how to expand HistoGAN to recolor real images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While generative adversarial networks (GANs) can successfully produce
high-quality images, they can be challenging to control. Simplifying GAN-based
image generation is critical for their adoption in graphic design and artistic
work. This goal has led to significant interest in methods that can intuitively
control the appearance of images generated by GANs. In this paper, we present
HistoGAN, a color histogram-based method for controlling GAN-generated images'
colors. We focus on color histograms as they provide an intuitive way to
describe image color while remaining decoupled from domain-specific semantics.
Specifically, we introduce an effective modification of the recent StyleGAN
architecture to control the colors of GAN-generated images specified by a
target color histogram feature. We then describe how to expand HistoGAN to
recolor real images. For image recoloring, we jointly train an encoder network
along with HistoGAN. The recoloring model, ReHistoGAN, is an unsupervised
approach trained to encourage the network to keep the original image's content
while changing the colors based on the given target histogram. We show that
this histogram-based approach offers a better way to control GAN-generated and
real images' colors while producing more compelling results compared to
existing alternative strategies.
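The histogram feature at the heart of this approach can be illustrated with a minimal sketch. The function below computes a normalized 2D log-chroma histogram in NumPy; it is a simplified stand-in for the differentiable RGB-uv histogram block the paper describes, and the bin count, value range, and Hellinger distance shown here are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def rgb_uv_histogram(img, bins=64, eps=1e-6):
    """Normalized log-chroma (u, v) color histogram of an image.

    Pixels are projected into a log-chroma space, which decouples
    color from intensity, then binned into a 2D histogram.
    `img` is an HxWx3 float array with values in [0, 1].
    """
    pixels = img.reshape(-1, 3) + eps
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    # Log-chroma coordinates relative to the red channel.
    u = np.log(r / g)
    v = np.log(r / b)
    hist, _, _ = np.histogram2d(u, v, bins=bins, range=[[-3, 3], [-3, 3]])
    hist /= hist.sum() + eps  # normalize to a distribution
    return hist

# Usage: measure how far apart two images' color distributions are.
img_a = np.random.rand(32, 32, 3)
img_b = np.random.rand(32, 32, 3)
h_a, h_b = rgb_uv_histogram(img_a), rgb_uv_histogram(img_b)
# Hellinger distance, a common choice for comparing histograms
# (and usable as a color-matching loss); 0 means identical.
dist = np.sqrt(np.sum((np.sqrt(h_a) - np.sqrt(h_b)) ** 2)) / np.sqrt(2)
```

A distance of this kind is what lets a target histogram act as a training signal: the generator is penalized when its output's histogram drifts from the target.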
Related papers
- Transforming Color: A Novel Image Colorization Method
This paper introduces a novel method for image colorization that utilizes a color transformer and generative adversarial networks (GANs).
The proposed method integrates a transformer architecture to capture global information and a GAN framework to improve visual quality.
Experimental results show that the proposed network significantly outperforms other state-of-the-art colorization techniques.
arXiv Detail & Related papers (2024-10-07T07:23:42Z)
- Automatic Controllable Colorization via Imagination
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- SPDGAN: A Generative Adversarial Network based on SPD Manifold Learning for Automatic Image Colorization
We propose a fully automatic colorization approach based on Symmetric Positive Definite (SPD) manifold learning with a generative adversarial network (SPDGAN).
Our model sets up an adversarial game between a generator and two discriminators; residual connections help the generator produce colorized images without losing color information across layers.
arXiv Detail & Related papers (2023-12-21T00:52:01Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks
We propose PalGAN, a new GAN-based colorization approach that integrates palette estimation and chromatic attention.
PalGAN outperforms state-of-the-art methods in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving results.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- Towards Vivid and Diverse Image Colorization with Generative Color Prior
Recent deep-learning-based methods can automatically colorize images at low cost.
We aim to recover vivid colors by leveraging the rich and diverse color priors encapsulated in a pretrained generative adversarial network (GAN).
Thanks to the powerful generative color prior and careful design choices, our method produces vivid colors in a single forward pass.
arXiv Detail & Related papers (2021-08-19T17:49:21Z)
- SCGAN: Saliency Map-guided Colorization with Generative Adversarial Network
We propose a fully automatic Saliency Map-guided Colorization with Generative Adversarial Network (SCGAN) framework.
It jointly predicts the colorization and saliency map to minimize semantic confusion and color bleeding.
Experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
arXiv Detail & Related papers (2020-11-23T13:06:54Z)
- CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer
We propose a new formulation for the makeup style transfer task, with the objective to learn a color controllable makeup style synthesis.
We introduce CA-GAN, a generative model that learns to modify the color of specific objects in the image to an arbitrary target color.
We present for the first time a quantitative analysis of makeup style transfer and color control performance.
arXiv Detail & Related papers (2020-08-24T10:11:17Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
- Deep Line Art Video Colorization with a Few References
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small amount of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)
- Learning to Structure an Image with Few Colors
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
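The 1-bit color space mentioned in the ColorCNN summary can be made concrete with a baseline: quantizing an image to two colors with plain k-means over pixel values. This is only an illustrative NumPy sketch, not ColorCNN itself, which instead learns the quantization end-to-end from a classification loss.

```python
import numpy as np

def quantize_two_colors(img, iters=10, seed=0):
    """Quantize an image to a 1-bit (two-color) palette with k-means.

    `img` is an HxWx3 float array. Returns the quantized image and the
    per-pixel cluster labels (0 or 1).
    """
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(np.float64)
    # Initialize the two palette colors from random pixels.
    centers = pixels[rng.choice(len(pixels), 2, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest palette color.
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each palette color to the mean of its assigned pixels.
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return centers[labels].reshape(img.shape), labels

img = np.random.rand(16, 16, 3)
quantized, labels = quantize_two_colors(img)
```

Unlike this clustering baseline, which only minimizes color error, a learned quantizer can pick the two colors that best preserve recognizability for a downstream classifier.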