Painting Style-Aware Manga Colorization Based on Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2107.07943v1
- Date: Fri, 16 Jul 2021 15:00:28 GMT
- Title: Painting Style-Aware Manga Colorization Based on Generative Adversarial
Networks
- Authors: Yugo Shimizu, Ryosuke Furuta, Delong Ouyang, Yukinobu Taniguchi, Ryota
Hinami, Shonosuke Ishiwatari
- Abstract summary: We propose a semi-automatic colorization method based on generative adversarial networks (GAN).
The proposed method takes a screen tone image and a flat colored image as an input pair and outputs a colorized image.
Experiments show that the proposed method achieves better performance than the existing alternatives.
- Score: 9.495186818333815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Japanese comics (called manga) are traditionally created in monochrome
format. In recent years, in addition to monochrome comics, full color comics, a
more attractive medium, have appeared. Unfortunately, color comics require
manual colorization, which incurs high labor costs. Although automatic
colorization methods have been recently proposed, most of them are designed for
illustrations, not for comics. Unlike illustrations, comics are composed of
many consecutive images, so the painting style must be consistent. To realize
consistent colorization, we propose here a semi-automatic colorization method
based on generative adversarial networks (GAN); the method learns the painting
style of a specific comic from a small amount of training data. The proposed
method takes a screen tone image and a flat colored image as an input pair and
outputs a colorized image. Experiments show that the proposed method
achieves better performance than the existing alternatives.
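
To make the input/output contract concrete, here is a minimal PyTorch sketch of a pix2pix-style generator that concatenates the screen tone image (grayscale) with the flat colored image (RGB) and predicts the colorized page. The depth, channel widths, and normalization choices are illustrative assumptions, not the architecture published in the paper.

```python
# Minimal sketch of the paper's input/output contract: a generator that takes
# a screen tone image (1-channel) concatenated with a flat colored image
# (3-channel) and produces a colorized image. Layer sizes are assumptions.
import torch
import torch.nn as nn

class ColorizationGenerator(nn.Module):
    def __init__(self, in_channels: int = 4, out_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.InstanceNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.InstanceNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized RGB targets
        )

    def forward(self, screentone: torch.Tensor, flat_color: torch.Tensor) -> torch.Tensor:
        # Concatenate the input pair along channels: (B,1,H,W) + (B,3,H,W) -> (B,4,H,W)
        x = torch.cat([screentone, flat_color], dim=1)
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    g = ColorizationGenerator()
    tone = torch.randn(1, 1, 256, 256)  # screen tone (monochrome) page
    flat = torch.randn(1, 3, 256, 256)  # roughly flat-colored hint image
    print(g(tone, flat).shape)          # torch.Size([1, 3, 256, 256])
```

In a full GAN setup this generator would be trained adversarially against a discriminator on real colorized pages; the sketch shows only the forward pass.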
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- inkn'hue: Enhancing Manga Colorization from Multiple Priors with Alignment Multi-Encoder VAE [0.0]
We propose a specialized framework for manga colorization.
We leverage established models for shading and vibrant coloring using a multi-encoder VAE.
This structured workflow ensures clear and colorful results, with the option to incorporate reference images and manual hints.
arXiv Detail & Related papers (2023-11-03T09:33:32Z)
- AnimeDiffusion: Anime Face Line Drawing Colorization via Diffusion Models [24.94532405404846]
We propose a novel method called AnimeDiffusion using diffusion models that performs anime face line drawing colorization automatically.
We construct an anime face line drawing colorization benchmark dataset, which contains 31,696 training images and 579 test images.
We demonstrate that AnimeDiffusion outperforms state-of-the-art GAN-based models for anime face drawing colorization.
arXiv Detail & Related papers (2023-03-20T14:15:23Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- Towards Photorealistic Colorization by Imagination [48.82757902812846]
We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
Our work produces more colorful and diverse results than state-of-the-art image colorization methods.
arXiv Detail & Related papers (2021-08-20T14:28:37Z)
- Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence [32.848390767305276]
This paper tackles the automatic colorization task of a sketch image given an already-colored reference image.
We utilize the identical image with geometric distortion as a virtual reference, which makes it possible to secure the ground truth for a colored output image; a minimal sketch of this trick follows.
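
A minimal sketch of the augmented-self reference idea, assuming torchvision transforms and illustrative distortion parameters (the paper's actual augmentation pipeline may differ): distort the colored target image so it can act as the reference, leaving the undistorted original as ground truth.

```python
# Sketch: build a "virtual reference" by geometrically distorting the colored
# target image; the undistorted original then serves as ground truth.
# Transform ranges below are illustrative assumptions, not the paper's values.
import torch
import torchvision.transforms.functional as TF

def make_virtual_reference(color_image: torch.Tensor) -> torch.Tensor:
    """color_image: (3, H, W) float tensor in [0, 1]. Returns a distorted copy."""
    angle = float(torch.empty(1).uniform_(-15.0, 15.0))  # small random rotation
    tx = int(torch.randint(-10, 11, (1,)))               # small pixel shifts
    ty = int(torch.randint(-10, 11, (1,)))
    ref = TF.affine(color_image, angle=angle, translate=[tx, ty],
                    scale=1.0, shear=[0.0, 0.0])
    # Mild brightness jitter so the model cannot trivially copy reference pixels
    ref = TF.adjust_brightness(ref, 1.0 + float(torch.empty(1).uniform_(-0.1, 0.1)))
    return ref
```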
arXiv Detail & Related papers (2020-05-11T15:52:50Z)
- Deep Line Art Video Colorization with a Few References [49.7139016311314]
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small number of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.