Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence
- URL: http://arxiv.org/abs/2005.05207v1
- Date: Mon, 11 May 2020 15:52:50 GMT
- Title: Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence
- Authors: Junsoo Lee, Eungyeup Kim, Yunsung Lee, Dongjun Kim, Jaehyuk Chang, Jaegul Choo
- Abstract summary: This paper tackles the automatic colorization task of a sketch image given an already-colored reference image.
We utilize the identical image with geometric distortion as a virtual reference, which makes it possible to secure the ground truth for a colored output image.
- Score: 32.848390767305276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper tackles the automatic colorization task of a sketch image given an
already-colored reference image. Colorizing a sketch image is in high demand in
comics, animation, and other content creation applications, but it suffers from
information scarcity of a sketch image. To address this, a reference image can
guide the colorization process in a reliable and user-driven manner. However,
it is difficult to prepare a training dataset that has a sufficient amount
of semantically meaningful pairs of images as well as the ground truth for a
colored image reflecting a given reference (e.g., coloring a sketch of an
originally blue car given a reference green car). To tackle this challenge, we
propose to utilize the identical image with geometric distortion as a virtual
reference, which makes it possible to secure the ground truth for a colored
output image. Furthermore, it naturally provides the ground truth for dense
semantic correspondence, which we utilize in our internal attention mechanism
for color transfer from reference to sketch input. We demonstrate the
effectiveness of our approach in various types of sketch image colorization via
quantitative as well as qualitative evaluation against existing methods.
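The two training ingredients described in the abstract lend themselves to a short code sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: `make_self_reference` produces a color-jittered, geometrically warped copy of the input (a random affine warp is my assumed stand-in for the paper's geometric distortion), so the original image itself serves as the colorization ground truth, and `attention_color_transfer` is a generic scaled dot-product attention over spatial positions, an assumed simplification of the paper's internal attention mechanism.

```python
import torch
import torch.nn.functional as F

def make_self_reference(image: torch.Tensor) -> torch.Tensor:
    """Build a 'virtual reference' from the image itself: jitter its colors,
    then apply a random geometric distortion. Input: (B, 3, H, W) in [0, 1].
    The affine warp here is only a simple stand-in for a stronger
    distortion such as thin-plate-spline warping."""
    b = image.size(0)
    # Appearance transform: random per-channel scaling (crude color jitter).
    jitter = 1.0 + 0.2 * (torch.rand(b, 3, 1, 1, device=image.device) - 0.5)
    ref = (image * jitter).clamp(0.0, 1.0)
    # Geometric distortion: identity affine plus a small random perturbation.
    theta = torch.eye(2, 3, device=image.device).repeat(b, 1, 1)
    theta = theta + 0.1 * torch.randn(b, 2, 3, device=image.device)
    grid = F.affine_grid(theta, ref.shape, align_corners=False)
    return F.grid_sample(ref, grid, align_corners=False)

def attention_color_transfer(sketch_feat: torch.Tensor,
                             ref_feat: torch.Tensor) -> torch.Tensor:
    """Dense semantic correspondence as scaled dot-product attention:
    each sketch position attends over all reference positions and pulls
    back a weighted mix of reference features. Both inputs: (B, C, H, W)."""
    b, c, h, w = sketch_feat.shape
    q = sketch_feat.flatten(2).transpose(1, 2)      # (B, HW, C) queries
    k = ref_feat.flatten(2)                         # (B, C, HW) keys
    v = ref_feat.flatten(2).transpose(1, 2)         # (B, HW, C) values
    attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (B, HW, HW) correspondence
    out = attn @ v                                  # transferred reference info
    return out.transpose(1, 2).reshape(b, c, h, w)
```

In training, the sketch input would be extracted from the same image (e.g., by an edge extractor such as XDoG), yielding a (sketch, reference, ground-truth) triplet for free.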
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- Palette-based Color Transfer between Images [9.471264982229508]
We propose a new palette-based color transfer method that can automatically generate a new color scheme.
With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution.
Our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
arXiv Detail & Related papers (2024-05-14T01:41:19Z)
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- ColorizeDiffusion: Adjustable Sketch Colorization with Reference Image and Text [5.675944597452309]
We introduce two variations of an image-guided latent diffusion model utilizing different image tokens from the pre-trained CLIP image encoder.
We propose corresponding manipulation methods to adjust their results sequentially using weighted text inputs.
arXiv Detail & Related papers (2024-01-02T22:46:12Z)
- Text-Guided Scene Sketch-to-Photo Synthesis [5.431298869139175]
We propose a method for scene-level sketch-to-photo synthesis with text guidance.
To train our model, we use self-supervised learning from a set of photographs.
Experiments show that the proposed method translates original sketch images that are not extracted from color images into photos with compelling visual quality.
arXiv Detail & Related papers (2023-02-14T08:13:36Z)
- XCI-Sketch: Extraction of Color Information from Images for Generation of Colored Outlines and Sketches [0.0]
We propose two methods to mimic human-drawn colored sketches.
The first method renders colored outline sketches by applying image processing techniques aided by k-means color clustering (a minimal code sketch of this idea follows the list below).
The second method uses a generative adversarial network to develop a model that can generate colored sketches from previously unobserved images.
arXiv Detail & Related papers (2021-08-26T02:27:55Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate that human-like behavior: our network first learns to understand the photo, then colorizes it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Deep Line Art Video Colorization with a Few References [49.7139016311314]
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small amount of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)
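Of the entries above, the first XCI-Sketch method is concrete enough to sketch in code. The snippet below is a rough, assumption-laden illustration, not the paper's implementation: the function name, the choice of Canny for outline extraction, and the exact parameters are my own. K-means quantizes the image palette, and the outline pixels are then painted with their quantized colors.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def colored_outline_sketch(path: str, k: int = 8) -> np.ndarray:
    """Render a colored outline sketch: quantize the image palette with
    k-means, extract edges, and keep quantized colors only on edge pixels."""
    img = cv2.imread(path)                            # BGR uint8, (H, W, 3)
    h, w, _ = img.shape
    # Cluster all pixels into k representative colors (the palette).
    pixels = img.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    palette = km.cluster_centers_.astype(np.uint8)    # (k, 3) palette colors
    quantized = palette[km.labels_].reshape(h, w, 3)  # palette-mapped image
    # Extract outlines and color them with their quantized palette colors.
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 100, 200)
    sketch = np.full_like(img, 255)                   # start from a white canvas
    sketch[edges > 0] = quantized[edges > 0]
    return sketch
```

For example, `cv2.imwrite("outline.png", colored_outline_sketch("photo.png"))` writes the result; swapping Canny for a different line extractor changes only the edge-detection line.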
This list is automatically generated from the titles and abstracts of the papers on this site.