Deep Line Art Video Colorization with a Few References
- URL: http://arxiv.org/abs/2003.10685v2
- Date: Mon, 30 Mar 2020 07:34:55 GMT
- Title: Deep Line Art Video Colorization with a Few References
- Authors: Min Shi, Jia-Qi Zhang, Shu-Yu Chen, Lin Gao, Yu-Kun Lai, Fang-Lue Zhang
- Abstract summary: We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning its parameters with only a small number of samples.
- Score: 49.7139016311314
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coloring line art images based on the colors of reference images is an
important stage in animation production, but it is time-consuming and tedious.
In this paper, we propose a deep architecture to automatically color line art
videos with the same color style as the given reference images. Our framework
consists of a color transform network and a temporal constraint network. The
color transform network takes the target line art images, together with the
line art and color images of one or more references, as input and generates
the corresponding target color images. To cope with larger differences between the
target line art image and reference color images, our architecture utilizes
non-local similarity matching to determine the region correspondences between
the target image and the reference images, which are used to transform the
local color information from the references to the target. To ensure global
color style consistency, we further incorporate Adaptive Instance Normalization
(AdaIN) with the transformation parameters obtained from a style embedding
vector that describes the global color style of the references, extracted by an
embedder. The temporal constraint network takes the reference images and the
target image together in chronological order, and learns the spatiotemporal
features through 3D convolution to ensure temporal consistency between the
target image and the reference images. Our model can achieve even better
coloring results by fine-tuning its parameters with only a small number of
samples when dealing with an animation of a new style. To evaluate our method,
we build a line art coloring dataset. Experiments show that our method achieves
the best performance on line art video coloring compared to the
state-of-the-art methods and other baselines.
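The components described in this abstract can be illustrated with a few hedged sketches. First, the non-local similarity matching can be read as soft attention between target line-art features and reference features: every target location attends over all reference locations and pulls in the corresponding color features. The function name, feature shapes, and softmax scaling below are illustrative assumptions, not the paper's exact design.

```python
import torch

def nonlocal_color_match(target_feat, ref_feat, ref_color_feat):
    """Soft non-local matching: for every target location, attend over all
    reference locations and transfer the corresponding color features.

    target_feat:    (B, C, H, W) features of the target line art
    ref_feat:       (B, C, H, W) features of the reference line art
    ref_color_feat: (B, C, H, W) features of the reference color image
    """
    B, C, H, W = target_feat.shape
    q = target_feat.flatten(2).transpose(1, 2)       # (B, HW, C) queries
    k = ref_feat.flatten(2)                          # (B, C, HW) keys
    v = ref_color_feat.flatten(2).transpose(1, 2)    # (B, HW, C) values
    attn = torch.softmax(q @ k / C ** 0.5, dim=-1)   # (B, HW, HW) region correspondences
    warped = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
    return warped
```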
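Second, global color style consistency is enforced with AdaIN, whose scale and shift are derived from a style embedding of the references. A minimal sketch, assuming the embedding is a flat vector and a single linear layer predicts the affine parameters (the paper's embedder and exact parameterization may differ):

```python
import torch
import torch.nn as nn

class StyleAdaIN(nn.Module):
    """AdaIN whose affine parameters come from a global style embedding."""

    def __init__(self, num_features, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.fc = nn.Linear(style_dim, num_features * 2)  # predicts gamma, beta

    def forward(self, x, style):
        # style: (B, style_dim) vector produced by an embedder from the references
        gamma, beta = self.fc(style).chunk(2, dim=1)
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return (1 + gamma) * self.norm(x) + beta
```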
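Third, the temporal constraint network consumes the references and the target in chronological order and scores their spatiotemporal coherence with 3D convolutions. The following is a sketch under stated assumptions; channel widths, strides, and the scalar output head are illustrative, not the paper's exact layout.

```python
import torch
import torch.nn as nn

class TemporalConstraintNet(nn.Module):
    """Scores a clip of reference + target frames for spatiotemporal coherence."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, 1),  # scalar temporal-consistency score
        )

    def forward(self, frames):
        # frames: (B, C, T, H, W), references and the target in chronological order
        return self.net(frames)
```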
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z) - Palette-based Color Transfer between Images [9.471264982229508]
We propose a new palette-based color transfer method that can automatically generate a new color scheme.
With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution.
Our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
arXiv Detail & Related papers (2024-05-14T01:41:19Z) - Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z) - Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z) - BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature
Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present an effective BiSTNet to explore colors of reference exemplars and utilize them to help video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z) - iColoriT: Towards Propagating Local Hint to the Right Region in
Interactive Colorization by Leveraging Vision Transformer [29.426206281291755]
We present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions.
Our approach colorizes images in real time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture (see the pixel-shuffle sketch after this list).
arXiv Detail & Related papers (2022-07-14T11:40:32Z) - HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color
Histograms [52.77252727786091]
HistoGAN is a color histogram-based method for controlling GAN-generated images' colors.
We show how to expand HistoGAN to recolor real images.
arXiv Detail & Related papers (2020-11-23T21:14:19Z) - Reference-Based Sketch Image Colorization using Augmented-Self Reference
and Dense Semantic Correspondence [32.848390767305276]
This paper tackles the automatic colorization task of a sketch image given an already-colored reference image.
We utilize the identical image with geometric distortion as a virtual reference, which makes it possible to secure the ground truth for a colored output image (see the warping sketch after this list).
arXiv Detail & Related papers (2020-05-11T15:52:50Z) - Line Art Correlation Matching Feature Transfer Network for Automatic
Animation Colorization [0.0]
We propose a correlation matching feature transfer model (called CMFT) to align the colored reference feature in a learnable way.
This enables the generator to transfer the layer-wise synchronized features from the deep semantic code to the content progressively.
arXiv Detail & Related papers (2020-04-14T06:50:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.