Attention-Aware Anime Line Drawing Colorization
- URL: http://arxiv.org/abs/2212.10988v1
- Date: Wed, 21 Dec 2022 12:50:31 GMT
- Title: Attention-Aware Anime Line Drawing Colorization
- Authors: Yu Cao, Hao Tian, P.Y. Mok
- Abstract summary: We introduce an attention-based model for anime line drawing colorization, in which a channel-wise and spatial-wise Convolutional Attention module is used.
Our method outperforms other SOTA methods, with more accurate line structure and semantic color information.
- Score: 10.924683447616273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic colorization of anime line drawing has attracted much attention in
recent years since it can substantially benefit the animation industry.
User-hint based methods are the mainstream approach to line drawing
colorization, while reference-based methods offer a more intuitive alternative.
Nevertheless, although reference-based methods can improve feature aggregation
between the reference image and the line drawing, the colorization results are not
compelling in terms of color consistency or semantic correspondence. In this
paper, we introduce an attention-based model for anime line drawing
colorization, in which a channel-wise and spatial-wise Convolutional Attention
module is used to improve the ability of the encoder for feature extraction and
key area perception, and a Stop-Gradient Attention module with cross-attention
and self-attention is used to tackle the cross-domain long-range dependency
problem. Extensive experiments show that our method outperforms other SOTA
methods, with more accurate line structure and semantic color information.
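As a rough illustration of the stop-gradient attention idea (not the authors' released code; the module name, dimensions, and head count below are invented for the example), cross-attention from line-art features to reference features with the gradient blocked on the reference branch can be sketched in PyTorch:
```python
import torch
import torch.nn as nn

class SGACrossAttention(nn.Module):
    """Illustrative stop-gradient cross-attention: line-art tokens attend to
    reference tokens, with gradients blocked on the reference branch so the
    two domains do not pull the shared encoder in conflicting directions."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)       # queries: line art
        self.to_kv = nn.Linear(dim, 2 * dim, bias=False)  # keys/values: reference
        self.proj = nn.Linear(dim, dim)

    def forward(self, sketch: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # sketch, reference: (batch, tokens, dim) flattened feature maps
        b, n, d = sketch.shape
        h = self.heads
        # detach() is the stop-gradient: this attention step sends no gradient
        # back through the reference branch
        k, v = self.to_kv(reference.detach()).chunk(2, dim=-1)
        q = self.to_q(sketch)
        q, k, v = (t.reshape(b, -1, h, d // h).transpose(1, 2) for t in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return sketch + self.proj(out)                    # residual connection

x = torch.randn(2, 64, 128)                               # line-art tokens
r = torch.randn(2, 64, 128)                               # reference tokens
print(SGACrossAttention(128)(x, r).shape)                 # torch.Size([2, 64, 128])
```
The detach() call is the stop-gradient: queries come from the line drawing, keys and values from the reference, and blocking the reference-side gradient is one way to ease the cross-domain conflict the abstract refers to.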
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the inclusion relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
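For intuition, paint-bucket colorization can be reduced to matching line-art segments against a colored reference. The toy sketch below (a hypothetical helper, not this paper's pipeline, which additionally models inclusion relationships) propagates colors by nearest-neighbor cosine similarity between per-segment embeddings:
```python
import torch
import torch.nn.functional as F

def match_segments(ref_feats, ref_colors, tgt_feats):
    """Toy baseline for paint-bucket colorization: each target segment takes
    the flat color of its most similar reference segment. Inclusion matching
    refines exactly this kind of segment-level matching.
    ref_feats: (R, D) reference segment embeddings, ref_colors: (R, 3),
    tgt_feats: (T, D) target segment embeddings."""
    sim = F.normalize(tgt_feats, dim=1) @ F.normalize(ref_feats, dim=1).T  # (T, R)
    return ref_colors[sim.argmax(dim=1)]  # (T, 3) propagated colors
```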
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
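One common way to feed user strokes to a colorization model (a generic sketch, not necessarily Ctrl Color's exact encoder) is to rasterize them into a sparse color-plus-mask tensor that is concatenated with the model input:
```python
import torch

def strokes_to_hint(strokes, height, width):
    """Rasterize user color strokes into a (4, H, W) conditioning tensor:
    three color channels plus a binary mask marking hinted pixels.
    strokes: iterable of ((row, col), (r, g, b)) with colors in [0, 1]."""
    hint = torch.zeros(4, height, width)
    for (r, c), color in strokes:
        hint[:3, r, c] = torch.tensor(color)
        hint[3, r, c] = 1.0
    return hint

print(strokes_to_hint([((10, 20), (0.9, 0.2, 0.2))], 64, 64).shape)
# torch.Size([4, 64, 64])
```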
- Deep Geometrized Cartoon Line Inbetweening [98.35956631655357]
Inbetweening involves generating intermediate frames between two black-and-white line drawings.
Existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening.
We propose AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem.
Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening.
arXiv Detail & Related papers (2023-09-28T17:50:05Z)
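The vertex-extraction step of such a geometrization can be illustrated in a few lines (a sketch of one standard technique, not AnimeInbet's actual code, which also builds edges and solves the graph fusion): endpoints of a 1-pixel-wide line drawing are stroke pixels with exactly one stroke neighbor.
```python
import numpy as np
from scipy.ndimage import convolve

def line_endpoints(skeleton: np.ndarray) -> np.ndarray:
    """Endpoint pixels of a thinned binary line drawing: stroke pixels with
    exactly one stroke pixel among their 8 neighbors.
    skeleton: (H, W) array of {0, 1}; returns (N, 2) row/col coordinates."""
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                      # count neighbors, not the pixel itself
    neighbors = convolve(skeleton.astype(int), kernel, mode="constant")
    return np.argwhere((skeleton == 1) & (neighbors == 1))

img = np.zeros((5, 5), dtype=int)
img[2, 1:4] = 1                           # a short horizontal stroke
print(line_endpoints(img))                # [[2 1] [2 3]]
```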
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present an effective BiSTNet to explore colors of reference exemplars and utilize them to help video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
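The semantic-correspondence step that exemplar-based colorization methods like BiSTNet rely on is typically a non-local match in deep feature space. A minimal sketch of that generic technique follows (the helper name and temperature tau are invented for the example):
```python
import torch
import torch.nn.functional as F

def warp_reference_colors(frame_feat, ref_feat, ref_color, tau=0.01):
    """Warp exemplar colors onto a target frame via deep-feature correspondence.
    frame_feat, ref_feat: (B, C, H, W) encoder features; ref_color: (B, 3, H, W)
    exemplar colors at feature resolution. Returns aligned (B, 3, H, W) colors."""
    b, c, h, w = frame_feat.shape
    f = F.normalize(frame_feat.flatten(2), dim=1)            # (B, C, HW)
    g = F.normalize(ref_feat.flatten(2), dim=1)              # (B, C, HW)
    corr = torch.bmm(f.transpose(1, 2), g)                   # (B, HW, HW) cosine
    attn = F.softmax(corr / tau, dim=-1)                     # soft correspondence
    colors = torch.bmm(ref_color.flatten(2), attn.transpose(1, 2))
    return colors.view(b, 3, h, w)
```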
- Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization [31.750591990768307]
This paper proposes a novel model-guided deep interactive colorization framework that reduces the required amount of user interactions.
Our method, called GuidingPainter, prioritizes the regions where the model most needs a color hint, rather than relying solely on the user's manual decision on where to give hints.
arXiv Detail & Related papers (2022-10-25T18:50:09Z)
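GuidingPainter learns where hints help most end-to-end; as a simpler stand-in for the same idea (a heuristic sketch, not the paper's unsupervised method), one can rank segmented regions by the colorizer's predictive uncertainty and query the user there first:
```python
import torch

def rank_hint_regions(pred_var: torch.Tensor, region_mask: torch.Tensor):
    """Order regions from most to least in need of a user color hint.
    pred_var: (H, W) per-pixel predictive variance of the color model;
    region_mask: (H, W) integer region labels from any segmentation."""
    ids = region_mask.unique()
    scores = torch.stack([pred_var[region_mask == i].mean() for i in ids])
    return ids[scores.argsort(descending=True)]
```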
- Eliminating Gradient Conflict in Reference-based Line-art Colorization [26.46476996150605]
Reference-based line-art colorization is a challenging task in computer vision.
We propose a novel attention mechanism using Stop-Gradient Attention (SGA).
Compared with state-of-the-art modules in line-art colorization, our approach demonstrates significant improvements.
arXiv Detail & Related papers (2022-07-13T10:08:37Z)
- Attention-based Stylisation for Exemplar Image Colourisation [3.491870689686827]
This work reformulates the existing methodology introducing a novel end-to-end colourisation network.
The proposed architecture integrates attention modules at different resolutions that learn how to perform the style transfer task.
Experimental validations demonstrate the efficiency of the proposed methodology, which generates high-quality and visually appealing colourisations.
arXiv Detail & Related papers (2021-05-04T18:56:26Z)
- Line Art Correlation Matching Feature Transfer Network for Automatic Animation Colorization [0.0]
We propose a correlation matching feature transfer model (called CMFT) to align the colored reference feature in a learnable way.
This enables the generator to transfer the layer-wise synchronized features from the deep semantic code to the content progressively.
arXiv Detail & Related papers (2020-04-14T06:50:08Z)
- Deep Line Art Video Colorization with a Few References [49.7139016311314]
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small number of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)
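The temporal-constraint idea can be illustrated with a standard flow-based consistency loss (a generic sketch, not this paper's exact network; the flow field is assumed to come from any optical-flow estimator):
```python
import torch
import torch.nn.functional as F

def warp_with_flow(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp img (B, C, H, W) by a pixel-space flow field (B, 2, H, W)."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.to(img) + flow[:, 0]                  # (B, H, W) source x coords
    ys = ys.to(img) + flow[:, 1]                  # (B, H, W) source y coords
    grid = torch.stack((2 * xs / (w - 1) - 1,     # normalize to [-1, 1]
                        2 * ys / (h - 1) - 1), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def temporal_loss(curr, prev, flow):
    """Penalize flicker: the current colorized frame should agree with the
    previous frame warped along the estimated optical flow."""
    return F.l1_loss(curr, warp_with_flow(prev, flow))
```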
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.