Guiding Users to Where to Give Color Hints for Efficient Interactive
Sketch Colorization via Unsupervised Region Prioritization
- URL: http://arxiv.org/abs/2210.14270v1
- Date: Tue, 25 Oct 2022 18:50:09 GMT
- Title: Guiding Users to Where to Give Color Hints for Efficient Interactive
Sketch Colorization via Unsupervised Region Prioritization
- Authors: Youngin Cho, Junsoo Lee, Soyoung Yang, Juntae Kim, Yeojeong Park,
Haneol Lee, Mohammad Azam Khan, Daesik Kim, Jaegul Choo
- Abstract summary: This paper proposes a novel model-guided deep interactive colorization framework that reduces the number of user interactions required.
Our method, called GuidingPainter, prioritizes the regions where the model most needs a color hint, rather than relying solely on the user's manual decision about where to give one.
- Score: 31.750591990768307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing deep interactive colorization models have focused on ways to utilize
various types of interactions, such as point-wise color hints, scribbles, or
natural-language texts, as methods to reflect a user's intent at runtime.
However, another approach, which actively informs the user of the most
effective regions to give hints for sketch image colorization, has been
under-explored. This paper proposes a novel model-guided deep interactive
colorization framework that reduces the required amount of user interactions,
by prioritizing the regions in a colorization model. Our method, called
GuidingPainter, prioritizes the regions where the model most needs a color hint,
rather than relying solely on the user's manual decision about where to give one.
In our extensive experiments, we show that our approach outperforms existing
interactive colorization methods on conventional metrics such as PSNR and FID,
while reducing the required number of interactions.
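
As a rough illustration of this idea, here is a minimal PyTorch sketch of model-guided hint placement; the sketch-encoder features, the HintPriorityHead module, and suggest_hint_regions are hypothetical names for illustration, not the authors' released implementation. A small head scores spatial regions of the sketch, and the top-k regions are surfaced to the user as suggested hint locations.

```python
# Minimal sketch of model-guided hint placement (illustrative only).
# HintPriorityHead and suggest_hint_regions are hypothetical names.
import torch
import torch.nn as nn

class HintPriorityHead(nn.Module):
    """Maps sketch-encoder features to one priority logit per spatial region."""
    def __init__(self, in_channels: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H', W') features from an assumed sketch encoder
        return self.score(feats).squeeze(1)  # (B, H', W') priority logits

def suggest_hint_regions(priority: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Return (row, col) indices of the k highest-priority regions."""
    b, h, w = priority.shape
    _, idx = priority.view(b, -1).topk(k, dim=1)
    rows = torch.div(idx, w, rounding_mode="floor")
    return torch.stack((rows, idx % w), dim=-1)  # (B, k, 2)

# Usage: a 256x256 sketch encoded into a 32x32 feature grid.
feats = torch.randn(1, 64, 32, 32)
coords = suggest_hint_regions(HintPriorityHead(64)(feats), k=5)
print(coords.shape)  # torch.Size([1, 5, 2]) -- regions to ask the user about
```

This sketch covers only the inference-time selection step; in the paper, per the title, the prioritization itself is learned without region-level supervision.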
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance on both colorization and consecutive-frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation (a generic encoding sketch follows this entry).
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
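
For the stroke interaction mentioned in the Control Color entry above, a common generic scheme (not necessarily Ctrl Color's own) is to concatenate the input with a color-hint map and a binary mask marking hinted pixels:

```python
# Generic stroke/point hint encoding (illustrative; not Ctrl Color's code).
import torch

def encode_hints(gray: torch.Tensor, hint_colors: torch.Tensor,
                 hint_mask: torch.Tensor) -> torch.Tensor:
    """
    gray:        (B, 1, H, W) grayscale or sketch input
    hint_colors: (B, 3, H, W) user-chosen colors, arbitrary where unhinted
    hint_mask:   (B, 1, H, W) 1 at hinted pixels, 0 elsewhere
    Returns a (B, 5, H, W) tensor fed to the colorization network.
    """
    return torch.cat([gray, hint_colors * hint_mask, hint_mask], dim=1)

x = encode_hints(torch.zeros(1, 1, 256, 256),
                 torch.zeros(1, 3, 256, 256),
                 torch.zeros(1, 1, 256, 256))
print(x.shape)  # torch.Size([1, 5, 256, 256])
```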
- Language-based Photo Color Adjustment for Graphic Designs [38.43984897069872]
We introduce an interactive language-based approach for photo recoloring.
Our model can predict the source colors and the target regions, and then recolor the target regions with the source colors based on the given language-based instruction.
arXiv Detail & Related papers (2023-08-06T08:53:49Z)
- L-CAD: Language-based Colorization with Any-level Descriptions using Diffusion Priors [62.80068955192816]
We propose a unified model to perform language-based colorization with any-level descriptions.
We leverage the pretrained cross-modality generative model for its robust language understanding and rich color priors.
With the proposed novel sampling strategy, our model achieves instance-aware colorization in diverse and complex scenarios.
arXiv Detail & Related papers (2023-05-24T14:57:42Z)
- Attention-Aware Anime Line Drawing Colorization [10.924683447616273]
We introduce an attention-based model for anime line drawing colorization that uses a channel-wise and spatial-wise convolutional attention module.
Our method outperforms other state-of-the-art methods, producing more accurate line structure and semantic color information.
arXiv Detail & Related papers (2022-12-21T12:50:31Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present BiSTNet, an effective framework that explores the colors of reference exemplars and utilizes them to aid video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from the reference exemplars (see the sketch after this entry).
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
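
The BiSTNet entry above establishes semantic correspondence with reference exemplars in deep feature space. A generic sketch of this kind of attention-based color warping, with cosine-normalized features and a softmax temperature as illustrative assumptions (not BiSTNet's actual module):

```python
# Generic exemplar color warping via feature-space correspondence
# (illustrative; not BiSTNet's implementation).
import torch
import torch.nn.functional as F

def warp_colors(frame_feat: torch.Tensor, ref_feat: torch.Tensor,
                ref_colors: torch.Tensor, tau: float = 0.01) -> torch.Tensor:
    """
    frame_feat: (C, N) deep features of the frame to colorize (N = H*W)
    ref_feat:   (C, M) deep features of the reference exemplar
    ref_colors: (3, M) exemplar colors at the same M positions
    Returns (3, N): colors warped onto the frame by soft correspondence.
    """
    f = F.normalize(frame_feat, dim=0)
    r = F.normalize(ref_feat, dim=0)
    attn = F.softmax(f.t() @ r / tau, dim=-1)  # (N, M) soft correspondence
    return ref_colors @ attn.t()               # weighted copy of exemplar colors

warped = warp_colors(torch.randn(64, 1024), torch.randn(64, 1024),
                     torch.rand(3, 1024))
print(warped.shape)  # torch.Size([3, 1024])
```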
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose PalGAN, a new GAN-based colorization approach that integrates palette estimation and chromatic attention.
PalGAN outperforms state-of-the-art methods in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving results.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer [29.426206281291755]
We present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions.
Our approach colorizes images in real time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture (see the sketch after this entry).
arXiv Detail & Related papers (2022-07-14T11:40:32Z)
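
The iColoriT entry above replaces a conventional decoder with pixel shuffling. A minimal sketch of that upsampling step, with illustrative layer sizes:

```python
# Decoder-free upsampling via pixel shuffle (sizes are illustrative).
import torch
import torch.nn as nn

upscale, channels = 8, 256  # e.g., an 8x-downsampled ViT feature grid

head = nn.Sequential(
    # Pack an (upscale x upscale) RGB patch into the channel dimension...
    nn.Conv2d(channels, 3 * upscale * upscale, kernel_size=1),
    # ...then rearrange channels into space: (B, 3*r^2, H, W) -> (B, 3, H*r, W*r)
    nn.PixelShuffle(upscale),
)

tokens = torch.randn(1, channels, 32, 32)
print(head(tokens).shape)  # torch.Size([1, 3, 256, 256])
```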
- Deep Edge-Aware Interactive Colorization against Color-Bleeding Effects [15.386085970550996]
Deep image colorization networks often suffer from the color-bleeding artifact.
We propose a novel edge-enhancing framework for regions of interest by utilizing user scribbles that indicate where to enhance.
arXiv Detail & Related papers (2021-07-04T13:14:31Z)