SketchDeco: Decorating B&W Sketches with Colour
- URL: http://arxiv.org/abs/2405.18716v1
- Date: Wed, 29 May 2024 02:53:59 GMT
- Title: SketchDeco: Decorating B&W Sketches with Colour
- Authors: Chaitat Utintu, Pinaki Nath Chowdhury, Aneeshan Sain, Subhadeep Koley, Ayan Kumar Bhunia, Yi-Zhe Song
- Abstract summary: This paper introduces a novel approach to sketch colourisation, inspired by the universal childhood activity of colouring.
Striking a balance between precision and convenience, our method utilises region masks and colour palettes to allow intuitive user control.
- Score: 80.90808879991182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel approach to sketch colourisation, inspired by the universal childhood activity of colouring and its professional applications in design and story-boarding. Striking a balance between precision and convenience, our method utilises region masks and colour palettes to allow intuitive user control, steering clear of the meticulousness of manual colour assignments or the limitations of textual prompts. By strategically combining ControlNet and staged generation, incorporating Stable Diffusion v1.5, and leveraging BLIP-2 text prompts, our methodology facilitates faithful image generation and user-directed colourisation. Addressing challenges of local and global consistency, we employ inventive solutions such as an inversion scheme, guided sampling, and a self-attention mechanism with a scaling factor. The resulting tool is not only fast and training-free but also compatible with consumer-grade Nvidia RTX 4090 Super GPUs, making it a valuable asset for both creative professionals and enthusiasts in various fields. Project Page: \url{https://chaitron.github.io/SketchDeco/}
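The abstract mentions a self-attention mechanism with a scaling factor as one of the consistency fixes. A minimal, stdlib-only sketch of what scaled self-attention looks like in general (the function name, the shared query/key/value projections, and the extra factor `s` are illustrative assumptions, not SketchDeco's exact formulation):

```python
import math

def softmax(row):
    # numerically stable softmax over one row of logits
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_self_attention(x, s=1.0):
    """Toy self-attention with an extra scaling factor s on the logits.

    x: list of token vectors (lists of floats); q = k = v = x for simplicity.
    Illustrative only -- not the paper's exact mechanism.
    """
    d = len(x[0])
    out = []
    for q in x:
        # attention logits: scaled dot products of the query with every key
        logits = [s * sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in x]
        w = softmax(logits)
        # output: attention-weighted average of the value vectors
        out.append([sum(wj * x[j][i] for j, wj in enumerate(w))
                    for i in range(d)])
    return out
```

Raising `s` sharpens the attention distribution (weights concentrate on the closest-matching tokens), which is the general lever such a scaling factor offers for trading off local detail against global blending.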
Related papers
- Consistent Video Colorization via Palette Guidance [10.651227296134655]
We regard the colorization task as a generative task and introduce Stable Video Diffusion (SVD) as our base model.
We design a palette-based color guider to assist the model in generating vivid and consistent colors.
Experiments demonstrate that the proposed method can provide vivid and stable colors for videos, surpassing previous methods.
arXiv Detail & Related papers (2025-01-31T17:31:19Z) - MangaNinja: Line Art Colorization with Precise Reference Following [84.2001766692797]
MangaNinja specializes in the task of reference-guided line art colorization.
We incorporate two thoughtful designs to ensure precise character detail transcription: a patch shuffling module that facilitates correspondence learning between the reference color image and the target line art, and a point-driven control scheme that enables fine-grained color matching.
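The general idea behind a patch shuffling module is to break an image into tiles and permute them, forcing a model to match content by local appearance rather than position. A stdlib-only sketch of that tiling-and-shuffling step (the function and its parameters are illustrative assumptions, not MangaNinja's implementation):

```python
import random

def shuffle_patches(img, patch, seed=0):
    """Split a 2-D image (list of rows) into patch x patch tiles,
    shuffle the tiles, and reassemble an image of the same size.
    Toy stand-in for a patch shuffling module; illustrative only."""
    h, w = len(img), len(img[0])
    tiles = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tiles.append([row[c:c + patch] for row in img[r:r + patch]])
    random.Random(seed).shuffle(tiles)  # deterministic shuffle for the demo
    out = [[0] * w for _ in range(h)]
    i = 0
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            for dr, row in enumerate(tiles[i]):
                out[r + dr][c:c + len(row)] = row
            i += 1
    return out
```

Because the tiles are permuted but not altered, the shuffled image contains exactly the same pixel values as the original, only in a different spatial arrangement.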
arXiv Detail & Related papers (2025-01-14T18:59:55Z) - DreamColour: Controllable Video Colour Editing without Training [80.90808879991182]
We present a training-free framework that makes precise video colour editing accessible through an intuitive interface.
By decoupling spatial and temporal aspects of colour editing, we can better align with users' natural workflow.
Our approach matches or exceeds state-of-the-art methods while eliminating the need for training or specialized hardware.
arXiv Detail & Related papers (2024-12-06T16:57:54Z) - L-C4: Language-Based Video Colorization for Creative and Consistent Color [59.069498113050436]
We present Language-based video colorization for Creative and Consistent Colors (L-C4)
Our model is built upon a pre-trained cross-modality generative model.
We propose temporally deformable attention to prevent flickering or color shifts, and cross-clip fusion to maintain long-term color consistency.
arXiv Detail & Related papers (2024-10-07T12:16:21Z) - Zero-Painter: Training-Free Layout Control for Text-to-Image Synthesis [63.757624792753205]
We present Zero-Painter, a framework for layout-conditional text-to-image synthesis.
Our method utilizes object masks and individual descriptions, coupled with a global text prompt, to generate images with high fidelity.
arXiv Detail & Related papers (2024-06-06T13:02:00Z) - Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z) - Diffusing Colors: Image Colorization with Text Guided Diffusion [11.727899027933466]
We present a novel image colorization framework that utilizes image diffusion techniques with granular text prompts.
Our method provides a balance between automation and control, outperforming existing techniques in terms of visual quality and semantic coherence.
Our approach holds potential particularly for color enhancement and historical image colorization.
arXiv Detail & Related papers (2023-12-07T08:59:20Z) - Attention-based Stylisation for Exemplar Image Colourisation [3.491870689686827]
This work reformulates the existing methodology introducing a novel end-to-end colourisation network.
The proposed architecture integrates attention modules at different resolutions that learn how to perform the style transfer task.
Experimental validations demonstrate the efficiency of the proposed methodology, which generates high-quality and visually appealing colourisation.
arXiv Detail & Related papers (2021-05-04T18:56:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.