Interactive Cartoonization with Controllable Perceptual Factors
- URL: http://arxiv.org/abs/2212.09555v1
- Date: Mon, 19 Dec 2022 15:45:47 GMT
- Title: Interactive Cartoonization with Controllable Perceptual Factors
- Authors: Namhyuk Ahn, Patrick Kwon, Jihye Back, Kibeom Hong, Seungkwon Kim
- Abstract summary: We propose a novel solution with editing features of texture and color based on the cartoon creation process.
In the texture decoder, we propose a texture controller, which enables a user to control stroke style and abstraction to generate diverse cartoon textures.
We also introduce an HSV color augmentation to induce the networks to generate diverse and controllable color translation.
- Score: 5.8641445422054765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cartoonization is a task that renders natural photos into cartoon styles.
Previous deep cartoonization methods have focused only on end-to-end
translation, which may hinder editability. Instead, we propose a novel solution
with editing features of texture and color based on the cartoon creation
process. To do that, we design a model architecture to have separate decoders,
texture and color, to decouple these attributes. In the texture decoder, we
propose a texture controller, which enables a user to control stroke style and
abstraction to generate diverse cartoon textures. We also introduce an HSV
color augmentation to induce the networks to generate diverse and controllable
color translation. To the best of our knowledge, our work is the first deep
approach to control cartoonization at inference while showing a profound
quality improvement over baselines.
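To make the decoupled design concrete, the sketch below lays out an encoder feeding two separate decoders, with the texture controller realized as two user-chosen scalars (stroke style and abstraction) broadcast into the texture decoder. This is a minimal illustration, not the authors' architecture: every layer size, module name, and the channel-concatenation conditioning scheme here is an assumption.

    # Minimal sketch (PyTorch) of a dual-decoder cartoonizer with a scalar
    # texture controller. All names, layer sizes, and the conditioning
    # scheme are assumptions for illustration; the paper's actual texture
    # controller may be implemented differently.
    import torch
    import torch.nn as nn

    class DualDecoderCartoonizer(nn.Module):
        def __init__(self, ch=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Texture decoder receives two extra channels carrying the
            # user-chosen stroke-style and abstraction controls.
            self.texture_decoder = nn.Sequential(
                nn.ConvTranspose2d(ch + 2, ch, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1),
            )
            self.color_decoder = nn.Sequential(
                nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
            )

        def forward(self, photo, stroke, abstraction):
            feat = self.encoder(photo)                        # (N, ch, H/4, W/4)
            n, _, h, w = feat.shape
            # Broadcast the two control scalars over the feature map.
            ctrl = torch.stack([stroke, abstraction], dim=1)  # (N, 2)
            ctrl = ctrl[:, :, None, None].expand(n, 2, h, w)
            texture = self.texture_decoder(torch.cat([feat, ctrl], dim=1))
            color = self.color_decoder(feat)
            return texture, color

At inference a user would sweep the two scalars, e.g. model(photo, stroke=torch.tensor([0.2]), abstraction=torch.tensor([0.8])), to vary the texture output while leaving the color pathway untouched, which is the kind of editability the abstract claims.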
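The HSV color augmentation admits an equally small sketch: jitter the hue, saturation, and value of training targets so the color decoder sees many valid colorings of the same content rather than one fixed mapping. The jitter ranges and the per-image (rather than per-pixel) shift below are assumptions; only the use of HSV-space augmentation comes from the abstract.

    # Minimal sketch of an HSV color augmentation using only the stdlib.
    # Jitter ranges are assumptions; the paper states only that HSV
    # augmentation induces diverse, controllable color translation.
    import colorsys
    import random

    def hsv_augment(rgb_pixels, max_h=0.1, max_s=0.3, max_v=0.2):
        """Apply one random HSV shift to a list of (r, g, b) floats in [0, 1]."""
        dh = random.uniform(-max_h, max_h)  # hue shift, cyclic
        ds = random.uniform(-max_s, max_s)  # saturation shift
        dv = random.uniform(-max_v, max_v)  # value (brightness) shift
        out = []
        for r, g, b in rgb_pixels:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            h = (h + dh) % 1.0              # hue wraps around
            s = min(max(s + ds, 0.0), 1.0)  # clamp to [0, 1]
            v = min(max(v + dv, 0.0), 1.0)
            out.append(colorsys.hsv_to_rgb(h, s, v))
        return out

Because hue is cyclic it wraps modulo 1.0, while saturation and value are clamped; sampling one shift per image keeps the colors within an image mutually consistent.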
Related papers
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both keyframe colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- SketchDeco: Decorating B&W Sketches with Colour [80.90808879991182]
This paper introduces a novel approach to sketch colourisation, inspired by the universal childhood activity of colouring.
Striking a balance between precision and convenience, our method utilises region masks and colour palettes to allow intuitive user control.
arXiv Detail & Related papers (2024-05-29T02:53:59Z)
- LASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation [62.232361821779335]
We introduce a tuning-free attention control framework, encapsulated by the progressive process of LLM planning, prompt-Aware editing, and StablE animation geneRation, abbreviated as LASER.
We manipulate the model's spatial features and self-attention mechanisms to maintain animation integrity.
Our meticulous control over spatial features and self-attention ensures structural consistency in the images.
arXiv Detail & Related papers (2024-04-21T07:13:56Z)
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Make-It-Vivid: Dressing Your Animatable Biped Cartoon Characters from Text [38.591390310534024]
We focus on automatic texture design for cartoon characters based on input instructions.
This is challenging due to domain-specific requirements and a lack of high-quality data.
We propose Make-It-Vivid, the first attempt to enable high-quality texture generation from text in UV space.
arXiv Detail & Related papers (2024-03-25T16:08:04Z)
- Instance-guided Cartoon Editing with a Large-scale Dataset [12.955181769243232]
We present an instance-aware image segmentation model that can generate accurate, high-resolution segmentation masks for characters in cartoon images.
We show that the proposed approach enables a range of segmentation-dependent cartoon editing applications, such as 3D Ken Burns parallax effects, text-guided cartoon style editing, and puppet animation from illustrations and manga.
arXiv Detail & Related papers (2023-12-04T15:00:15Z)
- TADA! Text to Animatable Digital Avatars [57.52707683788961]
TADA takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures.
We derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map.
We render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process.
arXiv Detail & Related papers (2023-08-21T17:59:10Z)
- Learning to Incorporate Texture Saliency Adaptive Attention to Image Cartoonization [20.578335938736384]
A novel cartoon-texture-saliency-sampler (CTSS) module is proposed to dynamically sample cartoon-texture-salient patches from training data.
With extensive experiments, we demonstrate that texture saliency adaptive attention in adversarial learning is of significant importance in facilitating and enhancing image cartoonization.
arXiv Detail & Related papers (2022-08-02T16:45:55Z)
- White-Box Cartoonization Using An Extended GAN Framework [0.0]
We propose a new framework for estimating generative models via an adversarial process, extending an existing GAN framework.
We develop white-box controllable image cartoonization, which can generate high-quality cartoonized images and videos from real-world photos and videos.
arXiv Detail & Related papers (2021-07-09T17:09:19Z)
- Stylized Neural Painting [0.0]
This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles.
Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures.
arXiv Detail & Related papers (2020-11-16T17:24:21Z)