Multimodal Color Recommendation in Vector Graphic Documents
- URL: http://arxiv.org/abs/2308.04118v1
- Date: Tue, 8 Aug 2023 08:17:39 GMT
- Title: Multimodal Color Recommendation in Vector Graphic Documents
- Authors: Qianru Qiu, Xueting Wang, Mayu Otani
- Abstract summary: We propose a multimodal masked color model that integrates both color and textual contexts to provide text-aware color recommendation for graphic documents.
Our proposed model comprises self-attention networks to capture the relationships between colors in multiple palettes, and cross-attention networks that incorporate both color and CLIP-based text representations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Color selection plays a critical role in graphic document design and requires
sufficient consideration of various contexts. However, recommending appropriate
colors which harmonize with the other colors and textual contexts in documents
is a challenging task, even for experienced designers. In this study, we
propose a multimodal masked color model that integrates both color and textual
contexts to provide text-aware color recommendation for graphic documents. Our
proposed model comprises self-attention networks to capture the relationships
between colors in multiple palettes, and cross-attention networks that
incorporate both color and CLIP-based text representations. Our proposed method
primarily focuses on color palette completion, which recommends colors based on
the given colors and text. Additionally, it is applicable for another color
recommendation task, full palette generation, which generates a complete color
palette corresponding to the given text. Experimental results demonstrate that
our proposed approach surpasses previous color palette completion methods on
accuracy, color distribution, and user experience, as well as full palette
generation methods concerning color diversity and similarity to the ground
truth palettes.
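As a rough illustration of the architecture the abstract describes, the sketch below pairs self-attention over palette colors with cross-attention to CLIP text features to predict masked colors. All layer names, dimensions (including the 512-dim CLIP text features), and the quantized-color output head are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a multimodal masked color model
# (layer names, dimensions, and output head are assumptions).
import torch
import torch.nn as nn

class MaskedColorModel(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_bins=512, clip_dim=512):
        super().__init__()
        self.color_embed = nn.Linear(3, d_model)           # RGB -> token embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        self.self_attn = nn.TransformerEncoderLayer(d_model, n_heads,
                                                    batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.text_proj = nn.Linear(clip_dim, d_model)      # CLIP text -> d_model
        self.head = nn.Linear(d_model, n_bins)             # quantized color bins

    def forward(self, colors, mask, text_emb):
        # colors: (B, L, 3) normalized RGB; mask: (B, L) bool, True = masked
        # text_emb: (B, T, clip_dim) CLIP-based text representations
        x = self.color_embed(colors)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        x = self.self_attn(x)               # relations among colors in palettes
        t = self.text_proj(text_emb)
        x, _ = self.cross_attn(x, t, t)     # condition colors on text context
        return self.head(x)                 # (B, L, n_bins) logits per position
```

A masked position would be filled by taking the argmax over the predicted bins; the authors' actual color tokenization and training objective may differ.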
Related papers
- SketchDeco: Decorating B&W Sketches with Colour [80.90808879991182]
This paper introduces a novel approach to sketch colourisation, inspired by the universal childhood activity of colouring.
Striking a balance between precision and convenience, our method utilises region masks and colour palettes to allow intuitive user control.
arXiv Detail & Related papers (2024-05-29T02:53:59Z)
- Palette-based Color Transfer between Images [9.471264982229508]
We propose a new palette-based color transfer method that can automatically generate a new color scheme.
With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution.
Our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
arXiv Detail & Related papers (2024-05-14T01:41:19Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- Language-based Photo Color Adjustment for Graphic Designs [38.43984897069872]
We introduce an interactive language-based approach for photo recoloring.
Our model can predict the source colors and the target regions, and then recolor the target regions with the source colors based on the given language-based instruction.
arXiv Detail & Related papers (2023-08-06T08:53:49Z)
- DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models [12.897939032560537]
We propose a new method called DiffColor to recover vivid colors conditioned on a prompt text.
We first fine-tune a pre-trained text-to-image model to generate colorized images using a CLIP-based contrastive loss.
Then we try to obtain an optimized text embedding aligning the colorized image and the text prompt, and a fine-tuned diffusion model enabling high-quality image reconstruction.
Our method can produce vivid and diverse colors with a few iterations, and keep the structure and background intact while having colors well-aligned with the target language guidance.
arXiv Detail & Related papers (2023-08-03T09:38:35Z)
- L-CAD: Language-based Colorization with Any-level Descriptions using Diffusion Priors [62.80068955192816]
We propose a unified model to perform language-based colorization with any-level descriptions.
We leverage the pretrained cross-modality generative model for its robust language understanding and rich color priors.
With the proposed novel sampling strategy, our model achieves instance-aware colorization in diverse and complex scenarios.
arXiv Detail & Related papers (2023-05-24T14:57:42Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present an effective BiSTNet to explore colors of reference exemplars and utilize them to help video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms state-of-the-arts in quantitative evaluation and visual comparison, delivering notable diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- Color Recommendation for Vector Graphic Documents based on Multi-Palette Representation [12.71266194474117]
We extract multiple color palettes from each visual element in a graphic document, and then combine them into a color sequence.
We train the model and build a color recommendation system on a large-scale dataset of vector graphic documents.
arXiv Detail & Related papers (2022-09-22T07:06:17Z)
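The multi-palette representation in the last entry above, which combines per-element palettes into one color sequence, can be sketched as follows; the sentinel-based separator scheme is an assumption for illustration, not the paper's actual encoding.

```python
# Illustrative only: flattening per-element color palettes into a single
# sequence, with a sentinel color separating palettes (scheme is assumed).
SEP = (-1.0, -1.0, -1.0)  # sentinel value outside the valid RGB range

def palettes_to_sequence(palettes):
    """Concatenate each visual element's palette into one color sequence."""
    seq = []
    for palette in palettes:
        seq.extend(palette)
        seq.append(SEP)
    return seq[:-1]  # drop the trailing separator

# Example document: a background palette and an illustration palette.
doc = [[(0.9, 0.9, 0.9)],
       [(0.1, 0.2, 0.8), (0.9, 0.6, 0.1)]]
```

Running `palettes_to_sequence(doc)` yields the three colors with one separator between the two palettes, giving the model a single sequence to attend over.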
This list is automatically generated from the titles and abstracts of the papers on this site.