Semantic Palette-Guided Color Propagation
- URL: http://arxiv.org/abs/2506.01441v1
- Date: Mon, 02 Jun 2025 08:57:34 GMT
- Title: Semantic Palette-Guided Color Propagation
- Authors: Zi-Yu Zhang, Bing-Feng Seng, Ya-Feng Du, Kang Li, Zhe-Cheng Wang, Zheng-Jun Du
- Abstract summary: Conventional approaches often rely on low-level visual cues such as color, texture, or lightness to measure pixel similarity. We present a semantic palette-guided approach for color propagation. Our approach enables efficient yet accurate pixel-level color editing and ensures that local color changes are propagated in a content-aware manner.
- Score: 7.263538036771765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color propagation aims to extend local color edits to similar regions across the input image. Conventional approaches often rely on low-level visual cues such as color, texture, or lightness to measure pixel similarity, making it difficult to achieve content-aware color propagation. While some recent approaches attempt to introduce semantic information into color editing, they often lead to unnatural, global color changes. To overcome these limitations, we present a semantic palette-guided approach for color propagation. We first extract a semantic palette from an input image. Then, we solve for an edited palette by minimizing a well-designed energy function based on user edits. Finally, local edits are accurately propagated to regions that share similar semantics via the solved palette. Our approach enables efficient yet accurate pixel-level color editing and ensures that local color changes are propagated in a content-aware manner. Extensive experiments demonstrate the effectiveness of our method.
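The abstract's three-step pipeline (extract a palette, edit it, propagate the edit via soft pixel-to-palette weights) can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the palette here comes from plain k-means in RGB space rather than from semantic features, the edited palette is supplied directly by the caller instead of being solved from an energy function, and the `sigma` bandwidth is an arbitrary choice.

```python
import numpy as np

def extract_palette(pixels, k=5, iters=20):
    """Toy palette extraction: farthest-point init + Lloyd's k-means in RGB.
    (The paper builds a *semantic* palette; color clustering stands in here.)"""
    palette = [pixels[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in palette], axis=0)
        palette.append(pixels[d.argmax()].astype(float))
    palette = np.array(palette)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None] - palette[None], axis=2)
        labels = d.argmin(1)
        for j in range(k):
            m = labels == j
            if m.any():
                palette[j] = pixels[m].mean(0)
    return palette

def propagation_weights(pixels, palette, sigma=0.2):
    """Soft assignment of each pixel to palette entries (RBF weights)."""
    d2 = ((pixels[:, None] - palette[None]) ** 2).sum(2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(1, keepdims=True)

def propagate_edit(pixels, palette, edited_palette, sigma=0.2):
    """Shift every pixel by its weighted combination of palette deltas,
    so an edit to one palette entry only moves pixels close to it."""
    w = propagation_weights(pixels, palette, sigma)
    return np.clip(pixels + w @ (edited_palette - palette), 0.0, 1.0)
```

With per-pixel weights concentrated on the nearest palette entry, recoloring one entry moves only the pixels that resemble it, which is the propagation behavior the abstract describes.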
Related papers
- Free-Lunch Color-Texture Disentanglement for Stylized Image Generation [58.406368812760256]
This paper introduces the first tuning-free approach to achieve free-lunch color-texture disentanglement in stylized T2I generation. We develop techniques for separating and extracting Color-Texture Embeddings (CTE) from individual color and texture reference images. To ensure that the color palette of the generated image aligns closely with the color reference, we apply a whitening and coloring transformation.
arXiv Detail & Related papers (2025-03-18T14:10:43Z)
- Enabling Region-Specific Control via Lassos in Point-Based Colorization [34.49200294787326]
We introduce a lasso tool that can control the scope of each color hint. We also design a framework that leverages the user-provided lassos to localize the attention masks. The experimental results show that using a single lasso is as effective as applying 4.18 individual color hints.
arXiv Detail & Related papers (2024-12-18T03:35:55Z)
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- Palette-based Color Transfer between Images [9.471264982229508]
We propose a new palette-based color transfer method that can automatically generate a new color scheme.
With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution.
Our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
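As a toy illustration of the palette-driven remapping this summary describes: once pixels are classified into segments by their nearest palette entry, a color transfer amounts to shifting each segment by the difference between corresponding source and target palette colors. Everything here is a simplification: the palettes are assumed to be given and index-aligned (e.g., both sorted by lightness), whereas the paper derives its segments with a redesigned clustering method.

```python
import numpy as np

def classify_by_palette(pixels, palette):
    """Assign each pixel to its nearest palette color (hard segments)."""
    d = np.linalg.norm(pixels[:, None] - palette[None], axis=2)
    return d.argmin(1)

def transfer_colors(pixels, src_palette, dst_palette):
    """Shift each segment from its source palette color to the
    corresponding target palette color (index-aligned palettes assumed)."""
    labels = classify_by_palette(pixels, src_palette)
    return np.clip(pixels + (dst_palette - src_palette)[labels], 0.0, 1.0)
```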
arXiv Detail & Related papers (2024-05-14T01:41:19Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes [21.284044381058575]
We present RecolorNeRF, a novel user-friendly color editing approach for neural radiance fields.
Our key idea is to decompose the scene into a set of pure-colored layers, forming a palette.
To support efficient palette-based editing, the color of each layer needs to be as representative as possible.
arXiv Detail & Related papers (2023-01-19T09:18:06Z)
- PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
arXiv Detail & Related papers (2022-12-21T00:20:01Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms state-of-the-art methods in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer [29.426206281291755]
We present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions.
Our approach colorizes images in real-time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture.
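Pixel shuffling (also called depth-to-space, as in PyTorch's `nn.PixelShuffle`) rearranges channel depth into spatial resolution with no learned decoder layers, which is what makes it a cheap upsampling substitute. A minimal NumPy version for a single `(C*r*r, H, W)` tensor, as a sketch of the idea rather than iColoriT's actual implementation:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange a (C*r*r, H, W) array into (C, H*r, W*r),
    so each group of r*r channels becomes an r-by-r spatial patch."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```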
arXiv Detail & Related papers (2022-07-14T11:40:32Z)
- Flexible Portrait Image Editing with Fine-Grained Control [12.32304366243904]
We develop a new method for portrait image editing, which supports fine-grained editing of geometries, colors, lights and shadows using a single neural network model.
We adopt a novel asymmetric conditional GAN architecture: the generators take the transformed conditional inputs, such as edge maps, color palette, sliders and masks, that can be directly edited by the user.
We demonstrate the effectiveness of our method by evaluating it on the CelebAMask-HQ dataset with a wide range of tasks, including geometry/color/shadow/light editing, hand-drawn sketch to image translation, and color transfer.
arXiv Detail & Related papers (2022-04-04T08:39:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.