Controllable-Continuous Color Editing in Diffusion Model via Color Mapping
- URL: http://arxiv.org/abs/2509.13756v1
- Date: Wed, 17 Sep 2025 07:12:51 GMT
- Title: Controllable-Continuous Color Editing in Diffusion Model via Color Mapping
- Authors: Yuqi Yang, Dongliang Chang, Yuanchen Fang, Yi-Zhe Song, Zhanyu Ma, Jun Guo
- Abstract summary: We introduce a color mapping module that explicitly models the correspondence between the text embedding space and image RGB values. Users can specify a target RGB range to generate images with continuous color variations within the desired range. Experimental results demonstrate that our method performs well in terms of color continuity and controllability.
- Score: 73.62340517056619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, text-driven image editing has made significant progress. However, due to the inherent ambiguity and discreteness of natural language, color editing still faces challenges such as insufficient precision and difficulty in achieving continuous control. Although linearly interpolating the embedding vectors of different textual descriptions can guide the model to generate a sequence of images with varying colors, this approach lacks precise control over the range of color changes in the output images. Moreover, the relationship between the interpolation coefficient and the resulting image color is unknown and uncontrollable. To address these issues, we introduce a color mapping module that explicitly models the correspondence between the text embedding space and image RGB values. This module predicts the corresponding embedding vector based on a given RGB value, enabling precise color control of the generated images while maintaining semantic consistency. Users can specify a target RGB range to generate images with continuous color variations within the desired range, thereby achieving finer-grained, continuous, and controllable color editing. Experimental results demonstrate that our method performs well in terms of color continuity and controllability.
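As a rough illustration of the idea, the sketch below shows what such a color mapping module could look like: a small MLP that maps a normalized RGB triple into the text embedding space, queried along a user-specified RGB range to yield a continuous sweep of conditioning vectors. The `ColorMapper` name, layer widths, and embedding dimension are illustrative assumptions, not the paper's released architecture.

```python
# Hypothetical color mapping module: RGB -> text-embedding-space vector.
import torch
import torch.nn as nn

class ColorMapper(nn.Module):
    def __init__(self, embed_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (batch, 3) with values in [0, 1]
        return self.net(rgb)

# Continuous editing over a user-specified RGB range: sweep the target
# color and predict one conditioning vector per step.
mapper = ColorMapper()
start, end = torch.tensor([1.0, 0.0, 0.0]), torch.tensor([0.0, 0.0, 1.0])
for t in torch.linspace(0.0, 1.0, steps=5):
    target_rgb = ((1 - t) * start + t * end).unsqueeze(0)   # (1, 3)
    color_embedding = mapper(target_rgb)  # would condition the diffusion model
```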
Related papers
- Content-Adaptive Image Retouching Guided by Attribute-Based Text Representation [53.196155487850746]
We propose a novel Content-Adaptive image retouching method guided by Attribute-based Text Representation (CA-ATP). Specifically, we propose a content-adaptive curve mapping module, which leverages a series of basis curves to establish multiple color mapping relationships. In addition, we propose an attribute text prediction module that generates text representations from multiple image attributes, which explicitly represent user-defined style preferences.
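For intuition, here is a minimal sketch of curve-based color mapping in the spirit of the abstract: a weighted combination of basis tone curves applied per pixel via a lookup. The function name, gamma-shaped basis, and fixed weights are assumptions; in the actual method the weights would come from a content-adaptive predictor.

```python
# Illustrative curve-based color mapping, not the CA-ATP implementation.
import numpy as np

def apply_basis_curves(image: np.ndarray, curves: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) in [0, 1]; curves: (K, N) sampled curve values;
    weights: (K,) mixing coefficients summing to 1."""
    mixed = (weights[:, None] * curves).sum(axis=0)          # (N,) combined curve
    xs = np.linspace(0.0, 1.0, curves.shape[1])
    return np.interp(image, xs, mixed).astype(image.dtype)   # per-pixel lookup

xs = np.linspace(0.0, 1.0, 64)
curves = np.stack([xs ** g for g in (0.5, 0.8, 1.0, 2.0)])   # gamma-like basis curves
weights = np.array([0.1, 0.2, 0.5, 0.2])
out = apply_basis_curves(np.random.rand(8, 8, 3), curves, weights)
```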
arXiv Detail & Related papers (2025-12-10T12:15:50Z)
- Color3D: Controllable and Consistent 3D Colorization with Personalized Colorizer [58.94607850223466]
We present Color3D, a highly adaptable framework for colorizing both static and dynamic 3D scenes from monochromatic inputs. Our approach is able to preserve color diversity and steerability while ensuring cross-view and cross-time consistency.
arXiv Detail & Related papers (2025-10-11T10:21:19Z)
- Leveraging Semantic Attribute Binding for Free-Lunch Color Control in Diffusion Models [53.73253164099701]
We introduce ColorWave, a training-free approach that achieves exact RGB-level color control in diffusion models without fine-tuning. We demonstrate that ColorWave establishes a new paradigm for structured, color-consistent diffusion-based image synthesis.
arXiv Detail & Related papers (2025-03-12T21:49:52Z)
- MangaNinja: Line Art Colorization with Precise Reference Following [84.2001766692797]
MangaNinja specializes in the task of reference-guided line art colorization. We incorporate two thoughtful designs to ensure precise character detail transcription: a patch shuffling module that facilitates correspondence learning between the reference color image and the target line art, and a point-driven control scheme that enables fine-grained color matching.
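A minimal sketch of patch shuffling on a reference image follows, assuming the common formulation of splitting the reference into a grid and permuting the patches so the model must learn semantic correspondence rather than copying by position; this is illustrative, not MangaNinja's released code.

```python
# Shuffle a reference image's patches to prevent trivial spatial copying.
import torch

def shuffle_patches(ref: torch.Tensor, patch: int = 32) -> torch.Tensor:
    """ref: (C, H, W) with H and W divisible by `patch`."""
    c, h, w = ref.shape
    gh, gw = h // patch, w // patch
    patches = ref.unfold(1, patch, patch).unfold(2, patch, patch)  # (C, gh, gw, p, p)
    patches = patches.reshape(c, gh * gw, patch, patch)
    perm = torch.randperm(gh * gw)                                 # random permutation
    patches = patches[:, perm]
    patches = patches.reshape(c, gh, gw, patch, patch).permute(0, 1, 3, 2, 4)
    return patches.reshape(c, h, w)

shuffled = shuffle_patches(torch.rand(3, 256, 256))
```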
arXiv Detail & Related papers (2025-01-14T18:59:55Z)
- Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z)
- Underwater Organism Color Enhancement via Color Code Decomposition, Adaptation and Interpolation [24.96772289126242]
We propose a method called ColorCode, which enhances underwater images while offering a range of controllable color outputs.
Our approach maps an underwater image to a reference enhanced image through supervised training and decomposes it into color and content codes.
The color code is explicitly constrained to follow a Gaussian distribution, allowing for efficient sampling and inference.
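A minimal sketch of what the Gaussian constraint enables at inference time, assuming a VAE-style code pushed toward N(0, I) by a KL penalty: new color styles can be drawn directly from the prior, and two codes can be interpolated for a continuous sweep of outputs. The dimensions and handles below are hypothetical.

```python
# Sampling and interpolating a Gaussian-constrained color code (illustrative).
import torch

def lerp_codes(z_a: torch.Tensor, z_b: torch.Tensor, steps: int = 5):
    """Linear interpolation between two color codes for a continuous sweep."""
    return [(1 - t) * z_a + t * z_b for t in torch.linspace(0, 1, steps)]

code_dim = 64
z_sampled = torch.randn(1, code_dim)   # draw a new color style from the prior
z_a, z_b = torch.randn(1, code_dim), torch.randn(1, code_dim)
sweep = lerp_codes(z_a, z_b)           # each code would be decoded together with
                                       # the fixed content code of the input image
```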
arXiv Detail & Related papers (2024-09-29T12:24:34Z)
- Color Shift Estimation-and-Correction for Image Enhancement [37.52492067462557]
Images captured under sub-optimal illumination conditions may contain both over- and under-exposures.
Current approaches mainly focus on adjusting image brightness, which may exacerbate the color tone distortion in under-exposed areas.
We propose a novel method to enhance images with both over- and under-exposures by learning to estimate and correct such color shifts.
arXiv Detail & Related papers (2024-05-28T00:45:35Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
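One common way to encode strokes, shown as a hedged sketch below, is to rasterize them into a sparse color-hint map plus a binary mask and concatenate the result as extra conditioning channels; this is a standard scheme in user-guided colorization and not necessarily Ctrl Color's exact encoding.

```python
# Encode user strokes as a sparse color-hint map plus a mask (illustrative).
import torch

def encode_strokes(strokes, height: int, width: int) -> torch.Tensor:
    """strokes: list of (y, x, (r, g, b)) with colors in [0, 1]."""
    hint = torch.zeros(3, height, width)
    mask = torch.zeros(1, height, width)
    for y, x, rgb in strokes:
        hint[:, y, x] = torch.tensor(rgb)   # write the stroke color
        mask[0, y, x] = 1.0                 # mark where hints are present
    return torch.cat([hint, mask], dim=0)   # (4, H, W) conditioning tensor

cond = encode_strokes([(10, 20, (1.0, 0.2, 0.2)), (40, 50, (0.1, 0.4, 1.0))], 64, 64)
```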
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models [12.897939032560537]
We propose a new method called DiffColor to recover vivid colors conditioned on a prompt text.
We first fine-tune a pre-trained text-to-image model to generate colorized images using a CLIP-based contrastive loss.
Then we obtain an optimized text embedding that aligns the colorized image with the text prompt, and a fine-tuned diffusion model that enables high-quality image reconstruction.
Our method can produce vivid and diverse colors with a few iterations, and keep the structure and background intact while having colors well-aligned with the target language guidance.
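A minimal sketch of the embedding-optimization step, assuming a CLIP-style cosine objective; the tensors and loss below are illustrative stand-ins, not DiffColor's training code.

```python
# Optimize a text embedding against a (stand-in) CLIP image embedding.
import torch
import torch.nn.functional as F

image_feat = torch.randn(1, 512)                  # stand-in for a CLIP image embedding
embed = torch.randn(1, 512, requires_grad=True)   # text embedding being optimized
opt = torch.optim.Adam([embed], lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    loss = 1 - F.cosine_similarity(embed, image_feat).mean()  # align embedding and image
    loss.backward()
    opt.step()
# The optimized embedding then conditions the fine-tuned diffusion model
# for high-quality, color-aligned reconstruction.
```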
arXiv Detail & Related papers (2023-08-03T09:38:35Z)
- Learning RAW-to-sRGB Mappings with Inaccurately Aligned Supervision [76.41657124981549]
This paper presents a joint learning model for image alignment and RAW-to-sRGB mapping.
Experiments show that our method performs favorably against state-of-the-art methods on the ZRR and SR-RAW datasets.
arXiv Detail & Related papers (2021-08-18T12:41:36Z)