Color3D: Controllable and Consistent 3D Colorization with Personalized Colorizer
- URL: http://arxiv.org/abs/2510.10152v1
- Date: Sat, 11 Oct 2025 10:21:19 GMT
- Title: Color3D: Controllable and Consistent 3D Colorization with Personalized Colorizer
- Authors: Yecong Wan, Mingwen Shao, Renlong Wu, Wangmeng Zuo
- Abstract summary: We present Color3D, a highly adaptable framework for colorizing both static and dynamic 3D scenes from monochromatic inputs. Our approach is able to preserve color diversity and steerability while ensuring cross-view and cross-time consistency.
- Score: 58.94607850223466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we present Color3D, a highly adaptable framework for colorizing both static and dynamic 3D scenes from monochromatic inputs, delivering visually diverse and chromatically vibrant reconstructions with flexible user-guided control. In contrast to existing methods that focus solely on static scenarios and enforce multi-view consistency by averaging color variations which inevitably sacrifice both chromatic richness and controllability, our approach is able to preserve color diversity and steerability while ensuring cross-view and cross-time consistency. In particular, the core insight of our method is to colorize only a single key view and then fine-tune a personalized colorizer to propagate its color to novel views and time steps. Through personalization, the colorizer learns a scene-specific deterministic color mapping underlying the reference view, enabling it to consistently project corresponding colors to the content in novel views and video frames via its inherent inductive bias. Once trained, the personalized colorizer can be applied to infer consistent chrominance for all other images, enabling direct reconstruction of colorful 3D scenes with a dedicated Lab color space Gaussian splatting representation. The proposed framework ingeniously recasts complicated 3D colorization as a more tractable single image paradigm, allowing seamless integration of arbitrary image colorization models with enhanced flexibility and controllability. Extensive experiments across diverse static and dynamic 3D colorization benchmarks substantiate that our method can deliver more consistent and chromatically rich renderings with precise user control. Project Page https://yecongwan.github.io/Color3D/.
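The core insight of the abstract — colorize one key view, then "personalize" a colorizer so it learns a scene-specific deterministic mapping from luminance to chrominance and reuses it on novel views — can be illustrated with a deliberately simplified sketch. All function names and the toy mapping below are hypothetical stand-ins, not the authors' implementation; the point is only how a memorized per-scene mapping yields cross-view color consistency.

```python
# Toy sketch of the Color3D idea (hypothetical names, not the paper's code):
# 1) colorize a single key view with any image colorizer,
# 2) "personalize" by learning the scene-specific luminance -> chrominance map,
# 3) propagate that map to novel views so shared content gets shared color.

def colorize_key_view(gray_view):
    """Stand-in for an arbitrary off-the-shelf colorizer: returns one
    (a, b) chrominance pair per pixel via a toy deterministic rule."""
    return [(g % 128, (2 * g) % 128) for g in gray_view]

def personalize(gray_view, ab_view):
    """'Fine-tune' the colorizer on the reference view: here, simply
    memorize the observed luminance -> chrominance mapping."""
    return dict(zip(gray_view, ab_view))

def propagate(mapping, novel_gray_view):
    """Apply the personalized mapping to a novel view or frame; pixels
    with the same luminance (same content) receive identical chrominance."""
    return [mapping[g] for g in novel_gray_view if g in mapping]

key = [10, 50, 90, 130]        # grayscale key view (toy 1-D "image")
ab = colorize_key_view(key)    # user-controllable single-image colorization
mapping = personalize(key, ab) # scene-specific deterministic color mapping
novel = [90, 10, 130]          # same content observed from another view
print(propagate(mapping, novel))
```

In the actual method the mapping is carried by a fine-tuned neural colorizer's inductive bias rather than a lookup table, and the propagated chrominance supervises a Lab-space Gaussian splatting reconstruction; the sketch only shows why a deterministic per-scene mapping removes the need to average colors across views.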
Related papers
- VIRGi: View-dependent Instant Recoloring of 3D Gaussian Splats [53.602701067430075]
We introduce VIRGi, a novel approach for rapidly editing the color of scenes modeled by 3DGS. By fine-tuning the weights of a single user, the color edits are seamlessly propagated to the entire scene in just two seconds. An exhaustive validation on diverse datasets demonstrates significant quantitative and qualitative advancements over competitors.
arXiv Detail & Related papers (2026-03-03T13:41:17Z) - LoGoColor: Local-Global 3D Colorization for 360° Scenes [29.177641673340137]
Single-channel 3D reconstruction is widely used in fields such as robotics and medical imaging. Recent 3D colorization studies address this problem by distilling 2D image colorization models. We propose LoGoColor, a pipeline designed to preserve color diversity by eliminating the guidance-averaging process.
arXiv Detail & Related papers (2025-12-10T03:03:38Z) - Follow-Your-Color: Multi-Instance Sketch Colorization [44.72374445094054]
Follow-Your-Color is a diffusion-based framework for multi-instance sketch colorization. Our model critically automates the colorization process with zero manual adjustments.
arXiv Detail & Related papers (2025-03-21T08:53:14Z) - Leveraging Semantic Attribute Binding for Free-Lunch Color Control in Diffusion Models [53.73253164099701]
We introduce ColorWave, a training-free approach that achieves exact RGB-level color control in diffusion models without fine-tuning. We demonstrate that ColorWave establishes a new paradigm for structured, color-consistent diffusion-based image synthesis.
arXiv Detail & Related papers (2025-03-12T21:49:52Z) - DreamColour: Controllable Video Colour Editing without Training [80.90808879991182]
We present a training-free framework that makes precise video colour editing accessible through an intuitive interface. By decoupling spatial and temporal aspects of colour editing, we can better align with users' natural workflow. Our approach matches or exceeds state-of-the-art methods while eliminating the need for training or specialized hardware.
arXiv Detail & Related papers (2024-12-06T16:57:54Z) - Paint Bucket Colorization Using Anime Character Color Design Sheets [72.66788521378864]
We introduce inclusion matching, which allows the network to understand the relationships between segments.
Our network's training pipeline significantly improves performance in both colorization and consecutive frame colorization.
To support our network's training, we have developed a unique dataset named PaintBucket-Character.
arXiv Detail & Related papers (2024-10-25T09:33:27Z) - Transforming Color: A Novel Image Colorization Method [8.041659727964305]
This paper introduces a novel method for image colorization that utilizes a color transformer and generative adversarial networks (GANs).
The proposed method integrates a transformer architecture to capture global information and a GAN framework to improve visual quality.
Experimental results show that the proposed network significantly outperforms other state-of-the-art colorization techniques.
arXiv Detail & Related papers (2024-10-07T07:23:42Z) - Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z) - Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z) - ChromaDistill: Colorizing Monochrome Radiance Fields with Knowledge Distillation [23.968181738235266]
We present a method for generating colorized novel views from input grayscale multi-view images. We propose a distillation-based method that transfers color from networks trained on natural images to the target 3D representation. Our method is agnostic to the underlying 3D representation and easily generalizable to NeRF and 3DGS methods.
arXiv Detail & Related papers (2023-09-14T12:30:48Z) - Video Colorization with Pre-trained Text-to-Image Diffusion Models [19.807766482434563]
We present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization.
We propose two novel techniques to enhance the temporal coherence and maintain the vividness of colorization across frames.
arXiv Detail & Related papers (2023-06-02T17:58:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.