Colorizing Monochromatic Radiance Fields
- URL: http://arxiv.org/abs/2402.12184v1
- Date: Mon, 19 Feb 2024 14:47:23 GMT
- Title: Colorizing Monochromatic Radiance Fields
- Authors: Yean Cheng, Renjie Wan, Shuchen Weng, Chengxuan Zhu, Yakun Chang,
Boxin Shi
- Abstract summary: We consider reproducing color from monochromatic radiance fields as a representation-prediction task in the Lab color space.
By first constructing the luminance and density representation using monochromatic images, our prediction stage can recreate color representation on the basis of an image colorization module.
We then reproduce a colorful implicit model through the representation of luminance, density, and color.
- Score: 55.695149357101755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though Neural Radiance Fields (NeRF) can produce colorful 3D representations
of the world by using a set of 2D images, such ability becomes non-existent
when only monochromatic images are provided. Since color is necessary in
representing the world, reproducing color from monochromatic radiance fields
becomes crucial. To achieve this goal, instead of manipulating the
monochromatic radiance fields directly, we consider it as a
representation-prediction task in the Lab color space. By first constructing
the luminance and density representation using monochromatic images, our
prediction stage can recreate color representation on the basis of an image
colorization module. We then reproduce a colorful implicit model through the
representation of luminance, density, and color. Extensive experiments have
been conducted to validate the effectiveness of our approaches. Our project
page: https://liquidammonia.github.io/color-nerf.
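As a rough illustration of the representation-prediction idea in Lab space: a monochromatic input fixes the luminance (L) channel, so a colorization module only needs to predict the chrominance (a, b) channels. A minimal numpy sketch of the sRGB-to-Lab split is shown below; this is the standard color-space conversion only, not the paper's learned module.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image with values in [0, 1] to CIE Lab (D65 white)."""
    # sRGB -> linear RGB (inverse gamma)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ (standard sRGB matrix, D65 illuminant)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalize by reference white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# A monochrome capture supplies only the L channel; a and b are near zero
# for gray pixels and are what the colorization stage must predict.
gray = np.full((2, 2, 3), 0.5)
lab = rgb_to_lab(gray)
L_channel = lab[..., 0]   # luminance: what the monochrome sensor sees
ab_channels = lab[..., 1:]  # chrominance: to be predicted
```

For a pure gray image the (a, b) channels are zero by construction, which is why Lab is a natural space for this task: the observed and predicted quantities are cleanly separated.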
Related papers
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- Improved Diffusion-based Image Colorization via Piggybacked Models [19.807766482434563]
We introduce a colorization model piggybacking on the existing powerful T2I diffusion model.
A diffusion guider is designed to incorporate the pre-trained weights of the latent diffusion model.
A lightness-aware VQVAE will then generate the colorized result with pixel-perfect alignment to the given grayscale image.
arXiv Detail & Related papers (2023-04-21T16:23:24Z)
- Behind the Scenes: Density Fields for Single View Reconstruction [63.40484647325238]
Inferring meaningful geometric scene representation from a single image is a fundamental problem in computer vision.
We propose to predict implicit density fields. A density field maps every location in the frustum of the input image to volumetric density.
We show that our method is able to predict meaningful geometry for regions that are occluded in the input image.
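Density fields of this kind are rendered by compositing samples along camera rays, as in NeRF-style volume rendering, where each sample's contribution depends only on the predicted densities and the sample spacing. A small numpy sketch of those compositing weights (a generic illustration, not this paper's implementation):

```python
import numpy as np

def render_weights(densities, deltas):
    """Volume-rendering weights from per-sample densities along one ray.

    densities: (N,) nonnegative volumetric density at each sample
    deltas:    (N,) distance between consecutive samples
    """
    # Opacity contributed by each segment
    alpha = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return alpha * trans
```

The weights sum to 1 - exp(-Σ σᵢ δᵢ), so empty space (zero density) contributes nothing and a dense occluder absorbs all subsequent samples, which is what lets such a model reason about occluded geometry.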
arXiv Detail & Related papers (2023-01-18T17:24:01Z)
- BigColor: Colorization using a Generative Color Prior for Natural Images [28.42665080958172]
We propose BigColor, a novel colorization approach that provides vivid colorization for diverse in-the-wild images with complex structures.
Our method enables robust colorization for diverse inputs in a single forward pass, supports arbitrary input resolutions, and provides multi-modal colorization results.
arXiv Detail & Related papers (2022-07-20T06:36:46Z)
- Towards Photorealistic Colorization by Imagination [48.82757902812846]
We present a novel approach to automatic image colorization by imitating the imagination process of human experts.
Our work produces more colorful and diverse results than state-of-the-art image colorization methods.
arXiv Detail & Related papers (2021-08-20T14:28:37Z)
- Guided Colorization Using Mono-Color Image Pairs [6.729108277517129]
Monochrome images usually have a better signal-to-noise ratio (SNR) and richer textures due to their higher quantum efficiency.
We propose a mono-color image enhancement algorithm that colorizes the monochrome image using its paired color image.
Experimental results show that our algorithm can efficiently restore color images with higher SNR and richer details from mono-color image pairs.
arXiv Detail & Related papers (2021-08-17T07:00:28Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behavior by letting our network first learn to understand the photo, then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
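For contrast with ColorCNN, which learns its palette end-to-end from the classification loss, the classic baseline for reducing an image to k colors is k-means quantization in RGB space. A minimal sketch (plain k-means, not the paper's learned method):

```python
import numpy as np

def quantize_colors(img, k=2, iters=10, seed=0):
    """Quantize an H x W x 3 image to at most k colors with plain k-means."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(np.float64)
    # Initialize centers from k randomly chosen pixels
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(0)
    return centers[labels].reshape(img.shape), centers
```

With k = 2 this reproduces the "1-bit color space" setting the entry describes; the difference is that k-means optimizes reconstruction error, whereas ColorCNN optimizes downstream classification accuracy.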
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.