Log NeRF: Comparing Spaces for Learning Radiance Fields
- URL: http://arxiv.org/abs/2512.09375v1
- Date: Wed, 10 Dec 2025 07:12:33 GMT
- Title: Log NeRF: Comparing Spaces for Learning Radiance Fields
- Authors: Sihe Chen, Luv Verma, Bruce A. Maxwell
- Abstract summary: Neural Radiance Fields (NeRF) have achieved remarkable results in novel view synthesis. Inspired by the Bi-Illuminant Dichromatic Reflection (BIDR) model, we hypothesize that log RGB space enables NeRF to learn a more compact and effective representation of scene appearance. We trained NeRF models under various color space interpretations, converting each network output to a common color space before rendering and loss computation, thereby enforcing representation learning in different color spaces.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) have achieved remarkable results in novel view synthesis, typically using sRGB images for supervision. However, little attention has been paid to the color space in which the network learns the radiance field representation. Inspired by the Bi-Illuminant Dichromatic Reflection (BIDR) model, which suggests that a logarithmic transformation simplifies the separation of illumination and reflectance, we hypothesize that log RGB space enables NeRF to learn a more compact and effective representation of scene appearance. To test this, we captured approximately 30 videos using a GoPro camera, ensuring linear data recovery through inverse encoding. We trained NeRF models under various color space interpretations (linear, sRGB, GPLog, and log RGB) by converting each network output to a common color space before rendering and loss computation, enforcing representation learning in different color spaces. Quantitative and qualitative evaluations demonstrate that using a log RGB color space consistently improves rendering quality, exhibits greater robustness across scenes, and performs particularly well in low-light conditions while using the same bit-depth input images. Further analysis across different network sizes and NeRF variants confirms the generalization and stability of the log space advantage.
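The conversion scheme described in the abstract (the network predicts in one color space, outputs are mapped to a common space before the photometric loss) can be illustrated with a minimal sketch. The function names, the epsilon offset, and the use of a simple MSE loss are illustrative assumptions, not the paper's exact implementation; the standard sRGB and natural-log transforms stand in for the paper's GPLog and log RGB variants.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the standard sRGB transfer function to recover linear RGB."""
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045, srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

def linear_to_log(linear, eps=1e-4):
    """Map linear RGB into log space; eps avoids log(0) in dark pixels."""
    return np.log(np.asarray(linear, dtype=np.float64) + eps)

def log_to_linear(log_rgb, eps=1e-4):
    """Invert the log mapping back to linear RGB for rendering and loss."""
    return np.exp(np.asarray(log_rgb, dtype=np.float64)) - eps

def photometric_loss(pred_log_rgb, gt_srgb):
    """Sketch of the training-time conversion: the network predicts
    radiance in log space, and both prediction and ground truth are
    converted to a common (linear) space before the loss is computed."""
    pred_linear = log_to_linear(pred_log_rgb)
    gt_linear = srgb_to_linear(gt_srgb)
    return float(np.mean((pred_linear - gt_linear) ** 2))
```

Because the gradient of the loss flows back through `log_to_linear`, the network is effectively trained to represent the scene in log space even though supervision happens in a common space, which is the mechanism the paper uses to compare color spaces fairly.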
Related papers
- HVI: A New Color Space for Low-light Image Enhancement [58.8280819306909]
We propose a new color space for Low-Light Image Enhancement (LLIE) based on Horizontal/Vertical-Intensity (HVI). HVI is defined by polarized HS maps and a learnable intensity; the latter compresses low-light regions to remove black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is introduced.
arXiv Detail & Related papers (2025-02-27T16:59:51Z)
- Leveraging Color Channel Independence for Improved Unsupervised Object Detection [7.030688465389997]
We challenge the common assumption that RGB images are the optimal color space for unsupervised learning in computer vision. We show that models improve when required to predict additional color channels, and that composite color spaces can be implemented with essentially no computational overhead.
arXiv Detail & Related papers (2024-12-19T18:28:37Z)
- Towards virtual painting recolouring using Vision Transformer on X-Ray Fluorescence datacubes [80.32085982862151]
We define a pipeline to perform virtual painting recolouring using raw data of X-Ray Fluorescence (XRF) analysis on pictorial artworks.
To circumvent the small dataset size, we generate a synthetic dataset, starting from a database of XRF spectra.
We define a Deep Variational Embedding network to embed the XRF spectra into a lower dimensional, K-Means friendly, metric space.
arXiv Detail & Related papers (2024-10-11T14:05:28Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Rethinking RGB Color Representation for Image Restoration Models [55.81013540537963]
We augment the representation to hold structural information of local neighborhoods at each pixel.
Substituting the underlying representation space for the per-pixel losses facilitates the training of image restoration models.
Our space consistently improves overall metrics by reconstructing both color and local structures.
arXiv Detail & Related papers (2024-02-05T06:38:39Z)
- Training Neural Networks on RAW and HDR Images for Restoration Tasks [53.84872583527721]
We study how neural networks should be trained for tasks on RAW and HDR images in linear color spaces. Our results indicate that neural networks train significantly better when HDR and RAW images are represented in display-encoded color spaces. This small change to the training strategy can bring a very substantial gain in performance, between 2 and 9 dB.
arXiv Detail & Related papers (2023-12-06T17:47:16Z)
- NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects [63.04781030984006]
Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene.
We address the limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate our model based on the novel view synthesis quality with a self-collected dataset of different moving specular objects in realistic environments.
arXiv Detail & Related papers (2023-03-25T11:03:53Z)
- DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models [5.255302402546892]
We learn a prior over scene geometry and color, using a denoising diffusion model (DDM)
We show that these gradients of logarithms of RGBD patch priors serve to regularize the geometry and color of a scene.
Evaluations on LLFF, the most relevant dataset, show that our learned prior achieves improved quality in both the reconstructed geometry and the synthesized novel views.
arXiv Detail & Related papers (2023-02-23T18:52:28Z) - EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense and novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z)
- The Utility of Decorrelating Colour Spaces in Vector Quantised Variational Autoencoders [1.7792264784100689]
We propose colour space conversion to encourage a network to learn structured representations. We trained several instances of VQ-VAE whose input is an image in one colour space and whose output is in another.
arXiv Detail & Related papers (2020-09-30T07:44:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.