Compact Hadamard Latent Codes for Efficient Spectral Rendering
- URL: http://arxiv.org/abs/2602.18741v2
- Date: Wed, 25 Feb 2026 19:48:14 GMT
- Title: Compact Hadamard Latent Codes for Efficient Spectral Rendering
- Authors: Jiaqi Yu, Dar'ya Guarnera, Giuseppe Claudio Guarnera,
- Abstract summary: Spectral rendering accurately reproduces wavelength-dependent appearance but is computationally expensive. We propose Hadamard spectral codes, a compact latent representation that enables spectral rendering with standard RGB operations. Experiments on 3D scenes demonstrate that k = 6 significantly reduces color error compared to RGB baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectral rendering accurately reproduces wavelength-dependent appearance but is computationally expensive, as shading must be evaluated at many wavelength samples and scales roughly linearly with the number of samples. It also requires spectral textures and lights throughout the rendering pipeline. We propose Hadamard spectral codes, a compact latent representation that enables spectral rendering using standard RGB rendering operations. Spectral images are approximated with a small number of RGB rendering passes, followed by a decoding step. Our key requirement is latent linearity: scaling and addition in spectral space correspond to scaling and addition of codes, and the element-wise product of spectra (for example reflectance times illumination) is approximated by the element-wise product of their latent codes. We show that an exact low-dimensional algebra-preserving representation cannot exist for arbitrary spectra when the latent dimension k is smaller than the number of spectral samples n. We therefore introduce a learned non-negative linear encoder and decoder architecture that preserves scaling and addition exactly while encouraging approximate multiplicativity under the Hadamard product. With k = 6, we render k/3 = 2 RGB images per frame using an unmodified RGB renderer, reconstruct the latent image, and decode to high-resolution spectra or XYZ or RGB. Experiments on 3D scenes demonstrate that k = 6 significantly reduces color error compared to RGB baselines while being substantially faster than naive n-sample spectral rendering. Using k = 9 provides higher-quality reference results. We further introduce a lightweight neural upsampling network that maps RGB assets directly to latent codes, enabling integration of legacy RGB content into the spectral pipeline while maintaining perceptually accurate colors in rendered images.
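The latent algebra described above can be sketched in a few lines of NumPy. This is an illustration only, under stated assumptions: the matrices `E` and `D` stand in for the paper's learned non-negative encoder and decoder weights, so the sketch demonstrates only the exact scaling-and-addition property (which holds by linearity for any weights); the approximate Hadamard multiplicativity for reflectance-times-illumination would require trained weights.

```python
import numpy as np

# Hypothetical sketch of the latent algebra from the abstract:
# a non-negative linear encoder E (k x n) and decoder D (n x k).
# Random non-negative matrices stand in for learned weights.
rng = np.random.default_rng(0)
n, k = 31, 6            # spectral samples and latent dimension (k = 6 as in the paper)
E = rng.random((k, n))  # stand-in for the learned non-negative encoder
D = rng.random((n, k))  # stand-in for the learned non-negative decoder

def encode(spectrum):
    return E @ spectrum

def decode(code):
    return D @ code

s1, s2 = rng.random(n), rng.random(n)

# Scaling and addition are preserved exactly because encoding is linear.
assert np.allclose(encode(2.0 * s1 + s2), 2.0 * encode(s1) + encode(s2))

# Reflectance-times-illumination is approximated in latent space by the
# element-wise (Hadamard) product of codes. With untrained weights this
# only shows the intended operation, not its accuracy.
c_prod = encode(s1) * encode(s2)
spectrum_approx = decode(c_prod)   # decoded back to an n-sample spectrum
```

With k = 6 the latent image holds two RGB triples per pixel, which is why the abstract's k/3 = 2 RGB passes through an unmodified renderer suffice before the decoding step.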
Related papers
- Quantile Rendering: Efficiently Embedding High-dimensional Feature on 3D Gaussian Splatting [52.18697134979677]
Recent advancements in computer vision have successfully extended open-vocabulary segmentation (OVS) to the 3D domain by leveraging 3D Gaussian Splatting (3D-GS). Existing methods employ codebooks or feature compression, causing information loss and thereby degrading segmentation quality. We introduce Quantile Rendering (Q-Render), a novel rendering strategy for 3D Gaussians that efficiently handles high-dimensional features while maintaining high fidelity. Our framework outperforms state-of-the-art methods, while enabling real-time rendering with an approximate 43.7x speedup on 512-D feature maps.
arXiv Detail & Related papers (2025-12-24T04:16:18Z) - RGB Pre-Training Enhanced Unobservable Feature Latent Diffusion Model for Spectral Reconstruction [16.54284634377436]
We propose a two-stage pipeline consisting of spectral structure representation learning and spectral-spatial joint distribution learning. In the first stage, a spectral unobservable feature autoencoder (SpeUAE) is trained to extract and compress the unobservable feature into a 3D manifold aligned with RGB space. The ULDM is then acquired to model the distribution of the coded unobservable feature with guidance from the corresponding RGB images.
arXiv Detail & Related papers (2025-07-17T10:07:32Z) - EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering. Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering. We show that our method is more accurate with respect to blending issues than 3DGS and follow-up work on view synthesis.
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
arXiv Detail & Related papers (2024-07-09T17:59:30Z) - Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM [58.736472371951955]
We introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface.
This enables a more accurate rendering of depth, subsequently improving the performance of image warping techniques.
Our integrated approach of TT and HO achieves state-of-the-art performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-20T18:03:17Z) - SpectralNeRF: Physically Based Spectral Rendering with Neural Radiance Field [70.15900280156262]
We propose an end-to-end Neural Radiance Field (NeRF)-based architecture for high-quality physically based rendering from a novel spectral perspective.
SpectralNeRF is superior to recent NeRF-based methods when synthesizing new views on synthetic and real datasets.
arXiv Detail & Related papers (2023-12-14T07:19:31Z) - SpectralGPT: Spectral Remote Sensing Foundation Model [60.023956954916414]
A universal RS foundation model, named SpectralGPT, is purpose-built to handle spectral RS images using a novel 3D generative pretrained transformer (GPT)
Compared to existing foundation models, SpectralGPT accommodates input images with varying sizes, resolutions, time series, and regions in a progressive training fashion, enabling full utilization of extensive RS big data.
Our evaluation highlights significant performance improvements with pretrained SpectralGPT models, signifying substantial potential in advancing spectral RS big data applications within the field of geoscience.
arXiv Detail & Related papers (2023-11-13T07:09:30Z) - A novel approach for holographic 3D content generation without depth map [2.905273049932301]
We propose a deep learning-based method to synthesize the volumetric digital holograms using only the given RGB image.
Through experiments, we demonstrate that the volumetric hologram generated through our proposed model is more accurate than that of competitive models.
arXiv Detail & Related papers (2023-09-26T14:37:31Z) - Continuous Spectral Reconstruction from RGB Images via Implicit Neural Representation [43.622087181097164]
Existing methods for spectral reconstruction usually learn a discrete mapping from RGB images to a number of spectral bands.
We propose Neural Spectral Reconstruction (NeSR) to lift this limitation, by introducing a novel continuous spectral representation.
NeSR extends the flexibility of spectral reconstruction by enabling an arbitrary number of spectral bands as the target output.
arXiv Detail & Related papers (2021-12-24T09:08:23Z) - Tuning IR-cut Filter for Illumination-aware Spectral Reconstruction from RGB [84.1657998542458]
It has been proven that the reconstruction accuracy relies heavily on the spectral response of the RGB camera in use.
This paper explores the filter-array based color imaging mechanism of existing RGB cameras, and proposes to design the IR-cut filter properly for improved spectral recovery.
arXiv Detail & Related papers (2021-03-26T19:42:21Z) - Learning to Enhance Visual Quality via Hyperspectral Domain Mapping [8.365634649800658]
SpecNet computes spectral profile to estimate pixel-wise dynamic range adjustment of a given image.
We incorporate a self-supervision and a spectral profile regularization network to infer a plausible HSI from an RGB image.
arXiv Detail & Related papers (2021-02-10T13:27:34Z) - Fast Hyperspectral Image Recovery via Non-iterative Fusion of Dual-Camera Compressive Hyperspectral Imaging [22.683482662362337]
Coded aperture snapshot spectral imaging (CASSI) is a promising technique to capture the three-dimensional hyperspectral image (HSI).
Various regularizers have been exploited to reconstruct the 3D data from the 2D measurement.
One feasible solution is to utilize additional information such as the RGB measurement in CASSI.
arXiv Detail & Related papers (2020-12-30T10:29:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.