Multispectral-NeRF: a multispectral modeling approach based on neural radiance fields
- URL: http://arxiv.org/abs/2509.11169v1
- Date: Sun, 14 Sep 2025 09:04:35 GMT
- Title: Multispectral-NeRF: a multispectral modeling approach based on neural radiance fields
- Authors: Hong Zhang, Fei Guo, Zihan Xie, Dizhao Yao
- Abstract summary: 3D reconstruction techniques based on 2D images typically rely on RGB spectral information. Additional spectral bands beyond RGB have been increasingly incorporated into 3D reconstruction. Existing methods that integrate these expanded spectral data often suffer from high hardware costs, low accuracy, and poor geometric fidelity. We propose Multispectral-NeRF, an enhanced neural architecture derived from NeRF that can effectively integrate multispectral information.
- Score: 3.606065291262699
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D reconstruction technology generates three-dimensional representations of real-world objects, scenes, or environments from sensor data such as 2D images, with extensive applications in robotics, autonomous vehicles, and virtual reality systems. Traditional 3D reconstruction techniques based on 2D images typically rely on RGB spectral information. With advances in sensor technology, additional spectral bands beyond RGB have been increasingly incorporated into 3D reconstruction workflows. Existing methods that integrate these expanded spectral data often suffer from high hardware costs, low accuracy, and poor geometric fidelity. Three-dimensional reconstruction based on NeRF can effectively address these issues, producing high-precision, high-quality reconstruction results. However, NeRF and improved models such as NeRFacto are currently trained on three-band data and cannot exploit multi-band information. To address this problem, we propose Multispectral-NeRF, an enhanced neural architecture derived from NeRF that effectively integrates multispectral information. Our technical contribution comprises three modifications: expanding hidden layer dimensionality to accommodate 6-band spectral inputs; redesigning the residual function to optimize spectral discrepancy calculations between reconstructed and reference images; and adapting the data compression modules to handle the increased bit depth of multispectral imagery. Experimental results confirm that Multispectral-NeRF successfully processes multi-band spectral features while accurately preserving the original scenes' spectral characteristics.
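The three modifications listed in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the layer width, band count, bit depth, and function names are all assumptions chosen to show the idea of a 6-band output head, a per-band spectral residual, and bit-depth-aware normalization.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper)
N_BANDS = 6      # e.g. RGB plus three additional spectral bands
HIDDEN = 256     # widened hidden layer carrying 6-band features
BIT_DEPTH = 16   # multispectral sensors often exceed 8-bit RGB

rng = np.random.default_rng(0)

# (1) Expanded output head: hidden features -> 6 per-band radiance values
W_out = rng.normal(0.0, 0.05, size=(HIDDEN, N_BANDS))

def radiance_head(h):
    """Map hidden feature vectors to per-band radiance in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(h @ W_out)))  # sigmoid keeps bands bounded

# (2) Spectral residual: squared error summed over all 6 bands,
#     averaged over sample points
def spectral_residual(pred, target):
    return float(np.mean(np.sum((pred - target) ** 2, axis=-1)))

# (3) Bit-depth-aware normalization for 16-bit multispectral pixels
def normalize(pixels, bit_depth=BIT_DEPTH):
    return pixels.astype(np.float64) / (2 ** bit_depth - 1)

h = rng.normal(size=(4, HIDDEN))              # 4 sample points along rays
pred = radiance_head(h)                       # shape (4, 6)
raw = rng.integers(0, 2 ** BIT_DEPTH, size=(4, N_BANDS))
target = normalize(raw)                       # reference pixels in [0, 1]
loss = spectral_residual(pred, target)
print(pred.shape, loss >= 0.0)
```

In a real NeRF pipeline the head would sit on top of the positional-encoding MLP and the residual would be accumulated over rendered rays; the sketch only shows how the band dimension threads through the three changes.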
Related papers
- 3D Reconstruction from Transient Measurements with Time-Resolved Transformer [48.73999376279579]
We propose a generic Time-Resolved Transformer (TRT) architecture to boost 3D reconstruction performance in photon-efficient imaging. We develop two task-specific embodiments: TRT-LOS for line-of-sight (LOS) imaging and TRT-NLOS for non-line-of-sight (NLOS) imaging. In addition, we contribute a large-scale, high-resolution synthetic LOS dataset with various noise levels and capture a set of real-world NLOS imaging measurements.
arXiv Detail & Related papers (2025-10-10T09:44:08Z)
- Multi-view 3D surface reconstruction from SAR images by inverse rendering [4.964816143841665]
We propose a new inverse rendering method for 3D reconstruction from unconstrained Synthetic Aperture Radar (SAR) images. Our method showcases the potential of exploiting geometric disparities in SAR images and paves the way for multi-sensor data fusion.
arXiv Detail & Related papers (2025-02-14T13:19:32Z)
- SuperNeRF-GAN: A Universal 3D-Consistent Super-Resolution Framework for Efficient and Enhanced 3D-Aware Image Synthesis [59.73403876485574]
We propose SuperNeRF-GAN, a universal framework for 3D-consistent super-resolution. A key highlight of SuperNeRF-GAN is its seamless integration with NeRF-based 3D-aware image synthesis methods. Experimental results demonstrate the superior efficiency, 3D-consistency, and quality of our approach.
arXiv Detail & Related papers (2025-01-12T10:31:33Z)
- Unsupervised Multi-Parameter Inverse Solving for Reducing Ring Artifacts in 3D X-Ray CBCT [51.95884144860506]
Ring artifacts are prevalent in 3D cone-beam computed tomography (CBCT). Existing state-of-the-art (SOTA) ring artifact reduction (RAR) methods rely on supervised learning with large-scale paired CT datasets. In this work, we propose Riner, a new unsupervised RAR method.
arXiv Detail & Related papers (2024-12-08T08:22:58Z)
- Towards More Accurate Fake Detection on Images Generated from Advanced Generative and Neural Rendering Models [14.867842273942188]
We propose an unsupervised training technique that enables the model to extract comprehensive features from the Fourier spectrum magnitude.
We develop a comprehensive database that includes images generated by diverse neural rendering techniques, providing a robust foundation for evaluating and advancing detection methods.
arXiv Detail & Related papers (2024-11-13T14:32:28Z)
- UlRe-NeRF: 3D Ultrasound Imaging through Neural Rendering with Ultrasound Reflection Direction Parameterization [0.5837446811360741]
Traditional 3D ultrasound imaging methods have limitations such as fixed resolution, low storage efficiency, and insufficient contextual connectivity.
We propose a new model, UlRe-NeRF, which combines implicit neural networks and explicit ultrasound rendering architecture.
Experimental results demonstrate that the UlRe-NeRF model significantly enhances the realism and accuracy of high-fidelity ultrasound image reconstruction.
arXiv Detail & Related papers (2024-08-01T18:22:29Z)
- Hyperspectral Neural Radiance Fields [11.485829401765521]
We propose a hyperspectral 3D reconstruction method using Neural Radiance Fields (NeRFs).
NeRFs have seen widespread success in creating high quality volumetric 3D representations of scenes captured by a variety of camera models.
We show that our hyperspectral NeRF approach enables creating fast, accurate volumetric 3D hyperspectral scenes.
arXiv Detail & Related papers (2024-03-21T21:18:08Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- SpectralGPT: Spectral Remote Sensing Foundation Model [60.023956954916414]
A universal RS foundation model, named SpectralGPT, is purpose-built to handle spectral RS images using a novel 3D generative pretrained transformer (GPT).
Compared to existing foundation models, SpectralGPT accommodates input images with varying sizes, resolutions, time series, and regions in a progressive training fashion, enabling full utilization of extensive RS big data.
Our evaluation highlights significant performance improvements with pretrained SpectralGPT models, signifying substantial potential in advancing spectral RS big data applications within the field of geoscience.
arXiv Detail & Related papers (2023-11-13T07:09:30Z)
- Spec-NeRF: Multi-spectral Neural Radiance Fields [9.242830798112855]
We propose Multi-spectral Neural Radiance Fields (Spec-NeRF) for jointly reconstructing a multispectral radiance field and the spectral sensitivity functions (SSFs) of the camera from a set of color images filtered by different filters.
Our experiments on both synthetic and real scenario datasets demonstrate that utilizing filtered RGB images with learnable NeRF and SSFs can achieve high fidelity and promising spectral reconstruction.
arXiv Detail & Related papers (2023-09-14T16:17:55Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatially and spectrally separable 3D convolution to extract spatial and spectral information, which not only reduces prohibitive memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
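The separable 3D convolution described in the SSRNet summary above trades one k x k x k kernel for a 1 x k x k spatial kernel followed by a k x 1 x 1 spectral kernel. A rough parameter count shows why this is cheaper; the channel sizes below are illustrative assumptions, not values from the paper:

```python
# Parameter count of a full 3D convolution vs. a spatial/spectral
# separable factorization (channel sizes are illustrative assumptions).

def full_3d_params(c_in, c_out, k):
    # One dense k x k x k kernel per (input, output) channel pair
    return c_in * c_out * k * k * k

def separable_3d_params(c_in, c_out, k):
    spatial = c_in * c_out * 1 * k * k    # 1 x k x k over (band, H, W)
    spectral = c_out * c_out * k * 1 * 1  # k x 1 x 1 along the band axis
    return spatial + spectral

c_in, c_out, k = 64, 64, 3
full = full_3d_params(c_in, c_out, k)
sep = separable_3d_params(c_in, c_out, k)
print(full, sep, round(full / sep, 2))  # prints "110592 49152 2.25"
```

The saving grows with kernel size, and the factorized form also lets the network apply nonlinearities between the spatial and spectral stages.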
This list is automatically generated from the titles and abstracts of the papers in this site.