Hyperspectral Gaussian Splatting
- URL: http://arxiv.org/abs/2505.21890v1
- Date: Wed, 28 May 2025 02:07:52 GMT
- Title: Hyperspectral Gaussian Splatting
- Authors: Sunil Kumar Narayanan, Lingjun Zhao, Lu Gan, Yongsheng Chen
- Abstract summary: 3D reconstruction methods have been used to create implicit neural representations of hyperspectral scenes. NeRF is a cutting-edge implicit representation that can render hyperspectral channel compositions of each spatial location from any viewing direction. We propose Hyperspectral Gaussian Splatting (HS-GS) to enable 3D explicit reconstruction of hyperspectral scenes and novel view synthesis for the entire spectral range.
- Score: 9.744861764579706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral imaging (HSI) has been widely used in agricultural applications for non-destructive estimation of plant nutrient composition and precise determination of nutritional elements in samples. Recently, 3D reconstruction methods have been used to create implicit neural representations of HSI scenes, which can help localize the target object's nutrient composition spatially and spectrally. Neural Radiance Field (NeRF) is a cutting-edge implicit representation that can render hyperspectral channel compositions of each spatial location from any viewing direction. However, it faces limitations in training time and rendering speed. In this paper, we propose Hyperspectral Gaussian Splatting (HS-GS), which combines the state-of-the-art 3D Gaussian Splatting (3DGS) with a diffusion model to enable 3D explicit reconstruction of the hyperspectral scenes and novel view synthesis for the entire spectral range. To enhance the model's ability to capture fine-grained reflectance variations across the light spectrum and leverage correlations between adjacent wavelengths for denoising, we introduce a wavelength encoder to generate wavelength-specific spherical harmonics offsets. We also introduce a novel Kullback--Leibler divergence-based loss to mitigate the spectral distribution gap between the rendered image and the ground truth. A diffusion model is further applied for denoising the rendered images and generating photorealistic hyperspectral images. We present extensive evaluations on five diverse hyperspectral scenes from the Hyper-NeRF dataset to show the effectiveness of our proposed HS-GS framework. The results demonstrate that HS-GS achieves new state-of-the-art performance among all previously published methods. Code will be released upon publication.
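The Kullback-Leibler loss mentioned in the abstract compares the spectral distributions of the rendered and ground-truth images. The paper's exact formulation is not reproduced here; the snippet below is only a minimal sketch of such a loss, where the tensor layout, the simple sum-to-one normalization, and the name spectral_kl_loss are illustrative assumptions rather than the authors' released code.

```python
# Minimal sketch of a KL-divergence-based spectral loss (assumptions noted above).
# Inputs are (B, C, H, W) hyperspectral tensors; each pixel's spectrum is
# normalized into a distribution over the C wavelength bands, and the per-pixel
# KL(target || rendered) is averaged over all pixels.
import torch

def spectral_kl_loss(rendered: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-8) -> torch.Tensor:
    b, c, h, w = rendered.shape
    # Flatten to (num_pixels, C) so each row is one pixel's spectrum.
    q = rendered.permute(0, 2, 3, 1).reshape(-1, c).clamp_min(eps)
    p = target.permute(0, 2, 3, 1).reshape(-1, c).clamp_min(eps)
    # Normalize each spectrum into a probability distribution over wavelengths.
    p = p / p.sum(dim=-1, keepdim=True)
    q = q / q.sum(dim=-1, keepdim=True)
    # KL(p || q) per pixel, then mean over pixels.
    return (p * (p.log() - q.log())).sum(dim=-1).mean()

if __name__ == "__main__":
    gt = torch.rand(1, 128, 64, 64)                    # e.g. 128 spectral bands
    pred = (gt + 0.05 * torch.randn_like(gt)).clamp_min(0.0)
    print(spectral_kl_loss(pred, gt).item())
```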
Related papers
- CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding. Variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies. We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - PSRGS: Progressive Spectral Residual of 3D Gaussian for High-Frequency Recovery [3.310033172069517]
3D Gaussian Splatting (3D GS) achieves impressive results in novel view synthesis for small, single-object scenes. However, when applied to large-scale remote sensing scenes, 3D GS faces challenges. We propose PSRGS, a progressive optimization scheme based on spectral residual maps.
arXiv Detail & Related papers (2025-03-02T10:52:46Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - Beyond the Visible: Jointly Attending to Spectral and Spatial Dimensions with HSI-Diffusion for the FINCH Spacecraft [2.5057561650768814]
The FINCH mission aims to monitor crop residue cover in agricultural fields.
Hyperspectral imaging captures both spectral and spatial information.
It is prone to various types of noise, including random noise, stripe noise, and dead pixels.
arXiv Detail & Related papers (2024-06-15T19:34:18Z) - WE-GS: An In-the-wild Efficient 3D Gaussian Representation for Unconstrained Photo Collections [8.261637198675151]
Novel View Synthesis (NVS) from unconstrained photo collections is challenging in computer graphics.
We propose an efficient point-based differentiable rendering framework for scene reconstruction from photo collections.
Our approach outperforms existing approaches in the rendering quality of novel view and appearance synthesis, with high convergence and rendering speed.
arXiv Detail & Related papers (2024-06-04T15:17:37Z) - Hyperspectral Neural Radiance Fields [11.485829401765521]
We propose a hyperspectral 3D reconstruction approach using Neural Radiance Fields (NeRFs).
NeRFs have seen widespread success in creating high quality volumetric 3D representations of scenes captured by a variety of camera models.
We show that our hyperspectral NeRF approach enables creating fast, accurate volumetric 3D hyperspectral scenes.
arXiv Detail & Related papers (2024-03-21T21:18:08Z) - GS-IR: 3D Gaussian Splatting for Inverse Rendering [71.14234327414086]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS).
We extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions.
The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering.
arXiv Detail & Related papers (2023-11-26T02:35:09Z) - Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z) - Spectral Splitting and Aggregation Network for Hyperspectral Face Super-Resolution [82.59267937569213]
High-resolution (HR) hyperspectral face image plays an important role in face related computer vision tasks under uncontrolled conditions.
In this paper, we investigate how to adapt deep learning techniques to hyperspectral face image super-resolution (HFSR).
We present a spectral splitting and aggregation network (SSANet) for HFSR with limited training samples.
arXiv Detail & Related papers (2021-08-31T02:13:00Z) - Non-local Meets Global: An Iterative Paradigm for Hyperspectral Image Restoration [66.68541690283068]
We propose a unified paradigm combining the spatial and spectral properties for hyperspectral image restoration.
The proposed paradigm benefits from the performance superiority of non-local spatial denoising while keeping computational complexity light.
Experiments on HSI denoising, compressed reconstruction, and inpainting tasks, with both simulated and real datasets, demonstrate its superiority.
arXiv Detail & Related papers (2020-10-24T15:53:56Z) - Hyperspectral Image Denoising with Partially Orthogonal Matrix Vector Tensor Factorization [42.56231647066719]
Hyperspectral images (HSI) have some advantages over natural images for various applications due to the extra spectral information.
During acquisition, they are often contaminated by severe noise, including Gaussian noise, impulse noise, dead lines, and stripes.
We present an HSI restoration method named smooth and robust low-rank tensor recovery.
arXiv Detail & Related papers (2020-06-29T02:10:07Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains the high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)