Diffusion Denoised Hyperspectral Gaussian Splatting
- URL: http://arxiv.org/abs/2505.21890v2
- Date: Fri, 07 Nov 2025 00:23:58 GMT
- Title: Diffusion Denoised Hyperspectral Gaussian Splatting
- Authors: Sunil Kumar Narayanan, Lingjun Zhao, Lu Gan, Yongsheng Chen
- Abstract summary: 3D reconstruction methods have been used to create implicit neural representations of hyperspectral scenes. We propose Diffusion-Denoised Hyperspectral Gaussian Splatting (DD-HGS) to enable explicit 3D reconstruction of hyperspectral scenes.
- Score: 11.486860334986394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral imaging (HSI) has been widely used in agricultural applications for non-destructive estimation of plant nutrient composition and precise determination of nutritional elements of samples. Recently, 3D reconstruction methods have been used to create implicit neural representations of HSI scenes, which can help localize the target object's nutrient composition spatially and spectrally. Neural Radiance Field (NeRF) is a cutting-edge implicit representation that can be used to render hyperspectral channel compositions of each spatial location from any viewing direction. However, it faces limitations in training time and rendering speed. In this paper, we propose Diffusion-Denoised Hyperspectral Gaussian Splatting (DD-HGS), which enhances the state-of-the-art 3D Gaussian Splatting (3DGS) method with wavelength-aware spherical harmonics, a Kullback-Leibler divergence-based spectral loss, and a diffusion-based denoiser to enable 3D explicit reconstruction of hyperspectral scenes across the full spectral range. We present extensive evaluations on diverse real-world hyperspectral scenes from the Hyper-NeRF dataset to show the effectiveness of DD-HGS. The results demonstrate that DD-HGS achieves new state-of-the-art performance among previously published methods. Project page: https://dragonpg2000.github.io/DDHGS-website/
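The abstract names a Kullback-Leibler divergence-based spectral loss as one of DD-HGS's components but does not give its exact formulation. A minimal sketch of what such a loss could look like, treating each pixel's spectrum as a probability distribution over wavelengths (the function name, normalization scheme, and epsilon smoothing are assumptions, not the paper's actual definition):

```python
import numpy as np

def kl_spectral_loss(pred, target, eps=1e-8):
    """KL divergence between normalized spectra.

    pred, target: arrays of shape (num_pixels, num_bands) with
    non-negative intensities. Each spectrum is normalized to sum
    to 1 so it can be compared as a distribution over wavelengths.
    """
    # Normalize each pixel's spectrum into a probability distribution.
    p = target / (target.sum(axis=-1, keepdims=True) + eps)
    q = pred / (pred.sum(axis=-1, keepdims=True) + eps)
    # KL(p || q) per pixel, averaged over all pixels.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))
```

A loss of this shape penalizes differences in spectral *shape* rather than absolute intensity, which is a plausible motivation for using KL divergence alongside a standard photometric loss.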
Related papers
- HyRF: Hybrid Radiance Fields for Memory-efficient and High-quality Novel View Synthesis [11.71939856454585]
We present Hybrid Radiance Fields (HyRF), a novel scene representation that combines the strengths of explicit Gaussians and neural fields. HyRF achieves state-of-the-art rendering quality while reducing model size by over 20 times compared to 3DGS.
arXiv Detail & Related papers (2025-09-21T13:59:26Z) - CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding. However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies. We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z) - PSRGS: Progressive Spectral Residual of 3D Gaussian for High-Frequency Recovery [3.310033172069517]
3D Gaussian Splatting (3D GS) achieves impressive results in novel view synthesis for small, single-object scenes. However, when applied to large-scale remote sensing scenes, 3D GS faces challenges. We propose PSRGS, a progressive optimization scheme based on spectral residual maps.
arXiv Detail & Related papers (2025-03-02T10:52:46Z) - HyperGS: Hyperspectral 3D Gaussian Splatting [13.07553815605148]
We introduce HyperGS, a novel framework for Hyperspectral Novel View Synthesis (HNVS). Our approach enables simultaneous spatial and spectral renderings by encoding material properties from multi-view 3D hyperspectral datasets. We demonstrate HyperGS's robustness through extensive evaluation of real and simulated hyperspectral scenes, with a 14 dB accuracy improvement over previously published models.
arXiv Detail & Related papers (2024-12-17T12:23:07Z) - Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results.<n>3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
arXiv Detail & Related papers (2024-11-19T11:59:54Z) - GeoSplatting: Towards Geometry Guided Gaussian Splatting for Physically-based Inverse Rendering [69.67264955234494]
GeoSplatting is a novel approach that augments 3DGS with explicit geometry guidance for precise light transport modeling. By differentiably constructing a surface-grounded 3DGS from an optimizable mesh, our approach leverages well-defined mesh normals and the opaque mesh surface. This enhancement ensures precise material decomposition while preserving the efficiency and high-quality rendering capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-31T17:57:07Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - L3DG: Latent 3D Gaussian Diffusion [74.36431175937285]
L3DG is the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.
We employ a sparse convolutional architecture to efficiently operate on room-scale scenes.
By leveraging the 3D Gaussian representation, the generated scenes can be rendered from arbitrary viewpoints in real-time.
arXiv Detail & Related papers (2024-10-17T13:19:32Z) - Optimizing 3D Gaussian Splatting for Sparse Viewpoint Scene Reconstruction [11.840097269724792]
3D Gaussian Splatting (3DGS) has emerged as a promising approach for 3D scene representation, offering a reduction in computational overhead compared to Neural Radiance Fields (NeRF).
We introduce SVS-GS, a novel framework for Sparse Viewpoint Scene reconstruction that integrates a 3D Gaussian smoothing filter to suppress artifacts.
arXiv Detail & Related papers (2024-09-05T03:18:04Z) - Beyond the Visible: Jointly Attending to Spectral and Spatial Dimensions with HSI-Diffusion for the FINCH Spacecraft [2.5057561650768814]
The FINCH mission aims to monitor crop residue cover in agricultural fields.
Hyperspectral imaging captures both spectral and spatial information.
It is prone to various types of noise, including random noise, stripe noise, and dead pixels.
arXiv Detail & Related papers (2024-06-15T19:34:18Z) - WE-GS: An In-the-wild Efficient 3D Gaussian Representation for Unconstrained Photo Collections [8.261637198675151]
Novel View Synthesis (NVS) from unconstrained photo collections is challenging in computer graphics.
We propose an efficient point-based differentiable rendering framework for scene reconstruction from photo collections.
Our approach outperforms existing approaches on the rendering quality of novel view and appearance synthesis, with high convergence and rendering speed.
arXiv Detail & Related papers (2024-06-04T15:17:37Z) - Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z) - Hyperspectral Neural Radiance Fields [11.485829401765521]
We propose a hyperspectral 3D reconstruction using Neural Radiance Fields (NeRFs).
NeRFs have seen widespread success in creating high quality volumetric 3D representations of scenes captured by a variety of camera models.
We show that our hyperspectral NeRF approach enables creating fast, accurate volumetric 3D hyperspectral scenes.
arXiv Detail & Related papers (2024-03-21T21:18:08Z) - Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting [55.71424195454963]
Spec-Gaussian is an approach that utilizes an anisotropic spherical Gaussian appearance field instead of spherical harmonics.
Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality.
This improvement extends the applicability of 3D GS to handle intricate scenarios with specular and anisotropic surfaces.
arXiv Detail & Related papers (2024-02-24T17:22:15Z) - GS-IR: 3D Gaussian Splatting for Inverse Rendering [71.14234327414086]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS)
We extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions.
The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering.
arXiv Detail & Related papers (2023-11-26T02:35:09Z) - Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z) - Spectral Splitting and Aggregation Network for Hyperspectral Face Super-Resolution [82.59267937569213]
High-resolution (HR) hyperspectral face images play an important role in face-related computer vision tasks under uncontrolled conditions.
In this paper, we investigate how to adapt the deep learning techniques to hyperspectral face image super-resolution.
We present a spectral splitting and aggregation network (SSANet) for HFSR with limited training samples.
arXiv Detail & Related papers (2021-08-31T02:13:00Z) - Non-local Meets Global: An Iterative Paradigm for Hyperspectral Image Restoration [66.68541690283068]
We propose a unified paradigm combining the spatial and spectral properties for hyperspectral image restoration.
The proposed paradigm benefits from non-local spatial denoising and low computational complexity.
Experiments on HSI denoising, compressed reconstruction, and inpainting tasks, with both simulated and real datasets, demonstrate its superiority.
arXiv Detail & Related papers (2020-10-24T15:53:56Z) - Hyperspectral Image Denoising with Partially Orthogonal Matrix Vector Tensor Factorization [42.56231647066719]
Hyperspectral images (HSIs) have advantages over natural images for various applications due to their extra spectral information. During acquisition, they are often contaminated by severe noise, including Gaussian noise, impulse noise, deadlines, and stripes. We present an HSI restoration method named smooth and robust low-rank tensor recovery.
arXiv Detail & Related papers (2020-06-29T02:10:07Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains the high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.