Towards 3D Scene Understanding of Gas Plumes in LWIR Hyperspectral Images Using Neural Radiance Fields
- URL: http://arxiv.org/abs/2603.05473v1
- Date: Thu, 05 Mar 2026 18:44:45 GMT
- Title: Towards 3D Scene Understanding of Gas Plumes in LWIR Hyperspectral Images Using Neural Radiance Fields
- Authors: Scout Jarman, Zigfried Hampel-Arias, Adra Carr, Kevin R. Moon
- Abstract summary: Longwave infrared (LWIR) HSI can be used for gas plume detection and analysis. The ability to combine information from multiple images into a single representation could enhance analysis. NeRFs create a latent neural representation of volumetric scene properties.
- Score: 3.8031924942083517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral images (HSI) have many applications, ranging from environmental monitoring to national security, and can be used for material detection and identification. Longwave infrared (LWIR) HSI can be used for gas plume detection and analysis. Oftentimes, only a few images of a scene of interest are available and are analyzed individually. The ability to combine information from multiple images into a single, cohesive representation could enhance analysis by providing more context on the scene's geometry and spectral properties. Neural radiance fields (NeRFs) create a latent neural representation of volumetric scene properties that enable novel-view rendering and geometry reconstruction, offering a promising avenue for hyperspectral 3D scene reconstruction. We explore the possibility of using NeRFs to create 3D scene reconstructions from LWIR HSI and demonstrate that the model can be used for the basic downstream analysis task of gas plume detection. The physics-based DIRSIG software suite was used to generate a synthetic multi-view LWIR HSI dataset of a simple facility with a strong sulfur hexafluoride gas plume. Our method, built on the standard Mip-NeRF architecture, combines state-of-the-art methods for hyperspectral NeRFs and sparse-view NeRFs, along with a novel adaptive weighted MSE loss. Our final NeRF method requires around 50% fewer training images than the standard Mip-NeRF and achieves an average PSNR of 39.8 dB with as few as 30 training images. Gas plume detection applied to NeRF-rendered test images using the adaptive coherence estimator achieves an average AUC of 0.821 when compared with detection masks generated from ground-truth test images.
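The adaptive coherence estimator (ACE) used for the downstream detection task above is a standard hyperspectral target detector. The following is a minimal sketch, not the paper's implementation; the global background statistics, demeaning of the target signature, and covariance regularization here are assumptions. ACE scores each pixel by the squared cosine, in the whitened (inverse-covariance) inner product, between the demeaned pixel and the demeaned target signature:

```python
import numpy as np

def ace_detector(cube, target, eps=1e-8):
    """Adaptive Coherence Estimator (ACE) over an HSI cube.

    cube:   (H, W, B) hyperspectral image with B spectral bands
    target: (B,) target spectral signature (e.g., an SF6 gas signature)
    Returns an (H, W) detection score map with values in [0, 1].
    """
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    mu = X.mean(axis=0)
    Xc = X - mu                          # demeaned pixels
    s = target.astype(np.float64) - mu   # demeaned target signature (assumption)
    # Background covariance, regularized for numerical stability
    cov = np.cov(Xc, rowvar=False) + eps * np.eye(B)
    cov_inv = np.linalg.inv(cov)
    sTci = s @ cov_inv                   # s^T Sigma^-1, shape (B,)
    num = (Xc @ sTci) ** 2               # (s^T Sigma^-1 x)^2 per pixel
    # Quadratic form x^T Sigma^-1 x per pixel via einsum
    den = (sTci @ s) * np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return (num / np.maximum(den, eps)).reshape(H, W)
```

By Cauchy-Schwarz the score lies in [0, 1]; thresholding it yields the detection masks against which the paper's AUC of 0.821 would be computed.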
Related papers
- Diffusion Denoised Hyperspectral Gaussian Splatting [11.486860334986394]
3D reconstruction methods have been used to create implicit neural representations of hyperspectral scenes. We propose Diffusion-Denoised Hyperspectral Gaussian Splatting (DD-HGS) to enable 3D explicit reconstruction of hyperspectral scenes.
arXiv Detail & Related papers (2025-05-28T02:07:52Z)
- Hyperspectral Neural Radiance Fields [11.485829401765521]
We propose a hyperspectral 3D reconstruction method using Neural Radiance Fields (NeRFs).
NeRFs have seen widespread success in creating high quality volumetric 3D representations of scenes captured by a variety of camera models.
We show that our hyperspectral NeRF approach enables creating fast, accurate volumetric 3D hyperspectral scenes.
arXiv Detail & Related papers (2024-03-21T21:18:08Z)
- Improving Neural Radiance Field using Near-Surface Sampling with Point Cloud Generation [6.506009070668646]
This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF.
To obtain depth information on a novel view, the paper proposes a 3D point cloud generation method and a simple refining method for projected depth from a point cloud.
arXiv Detail & Related papers (2023-10-06T10:55:34Z)
- Spec-NeRF: Multi-spectral Neural Radiance Fields [9.242830798112855]
We propose Multi-spectral Neural Radiance Fields (Spec-NeRF) for jointly reconstructing a multispectral radiance field and spectral sensitivity functions (SSFs) of the camera from a set of color images filtered by different filters.
Our experiments on both synthetic and real scenario datasets demonstrate that utilizing filtered RGB images with learnable NeRF and SSFs can achieve high fidelity and promising spectral reconstruction.
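Spec-NeRF's exact formulation is in the paper; as a rough sketch of the forward imaging model such methods invert, an RGB pixel can be written as the SSF-weighted integral of the filter-attenuated scene radiance, approximated here as a discrete sum over sampled wavelengths (the names and discretization are assumptions for illustration):

```python
import numpy as np

def filtered_rgb(spectrum, filter_transmittance, ssf):
    """Forward model: scene spectrum -> filtered RGB response.

    spectrum:             (B,) scene spectral radiance at B sampled wavelengths
    filter_transmittance: (B,) transmittance of the filter in front of the lens
    ssf:                  (3, B) camera spectral sensitivity functions (R, G, B rows)
    Returns the (3,) RGB response as a discrete approximation of the
    integral of SSF x filter x radiance over wavelength.
    """
    return ssf @ (filter_transmittance * spectrum)
```

Capturing the same scene through several known or learnable filters gives multiple such linear projections of the spectrum, which is what makes joint recovery of the radiance field and the SSFs well-posed.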
arXiv Detail & Related papers (2023-09-14T16:17:55Z)
- NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection [65.02633277884911]
We present NeRF-Det, a novel method for indoor 3D detection with posed RGB images as input.
Our method makes use of NeRF in an end-to-end manner to explicitly estimate 3D geometry, thereby improving 3D detection performance.
arXiv Detail & Related papers (2023-07-27T04:36:16Z)
- SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View Representation [7.907504142396784]
This study combines SAR imaging mechanisms with neural networks to propose a novel NeRF model for SAR image generation.
SAR-NeRF is constructed to learn the distribution of attenuation coefficients and scattering intensities of voxels.
It is found that a SAR-NeRF-augmented dataset can significantly improve SAR target classification performance in a few-shot learning setup.
arXiv Detail & Related papers (2023-07-11T07:37:56Z)
- MS-NeRF: Multi-Space Neural Radiance Fields [48.0367339199913]
Existing Neural Radiance Fields (NeRF) methods suffer in the presence of reflective objects, often resulting in blurry rendering. We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces. Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGMs) alongside the NeRF model and leverages this occupancy grid for improved sampling of points along a ray for rendering in metric space.
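CLONeR's differentiable OGM construction is more involved than shown here; as a minimal sketch of the sampling idea it enables, one can discard ray samples that fall in voxels a binary occupancy grid marks empty, so rendering effort concentrates near surfaces. The grid layout and parameter names below are illustrative assumptions:

```python
import numpy as np

def occupancy_guided_samples(origin, direction, ogm, voxel_size, t_range, n_coarse=64):
    """Keep only ray samples that fall in occupied voxels of a binary grid.

    origin, direction: (3,) ray in metric space (direction assumed unit-length)
    ogm:        (Nx, Ny, Nz) boolean occupancy grid, voxel (0,0,0) at the world origin
    voxel_size: edge length of one voxel in metres
    t_range:    (t_near, t_far) sampling interval along the ray
    Returns the distances t of the surviving samples.
    """
    t = np.linspace(t_range[0], t_range[1], n_coarse)
    pts = origin[None, :] + t[:, None] * direction[None, :]   # (n_coarse, 3)
    idx = np.floor(pts / voxel_size).astype(int)
    # Discard samples that leave the grid entirely
    in_bounds = np.all((idx >= 0) & (idx < np.array(ogm.shape)), axis=1)
    keep = np.zeros(n_coarse, dtype=bool)
    ib = np.where(in_bounds)[0]
    keep[ib] = ogm[idx[ib, 0], idx[ib, 1], idx[ib, 2]]
    return t[keep]
```

In a full pipeline the surviving distances would be densified and fed to the NeRF MLPs; empty space contributes no samples at all.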
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Multi-temporal speckle reduction with self-supervised deep neural networks [2.9979894869734927]
Speckle filtering is generally a prerequisite to the analysis of synthetic aperture radar (SAR) images.
The latest techniques rely on deep neural networks to restore the various structures and peculiar textures of SAR images.
We extend a recent self-supervised training strategy for single-look complex SAR images, called MERLIN, to the case of multi-temporal filtering.
arXiv Detail & Related papers (2022-07-22T14:08:22Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.