Thermal3D-GS: Physics-induced 3D Gaussians for Thermal Infrared Novel-view Synthesis
- URL: http://arxiv.org/abs/2409.08042v1
- Date: Thu, 12 Sep 2024 13:46:53 GMT
- Title: Thermal3D-GS: Physics-induced 3D Gaussians for Thermal Infrared Novel-view Synthesis
- Authors: Qian Chen, Shihao Shu, Xiangzhi Bai
- Abstract summary: This paper introduces a physics-induced 3D Gaussian splatting method named Thermal3D-GS.
The first large-scale benchmark dataset for this field named Thermal Infrared Novel-view Synthesis dataset (TI-NSD) is created.
The results indicate that our method outperforms the baseline method with a 3.03 dB improvement in PSNR.
- Score: 11.793425521298488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novel-view synthesis based on visible light has been extensively studied. In comparison to visible light imaging, thermal infrared imaging offers the advantage of all-weather imaging and strong penetration, providing increased possibilities for reconstruction in nighttime and adverse weather scenarios. However, thermal infrared imaging is influenced by physical characteristics such as atmospheric transmission effects and thermal conduction, hindering the precise reconstruction of intricate details in thermal infrared scenes, manifesting as issues of floaters and indistinct edge features in synthesized images. To address these limitations, this paper introduces a physics-induced 3D Gaussian splatting method named Thermal3D-GS. Thermal3D-GS begins by modeling atmospheric transmission effects and thermal conduction in three-dimensional media using neural networks. Additionally, a temperature consistency constraint is incorporated into the optimization objective to enhance the reconstruction accuracy of thermal infrared images. Furthermore, to validate the effectiveness of our method, the first large-scale benchmark dataset for this field, named the Thermal Infrared Novel-view Synthesis Dataset (TI-NSD), is created. This dataset comprises 20 authentic thermal infrared video scenes, covering indoor, outdoor, and UAV (Unmanned Aerial Vehicle) scenarios, totaling 6,664 frames of thermal infrared image data. Based on this dataset, this paper experimentally verifies the effectiveness of Thermal3D-GS. The results indicate that our method outperforms the baseline method with a 3.03 dB improvement in PSNR and significantly addresses the issues of floaters and indistinct edge features present in the baseline method. Our dataset and codebase will be released at \href{https://github.com/mzzcdf/Thermal3DGS}{\textcolor{red}{Thermal3DGS}}.
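To make the two physics ideas in the abstract concrete, the following is a loose sketch of (a) Beer-Lambert-style atmospheric attenuation applied to per-Gaussian thermal intensities and (b) a temperature-consistency penalty between nearby Gaussians. Every name, constant, and formula here is a hypothetical illustration; the paper itself learns the transmission and conduction models with neural networks rather than using fixed closed forms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-Gaussian state: 100 centers in a 10 m cube, with a
# scalar intensity per Gaussian standing in for radiated temperature.
centers = rng.uniform(0.0, 10.0, size=(100, 3))
intensity = rng.uniform(0.2, 1.0, size=100)

def atmospheric_transmission(distance, beta=0.05):
    # Beer-Lambert-style exponential attenuation along the viewing path;
    # a fixed stand-in for the paper's learned transmission model.
    return np.exp(-beta * distance)

def render_intensity(camera_pos):
    # Attenuate each Gaussian's emitted intensity by camera distance.
    d = np.linalg.norm(centers - camera_pos, axis=1)
    return intensity * atmospheric_transmission(d)

def temperature_consistency_loss(radius=2.0):
    # Penalize squared intensity differences between Gaussians closer
    # than `radius`, mimicking a smoothness prior on the thermal field.
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    close = (dist < radius) & (dist > 0)
    i, j = np.nonzero(close)
    if i.size == 0:
        return 0.0
    return float(np.mean((intensity[i] - intensity[j]) ** 2))

attenuated = render_intensity(np.zeros(3))
loss = temperature_consistency_loss()
```

In a real splatting pipeline such a penalty would be one term added to the photometric objective; here it is only meant to show the shape of the constraint.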
Related papers
- TeX-NeRF: Neural Radiance Fields from Pseudo-TeX Vision [5.77388464529179]
We propose TeX-NeRF, a 3D reconstruction method using only infrared images.
We map the temperatures, emissivities (e), and textures (X) of the scene into the saturation (S), hue (H), and value (V) channels of the color space.
Novel view synthesis using the processed images has yielded excellent results.
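The channel assignment described in this summary (temperature to saturation, emissivity to hue, texture to value) can be sketched with the standard library's `colorsys`. The temperature normalization range and function name are assumptions for illustration, not details from the paper.

```python
import colorsys

def pseudo_tex_pixel(temperature, emissivity, texture,
                     t_min=250.0, t_max=350.0):
    # Normalize temperature (Kelvin, assumed range) into [0, 1] for the
    # saturation channel; emissivity and texture are assumed in [0, 1].
    s = min(max((temperature - t_min) / (t_max - t_min), 0.0), 1.0)
    h, v = emissivity, texture
    # colorsys expects (h, s, v) in [0, 1] and returns RGB in [0, 1].
    return colorsys.hsv_to_rgb(h, s, v)

rgb = pseudo_tex_pixel(temperature=300.0, emissivity=0.9, texture=0.7)
```

Note that in HSV the value channel bounds the output, so the brightest RGB component equals the texture term here.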
arXiv Detail & Related papers (2024-10-07T09:43:28Z)
- UV-free Texture Generation with Denoising and Geodesic Heat Diffusions [50.55154348768031]
Seams, wasted UV space, and varying resolution over the surface are the most prominent issues of the standard UV-based processing mechanism of meshes.
We propose to represent textures as coloured point clouds generated by a denoising diffusion model constrained to operate on the surface of 3D meshes.
arXiv Detail & Related papers (2024-08-29T17:57:05Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
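The first step of this strategy, checking that a sampled ray actually passes through a chosen voxel and then drawing points inside it, can be sketched with a standard slab-test ray/AABB intersection. This is a generic illustration of that sampling step, not CVT-xRF's implementation.

```python
import numpy as np

def ray_aabb_intersect(origin, direction, box_min, box_max):
    # Slab test: the ray hits the axis-aligned voxel iff the per-axis
    # entry/exit intervals overlap (assumes no zero direction components).
    inv = 1.0 / direction
    t0 = (box_min - origin) * inv
    t1 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    return bool(t_near <= t_far and t_far >= 0.0), t_near, t_far

def sample_in_voxel_along_ray(origin, direction, t_near, t_far, n=8, seed=0):
    # Draw n random depths inside the voxel's segment of the ray and
    # lift them back to 3D points.
    rng = np.random.default_rng(seed)
    t = rng.uniform(max(t_near, 0.0), t_far, size=n)
    return origin + t[:, None] * direction

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
hit, t_near, t_far = ray_aabb_intersect(
    origin, direction, np.array([1.0, 1.0, 1.0]), np.array([2.0, 2.0, 2.0]))
points = sample_in_voxel_along_ray(origin, direction, t_near, t_far)
```

In the paper the extra in-voxel points then feed a Transformer that predicts properties for the remaining samples on each ray; this sketch stops at the geometric sampling.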
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- ThermoNeRF: Multimodal Neural Radiance Fields for Thermal Novel View Synthesis [5.66229031510643]
We propose ThermoNeRF, a novel approach to rendering new RGB and thermal views of a scene jointly.
To overcome the lack of texture in thermal images, we use paired RGB and thermal images to learn scene density.
We also introduce ThermoScenes, a new dataset to mitigate the lack of available RGB+thermal datasets for scene reconstruction.
arXiv Detail & Related papers (2024-03-18T18:10:34Z)
- Thermal-NeRF: Neural Radiance Fields from an Infrared Camera [29.58060552299745]
We introduce Thermal-NeRF, the first method that estimates a volumetric scene representation in the form of a NeRF solely from IR imaging.
We conduct extensive experiments to demonstrate that Thermal-NeRF can achieve superior quality compared to existing methods.
arXiv Detail & Related papers (2024-03-15T14:27:15Z)
- Photometric Correction for Infrared Sensors [1.170732359523702]
This article proposes a photometric correction model for infrared sensors based on temperature constancy.
Experiments show that the reconstruction quality from the corrected infrared imagery achieves performance on par with state-of-the-art reconstruction using RGB sensors.
arXiv Detail & Related papers (2023-04-08T06:32:57Z)
- Does Thermal Really Always Matter for RGB-T Salient Object Detection? [153.17156598262656]
This paper proposes a network named TNet to solve the RGB-T salient object detection (SOD) task.
We introduce a global illumination estimation module to predict the global illuminance score of the image.
On the other hand, we introduce a two-stage localization and complementation module in the decoding phase to transfer object localization cue and internal integrity cue in thermal features to the RGB modality.
arXiv Detail & Related papers (2022-10-09T13:50:12Z)
- Maximizing Self-supervision from Thermal Image for Effective Self-supervised Learning of Depth and Ego-motion [78.19156040783061]
Self-supervised learning of depth and ego-motion from thermal images shows strong robustness and reliability under challenging scenarios.
The inherent properties of thermal images, such as weak contrast, blurry edges, and noise, hinder the generation of effective self-supervision from them.
We propose an effective thermal image mapping method that significantly increases image information, such as overall structure, contrast, and details, while preserving temporal consistency.
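One simple way to see why temporal consistency matters for such a mapping: stretching each frame's contrast with its own min/max makes the mapping flicker between frames, whereas using one global range for the whole clip keeps it stable. The sketch below illustrates that idea only; it is not the paper's mapping method.

```python
import numpy as np

def sequence_rescale(frames, eps=1e-6):
    # Map raw thermal readings to [0, 1] using a single global range for
    # the whole clip. A fixed mapping keeps consecutive frames consistent,
    # unlike per-frame min-max stretching, which flickers over time.
    lo = min(float(f.min()) for f in frames)
    hi = max(float(f.max()) for f in frames)
    return [(f - lo) / (hi - lo + eps) for f in frames]

rng = np.random.default_rng(0)
clip = [rng.uniform(290.0, 310.0, size=(4, 4)) for _ in range(3)]
mapped = sequence_rescale(clip)
```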
arXiv Detail & Related papers (2022-01-12T09:49:24Z)
- Thermal Image Super-Resolution Using Second-Order Channel Attention with Varying Receptive Fields [4.991042925292453]
We introduce a system to efficiently reconstruct thermal images.
The restoration of thermal images is critical for applications that involve safety, search and rescue, and military operations.
arXiv Detail & Related papers (2021-07-30T22:17:51Z)
- A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset [62.193924313292875]
We present the DEVCOM Army Research Laboratory Visible-Thermal Face dataset (ARL-VTF)
With over 500,000 images from 395 subjects, the ARL-VTF dataset represents, to the best of our knowledge, the largest collection of paired visible and thermal face images to date.
This paper presents benchmark results and analysis on thermal face landmark detection and thermal-to-visible face verification by evaluating state-of-the-art models on the ARL-VTF dataset.
arXiv Detail & Related papers (2021-01-07T17:17:12Z)
- Exploring Thermal Images for Object Detection in Underexposure Regions for Autonomous Driving [67.69430435482127]
Underexposure regions are vital to construct a complete perception of the surroundings for safe autonomous driving.
The availability of thermal cameras has provided an essential alternate to explore regions where other optical sensors lack in capturing interpretable signals.
This work proposes a domain adaptation framework which employs a style transfer technique for transfer learning from visible spectrum images to thermal images.
arXiv Detail & Related papers (2020-06-01T09:59:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.