Robust Perceptual Night Vision in Thermal Colorization
- URL: http://arxiv.org/abs/2003.02204v1
- Date: Wed, 4 Mar 2020 17:17:08 GMT
- Title: Robust Perceptual Night Vision in Thermal Colorization
- Authors: Feras Almasri, Olivier Debeir
- Abstract summary: Objects appear in one spectrum but not necessarily in the other, and the thermal signature of a single object may have different colours in its Visible representation.
This makes a direct mapping from thermal to Visible images impossible.
A deep learning method is proposed to map the thermal signature from the thermal spectrum to a Visible representation in their low-frequency space.
- Score: 1.1709244686171956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transforming a thermal infrared image into a robust perceptual colour Visible
image is an ill-posed problem due to the differences in their spectral domains
and in the objects' representations. Objects appear in one spectrum but not
necessarily in the other, and the thermal signature of a single object may have
different colours in its Visible representation. This makes a direct mapping
from thermal to Visible images impossible and necessitates a solution that
preserves texture captured in the thermal spectrum while predicting the
possible colour for certain objects. In this work, a deep learning method to
map the thermal signature from the thermal image's spectrum to a Visible
representation in their low-frequency space is proposed. A pan-sharpening
method is then used to merge the predicted low-frequency representation with
the high-frequency representation extracted from the thermal image. The
proposed model generates colour values consistent with the Visible ground truth
when the object does not vary much in its appearance and generates averaged
grey values in other cases. The proposed method produces robust perceptual
night vision images, preserving object appearance and image context better
than the existing state-of-the-art.
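The fusion step described above can be sketched as a simple high-pass injection, one common pan-sharpening formulation; the exact fusion used by the authors may differ. In the sketch below, `predicted_rgb_low` stands for the network's low-frequency colour prediction and `thermal` for the single-channel thermal input; both names, the Gaussian-blur frequency split, and the per-channel injection are assumptions made for illustration only.

```python
import numpy as np
import cv2  # OpenCV, used here only for Gaussian blurring


def pansharpen_merge(predicted_rgb_low, thermal, sigma=3.0):
    """Merge a low-frequency colour prediction with thermal high frequencies.

    predicted_rgb_low : float32 HxWx3 in [0, 1], low-pass colour prediction.
    thermal           : float32 HxW in [0, 1], original thermal image.
    sigma             : Gaussian scale separating low and high frequencies.
    """
    # High-frequency detail of the thermal image: original minus its low-pass.
    thermal_low = cv2.GaussianBlur(thermal, (0, 0), sigma)
    thermal_high = thermal - thermal_low

    # Inject the thermal detail into every colour channel of the prediction.
    fused = predicted_rgb_low + thermal_high[..., None]
    return np.clip(fused, 0.0, 1.0)


if __name__ == "__main__":
    # Hypothetical usage with random arrays standing in for real inputs.
    rng = np.random.default_rng(0)
    pred = rng.random((256, 256, 3), dtype=np.float32)
    therm = rng.random((256, 256), dtype=np.float32)
    out = pansharpen_merge(pred, therm)
    print(out.shape, out.dtype)
```

As in classical pan-sharpening, colour comes from the low-frequency prediction while spatial detail comes from the thermal channel, which matches the paper's stated goal of preserving thermal texture while predicting plausible colours.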
Related papers
- ThermalNeRF: Thermal Radiance Fields [32.881758519242155]
We propose a unified framework for scene reconstruction from a set of LWIR and RGB images.
We calibrate the RGB and infrared cameras with respect to each other, as a preprocessing step.
We show that our method is capable of thermal super-resolution, as well as visually removing obstacles to reveal objects occluded in either the RGB or thermal channels.
arXiv Detail & Related papers (2024-07-22T02:51:29Z)
- Colorizing Monochromatic Radiance Fields [55.695149357101755]
We consider reproducing color from monochromatic radiance fields as a representation-prediction task in the Lab color space.
By first constructing the luminance and density representation using monochromatic images, our prediction stage can recreate color representation on the basis of an image colorization module.
We then reproduce a colorful implicit model through the representation of luminance, density, and color.
arXiv Detail & Related papers (2024-02-19T14:47:23Z)
- Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration [66.33746403815283]
We propose a scene-adaptive infrared and visible image registration method.
We employ homography to simulate the deformation between different planes.
We present the first misaligned infrared and visible image dataset with available ground truth.
arXiv Detail & Related papers (2023-04-12T06:49:56Z)
- Learning Domain and Pose Invariance for Thermal-to-Visible Face Recognition [6.454199265634863]
We propose a novel Domain and Pose Invariant Framework that simultaneously learns domain and pose invariant representations.
Our proposed framework is composed of modified networks for extracting the most correlated intermediate representations from off-pose thermal and frontal visible face imagery.
Although DPIF focuses on learning to match off-pose thermal to frontal visible faces, we also show that DPIF enhances performance when matching frontal thermal face images to frontal visible face images.
arXiv Detail & Related papers (2022-11-17T05:24:02Z)
- Does Thermal Really Always Matter for RGB-T Salient Object Detection? [153.17156598262656]
This paper proposes a network named TNet to solve the RGB-T salient object detection (SOD) task.
In this paper, we introduce a global illumination estimation module to predict the global illuminance score of the image.
On the other hand, we introduce a two-stage localization and complementation module in the decoding phase to transfer object localization cue and internal integrity cue in thermal features to the RGB modality.
arXiv Detail & Related papers (2022-10-09T13:50:12Z)
- T2V-DDPM: Thermal to Visible Face Translation using Denoising Diffusion Probabilistic Models [71.94264837503135]
We propose a Denoising Diffusion Probabilistic Model (DDPM) based solution for Thermal-to-Visible (T2V) image translation.
During training, the model learns the conditional distribution of visible facial images given their corresponding thermal image.
We achieve the state-of-the-art results on multiple datasets.
arXiv Detail & Related papers (2022-09-19T07:59:32Z)
- A Novel Registration & Colorization Technique for Thermal to Cross Domain Colorized Images [15.787663289343948]
We present a novel registration method that works on images captured via multiple thermal imagers.
We retain the information of the thermal profile as a part of the output, thus providing information of both domains jointly.
arXiv Detail & Related papers (2021-01-18T07:30:51Z)
- A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset [62.193924313292875]
We present the DEVCOM Army Research Laboratory Visible-Thermal Face dataset (ARL-VTF)
With over 500,000 images from 395 subjects, the ARL-VTF dataset represents, to the best of our knowledge, the largest collection of paired visible and thermal face images to date.
This paper presents benchmark results and analysis on thermal face landmark detection and thermal-to-visible face verification by evaluating state-of-the-art models on the ARL-VTF dataset.
arXiv Detail & Related papers (2021-01-07T17:17:12Z)
- Exploring Thermal Images for Object Detection in Underexposure Regions for Autonomous Driving [67.69430435482127]
Underexposure regions are vital to construct a complete perception of the surroundings for safe autonomous driving.
The availability of thermal cameras has provided an essential alternate to explore regions where other optical sensors lack in capturing interpretable signals.
This work proposes a domain adaptation framework which employs a style transfer technique for transfer learning from visible spectrum images to thermal images.
arXiv Detail & Related papers (2020-06-01T09:59:09Z)
- Unsupervised Image-generation Enhanced Adaptation for Object Detection in Thermal images [4.810743887667828]
This paper proposes an unsupervised image-generation enhanced adaptation method for object detection in thermal images.
To reduce the gap between visible domain and thermal domain, the proposed method manages to generate simulated fake thermal images.
Experiments demonstrate the effectiveness and superiority of the proposed method.
arXiv Detail & Related papers (2020-02-17T04:53:30Z)