A Novel Deep Learning Method for Thermal to Annotated Thermal-Optical
Fused Images
- URL: http://arxiv.org/abs/2107.05942v1
- Date: Tue, 13 Jul 2021 09:29:12 GMT
- Title: A Novel Deep Learning Method for Thermal to Annotated Thermal-Optical
Fused Images
- Authors: Suranjan Goswami, Student Member, IEEE, Satish Kumar Singh, Senior
Member, IEEE, and Bidyut B. Chaudhuri, Life Fellow, IEEE
- Abstract summary: We present a work that produces a grayscale thermo-optical fused mask given a thermal input.
This is a pioneering deep learning-based work since, to the best of our knowledge, no other work exists on thermal-optical grayscale fusion.
We also present a new and unique database for obtaining the region of interest in thermal images.
- Score: 10.8880508356314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thermal images profile the passive radiation of objects and capture them in
grayscale. Such images have a very different data distribution from optical
color images. We present here a work that produces a grayscale thermo-optical
fused mask given a thermal input. This is a pioneering deep learning-based work
since, to the best of our knowledge, no other work exists on thermal-optical
grayscale fusion. Our method is also unique in that the deep learning method we
propose operates in the Discrete Wavelet Transform (DWT) domain instead of the
gray-level domain. As part of this work, we also present a new and unique
database for obtaining the region of interest in thermal images, built on an
existing thermal-visual paired database and containing the region of interest
for 5 different classes of data. Finally, we propose a simple, low-overhead
statistical measure for identifying the region of interest in the fused images,
which we call the Region of Fusion (RoF). Experiments on the database show
encouraging results in identifying the region of interest in the fused images.
We also show that the images can be processed better in the fused form than as
thermal images alone.
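Since the abstract states that the network operates in the DWT domain rather than on raw gray levels, the idea can be illustrated with a minimal hand-crafted sketch. The one-level Haar transform, the averaging/max-magnitude fusion rule, and all function names below are illustrative assumptions, not the authors' learned model:

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2-D Haar transform: split the image into an approximation
    # band (LL) and three detail bands (LH, HL, HH) built from 2x2 blocks.
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(LL, LH, HL, HH):
    # Exact inverse of haar_dwt2: reassemble the 2x2 blocks.
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def dwt_fuse(thermal, optical):
    # Fuse two grayscale images in the DWT domain: average the
    # low-frequency approximations, keep the larger-magnitude detail
    # coefficient from either modality, then invert the transform.
    t, o = haar_dwt2(thermal), haar_dwt2(optical)
    LL = 0.5 * (t[0] + o[0])
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(t[1:], o[1:])]
    return haar_idwt2(LL, *details)

rng = np.random.default_rng(0)
thermal = rng.random((64, 64))  # stand-in for a thermal frame
optical = rng.random((64, 64))  # stand-in for a grayscale optical frame
fused = dwt_fuse(thermal, optical)
print(fused.shape)  # (64, 64)
```

In the paper, a learned network replaces this fixed fusion rule; the sketch only shows what "working on DWT coefficients instead of gray levels" means mechanically.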
Related papers
- Dif-Fusion: Towards High Color Fidelity in Infrared and Visible Image Fusion with Diffusion Models [54.952979335638204]
We propose a novel method with diffusion models, termed as Dif-Fusion, to generate the distribution of the multi-channel input data.
Our method is more effective than other state-of-the-art image fusion methods, especially in color fidelity.
arXiv Detail & Related papers (2023-01-19T13:37:19Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Does Thermal Really Always Matter for RGB-T Salient Object Detection? [153.17156598262656]
This paper proposes a network named TNet to solve the RGB-T salient object detection (SOD) task.
We introduce a global illumination estimation module to predict the global illuminance score of the image.
We also introduce a two-stage localization and complementation module in the decoding phase to transfer the object localization and internal integrity cues in thermal features to the RGB modality.
arXiv Detail & Related papers (2022-10-09T13:50:12Z)
- Glass Segmentation with RGB-Thermal Image Pairs [16.925196782387857]
We propose a new glass segmentation method utilizing paired RGB and thermal images.
Glass regions of a scene are made more distinguishable with a pair of RGB and thermal images than solely with an RGB image.
arXiv Detail & Related papers (2022-04-12T00:20:22Z)
- Refer-it-in-RGBD: A Bottom-up Approach for 3D Visual Grounding in RGBD Images [69.5662419067878]
Grounding referring expressions in RGBD images has been an emerging field.
We present a novel task of 3D visual grounding in single-view RGBD image where the referred objects are often only partially scanned due to occlusion.
Our approach first fuses the language and the visual features at the bottom level to generate a heatmap that localizes the relevant regions in the RGBD image.
Then our approach conducts an adaptive feature learning based on the heatmap and performs the object-level matching with another visio-linguistic fusion to finally ground the referred object.
arXiv Detail & Related papers (2021-03-14T11:18:50Z)
- Assessing the applicability of Deep Learning-based visible-infrared fusion methods for fire imagery [0.0]
Wildfire detection is of paramount importance to avoid as much damage as possible to the environment, properties, and lives.
Deep Learning models that can leverage both visible and infrared information have the potential to display state-of-the-art performance.
Most DL-based image fusion methods have not been evaluated in the domain of fire imagery.
arXiv Detail & Related papers (2021-01-27T23:53:36Z)
- A Novel Registration & Colorization Technique for Thermal to Cross Domain Colorized Images [15.787663289343948]
We present a novel registration method that works on images captured via multiple thermal imagers.
We retain the information of the thermal profile as a part of the output, thus providing information of both domains jointly.
arXiv Detail & Related papers (2021-01-18T07:30:51Z)
- Exploring Thermal Images for Object Detection in Underexposure Regions for Autonomous Driving [67.69430435482127]
Underexposure regions are vital to construct a complete perception of the surroundings for safe autonomous driving.
The availability of thermal cameras has provided an essential alternative for exploring regions where other optical sensors fail to capture interpretable signals.
This work proposes a domain adaptation framework which employs a style transfer technique for transfer learning from visible spectrum images to thermal images.
arXiv Detail & Related papers (2020-06-01T09:59:09Z)
- Bayesian Fusion for Infrared and Visible Images [26.64101343489016]
In this paper, a novel Bayesian fusion model is established for infrared and visible images.
We aim at making the fused image satisfy the human visual system.
Compared with the previous methods, the novel model can generate better fused images with high-light targets and rich texture details.
arXiv Detail & Related papers (2020-05-12T14:57:19Z)
- Multi-Scale Thermal to Visible Face Verification via Attribute Guided Synthesis [55.29770222566124]
We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery for cross-modal matching.
A novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes.
A pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.
arXiv Detail & Related papers (2020-04-20T01:45:05Z)
- Unsupervised Image-generation Enhanced Adaptation for Object Detection in Thermal Images [4.810743887667828]
This paper proposes an unsupervised image-generation enhanced adaptation method for object detection in thermal images.
To reduce the gap between the visible domain and the thermal domain, the proposed method generates simulated fake thermal images.
Experiments demonstrate the effectiveness and superiority of the proposed method.
arXiv Detail & Related papers (2020-02-17T04:53:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.