Caltech Aerial RGB-Thermal Dataset in the Wild
- URL: http://arxiv.org/abs/2403.08997v2
- Date: Wed, 31 Jul 2024 23:01:49 GMT
- Title: Caltech Aerial RGB-Thermal Dataset in the Wild
- Authors: Connor Lee, Matthew Anderson, Nikhil Ranganathan, Xingxing Zuo, Kevin Do, Georgia Gkioxari, Soon-Jo Chung
- Abstract summary: We present the first publicly-available RGB-thermal dataset designed for aerial robotics operating in natural environments.
Our dataset captures a variety of terrain across the United States, including rivers, lakes, coastlines, deserts, and forests.
We provide semantic segmentation annotations for 10 classes commonly encountered in natural settings.
- Score: 14.699908177967181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first publicly-available RGB-thermal dataset designed for aerial robotics operating in natural environments. Our dataset captures a variety of terrain across the United States, including rivers, lakes, coastlines, deserts, and forests, and consists of synchronized RGB, thermal, global positioning, and inertial data. We provide semantic segmentation annotations for 10 classes commonly encountered in natural settings in order to drive the development of perception algorithms robust to adverse weather and nighttime conditions. Using this dataset, we propose new and challenging benchmarks for thermal and RGB-thermal (RGB-T) semantic segmentation, RGB-T image translation, and motion tracking. We present extensive results using state-of-the-art methods and highlight the challenges posed by temporal and geographical domain shifts in our data. The dataset and accompanying code are available at https://github.com/aerorobotics/caltech-aerial-rgbt-dataset.
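The abstract describes synchronized RGB and thermal frames with 10-class segmentation masks. As a minimal sketch of how such a pair might be prepared for an RGB-T model, the snippet below stacks a normalized RGB frame and a raw 16-bit thermal frame into a 4-channel input and tallies per-class pixel counts from a mask. The array shapes, the per-frame min-max normalization, and the synthetic data are illustrative assumptions, not the dataset's actual file layout or preprocessing; consult the linked GitHub repository for the real format.

```python
import numpy as np

NUM_CLASSES = 10  # the dataset annotates 10 natural-terrain classes


def fuse_rgb_thermal(rgb: np.ndarray, thermal: np.ndarray) -> np.ndarray:
    """Stack a synchronized RGB frame (H, W, 3) and thermal frame (H, W)
    into a 4-channel RGB-T array, with each modality scaled to [0, 1].

    Min-max normalization per thermal frame is an assumption here; real
    pipelines may instead use fixed radiometric calibration constants.
    """
    rgb_n = rgb.astype(np.float32) / 255.0
    t = thermal.astype(np.float32)
    t_n = (t - t.min()) / (np.ptp(t) + 1e-8)
    return np.concatenate([rgb_n, t_n[..., None]], axis=-1)


def class_frequencies(mask: np.ndarray) -> np.ndarray:
    """Per-class pixel counts for an (H, W) mask with labels in 0..9."""
    return np.bincount(mask.ravel(), minlength=NUM_CLASSES)


# Synthetic stand-ins for one synchronized frame pair and its annotation.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
thermal = rng.integers(2000, 4000, size=(64, 64), dtype=np.uint16)  # raw counts
mask = rng.integers(0, NUM_CLASSES, size=(64, 64), dtype=np.int64)

rgbt = fuse_rgb_thermal(rgb, thermal)   # shape (64, 64, 4), float32
counts = class_frequencies(mask)        # shape (10,), sums to 64 * 64
```

Channel-wise concatenation like this is the simplest "early fusion" baseline; the benchmarks in the paper compare it against more elaborate RGB-T fusion schemes.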
Related papers
- T-FAKE: Synthesizing Thermal Images for Facial Landmarking [8.20594611891252]
We introduce the T-FAKE dataset, a new large-scale synthetic thermal dataset with sparse and dense landmarks.
Our models show excellent performance with both sparse 70-point landmarks and dense 478-point landmark annotations.
arXiv Detail & Related papers (2024-08-27T15:07:58Z)
- CattleFace-RGBT: RGB-T Cattle Facial Landmark Benchmark [4.463254896517738]
CattleFace-RGBT is an RGB-T cattle facial landmark dataset consisting of 2,300 RGB-T image pairs (4,600 images in total).
Applying AI to thermal images is challenging due to suboptimal results from direct thermal training and infeasible RGB-thermal alignment.
We transfer models trained on RGB to thermal images and refine them using our AI-assisted annotation tool.
arXiv Detail & Related papers (2024-06-05T16:29:13Z)
- RaSim: A Range-aware High-fidelity RGB-D Data Simulation Pipeline for Real-world Applications [55.24463002889]
We focus on depth data synthesis and develop a range-aware RGB-D data simulation pipeline (RaSim).
In particular, high-fidelity depth data is generated by imitating the imaging principle of real-world sensors.
RaSim can be directly applied to real-world scenarios without any finetuning and excels at downstream RGB-D perception tasks.
arXiv Detail & Related papers (2024-04-05T08:52:32Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Attentive Multimodal Fusion for Optical and Scene Flow [24.08052492109655]
Existing methods typically rely solely on RGB images or fuse the modalities at later stages.
We propose a novel deep neural network approach named FusionRAFT, which enables early-stage information fusion between sensor modalities.
Our approach exhibits improved robustness in the presence of noise and low-lighting conditions that affect the RGB images.
arXiv Detail & Related papers (2023-07-28T04:36:07Z)
- Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline [80.13652104204691]
In this paper, we construct a large-scale benchmark with high diversity for visible-thermal UAV tracking (VTUAV).
We provide a coarse-to-fine attribute annotation, where frame-level attributes are provided to exploit the potential of challenge-specific trackers.
In addition, we design a new RGB-T baseline, named Hierarchical Multi-modal Fusion Tracker (HMFT), which fuses RGB-T data in various levels.
arXiv Detail & Related papers (2022-04-08T15:22:33Z)
- Refer-it-in-RGBD: A Bottom-up Approach for 3D Visual Grounding in RGBD Images [69.5662419067878]
Grounding referring expressions in RGBD images has been an emerging field.
We present a novel task of 3D visual grounding in single-view RGBD image where the referred objects are often only partially scanned due to occlusion.
Our approach first fuses the language and the visual features at the bottom level to generate a heatmap that localizes the relevant regions in the RGBD image.
Then our approach conducts an adaptive feature learning based on the heatmap and performs the object-level matching with another visio-linguistic fusion to finally ground the referred object.
arXiv Detail & Related papers (2021-03-14T11:18:50Z)
- Synergistic saliency and depth prediction for RGB-D saliency detection [76.27406945671379]
Existing RGB-D saliency datasets are small, which may lead to overfitting and limited generalization for diverse scenarios.
We propose a semi-supervised system for RGB-D saliency detection that can be trained on smaller RGB-D saliency datasets without saliency ground truth.
arXiv Detail & Related papers (2020-07-03T14:24:41Z)
- Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset [55.391532084304494]
The Driver Gaze in the Wild dataset contains 586 recordings captured at different times of day, including evenings, covering 338 subjects aged 18-63 years.
arXiv Detail & Related papers (2020-04-13T14:47:34Z)
- HeatNet: Bridging the Day-Night Domain Gap in Semantic Segmentation with Thermal Images [26.749261270690425]
Real-world driving scenarios entail adverse environmental conditions such as nighttime illumination or glare.
We propose a multimodal semantic segmentation model that can be applied during daytime and nighttime.
Besides RGB images, we leverage thermal images, making our network significantly more robust.
arXiv Detail & Related papers (2020-03-10T11:36:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.