Robust Human Detection under Visual Degradation via Thermal and mmWave Radar Fusion
- URL: http://arxiv.org/abs/2307.03623v1
- Date: Fri, 7 Jul 2023 14:23:20 GMT
- Title: Robust Human Detection under Visual Degradation via Thermal and mmWave Radar Fusion
- Authors: Kaiwen Cai, Qiyue Xia, Peize Li, John Stankovic and Chris Xiaoxuan Lu
- Abstract summary: We present a multimodal human detection system that combines portable thermal cameras and single-chip mmWave radars.
We propose a Bayesian feature extractor and a novel uncertainty-guided fusion method that surpasses a variety of competing methods.
We evaluate the proposed method on real-world data and demonstrate that our approach outperforms state-of-the-art methods by a large margin.
- Score: 4.178845249771262
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The majority of human detection methods rely on sensors that use
visible light (e.g., RGB cameras), but such sensors are limited in scenarios
with degraded vision conditions. In this paper, we present a multimodal human
detection system that combines portable thermal cameras and single-chip mmWave
radars. To mitigate the noisy detection features caused by the low contrast of
thermal cameras and the multi-path noise of radar point clouds, we propose a
Bayesian feature extractor and a novel uncertainty-guided fusion method that
surpasses a variety of competing methods, either single-modal or multi-modal.
We evaluate the proposed method on real-world data and demonstrate that our
approach outperforms state-of-the-art methods by a large margin.
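For intuition only, here is a minimal sketch of one way uncertainty-guided fusion can be realized: each modality branch predicts features together with a per-element variance, and the fused feature down-weights the noisier modality via inverse-variance weighting. The function, shapes, and weighting scheme below are illustrative assumptions, not the paper's implementation.

```python
import torch

def uncertainty_guided_fusion(feat_a, var_a, feat_b, var_b, eps=1e-6):
    # Inverse-variance weighting: the modality with the lower predicted
    # uncertainty contributes more to the fused feature map.
    w_a = 1.0 / (var_a + eps)
    w_b = 1.0 / (var_b + eps)
    return (w_a * feat_a + w_b * feat_b) / (w_a + w_b)

# Toy usage: thermal and radar feature maps with per-element uncertainties.
thermal_feat, thermal_var = torch.randn(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
radar_feat, radar_var = torch.randn(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
fused = uncertainty_guided_fusion(thermal_feat, thermal_var, radar_feat, radar_var)
```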
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- RIDERS: Radar-Infrared Depth Estimation for Robust Sensing [22.10378524682712]
Adverse weather conditions pose significant challenges to accurate dense depth estimation.
We present a novel approach for robust metric depth estimation by fusing a millimeter-wave Radar and a monocular infrared thermal camera.
Our method achieves exceptional visual quality and accurate metric estimation by addressing the challenges of ambiguity and misalignment.
arXiv Detail & Related papers (2024-02-03T07:14:43Z)
- Radarize: Enhancing Radar SLAM with Generalizable Doppler-Based Odometry [9.420543997290126]
Radarize is a self-contained SLAM pipeline that uses only a commodity single-chip mmWave radar.
Our method outperforms state-of-the-art radar and radar-inertial approaches by approximately 5x in terms of odometry and 8x in terms of end-to-end SLAM.
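As general background on Doppler-based odometry (not Radarize's specific pipeline), a common building block is estimating the sensor's ego-velocity from the radial speeds of static detections; a hypothetical least-squares sketch:

```python
import numpy as np

def ego_velocity_from_doppler(directions, radial_speeds):
    # Static scene points satisfy v_r = -d . v_ego, where d is the unit
    # direction to the detection; solve for v_ego by least squares.
    v_ego, *_ = np.linalg.lstsq(-directions, radial_speeds, rcond=None)
    return v_ego  # (vx, vy) in the sensor frame

# Toy usage with synthetic static detections.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(50, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_v = np.array([1.0, 0.2])
v_est = ego_velocity_from_doppler(dirs, dirs @ -true_v)  # recovers true_v
```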
arXiv Detail & Related papers (2023-11-19T07:47:11Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
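A rough sketch of this query-based fusion idea (the shapes, module choices, and names are assumptions for illustration, not the EchoFusion architecture):

```python
import torch
import torch.nn as nn

dim, num_queries = 256, 200
bev_queries = torch.randn(1, num_queries, dim)   # learned BEV queries (toy values)
radar_spectrum = torch.randn(1, 1024, dim)       # flattened range-azimuth features

# Cross-attention: each BEV query gathers radar evidence from the spectrum,
# ready to be combined with features from other sensors downstream.
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)
fused_queries, _ = cross_attn(query=bev_queries, key=radar_spectrum, value=radar_spectrum)
```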
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Multimodal Industrial Anomaly Detection via Hybrid Fusion [59.16333340582885]
We propose a novel multimodal anomaly detection method with a hybrid fusion scheme.
Our model outperforms the state-of-the-art (SOTA) methods on both detection and segmentation precision on the MVTec-3D AD dataset.
arXiv Detail & Related papers (2023-03-01T15:48:27Z)
- Bridging the View Disparity of Radar and Camera Features for Multi-modal Fusion 3D Object Detection [6.959556180268547]
This paper focuses on how to utilize millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection.
A novel method is proposed that realizes feature-level fusion under bird's-eye view (BEV) for a better feature representation.
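To make the view alignment concrete, a hypothetical sketch of rasterizing radar points into a BEV grid so they can be fused with camera BEV features (the grid extents, resolution, and accumulated channel are illustrative assumptions):

```python
import numpy as np

def radar_points_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.5):
    # points: (N, 3) array of (x, y, doppler); returns an (H, W) BEV map
    # accumulating per-cell Doppler magnitude.
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    bev = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / cell).astype(int)
    ys = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    np.add.at(bev, (ys[valid], xs[valid]), np.abs(points[valid, 2]))
    return bev

# Toy usage: 100 random radar points rasterized into a 100x100 grid.
bev = radar_points_to_bev(np.random.rand(100, 3) * [50.0, 50.0, 5.0] - [0.0, 25.0, 0.0])
```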
arXiv Detail & Related papers (2022-08-25T13:21:37Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution that performs as well as the video camera, even though the camera employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Human Behavior Recognition Method Based on CEEMD-ES Radar Selection [12.335803365712277]
Millimeter-wave radar for identifying human behavior has been widely used in medical, security, and other fields.
Processing data from multiple radars also incurs substantial time and computational cost.
The Complementary Ensemble Empirical Mode Decomposition-Energy Slice (CEEMD-ES) multistatic radar selection method is proposed to solve these problems.
Experiments show that this method can effectively select the radar, and the recognition rate of three kinds of human actions is 98.53%.
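Only as a loose analogy to the selection idea: pick the radar whose return carries the most energy in a motion-relevant band. Below, a plain FFT band energy stands in for the CEEMD energy slices, and the sampling rate and band limits are assumptions.

```python
import numpy as np

def select_radar(signals, fs=1000.0, band=(10.0, 200.0)):
    # signals: list of 1-D time series, one per radar; returns the index of
    # the radar with the most energy in the chosen Doppler band.
    energies = []
    for s in signals:
        spectrum = np.fft.rfft(s)
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        energies.append(np.sum(np.abs(spectrum[mask]) ** 2))
    return int(np.argmax(energies))

# Toy usage: three synthetic radar returns; the second carries a 50 Hz tone.
t = np.arange(0, 1, 1.0 / 1000.0)
signals = [np.random.randn(len(t)) * 0.1 for _ in range(3)]
signals[1] = signals[1] + np.sin(2 * np.pi * 50 * t)
best = select_radar(signals)  # -> 1
```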
arXiv Detail & Related papers (2022-06-06T16:01:06Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space via either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
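The flavor of coupling fusion to detection can be sketched as a joint objective (the fusion term and weighting below are illustrative stand-ins, not TarDAL's actual adversarial losses):

```python
import torch

def joint_loss(fused, ir, vis, det_loss, lambda_det=0.5):
    # Keep salient intensities from both modalities while also optimizing
    # the downstream detector's loss, so the fused image stays detector-friendly.
    fusion_term = torch.mean((fused - torch.maximum(ir, vis)) ** 2)
    return fusion_term + lambda_det * det_loss

# Toy usage with a placeholder fusion output and detector loss.
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
loss = joint_loss((ir + vis) / 2, ir, vis, det_loss=torch.tensor(0.3))
```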
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that noise in Radar measurements is one of the main reasons that prevents existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
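As one simple illustration of suppressing such Radar noise before fusion (a local-median consistency filter; the thresholds and approach are assumptions, not the paper's method):

```python
import numpy as np

def filter_radar_depths(uv, depths, radius=20.0, max_dev=2.0):
    # uv: (N, 2) projected pixel coordinates; depths: (N,) radar depths in
    # meters. Keep points within max_dev meters of the median depth of
    # their spatial neighbors; isolated points are kept by default.
    keep = np.ones(len(depths), dtype=bool)
    for i in range(len(depths)):
        d2 = np.sum((uv - uv[i]) ** 2, axis=1)
        neighbors = depths[d2 <= radius ** 2]
        if len(neighbors) >= 3 and abs(depths[i] - np.median(neighbors)) > max_dev:
            keep[i] = False
    return keep

# Toy usage: 200 projected radar points with random pixel locations/depths.
uv = np.random.rand(200, 2) * [640, 480]
depths = np.random.rand(200) * 50
mask = filter_radar_depths(uv, depths)
```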
arXiv Detail & Related papers (2020-09-30T19:01:33Z)