Vision Guided MIMO Radar Beamforming for Enhanced Vital Signs Detection in Crowds
- URL: http://arxiv.org/abs/2306.10515v1
- Date: Sun, 18 Jun 2023 10:09:16 GMT
- Title: Vision Guided MIMO Radar Beamforming for Enhanced Vital Signs Detection in Crowds
- Authors: Shuaifeng Jiang, Ahmed Alkhateeb, Daniel W. Bliss, and Yu Rong
- Abstract summary: We develop a novel dual-sensing system, in which a vision sensor is leveraged to guide digital beamforming in a radar.
The calibrated dual system achieves about two-centimeter precision in three-dimensional space within a field of view of $75^\circ$ by $65^\circ$ and for a range of two meters.
- Score: 26.129503530877006
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Radar as a remote sensing technology has been used to analyze human activity
for decades. Despite all the great features such as motion sensitivity, privacy
preservation, penetrability, and more, radar has limited spatial degrees of
freedom compared to optical sensors, which makes it challenging to sense
crowded environments without prior information. In this paper, we develop a
novel dual-sensing system, in which a vision sensor is leveraged to guide
digital beamforming in a multiple-input multiple-output (MIMO) radar. Also, we
develop a calibration algorithm to align the two types of sensors and show that
the calibrated dual system achieves about two-centimeter precision in
three-dimensional space within a field of view of $75^\circ$ by $65^\circ$ and
for a range of two meters. Finally, we show that the proposed approach is
capable of simultaneously detecting the vital signs of a group of closely
spaced subjects, sitting and standing, in a cluttered environment, which
highlights a promising direction for vital signs detection in realistic
environments.
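The abstract mentions a calibration algorithm that aligns the vision sensor with the radar but does not spell it out. Below is a minimal sketch, not the paper's method, of the standard rigid-alignment (Kabsch/Procrustes) solution one could use for such a step, assuming N matched 3-D points observed by both sensors (e.g., corner-reflector positions detected by the camera and localized by the radar); the function name and the commented-out correspondence loader are hypothetical.

```python
# A minimal sketch of camera-to-radar extrinsic calibration via the
# Kabsch/Procrustes rigid transform (an assumption, not the paper's
# published algorithm).
import numpy as np

def rigid_calibration(p_cam: np.ndarray, p_radar: np.ndarray):
    """Least-squares rigid transform (R, t) with p_radar ~= R @ p_cam + t.

    p_cam, p_radar: (N, 3) arrays of corresponding 3-D points.
    """
    mu_c = p_cam.mean(axis=0)
    mu_r = p_radar.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (p_cam - mu_c).T @ (p_radar - mu_r)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_c
    return R, t

# Usage: residuals after alignment indicate calibration precision.
# p_cam, p_radar = load_correspondences()   # hypothetical loader
# R, t = rigid_calibration(p_cam, p_radar)
# err = np.linalg.norm((p_cam @ R.T + t) - p_radar, axis=1)
# print(f"mean 3-D residual: {err.mean() * 100:.1f} cm")
```

After alignment, the mean residual over held-out targets is the natural way to report a figure like the two-centimeter precision quoted above.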
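Likewise, a hedged sketch of what vision-guided digital beamforming can look like once the sensors are aligned: steer the MIMO virtual array toward the camera-provided 3-D position with a conventional conjugate (matched-filter) beamformer, then read respiration from the phase of the beamformed slow-time signal. The carrier frequency, frame rate, and near-field steering model below are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of vision-guided digital beamforming plus phase-based
# respiration readout (illustrative parameters, not the paper's).
import numpy as np

C = 3e8
FC = 77e9                      # assumed carrier frequency (Hz)
LAM = C / FC
FRAME_RATE = 20.0              # assumed slow-time sampling rate (Hz)

def steering_vector(elem_xyz: np.ndarray, target_xyz: np.ndarray) -> np.ndarray:
    """Near-field steering vector for virtual elements at elem_xyz, shape (M, 3)."""
    d = np.linalg.norm(target_xyz[None, :] - elem_xyz, axis=1)   # (M,)
    return np.exp(-1j * 2.0 * np.pi * d / LAM)                   # (M,)

def beamform(x: np.ndarray, elem_xyz: np.ndarray, target_xyz: np.ndarray):
    """x: (M, T) complex slow-time samples at the target's range bin.

    Returns the slow-time signal focused on the camera-provided 3-D point.
    """
    a = steering_vector(elem_xyz, target_xyz)
    return (a.conj() / len(a)) @ x                               # (T,)

def respiration_rate_hz(y: np.ndarray) -> float:
    """Dominant breathing frequency from the unwrapped phase of y."""
    phase = np.unwrap(np.angle(y))
    phase -= phase.mean()
    spec = np.abs(np.fft.rfft(phase * np.hanning(len(phase))))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / FRAME_RATE)
    band = (freqs > 0.1) & (freqs < 0.5)    # typical respiration band
    return freqs[band][np.argmax(spec[band])]
```

Running `beamform` once per camera-detected subject is what allows the closely spaced subjects in the abstract to be monitored simultaneously from a single radar frame stream.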
Related papers
- MAROON: A Framework for the Joint Characterization of Near-Field High-Resolution Radar and Optical Depth Imaging Techniques [4.816237933371206]
We take on the unique challenge of characterizing depth imagers from both the optical and the radio-frequency domain.
We provide a comprehensive evaluation of their depth measurements with respect to distinct object materials, geometries, and object-to-sensor distances.
All object measurements will be made public in the form of a multimodal dataset, called MAROON.
arXiv Detail & Related papers (2024-11-01T11:53:10Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Automatic Spatial Calibration of Near-Field MIMO Radar With Respect to Optical Depth Sensors [4.328226032204419]
We propose a novel, joint calibration approach for optical RGB-D sensors and MIMO radars that is designed to operate in the radar's near-field range.
Our pipeline is built around a bespoke calibration target, allowing for automatic target detection and localization.
We validate our approach using two different depth sensing technologies from the optical domain.
arXiv Detail & Related papers (2024-03-16T17:24:46Z)
- Fisheye Camera and Ultrasonic Sensor Fusion For Near-Field Obstacle Perception in Bird's-Eye-View [4.536942273206611]
We present the first end-to-end multimodal fusion model tailored for efficient obstacle perception in a bird's-eye-view (BEV) perspective.
Fisheye cameras are frequently employed for comprehensive surround-view perception, including rear-view obstacle localization.
However, the performance of such cameras can significantly deteriorate in low-light conditions, during nighttime, or when subjected to intense sun glare.
arXiv Detail & Related papers (2024-02-01T14:52:16Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system effectively exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- mmSense: Detecting Concealed Weapons with a Miniature Radar Sensor [2.963928676363629]
mmSense is an end-to-end portable miniaturised real-time system that can accurately detect the presence of concealed metallic objects on persons.
mmSense features millimeter wave radar technology, provided by Google's Soli sensor for its data acquisition, and TransDope, our real-time neural network, capable of processing a single radar data frame in 19 ms.
arXiv Detail & Related papers (2023-02-28T15:06:03Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- ImLiDAR: Cross-Sensor Dynamic Message Propagation Network for 3D Object Detection [20.44294678711783]
We propose ImLiDAR, a new 3D object detection (3OD) paradigm to narrow the cross-sensor discrepancies by progressively fusing the multi-scale features of camera Images and LiDAR point clouds.
First, we propose a cross-sensor dynamic message propagation module to combine the best of the multi-scale image and point features.
Second, we pose a direct set prediction problem that allows designing an effective set-based detector.
arXiv Detail & Related papers (2022-11-17T13:31:23Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the one employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal (a minimal sketch of such a complex-valued convolution follows this list).
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
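As noted in the CVCNN entry above, here is a minimal PyTorch sketch of the complex-valued convolution that such networks build on; this is the generic operation, not that paper's exact architecture, and the layer and channel names are illustrative.

```python
# A complex kernel W = A + iB applied to a complex input z = x + iy gives
#   W * z = (A*x - B*y) + i(A*y + B*x),
# implemented with two real-valued Conv1d layers so that phase information
# propagates through the network.
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int, padding: int = 0):
        super().__init__()
        self.conv_re = nn.Conv1d(in_ch, out_ch, k, padding=padding)  # A
        self.conv_im = nn.Conv1d(in_ch, out_ch, k, padding=padding)  # B

    def forward(self, x_re: torch.Tensor, x_im: torch.Tensor):
        # Real and imaginary parts of the complex convolution.
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_re(x_im) + self.conv_im(x_re)
        return out_re, out_im

# Usage on a batch of complex radar snapshots split into I/Q channels:
# layer = ComplexConv1d(in_ch=1, out_ch=8, k=5, padding=2)
# i, q = torch.randn(4, 1, 256), torch.randn(4, 1, 256)
# out_i, out_q = layer(i, q)
```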
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.