Motion Classification and Height Estimation of Pedestrians Using Sparse Radar Data
- URL: http://arxiv.org/abs/2103.02278v1
- Date: Wed, 3 Mar 2021 09:36:11 GMT
- Title: Motion Classification and Height Estimation of Pedestrians Using Sparse Radar Data
- Authors: Markus Horn, Ole Schumann, Markus Hahn, J\"urgen Dickmann, Klaus
Dietmayer
- Abstract summary: This work demonstrates that it is possible to estimate the body height of walking pedestrians using 2D radar targets. Furthermore, different pedestrian motion types are classified.
- Score: 8.366962883442035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A complete overview of the surrounding vehicle environment is important for
driver assistance systems and highly autonomous driving. Fusing results of
multiple sensor types like camera, radar and lidar is crucial for increasing
the robustness. The detection and classification of objects like cars, bicycles
or pedestrians has been analyzed in the past for many sensor types. Beyond
that, it is also helpful to refine these classes and distinguish for example
between different pedestrian types or activities. This task is usually
performed on camera data, though recent developments are based on radar
spectrograms. However, for most automotive radar systems, it is only possible
to obtain radar targets instead of the original spectrograms. This work
demonstrates that it is possible to estimate the body height of walking
pedestrians using 2D radar targets. Furthermore, different pedestrian motion
types are classified.
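The classification pipeline itself is not described in this summary. As a purely illustrative sketch (the target format, features, and threshold below are assumptions, not the authors' method), motion types could be told apart from the Doppler statistics of sparse 2D radar targets:

```python
from statistics import mean, pstdev

def motion_features(targets):
    """Compute simple features from a list of 2D radar targets.

    Each target is (range_m, azimuth_rad, doppler_mps, rcs_dbsm).
    Returns mean absolute Doppler and the Doppler spread across targets.
    """
    dopplers = [t[2] for t in targets]
    return mean(abs(d) for d in dopplers), pstdev(dopplers)

def classify_motion(targets, walk_spread=0.5):
    """Heuristic: a walking pedestrian's swinging limbs produce a wide
    Doppler spread across targets; a standing one does not."""
    _, spread = motion_features(targets)
    return "walking" if spread > walk_spread else "standing"

# Targets from a walking pedestrian: torso around 1.3 m/s, limbs vary widely.
walking = [(10.2, 0.1, 1.3, -5.0), (10.3, 0.1, 2.4, -8.0), (10.1, 0.1, 0.2, -7.0)]
standing = [(8.0, -0.2, 0.05, -6.0), (8.1, -0.2, -0.03, -6.5)]
print(classify_motion(walking))   # walking
print(classify_motion(standing))  # standing
```

A real system would accumulate targets over many frames and learn the decision boundary instead of hand-picking a threshold.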
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- The Radar Ghost Dataset -- An Evaluation of Ghost Objects in Automotive Radar Data [12.653873936535149]
Many more surfaces in a typical traffic scenario appear flat, i.e. mirror-like, relative to the radar's emitted signal.
This results in multi-path reflections or so called ghost detections in the radar signal.
We present a dataset with detailed manual annotations for different kinds of ghost detections.
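The multi-path mechanism behind such ghosts can be illustrated with simple mirror geometry (an illustrative computation, not the dataset's annotation procedure): a flat surface reflects the signal, so a first-order ghost appears near the real object's position mirrored across the surface.

```python
def reflect_point(p, wall_point, wall_normal):
    """Mirror point p across an (infinite) flat wall.

    wall_point: any point on the wall; wall_normal: unit normal of the wall.
    A first-order multi-path ghost appears near this mirrored position.
    """
    # Signed distance from p to the wall along the normal.
    d = sum((pi - wi) * ni for pi, wi, ni in zip(p, wall_point, wall_normal))
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, wall_normal))

# A guardrail along the x-axis (normal pointing in +y), real car at (12, 3).
ghost = reflect_point((12.0, 3.0), (0.0, 0.0), (0.0, 1.0))
print(ghost)  # (12.0, -3.0)
```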
arXiv Detail & Related papers (2024-04-01T19:20:32Z)
- Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
This review explores the different radar data representations utilized in autonomous driving systems.
We introduce the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
arXiv Detail & Related papers (2023-12-08T06:31:19Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end deep learning driven fashion using PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
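As a hedged illustration of the clustering-based variant (a generic DBSCAN-style grouping; the paper's actual algorithm and parameters are not given in this summary):

```python
def cluster_points(points, eps=1.0, min_pts=2):
    """Greedy DBSCAN-style clustering of 2D radar detection points.

    Points within eps of a cluster member join that cluster; clusters
    smaller than min_pts are marked as noise (label -1). Illustrative only.
    """
    labels = [None] * len(points)

    def near(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= eps ** 2

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        members = [i]
        labels[i] = cluster
        k = 0
        while k < len(members):  # region-growing expansion
            for j in range(len(points)):
                if labels[j] is None and near(members[k], j):
                    labels[j] = cluster
                    members.append(j)
            k += 1
        if len(members) < min_pts:
            for m in members:
                labels[m] = -1  # too small: noise
        else:
            cluster += 1
    return labels

pts = [(0.0, 0.0), (0.4, 0.1), (0.8, 0.0), (5.0, 5.0), (5.2, 4.9), (9.0, 0.0)]
print(cluster_points(pts))  # [0, 0, 0, 1, 1, -1]
```

In practice the distance metric would also weigh Doppler and semantic attributes of each detection point, not just position.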
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- Multi-View Radar Semantic Segmentation [3.2093811507874768]
Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of radar raw data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically.
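The "views" of a range-angle-Doppler tensor can be sketched as follows (max-aggregation is an assumed, illustrative choice; the paper's architectures consume the views with learned layers, not a fixed reduction):

```python
def tensor_views(rad):
    """rad[r][a][d]: a range-angle-Doppler power tensor as nested lists.

    Returns the two classic 2D views, range-angle and range-Doppler,
    by collapsing the remaining axis with max (illustrative choice).
    """
    R, A, D = len(rad), len(rad[0]), len(rad[0][0])
    range_angle = [[max(rad[r][a][d] for d in range(D)) for a in range(A)]
                   for r in range(R)]
    range_doppler = [[max(rad[r][a][d] for a in range(A)) for d in range(D)]
                     for r in range(R)]
    return range_angle, range_doppler

# A 4x3x5 tensor with a single strong reflection.
rad = [[[0.0] * 5 for _ in range(3)] for _ in range(4)]
rad[2][1][4] = 7.0
ra, rd = tensor_views(rad)
print(ra[2][1], rd[2][4])  # 7.0 7.0
```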
arXiv Detail & Related papers (2021-03-30T09:56:41Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
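The instantaneous velocity measurement comes from the Doppler effect. As a small illustration (the 77 GHz carrier is a typical automotive radar band, assumed here rather than taken from the paper):

```python
def radial_velocity(doppler_hz, carrier_hz=77e9, c=299_792_458.0):
    """Radial velocity from a Doppler shift for a monostatic radar:
    v = f_d * lambda / 2, with lambda = c / f_carrier."""
    wavelength = c / carrier_hz
    return doppler_hz * wavelength / 2.0

v = radial_velocity(5000.0)  # a 5 kHz Doppler shift
print(round(v, 2))  # 9.73 (m/s)
```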
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations [0.0]
We introduce CARRADA, a dataset of synchronized camera and radar recordings with range-angle-Doppler annotations.
We also present a semi-automatic annotation approach, which was used to annotate the dataset, and a radar semantic segmentation baseline.
arXiv Detail & Related papers (2020-05-04T13:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.