Anomaly Detection in Radar Data Using PointNets
- URL: http://arxiv.org/abs/2109.09401v1
- Date: Mon, 20 Sep 2021 10:02:24 GMT
- Title: Anomaly Detection in Radar Data Using PointNets
- Authors: Thomas Griebel, Dominik Authaler, Markus Horn, Matti Henning, Michael
Buchholz, and Klaus Dietmayer
- Abstract summary: We present an approach based on PointNets to detect anomalous radar targets.
Our method is evaluated on a real-world dataset in urban scenarios.
- Score: 7.3600716208089825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For autonomous driving, radar is an important sensor type. On the one hand,
radar offers a direct measurement of the radial velocity of targets in the
environment. On the other hand, radar sensors are known in the literature for
their robustness against several kinds of adverse weather conditions. On the
downside, however, radar is susceptible to ghost targets or clutter, which can
arise from several sources, e.g., reflective surfaces in the environment.
Ghost targets, for instance, can result in erroneous object detections. It is
therefore desirable to identify anomalous targets as early as possible in
radar data. In this work, we present an approach based on PointNets to detect
anomalous radar targets. Modifying the PointNet architecture to suit our task,
we developed a novel grouping variant that contributes to a multi-form
grouping module. Our method is evaluated on a real-world dataset in urban
scenarios and shows promising results for the detection of anomalous radar
targets.
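The paper's exact architecture and grouping module are not reproduced here. As a rough illustration of the PointNet idea the abstract builds on — a shared per-point MLP followed by a symmetric (order-invariant) pooling — the sketch below scores a set of radar targets with randomly initialized weights. The four input features (e.g., position, radial velocity, RCS) and all layer sizes are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: 4 features per radar target, two shared
# layers, one output logit. Random weights, for shape/invariance demo only.
params = {
    "w1": rng.normal(size=(4, 16)), "b1": np.zeros(16),
    "w2": rng.normal(size=(16, 32)), "b2": np.zeros(32),
    "w3": rng.normal(size=32), "b3": 0.0,
}

def pointnet_anomaly_score(points, p):
    """Minimal PointNet-style scorer: a shared per-point MLP followed by
    symmetric max-pooling, so the score does not depend on target order."""
    h = np.maximum(points @ p["w1"] + p["b1"], 0.0)  # shared layer 1 (ReLU)
    h = np.maximum(h @ p["w2"] + p["b2"], 0.0)       # shared layer 2 (ReLU)
    g = h.max(axis=0)                                # order-invariant pooling
    logit = g @ p["w3"] + p["b3"]
    return 1.0 / (1.0 + np.exp(-logit))              # sigmoid pseudo-probability

targets = rng.normal(size=(10, 4))  # ten targets, four features each
score = pointnet_anomaly_score(targets, params)
shuffled = pointnet_anomaly_score(targets[rng.permutation(10)], params)
```

Because the max-pooling aggregates over the point dimension, `score` and `shuffled` are identical: permuting the radar targets cannot change the output, which is the key property PointNet-based methods rely on for unordered point clouds.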
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - The Radar Ghost Dataset -- An Evaluation of Ghost Objects in Automotive Radar Data [12.653873936535149]
Relative to the radar's emitted signal, many surfaces in a typical traffic scenario appear flat.
This results in multi-path reflections, or so-called ghost detections, in the radar signal.
We present a dataset with detailed manual annotations for different kinds of ghost detections.
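To make the multi-path mechanism behind such ghost detections concrete: a flat surface acts like a mirror, so the bounced echo places an apparent target at the real target's position reflected across the surface. The sketch below computes that mirrored position in 2D; the sensor/target/wall coordinates are made-up illustration values, not from the dataset.

```python
import numpy as np

def mirror_across_line(target, wall_p, wall_d):
    """Reflect a 2D target position across a flat reflector given by a
    point wall_p on the surface and a direction vector wall_d along it."""
    d = np.asarray(wall_d, dtype=float)
    d = d / np.linalg.norm(d)                 # unit direction of the wall
    v = np.asarray(target, dtype=float) - wall_p
    along = (v @ d) * d                       # component parallel to the wall
    return wall_p + 2.0 * along - v           # flip the perpendicular component

# Sensor at the origin, real target at (10, 2), guardrail along the
# x-axis at y = -1: the bounced path suggests a ghost at (10, -4).
ghost = mirror_across_line([10.0, 2.0], np.array([0.0, -1.0]), [1.0, 0.0])
```

The ghost sits as far behind the reflector as the real target sits in front of it, which is why ghost detections often appear as plausible but displaced copies of real objects.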
arXiv Detail & Related papers (2024-04-01T19:20:32Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
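The "distance-dependent clustering" mentioned above reflects a radar-specific property: point clouds thin out with range, so a fixed neighborhood radius over-segments distant objects. A minimal sketch of that idea — greedy grouping with a radius that grows with range — is shown below; the base radius and scale factor are hypothetical parameters, not values from the paper.

```python
import numpy as np

def distance_dependent_groups(points, base_radius=0.5, scale=0.02):
    """Greedily group 2D radar detections, letting the grouping radius
    grow linearly with range from the sensor at the origin."""
    points = np.asarray(points, dtype=float)
    labels = -np.ones(len(points), dtype=int)  # -1 marks "not yet grouped"
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        r = base_radius + scale * np.linalg.norm(points[i])  # range-scaled radius
        dists = np.linalg.norm(points - points[i], axis=1)
        labels[(dists <= r) & (labels == -1)] = next_label   # absorb neighbors
        next_label += 1
    return labels

# Two detections 1 m apart: grouped at 100 m range, separate near the sensor.
labels = distance_dependent_groups(
    [[0.0, 0.0], [0.3, 0.0], [5.0, 0.0], [100.0, 0.0], [101.0, 0.0]])
```

With these parameters the two far detections (100 m and 101 m) share a label while the 5 m detection stays alone, mimicking how real clustering stages (e.g., range-adaptive DBSCAN) compensate for radar sparsity at distance.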
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection
Model [13.214257841152033]
Radar-centric data sets receive little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z) - A recurrent CNN for online object detection on raw radar frames [7.074916574419171]
This work presents a new recurrent CNN architecture for online radar object detection.
We propose an end-to-end trainable architecture mixing convolutions and ConvLSTMs to learn dependencies between successive frames.
Our model is causal and requires only the past information encoded in the memory of the ConvLSTMs to detect objects.
arXiv Detail & Related papers (2022-12-21T16:36:36Z) - Radar Artifact Labeling Framework (RALF): Method for Plausible Radar
Detections in Datasets [2.5899040911480187]
We propose a cross sensor Radar Artifact Labeling Framework (RALF) for labeling sparse radar point clouds.
RALF provides plausibility labels for radar raw detections, distinguishing between artifacts and targets.
We validate the results by evaluating error metrics on a semi-manually labeled ground-truth dataset of $3.28 \cdot 10^6$ points.
arXiv Detail & Related papers (2020-12-03T15:11:31Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar
Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.