RadarGNN: Transformation Invariant Graph Neural Network for Radar-based
Perception
- URL: http://arxiv.org/abs/2304.06547v1
- Date: Thu, 13 Apr 2023 13:57:21 GMT
- Authors: Felix Fent, Philipp Bauerschmidt and Markus Lienkamp
- Abstract summary: A novel graph neural network is proposed that uses not only the information of the points themselves but also the relationships between them.
The model is designed to consider both point features and point-pair features, embedded in the edges of the graph.
The RadarGNN model outperforms all previous methods on the RadarScenes dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable perception has to be robust against challenging
environmental conditions. Therefore, recent efforts have focused on the use of
radar sensors in addition to camera and lidar sensors for perception
applications. However, the sparsity of radar point clouds and the poor data
availability remain challenging for current perception methods. To address
these challenges, a novel graph neural network is proposed that uses not only
the information of the points themselves but also the relationships between
the points. The
model is designed to consider both point features and point-pair features,
embedded in the edges of the graph. Furthermore, a general approach for
achieving transformation invariance is proposed which is robust against unseen
scenarios and also counteracts the limited data availability. The
transformation invariance is achieved by an invariant data representation
rather than an invariant model architecture, making it applicable to other
methods. The proposed RadarGNN model outperforms all previous methods on the
RadarScenes dataset. In addition, the effects of different invariances on the
object detection and semantic segmentation quality are investigated. The code
is made available as open-source software under
https://github.com/TUMFTM/RadarGNN.
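The core idea of achieving invariance through the data representation rather than the model can be illustrated with a small sketch. This is a hypothetical helper, not the authors' implementation: the graph edges carry only quantities that are unchanged when the whole point cloud is rotated or translated, such as pairwise distances and Doppler differences.

```python
import numpy as np

def build_invariant_graph(points, velocities, k=3):
    """Connect each radar point to its k nearest neighbours and attach
    edge features that are invariant to rotation and translation of the
    whole point cloud (here: pairwise distance and Doppler difference).

    points:     (N, 2) array of x/y positions
    velocities: (N,) array of radial (Doppler) velocities
    Returns (edges, edge_features), edges as (N*k, 2) index pairs.
    """
    n = len(points)
    # Pairwise Euclidean distances -- invariant under rigid transforms.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)  # exclude self-loops

    edges, feats = [], []
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:
            edges.append((i, j))
            # Edge features: distance and Doppler difference, neither of
            # which changes when the scene is rotated or shifted.
            feats.append((dist[i, j], velocities[j] - velocities[i]))
    return np.array(edges), np.array(feats)
```

Because the edge features depend only on relative geometry, the same graph description is produced for any rigidly transformed copy of the scene, which is what makes the representation (rather than the network) carry the invariance.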
Related papers
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Geo-Localization Based on Dynamically Weighted Factor-Graph [74.75763142610717]
Feature-based geo-localization relies on associating features extracted from aerial imagery with those detected by the vehicle's sensors.
This requires that the landmarks be observable from both sources.
We present a dynamically weighted factor graph model for the vehicle's trajectory estimation.
arXiv Detail & Related papers (2023-11-13T12:44:14Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
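The distance-dependent clustering mentioned above can be sketched generically. The following is an illustrative single-linkage clustering with made-up parameters, not the paper's actual pre-processing: the merge radius grows with range, reflecting that radar point clouds become sparser with distance.

```python
import numpy as np

def range_adaptive_cluster(points, base_eps=0.5, gain=0.05):
    """Single-linkage clustering whose merge radius grows with range.

    Radar point clouds thin out with distance, so a fixed clustering
    radius either splits far objects or merges near ones.  Here the
    per-point radius is base_eps + gain * range (both parameters are
    made up for illustration).  Returns one integer label per point.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    eps = base_eps + gain * np.linalg.norm(pts, axis=1)

    parent = list(range(n))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Merge two points if they lie within the larger of their two radii.
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) <= max(eps[i], eps[j]):
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    relabel = {r: l for l, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]
```

With these toy parameters, two detections 2 m apart merge at 50 m range but stay separate near the sensor, which is the qualitative behaviour a range-adaptive radius is meant to provide.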
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z)
- Radar Image Reconstruction from Raw ADC Data using Parametric Variational Autoencoder with Domain Adaptation [0.0]
We propose a parametrically constrained variational autoencoder, capable of generating the clustered and localized target detections on the range-angle image.
To circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies.
arXiv Detail & Related papers (2022-05-30T16:17:36Z)
- Improved Orientation Estimation and Detection with Hybrid Object Detection Networks for Automotive Radar [1.53934570513443]
We present novel hybrid architectures that combine grid- and point-based processing to improve radar-based object detection networks.
We show that a point-based model can extract neighborhood features, leveraging the exact relative positions of points, before grid rendering.
This has significant benefits for a following convolutional detection backbone.
arXiv Detail & Related papers (2022-05-03T06:29:03Z)
- Change Detection from Synthetic Aperture Radar Images via Graph-Based Knowledge Supplement Network [36.41983596642354]
We propose a Graph-based Knowledge Supplement Network (GKSNet) for image change detection.
To be more specific, we extract discriminative information from the existing labeled dataset as additional knowledge.
To validate the proposed method, we conducted extensive experiments on four SAR datasets.
arXiv Detail & Related papers (2022-01-22T02:50:50Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Dense Label Encoding for Boundary Discontinuity Free Rotation Detection [69.75559390700887]
This paper explores a relatively less-studied methodology based on classification.
We propose new techniques to push its frontier in two aspects.
Experiments and visual analysis on large-scale public datasets for aerial images show the effectiveness of our approach.
arXiv Detail & Related papers (2020-11-19T05:42:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.