Contrastive Learning for Automotive mmWave Radar Detection Points Based
Instance Segmentation
- URL: http://arxiv.org/abs/2203.06553v1
- Date: Sun, 13 Mar 2022 03:00:34 GMT
- Title: Contrastive Learning for Automotive mmWave Radar Detection Points Based
Instance Segmentation
- Authors: Weiyi Xiong, Jianan Liu, Yuxuan Xia, Tao Huang, Bing Zhu and Wei Xiang
- Abstract summary: We propose a contrastive learning approach for implementing radar detection points-based instance segmentation.
We define the positive and negative samples according to the ground-truth label, apply the contrastive loss to train the model first, and then perform training for the following downstream task.
Experiments show that when the ground-truth information is only available for 5% of the training data, our method still achieves a comparable performance to the approach trained in a supervised manner with 100% ground-truth information.
- Score: 9.491866334097114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automotive mmWave radar plays a key role in advanced driver assistance
systems (ADAS) and autonomous driving. Deep learning-based instance
segmentation enables real-time object identification from the radar detection
points. In the conventional training process, accurate annotation is the key.
However, high-quality annotations of radar detection points are challenging to
achieve due to their ambiguity and sparsity. To address this issue, we propose
a contrastive learning approach for implementing radar detection points-based
instance segmentation. We define the positive and negative samples according to
the ground-truth label, apply the contrastive loss to train the model first,
and then perform training for the following downstream task. In addition, these
two steps can be merged into one, and pseudo labels can be generated for the
unlabeled data to improve the performance further. Thus, there are four
different training settings for our method. Experiments show that when the
ground-truth information is only available for 5% of the training data, our
method still achieves a comparable performance to the approach trained in a
supervised manner with 100% ground-truth information.
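The sampling strategy the abstract describes (points sharing a ground-truth instance label are positives, all others negatives) is the standard supervised contrastive setup. Below is a minimal NumPy sketch of such a loss, assuming cosine similarity and a temperature of 0.1; the paper's exact formulation and hyperparameters may differ.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over one batch of detection points.
    embeddings: (N, D) feature vectors; labels: (N,) instance ids.
    Points with the same label are pulled together, others pushed apart."""
    # L2-normalise so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)         # exclude self-pairs
    # row-wise log-softmax over all other points
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos_mask, 0.0)
    # mean negative log-probability of positives, per anchor that has any
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0
    pos_log_prob = np.where(pos_mask > 0, log_prob, 0.0).sum(axis=1)
    return (-pos_log_prob[valid] / pos_counts[valid]).mean()
```

With well-separated instance clusters the loss is near zero; with labels that cut across clusters it grows large, which is what drives the pre-trained features to respect instance boundaries.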
Related papers
- Leveraging Self-Supervised Instance Contrastive Learning for Radar
Object Detection [7.728838099011661]
This paper presents RiCL, an instance contrastive learning framework to pre-train radar object detectors.
We aim to pre-train an object detector's backbone, head and neck so that it can learn from less data.
arXiv Detail & Related papers (2024-02-13T12:53:33Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep learning-based method that applies point cloud convolutions to radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network dubbed DOAD to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z)
- Histogram-based Deep Learning for Automotive Radar [6.85316573653194]
We present a deep learning approach for processing point cloud data recorded with radar sensors.
Compared to existing methods, the design of our approach is extremely simple: it boils down to computing a point cloud histogram and passing it through a multi-layer perceptron.
Our approach matches and surpasses state-of-the-art approaches on the task of automotive radar object type classification.
arXiv Detail & Related papers (2023-03-06T09:06:49Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Learning Moving-Object Tracking with FMCW LiDAR [53.05551269151209]
We propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
Given the labels, we propose a contrastive learning framework, which pulls together the features from the same instance in embedding space and pushes apart the features from different instances to improve the tracking quality.
arXiv Detail & Related papers (2022-03-02T09:11:36Z)
- Contrastive Learning for Unsupervised Radar Place Recognition [31.04172735067443]
We learn, in an unsupervised way, an embedding from sequences of radar images that is suitable for solving the place recognition problem with complex radar data.
We experiment across two prominent urban radar datasets totalling over 400 km of driving and show that we achieve a new radar place recognition state-of-the-art.
arXiv Detail & Related papers (2021-10-06T13:34:09Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end deep learning driven fashion using PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- Self-Supervised Person Detection in 2D Range Data using a Calibrated
Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z)
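The pseudo-labelling idea appearing both in the main paper's merged training setting and in the person-detection paper above is commonly implemented by keeping only high-confidence predictions on unlabeled data. A generic confidence-thresholding sketch follows; the function name and the 0.9 threshold are illustrative assumptions, not details from either paper.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Turn model predictions on unlabeled points into pseudo-labels.
    probs: (N, C) per-class probabilities for N unlabeled points.
    Returns (indices, labels) for points whose top-class confidence
    clears the threshold; the rest stay unlabeled."""
    confidence = probs.max(axis=1)
    keep = np.flatnonzero(confidence >= threshold)
    return keep, probs[keep].argmax(axis=1)
```

The retained (index, label) pairs are then mixed into the labeled pool for the next training round, so the model can bootstrap from the small annotated fraction.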
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.