A System-driven Automatic Ground Truth Generation Method for DL
Inner-City Driving Corridor Detectors
- URL: http://arxiv.org/abs/2207.11234v1
- Date: Wed, 20 Jul 2022 12:55:16 GMT
- Title: A System-driven Automatic Ground Truth Generation Method for DL
Inner-City Driving Corridor Detectors
- Authors: Jona Ruthardt (Robert Bosch GmbH) and Thomas Michalke (Robert Bosch
GmbH)
- Abstract summary: We propose an automatic labeling approach for semantic segmentation of the drivable ego corridor.
The proposed holistic approach could be used in an automated data loop, allowing continuous improvement of the dependent perception modules.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven perception approaches are well-established in automated driving systems. In many fields, even super-human performance is reached. Unlike prediction and planning, the perception domain relies mainly on supervised learning algorithms. A major remaining challenge is therefore the efficient generation of ground truth data. Because perception modules are positioned close to the sensor, they typically run on high-bandwidth raw sensor data. As a result, generating ground truth labels usually requires significant manual effort, which leads to high costs for both the labeling itself and the necessary quality control. In this contribution, we propose an automatic labeling approach for semantic segmentation of the drivable ego corridor that reduces the manual effort by a factor of 150 or more. The proposed holistic approach could be used in an automated data loop, allowing continuous improvement of the dependent perception modules.
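The abstract above does not spell out how the labels are generated. Purely as an illustration of what a system-driven label generator for the drivable ego corridor could look like, the sketch below assumes that recorded ego poses (the driven trajectory) and camera calibration are available and projects the corridor swept by the vehicle into the image as a binary segmentation mask; all function and parameter names are hypothetical and not taken from the paper.

```python
# Illustrative sketch only -- the paper's actual labeling mechanism is not described
# in the abstract. Assumption: the ego corridor can be derived from the vehicle's own
# recorded trajectory plus camera calibration; all names below are hypothetical.
import numpy as np
import cv2  # requires opencv-python


def corridor_mask(ego_xy, vehicle_width, K, T_cam_from_world, img_shape):
    """Project the corridor swept by the driven trajectory into the camera image.

    ego_xy           : (N, 2) future ego positions on the ground plane (world frame, z = 0)
    vehicle_width    : corridor width in metres (assumed constant)
    K                : (3, 3) camera intrinsics
    T_cam_from_world : (4, 4) world-to-camera transform
    img_shape        : (H, W) of the target label mask
    """
    # Offset each trajectory point left/right by half the vehicle width.
    d = np.gradient(ego_xy, axis=0)                       # local driving direction
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)             # ground-plane normal to the path
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    left = ego_xy + 0.5 * vehicle_width * n
    right = ego_xy - 0.5 * vehicle_width * n

    # Closed corridor polygon on the ground plane (z = 0), in homogeneous coordinates.
    poly = np.concatenate([left, right[::-1]], axis=0)
    poly = np.hstack([poly, np.zeros((len(poly), 1)), np.ones((len(poly), 1))])

    # Project the polygon vertices into the image, keeping points in front of the camera.
    pts_cam = (T_cam_from_world @ poly.T)[:3]
    pts_cam = pts_cam[:, pts_cam[2] > 0.1]
    uv = (K @ pts_cam)[:2] / (K @ pts_cam)[2]

    # Rasterise the projected polygon into a binary ego-corridor label.
    mask = np.zeros(img_shape, dtype=np.uint8)
    cv2.fillPoly(mask, [uv.T.astype(np.int32)], 1)
    return mask
```

A real pipeline would additionally have to handle occlusions, dynamic objects, and label quality control; the sketch only illustrates the basic projection geometry.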
Related papers
- AutoSen: Improving Automatic WiFi Human Sensing through Cross-Modal Autoencoder [56.44764266426344]
WiFi human sensing is highly regarded for its low-cost and privacy advantages in recognizing human activities.
Traditional cross-modal methods, aimed at enabling self-supervised learning without labeled data, struggle to extract meaningful features from amplitude-phase combinations.
We introduce AutoSen, an innovative automatic WiFi sensing solution that departs from conventional approaches.
arXiv Detail & Related papers (2024-01-08T19:50:02Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show equivalent or even more impressive performance compared to fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Pedestrian Detection: Domain Generalization, CNNs, Transformers and Beyond [82.37430109152383]
We show that current pedestrian detectors handle even small domain shifts poorly in cross-dataset evaluation.
We attribute the limited generalization to two main factors: the method and the current sources of data.
We propose a progressive fine-tuning strategy which improves generalization.
arXiv Detail & Related papers (2022-01-10T06:00:26Z)
- FAST3D: Flow-Aware Self-Training for 3D Object Detectors [12.511087244102036]
State-of-the-art self-training approaches mostly ignore the temporal nature of autonomous driving data.
We propose a flow-aware self-training method that enables unsupervised domain adaptation for 3D object detectors on continuous LiDAR point clouds.
Our results show a significant improvement over the state-of-the-art, without any prior target domain knowledge.
arXiv Detail & Related papers (2021-10-18T14:32:05Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Self-Supervised Drivable Area and Road Anomaly Segmentation using RGB-D Data for Robotic Wheelchairs [26.110522390201094]
We develop a pipeline that can automatically generate segmentation labels for drivable areas and road anomalies.
Our proposed automatic labeling pipeline achieves an impressive speed-up compared to manual labeling.
Our proposed self-supervised approach exhibits more robust and accurate results than the state-of-the-art traditional algorithms.
arXiv Detail & Related papers (2020-07-12T10:12:46Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
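As a minimal, hypothetical illustration of the gradual self-training scheme summarised above (a sketch under assumed components, not the authors' implementation), the model below pseudo-labels each intermediate domain with hard labels and refits a regularized classifier on them:

```python
# Minimal gradual self-training sketch (assumed pipeline, not the paper's code).
from sklearn.linear_model import LogisticRegression


def gradual_self_train(X_src, y_src, unlabeled_domains, C=0.1):
    """unlabeled_domains: list of feature arrays ordered from near-source to target."""
    # Keeping the classifier regularized (small C) matters even with abundant data,
    # in line with the paper's analysis.
    model = LogisticRegression(C=C, max_iter=1000).fit(X_src, y_src)
    for X_t in unlabeled_domains:
        pseudo = model.predict(X_t)   # hard pseudo-labels ("label sharpening")
        model = LogisticRegression(C=C, max_iter=1000).fit(X_t, pseudo)
    return model
```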
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
- Controlled time series generation for automotive software-in-the-loop testing using GANs [0.5352699766206808]
Testing automotive mechatronic systems partly uses the software-in-the-loop approach, where systematically covering inputs of the system-under-test remains a major challenge.
One approach is to craft input sequences, which eases control and feedback of the test process but falls short of exposing the system to realistic scenarios.
The other is to replay sequences recorded from field operations, which accounts for reality but requires collecting a well-labeled dataset of sufficient capacity for widespread use, which is expensive.
This work applies the well-known unsupervised learning framework of Generative Adversarial Networks (GANs) to learn an unlabeled dataset of recorded in-vehicle
arXiv Detail & Related papers (2020-02-16T16:19:29Z)
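For the last entry, a generic GAN training step on unlabeled time-series batches might look as follows; the architecture, sequence length, and absence of conditioning are assumptions for illustration, not details from the paper.

```python
# Hypothetical minimal GAN training step for synthetic in-vehicle time series.
import torch
import torch.nn as nn

seq_len, n_channels, latent_dim = 128, 4, 32

# Generator maps a latent vector to a flat sequence; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, seq_len * n_channels))
D = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * n_channels, 256),
                  nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()


def train_step(real):
    """real: (batch, seq_len, n_channels) tensor of recorded signals."""
    batch = real.size(0)
    fake = G(torch.randn(batch, latent_dim)).view(batch, seq_len, n_channels)

    # Discriminator update: real sequences -> 1, generated sequences -> 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The "controlled" generation named in the title presumably requires conditioning such a generator on test parameters, which this sketch omits.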
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.