TADAP: Trajectory-Aided Drivable area Auto-labeling with Pre-trained
self-supervised features in winter driving conditions
- URL: http://arxiv.org/abs/2312.12954v1
- Date: Wed, 20 Dec 2023 11:51:49 GMT
- Title: TADAP: Trajectory-Aided Drivable area Auto-labeling with Pre-trained
self-supervised features in winter driving conditions
- Authors: Eerik Alamikkotervo, Risto Ojala, Alvari Seppänen, Kari Tammi
- Abstract summary: Trajectory-Aided Drivable area Auto-labeling with Pre-trained self-supervised features (TADAP) is presented.
A prediction model trained with the TADAP labels achieved a +9.6 improvement in intersection over union.
- Score: 1.4993021283916008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detection of the drivable area in all conditions is crucial for autonomous
driving and advanced driver assistance systems. However, the amount of labeled
data in adverse driving conditions is limited, especially in winter, and
supervised methods generalize poorly to conditions outside the training
distribution. For easy adaptation to all conditions, the need for human
annotation should be removed from the learning process. In this paper,
Trajectory-Aided Drivable area Auto-labeling with Pre-trained self-supervised
features (TADAP) is presented for automated annotation of the drivable area in
winter driving conditions. A sample of the drivable area is extracted based on
the trajectory estimate from the global navigation satellite system. Similarity
with the sample area is determined based on pre-trained self-supervised visual
features. Image areas similar to the sample area are considered to be drivable.
These TADAP labels were evaluated with a novel winter-driving dataset,
collected in varying driving scenes. A prediction model trained with the TADAP
labels achieved a +9.6 improvement in intersection over union compared to the
previous state-of-the-art of self-supervised drivable area detection.
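The labeling pipeline described in the abstract (project the GNSS trajectory into the image as a seed drivable region, then label image patches whose pre-trained self-supervised features are similar to the seed) can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the feature extractor is assumed given (e.g. patch features from a pre-trained ViT), the mean-prototype similarity and the threshold value are hypothetical simplifications, and the toy data below stands in for real features.

```python
import numpy as np

def tadap_label(features, seed_mask, threshold=0.7):
    """Label patches as drivable by feature similarity to a
    trajectory-derived seed region (sketch of the TADAP idea).

    features : (H, W, D) per-patch self-supervised features,
               assumed to come from a pre-trained model.
    seed_mask: (H, W) boolean mask of patches covered by the
               projected GNSS trajectory, assumed drivable.
    threshold: cosine-similarity cutoff (hypothetical value,
               not taken from the paper).
    """
    # L2-normalize features so dot products are cosine similarities.
    norms = np.linalg.norm(features, axis=-1, keepdims=True)
    f = features / np.clip(norms, 1e-8, None)

    # Prototype: mean feature over the trajectory (seed) patches.
    proto = f[seed_mask].mean(axis=0)
    proto /= max(np.linalg.norm(proto), 1e-8)

    # Patches similar to the prototype are labeled drivable.
    similarity = f @ proto
    return similarity >= threshold

# Toy example: a 4x4 patch grid with 8-dim features, where the
# bottom half shares a common "road" feature direction.
rng = np.random.default_rng(0)
road_dir = rng.normal(size=8)
features = rng.normal(size=(4, 4, 8)) * 0.1
features[2:, :] += road_dir            # bottom half looks like road
seed = np.zeros((4, 4), dtype=bool)
seed[3, 1:3] = True                    # trajectory covers two patches

label = tadap_label(features, seed, threshold=0.8)
```

In this toy setup the bottom rows, which share the seed patches' feature direction, end up labeled drivable, while the noise-only top rows generally do not; in the real method the quality of the labels depends on how well the self-supervised features separate drivable from non-drivable surfaces.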
Related papers
- QuAD: Query-based Interpretable Neural Motion Planning for Autonomous Driving [33.609780917199394]
Self-driving vehicles must understand their environment to determine appropriate actions.
Traditional systems rely on object detection to find agents in the scene.
We present a unified, interpretable, and efficient autonomy framework that moves away from cascading modules, instead reasoning directly about spatio-temporal occupancy relevant to the driving task.
arXiv Detail & Related papers (2024-04-01T21:11:43Z) - DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving [81.04174379726251]
This paper collects a comprehensive end-to-end driving dataset named DriveCoT.
It contains sensor data, control decisions, and chain-of-thought labels to indicate the reasoning process.
We propose a baseline model called DriveCoT-Agent, trained on our dataset, to generate chain-of-thought predictions and final decisions.
arXiv Detail & Related papers (2024-03-25T17:59:01Z) - BAT: Behavior-Aware Human-Like Trajectory Prediction for Autonomous
Driving [24.123577277806135]
We pioneer a novel behavior-aware trajectory prediction model (BAT)
Our model consists of behavior-aware, interaction-aware, priority-aware, and position-aware modules.
We evaluate BAT's performance across the Next Generation Simulation (NGSIM), Highway Drone (HighD), Roundabout Drone (RounD), and Macao Connected Autonomous Driving (MoCAD) datasets.
arXiv Detail & Related papers (2023-12-11T13:27:51Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Automated Static Camera Calibration with Intelligent Vehicles [58.908194559319405]
We present a robust calibration method for automated geo-referenced camera calibration.
Our method requires a calibration vehicle equipped with a combined GNSS/RTK receiver and an inertial measurement unit (IMU) for self-localization.
Our method requires no human interaction and relies only on information recorded by both the infrastructure and the vehicle.
arXiv Detail & Related papers (2023-04-21T08:50:52Z) - Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
Assuming only that vehicles repeatedly traverse the same routes, we show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z) - Leveraging Road Area Semantic Segmentation with Auxiliary Steering Task [0.0]
We propose a CNN-based method that can leverage the steering wheel angle information to improve the road area semantic segmentation.
We demonstrate the effectiveness of the proposed approach on two challenging data sets for autonomous driving.
arXiv Detail & Related papers (2022-12-19T13:25:09Z) - Uncertainty-aware Perception Models for Off-road Autonomous Unmanned
Ground Vehicles [6.2574402913714575]
Off-road autonomous unmanned ground vehicles (UGVs) are being developed for military and commercial use to deliver crucial supplies in remote locations.
Current datasets used to train perception models for off-road autonomous navigation lack diversity in seasons, locations, semantic classes, and time of day.
We investigate how to combine multiple datasets to train a semantic segmentation-based environment perception model.
We show that training the model to capture uncertainty could improve the model performance by a significant margin.
arXiv Detail & Related papers (2022-09-22T15:59:33Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - A Benchmark for Spray from Nearby Cutting Vehicles [7.767933159959353]
This publication presents a testing methodology for disturbances from spray.
It introduces a novel lightweight spray setup alongside an evaluation scheme to assess the disturbances caused by spray.
In a common scenario of a closely cutting vehicle, the distortions severely affect the perception stack for up to four seconds.
arXiv Detail & Related papers (2021-08-24T15:40:09Z) - Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.