Trajectory-based Road Autolabeling with Lidar-Camera Fusion in Winter Conditions
- URL: http://arxiv.org/abs/2412.02370v1
- Date: Tue, 03 Dec 2024 10:54:37 GMT
- Title: Trajectory-based Road Autolabeling with Lidar-Camera Fusion in Winter Conditions
- Authors: Eerik Alamikkotervo, Henrik Toikka, Kari Tammi, Risto Ojala
- Abstract summary: Trajectory-based self-supervised methods can learn from the traversed route without manual labels.
Our method outperforms recent standalone camera- and lidar-based methods when evaluated with a challenging winter driving dataset.
- Abstract: Robust road segmentation in all road conditions is required for safe autonomous driving and advanced driver assistance systems. Supervised deep learning methods provide accurate road segmentation in the domain of their training data but cannot be trusted in out-of-distribution scenarios. Including the whole distribution in the trainset is challenging as each sample must be labeled by hand. Trajectory-based self-supervised methods offer a potential solution as they can learn from the traversed route without manual labels. However, existing trajectory-based methods use learning schemes that rely only on the camera or only on the lidar. In this paper, trajectory-based learning is implemented jointly with lidar and camera for increased performance. Our method outperforms recent standalone camera- and lidar-based methods when evaluated with a challenging winter driving dataset including countryside and suburb driving scenes. The source code is available at https://github.com/eerik98/lidar-camera-road-autolabeling.git
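The core idea behind trajectory-based autolabeling can be illustrated with a minimal sketch: project the vehicle's traversed route into the camera image and treat the touched pixels as self-supervised "road" labels. The function, camera intrinsics, and trajectory below are hypothetical illustrations, not the paper's implementation (which additionally fuses lidar cues).

```python
import numpy as np

def project_trajectory_to_mask(points_cam, K, img_shape):
    """Project 3D trajectory points (camera frame, z forward) into a pixel
    mask; traversed pixels become positive self-supervised road labels."""
    mask = np.zeros(img_shape, dtype=np.uint8)
    pts = points_cam[points_cam[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ pts.T).T                              # pinhole projection
    uv = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = img_shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mask[uv[valid, 1], uv[valid, 0]] = 1            # mark traversed pixels as road
    return mask

# Hypothetical setup: a simple intrinsic matrix and a straight-ahead trajectory
# on the ground plane, 1.5 m below the camera, 5-20 m ahead.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
z = np.linspace(5.0, 20.0, 16)
traj = np.stack([np.zeros_like(z), np.full_like(z, 1.5), z], axis=1)
mask = project_trajectory_to_mask(traj, K, (480, 640))
```

In practice the sparse projected points would be dilated to the vehicle's width and combined with lidar-derived geometry before being used as training labels.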
Related papers
- Label Correction for Road Segmentation Using Road-side Cameras [0.44241702149260353]
Existing roadside camera infrastructure is utilized to automatically collect road data in varying weather conditions.
A novel semi-automatic annotation method for roadside cameras is proposed.
The proposed method is validated with roadside camera data collected from 927 cameras across Finland over a 4-month period during winter.
arXiv Detail & Related papers (2025-02-03T11:52:23Z) - R2S100K: Road-Region Segmentation Dataset For Semi-Supervised Autonomous Driving in the Wild [11.149480965148015]
Road Region dataset (R2S100K) is a large-scale dataset and benchmark for training and evaluation of road segmentation.
R2S100K comprises 100K images extracted from a large and diverse set of video sequences covering more than 1000 km of roadways.
We present an Efficient Data Sampling method (EDS) based self-training framework to improve learning by leveraging unlabeled data.
arXiv Detail & Related papers (2023-08-11T21:31:37Z) - Learning Off-Road Terrain Traversability with Self-Supervisions Only [2.4316550366482357]
Terrain traversability estimation should be reliable and accurate in diverse conditions for autonomous driving in off-road environments.
We introduce a method for learning traversability from images that utilizes only self-supervision and no manual labels.
arXiv Detail & Related papers (2023-05-30T09:51:27Z) - Automated Static Camera Calibration with Intelligent Vehicles [58.908194559319405]
We present a robust calibration method for automated geo-referenced camera calibration.
Our method requires a calibration vehicle equipped with a combined filtering/RTK receiver and an inertial measurement unit (IMU) for self-localization.
Our method does not require any human interaction with the information recorded by both the infrastructure and the vehicle.
arXiv Detail & Related papers (2023-04-21T08:50:52Z) - Leveraging Road Area Semantic Segmentation with Auxiliary Steering Task [0.0]
We propose a CNN-based method that can leverage the steering wheel angle information to improve the road area semantic segmentation.
We demonstrate the effectiveness of the proposed approach on two challenging data sets for autonomous driving.
arXiv Detail & Related papers (2022-12-19T13:25:09Z) - NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
arXiv Detail & Related papers (2021-03-20T16:29:01Z) - Fusion of neural networks, for LIDAR-based evidential road mapping [3.065376455397363]
We introduce RoadSeg, a new convolutional architecture that is optimized for road detection in LIDAR scans.
RoadSeg is used to classify individual LIDAR points as either road or non-road.
We then present an evidential road mapping algorithm that fuses consecutive road detection results.
arXiv Detail & Related papers (2021-02-05T18:14:36Z) - Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z) - BoMuDANet: Unsupervised Adaptation for Visual Scene Understanding in Unstructured Driving Environments [54.22535063244038]
We present an unsupervised adaptation approach for visual scene understanding in unstructured traffic environments.
Our method is designed for unstructured real-world scenarios with dense and heterogeneous traffic consisting of cars, trucks, two- and three-wheelers, and pedestrians.
arXiv Detail & Related papers (2020-09-22T08:25:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.