A Semi-Automated Corner Case Detection and Evaluation Pipeline
- URL: http://arxiv.org/abs/2305.16369v1
- Date: Thu, 25 May 2023 12:06:43 GMT
- Title: A Semi-Automated Corner Case Detection and Evaluation Pipeline
- Authors: Isabelle Tulleners, Tobias Moers, Thomas Schulik, Martin Sedlacek
- Abstract summary: Perception systems require large datasets for training their deep neural network.
Knowing which parts of the data in these datasets describe a corner case is an advantage during training or testing of the network.
We propose a pipeline that converts collective expert knowledge descriptions into the extended KI Absicherung ontology.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In order to deploy automated vehicles to the public, it has to be proven that
the vehicle can safely and robustly handle traffic in many different scenarios.
One important component of automated vehicles is the perception system that
captures and processes the environment around the vehicle. Perception systems
require large datasets for training their deep neural networks. Knowing which
parts of the data in these datasets describe a corner case is an advantage
during training or testing of the network. These corner cases describe
situations that are rare and potentially challenging for the network. We
propose a pipeline that converts collective expert knowledge descriptions into
the extended KI Absicherung ontology. The ontology is used to describe scenes
and scenarios that can be mapped to perception datasets. The corner cases can
then be extracted from the datasets. In addition, the pipeline enables the
evaluation of the detection networks against the extracted corner cases to
measure their performance.
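The abstract only outlines the pipeline at a high level. As a rough illustration of the extraction-and-evaluation idea, the sketch below matches ontology-style corner case descriptions against per-frame scene annotations and then scores a detector separately on the extracted frames. All names (CornerCaseRule, the annotation keys, the recall/IoU helpers) are hypothetical placeholders, not the KI Absicherung ontology or the authors' implementation.
```python
# Hypothetical sketch: match ontology-style corner case descriptions against
# per-frame scene annotations, then evaluate a detector on the extracted
# corner case frames only. Not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class CornerCaseRule:
    """A corner case expressed as required scene attributes (placeholder schema)."""
    name: str
    required: dict = field(default_factory=dict)   # e.g. {"occlusion": "heavy"}

    def matches(self, scene_annotation: dict) -> bool:
        return all(scene_annotation.get(k) == v for k, v in self.required.items())

def extract_corner_cases(dataset: list[dict], rules: list[CornerCaseRule]) -> list[dict]:
    """Return frames whose scene annotations satisfy at least one corner case rule."""
    return [f for f in dataset if any(r.matches(f["scene"]) for r in rules)]

def iou(a, b) -> float:
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def recall(frames: list[dict], detect) -> float:
    """Fraction of ground-truth boxes found by `detect` (a stand-in metric)."""
    found = total = 0
    for f in frames:
        detections = detect(f["image"])
        for gt in f["objects"]:
            total += 1
            found += any(iou(gt, d) >= 0.5 for d in detections)
    return found / max(total, 1)

# Usage idea: compare performance on the full dataset vs. the corner case subset.
# rules = [CornerCaseRule("heavily occluded pedestrian", {"occlusion": "heavy"})]
# corner_frames = extract_corner_cases(dataset, rules)
# print(recall(dataset, detector), recall(corner_frames, detector))
```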
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
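As a note on how such stereo ground truth is typically used: for a rectified stereo rig, depth and disparity are related by depth = focal_length * baseline / disparity. A minimal sketch with made-up calibration values (not RSRD's actual intrinsics):
```python
# Minimal sketch: convert a disparity map to metric depth for a rectified
# stereo pair. The calibration values below are placeholders, not RSRD's.
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_px: float = 1000.0,    # focal length in pixels (assumed)
                       baseline_m: float = 0.5) -> np.ndarray:  # baseline in metres (assumed)
    depth = np.full_like(disparity, np.inf, dtype=np.float64)
    valid = disparity > 0                            # zero disparity = no match / infinite depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# disparity = np.load("disparity.npy")   # hypothetical file name
# depth_m = disparity_to_depth(disparity)
```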
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
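One generic way to set up such a surface-reconstruction pretext task (a toy stand-in with random data and a plain MLP instead of a real point-cloud backbone, not the ALSO method itself) might look like:
```python
# Toy sketch of a surface-reconstruction pretext task: classify whether a query
# point lies on the observed surface or has been pushed off it. A plain MLP
# stands in for a point-cloud backbone; the data is random, for illustration only.
import torch
import torch.nn as nn

class QueryClassifier(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                    # logit: "on surface" vs. "off surface"
        )

    def forward(self, queries: torch.Tensor) -> torch.Tensor:
        return self.net(queries).squeeze(-1)

def make_queries(points: torch.Tensor, offset: float = 0.3):
    """Positive queries = observed points; negatives = points pushed off the surface."""
    noise = torch.randn_like(points) * offset
    queries = torch.cat([points, points + noise], dim=0)
    labels = torch.cat([torch.ones(len(points)), torch.zeros(len(points))])
    return queries, labels

model = QueryClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

points = torch.randn(1024, 3)                        # stand-in for one LiDAR scan
for _ in range(10):                                  # a few pretext-training steps
    queries, labels = make_queries(points)
    loss = loss_fn(model(queries), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```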
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicles (FLCAV) have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
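For context, a federated round typically keeps raw sensor data on the vehicles and aggregates only model weights; a minimal sketch of plain federated averaging (not necessarily FLCAV's multi-stage scheme) is:
```python
# Minimal federated-averaging round: each vehicle trains a private copy of the
# model on its own data and only the weights are aggregated centrally.
# Plain FedAvg, not necessarily the multi-stage scheme used by FLCAV.
import copy
import torch

def local_update(model, loader, loss_fn, lr=1e-3, epochs=1):
    model = copy.deepcopy(model)                     # on-vehicle copy; raw data stays local
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(global_model, client_loaders, loss_fn):
    states = [local_update(global_model, dl, loss_fn) for dl in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0).to(states[0][k].dtype)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Usage idea: repeat federated_average(...) for several communication rounds,
# where client_loaders are the per-vehicle DataLoaders.
```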
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Self-supervised Point Cloud Completion on Real Traffic Scenes via Scene-concerned Bottom-up Mechanism [14.255659581428333]
Point cloud completion aims to infer complete shapes from incomplete 3D scans of objects.
Current deep learning-based approaches rely on large-scale complete shapes in the training process.
We propose a self-supervised point cloud completion method (TraPCC) for vehicles in real traffic scenes without any complete data.
arXiv Detail & Related papers (2022-03-20T14:42:37Z)
- A-Eye: Driving with the Eyes of AI for Corner Case Generation [0.6445605125467573]
The overall goal of this work is to enrich training data for automated driving with so-called corner cases.
We present the design of a test rig to generate synthetic corner cases using a human-in-the-loop approach.
arXiv Detail & Related papers (2022-02-22T10:42:23Z)
- A system of vision sensor based deep neural networks for complex driving scene analysis in support of crash risk assessment and prevention [12.881094474374231]
This paper develops a system for driving scene analysis using dash cameras on vehicles and deep learning algorithms.
The Multi-Net of the system includes two multi-task neural networks that perform scene classification to provide four labels for each scene.
Two new datasets have been developed and made publicly available, and they proved effective for training the proposed deep neural networks.
arXiv Detail & Related papers (2021-06-18T19:07:59Z)
- An Application-Driven Conceptualization of Corner Cases for Perception in Highly Automated Driving [21.67019631065338]
We provide an application-driven view of corner cases in highly automated driving.
We extend an existing camera-focused systematization of corner cases by adding RADAR and LiDAR.
We describe an exemplary toolchain for data acquisition and processing.
arXiv Detail & Related papers (2021-03-05T13:56:37Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Lane Detection Model Based on Spatio-Temporal Network With Double Convolutional Gated Recurrent Units [11.968518335236787]
Lane detection will remain an open problem for some time to come.
A spatio-temporal network with double Convolutional Gated Recurrent Units (ConvGRUs) is proposed to address lane detection in challenging scenes.
Our model can outperform the state-of-the-art lane detection models.
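A convolutional GRU replaces the fully connected gates of a standard GRU with convolutions, so the hidden state keeps its spatial layout across frames; a generic cell (a sketch, not the paper's exact double-ConvGRU architecture) could look like:
```python
# Generic convolutional GRU cell: GRU gating implemented with convolutions so
# the hidden state stays a feature map. A sketch, not the paper's architecture.
from typing import Optional

import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=pad)  # update + reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=pad)       # candidate state
        self.hid_ch = hid_ch

    def forward(self, x: torch.Tensor, h: Optional[torch.Tensor] = None) -> torch.Tensor:
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

# Usage idea: run the cell over consecutive frame features and feed the final
# hidden state to a lane-segmentation head.
# cell = ConvGRUCell(in_ch=64, hid_ch=64)
# h = None
# for frame_feat in frame_features:       # list of (B, 64, H, W) tensors
#     h = cell(frame_feat, h)
```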
arXiv Detail & Related papers (2020-08-10T06:50:48Z)
- Key Points Estimation and Point Instance Segmentation Approach for Lane Detection [65.37887088194022]
We propose a traffic line detection method called Point Instance Network (PINet).
The PINet includes several stacked hourglass networks that are trained simultaneously.
PINet achieves competitive accuracy and false positive rate on the TuSimple and CULane datasets.
arXiv Detail & Related papers (2020-02-16T15:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.