A-Eye: Driving with the Eyes of AI for Corner Case Generation
- URL: http://arxiv.org/abs/2202.10803v1
- Date: Tue, 22 Feb 2022 10:42:23 GMT
- Title: A-Eye: Driving with the Eyes of AI for Corner Case Generation
- Authors: Kamil Kowol and Stefan Bracke and Hanno Gottschalk
- Abstract summary: The overall goal of this work is to enrich training data for automated driving with so-called corner cases.
We present the design of a test rig to generate synthetic corner cases using a human-in-the-loop approach.
- Score: 0.6445605125467573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The overall goal of this work is to enrich training data for automated
driving with so-called corner cases. In road traffic, corner cases are
critical, rare and unusual situations that challenge perception by AI
algorithms. For this purpose, we present the design of a test rig to generate
synthetic corner cases using a human-in-the-loop approach. For the test rig, a
real-time semantic segmentation network is trained and integrated into the
driving simulation software CARLA in such a way that a human can drive on the
network's prediction. In addition, a second person sees the same scene
from the original CARLA output and is supposed to intervene with the help of a
second control unit as soon as the semantic driver shows dangerous driving
behavior. Interventions potentially indicate poor recognition of a critical
scene by the segmentation network and thus represent corner cases. In our
experiments, we show that targeted enrichment of training data with corner
cases leads to improvements in pedestrian detection in safety-relevant episodes
in road traffic.
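The intervention-as-corner-case idea from the abstract can be sketched as a simple recorder: whenever the safety driver takes over, the frames leading up to that moment are stored as a corner-case candidate for later training-data enrichment. The `Frame` and `CornerCaseRecorder` names and the fixed context window are illustrative assumptions for this sketch, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One synchronized simulator tick (image payloads are placeholders here)."""
    timestamp: float
    rgb: object            # original CARLA camera output shown to the safety driver
    segmentation: object   # network prediction shown to the semantic driver

@dataclass
class CornerCaseRecorder:
    """Collects frames around safety-driver interventions as corner-case candidates."""
    window: float = 2.0                                  # seconds of context kept before an intervention
    buffer: list = field(default_factory=list)
    corner_cases: list = field(default_factory=list)

    def observe(self, frame: Frame, intervention: bool) -> None:
        self.buffer.append(frame)
        # Drop frames that fell out of the context window.
        while self.buffer and frame.timestamp - self.buffer[0].timestamp > self.window:
            self.buffer.pop(0)
        if intervention:
            # An intervention suggests the segmentation misled the semantic driver,
            # so the recent context is saved as a corner-case candidate.
            self.corner_cases.append(list(self.buffer))
```

In the paper's setting, the stored episodes would then be labeled and mixed into the training set; here the recorder only demonstrates the triggering logic.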
Related papers
- A Semi-Automated Corner Case Detection and Evaluation Pipeline [0.0]
Perception systems require large datasets for training their deep neural networks.
Knowing which parts of the data in these datasets describe a corner case is an advantage during training or testing of the network.
We propose a pipeline that converts collective expert knowledge descriptions into the extended KI Absicherung ontology.
arXiv Detail & Related papers (2023-05-25T12:06:43Z)
- survAIval: Survival Analysis with the Eyes of AI [0.6445605125467573]
We propose a novel approach to enrich the training data for automated driving by using a self-designed driving simulator and two human drivers.
Our results show that incorporating these corner cases during training improves the recognition of corner cases during testing.
arXiv Detail & Related papers (2023-05-23T15:20:31Z)
- Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z)
- FBLNet: FeedBack Loop Network for Driver Attention Prediction [75.83518507463226]
Nonobjective driving experience is difficult to model.
In this paper, we propose a FeedBack Loop Network (FBLNet) which attempts to model the driving experience accumulation procedure.
Under the guidance of the incremental knowledge, our model fuses the CNN feature and Transformer feature that are extracted from the input image to predict driver attention.
arXiv Detail & Related papers (2022-12-05T08:25:09Z)
- Space, Time, and Interaction: A Taxonomy of Corner Cases in Trajectory Datasets for Automated Driving [9.119257760524782]
Trajectory data analysis is an essential component for highly automated driving.
A highly automated vehicle (HAV) must be able to reliably and safely perform the task assigned to it.
If unusual trajectories occur, so-called trajectory corner cases, a human driver can usually cope well, but an HAV can quickly get into trouble.
arXiv Detail & Related papers (2022-10-17T09:27:45Z)
- Perspective Aware Road Obstacle Detection [104.57322421897769]
We show that road obstacle detection techniques ignore the fact that, in practice, the apparent size of the obstacles decreases as their distance to the vehicle increases.
We leverage this by computing a scale map encoding the apparent size of a hypothetical object at every image location.
We then leverage this perspective map to generate training data by injecting onto the road synthetic objects whose size corresponds to the perspective foreshortening.
arXiv Detail & Related papers (2022-10-04T17:48:42Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- An Application-Driven Conceptualization of Corner Cases for Perception in Highly Automated Driving [21.67019631065338]
We provide an application-driven view of corner cases in highly automated driving.
We extend an existing camera-focused systematization of corner cases by adding RADAR and LiDAR.
We describe an exemplary toolchain for data acquisition and processing.
arXiv Detail & Related papers (2021-03-05T13:56:37Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Driver Intention Anticipation Based on In-Cabin and Driving Scene
Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction with the accuracy of 83.98% and F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z) - Towards Safer Self-Driving Through Great PAIN (Physically Adversarial
Intelligent Networks) [3.136861161060885]
We introduce a "Physically Adrial Intelligent Network" (PAIN) wherein self-driving vehicles interact aggressively.
We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay.
The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner case failures.
arXiv Detail & Related papers (2020-03-24T05:04:13Z)
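The PAIN entry above trains its agents with dueling double deep Q networks and prioritized experience replay. The replay component can be illustrated with a minimal proportional buffer; this is a generic sketch of prioritized experience replay (the class name, `alpha`, and the flat list storage are illustrative assumptions), not the paper's implementation:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay: transitions with a
    larger TD error are sampled more often, so rare, surprising episodes
    (e.g. adversarial near-misses) dominate training."""

    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly TD error skews the sampling distribution
        self.storage = []       # list of [transition, priority]

    def add(self, transition, td_error: float = 1.0) -> None:
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # evict the oldest transition when full
        self.storage.append([transition, priority])

    def sample(self, batch_size: int):
        # Sample indices proportionally to stored priorities.
        priorities = [p for _, p in self.storage]
        total = sum(priorities)
        weights = [p / total for p in priorities]
        indices = random.choices(range(len(self.storage)), weights=weights, k=batch_size)
        return [self.storage[i][0] for i in indices]

    def update_priority(self, index: int, td_error: float) -> None:
        # Called after a learning step, once the new TD error is known.
        self.storage[index][1] = (abs(td_error) + 1e-6) ** self.alpha
```

Production implementations typically replace the flat list with a sum-tree so that sampling and priority updates run in O(log n) instead of O(n); the flat version above keeps the idea visible.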
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.