VATLD: A Visual Analytics System to Assess, Understand and Improve
Traffic Light Detection
- URL: http://arxiv.org/abs/2009.12975v1
- Date: Sun, 27 Sep 2020 22:39:00 GMT
- Authors: Liang Gou, Lincan Zou, Nanxiang Li, Michael Hofmann, Arvind Kumar
Shekar, Axel Wendt and Liu Ren
- Abstract summary: We propose a visual analytics system, VATLD, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications.
The disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization.
We also demonstrate the effectiveness of various performance improvement strategies with our visual analytics system, VATLD, and illustrate some practical implications for safety-critical applications in autonomous driving.
- Score: 15.36267013724161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic light detection is crucial for environment perception and
decision-making in autonomous driving. State-of-the-art detectors are built
upon deep Convolutional Neural Networks (CNNs) and have exhibited promising
performance. However, one looming concern with CNN-based detectors is how to
thoroughly evaluate their accuracy and robustness before they can be deployed
to autonomous vehicles. In this work, we propose a visual analytics system,
VATLD, equipped with disentangled representation learning and semantic
adversarial learning, to assess, understand, and improve the accuracy and
robustness of traffic light detectors in autonomous driving applications.
The disentangled representation learning extracts data semantics to augment
human cognition with human-friendly visual summarization, and the semantic
adversarial learning efficiently exposes interpretable robustness risks and
enables minimal human interaction for actionable insights. We also demonstrate
the effectiveness of various performance improvement strategies derived from
actionable insights with our visual analytics system, VATLD, and illustrate
some practical implications for safety-critical applications in autonomous
driving.
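The idea of semantic adversarial probing described above (perturbing interpretable factors of the input and checking where detector confidence collapses) can be sketched with a toy example; the perturbation, detector, and threshold below are illustrative stand-ins, not the paper's actual models:

```python
def semantic_perturb(pixels, brightness=1.0):
    """Apply an interpretable (semantic) perturbation: scale pixel
    intensities by a brightness factor, clipped to [0, 255]."""
    return [min(255.0, max(0.0, p * brightness)) for p in pixels]

def toy_detector(pixels):
    """Stand-in detector: confidence falls as mean brightness
    departs from a nominal value of 120."""
    mean = sum(pixels) / len(pixels)
    return max(0.0, 1.0 - abs(mean - 120.0) / 120.0)

def robustness_sweep(pixels, detector, brightness_grid, threshold=0.5):
    """Return the brightness factors at which detector confidence
    drops below the threshold -- interpretable robustness risks."""
    return [b for b in brightness_grid
            if detector(semantic_perturb(pixels, b)) < threshold]

# A uniform toy "image" at the nominal brightness.
image = [120.0] * 1024
risks = robustness_sweep(image, toy_detector, [0.25, 0.5, 1.0, 2.0])
# risks -> [0.25, 2.0]: the detector fails under strong darkening or brightening.
```

Because the perturbed factor (brightness) is human-meaningful, each failure found this way directly suggests an actionable fix, such as augmenting training data with dark or overexposed scenes.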
Related papers
- Real-Time Detection and Analysis of Vehicles and Pedestrians using Deep Learning [0.0]
Current traffic monitoring systems struggle to recognize small objects and pedestrians effectively in real time.
Our project focuses on the creation and validation of an advanced deep-learning framework capable of processing complex visual input for precise, real-time recognition of cars and people.
The YOLOv8 Large version proved the most effective, especially for pedestrian recognition, with high precision and robustness.
arXiv Detail & Related papers (2024-04-11T18:42:14Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- A Cognitive-Based Trajectory Prediction Approach for Autonomous Driving [21.130543517747995]
This paper introduces the Human-Like Trajectory Prediction (HLTP) model, which adopts a teacher-student knowledge distillation framework.
The "teacher" model mimics the visual processing of the human brain, particularly the functions of the occipital and temporal lobes.
The "student" model focuses on real-time interaction and decision-making, capturing essential perceptual cues for accurate prediction.
arXiv Detail & Related papers (2024-02-29T15:22:26Z)
- Efficient Object Detection in Autonomous Driving using Spiking Neural Networks: Performance, Energy Consumption Analysis, and Insights into Open-set Object Discovery [8.255197802529118]
A well-balanced trade-off between performance and energy consumption is crucial for the sustainability of autonomous vehicles.
We show that well-performing and efficient models can be realized by virtue of Spiking Neural Networks.
arXiv Detail & Related papers (2023-12-12T17:47:13Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently cause fatal injuries, contributing to more than 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying image-input quality for autonomous driving.
Using the results of this sensitivity analysis, we propose an algorithm to improve the overall performance of the "learning to steer" task.
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-07-27T08:42:07Z)
- Building Trust in Autonomous Vehicles: Role of Virtual Reality Driving Simulators in HMI Design [8.39368916644651]
We propose a methodology to validate the user experience in AVs based on continuous, objective information gathered from physiological signals.
We applied this methodology to the design of a head-up display interface delivering visual cues about the vehicle's sensory and planning systems.
arXiv Detail & Related papers (2020-07-27T08:42:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.