AutoPreview: A Framework for Autopilot Behavior Understanding
- URL: http://arxiv.org/abs/2102.13034v1
- Date: Thu, 25 Feb 2021 17:40:59 GMT
- Title: AutoPreview: A Framework for Autopilot Behavior Understanding
- Authors: Yuan Shen, Niviru Wijayaratne, Peter Du, Shanduojiao Jiang, Katherine Driggs-Campbell
- Abstract summary: We propose a simple but effective framework, AutoPreview, to enable consumers to preview a target autopilot's potential actions.
For a given target autopilot, we design a delegate policy that replicates the target autopilot's behavior with explainable action representations.
We conduct a pilot study to investigate whether AutoPreview provides a deeper understanding of autopilot behavior when experiencing a new autopilot policy.
- Score: 16.177399201198636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The behavior of self-driving cars may differ from people's expectations (e.g.,
an autopilot may unexpectedly relinquish control). This expectation mismatch
can cause potential and existing users to distrust self-driving technology and
can increase the likelihood of accidents. We propose a simple but effective
framework, AutoPreview, that enables consumers to preview a target autopilot's
potential actions in a real-world driving context before deployment. For a
given target autopilot, we design a delegate policy that replicates the target
autopilot's behavior with explainable action representations, which can then be
queried online for comparison and to build an accurate mental model. To
demonstrate its practicality, we present a prototype of AutoPreview integrated
with the CARLA simulator, along with two potential use cases of the framework.
We conduct a pilot study to investigate whether AutoPreview provides a deeper
understanding of autopilot behavior when users experience a new autopilot
policy for the first time. Our results suggest that the AutoPreview method
helps users understand autopilot behavior in terms of driving-style
comprehension, deployment preference, and exact action-timing prediction.
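To make the delegate-policy idea concrete, the sketch below shows one way a queryable wrapper around a black-box autopilot could expose discrete, human-readable actions. The paper does not publish an API; the class name, action vocabulary, and thresholds here (DelegatePolicy, EXPLAINABLE_ACTIONS, the acceleration cutoffs) are purely illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical discrete, human-readable action vocabulary.
EXPLAINABLE_ACTIONS = ["keep_lane", "slow_down", "hand_over_control"]

@dataclass
class Observation:
    """Minimal driving context a policy might see (illustrative only)."""
    ego_speed: float    # m/s
    gap_to_lead: float  # m
    lead_speed: float   # m/s

class DelegatePolicy:
    """Wraps a black-box autopilot and answers queries with explainable actions.

    `target_policy` stands in for the target autopilot; it returns a
    low-level acceleration command that we map onto the vocabulary above.
    """
    def __init__(self, target_policy: Callable[[Observation], float]):
        self.target_policy = target_policy

    def explainable_action(self, obs: Observation) -> str:
        accel = self.target_policy(obs)  # query the black box
        if accel < -3.0:
            return "hand_over_control"   # e.g., hard braking / disengagement
        if accel < -0.5:
            return "slow_down"
        return "keep_lane"

# Usage: preview what the autopilot would do in a tailgating scenario.
policy = DelegatePolicy(lambda o: -4.0 if o.gap_to_lead < 5.0 else 0.0)
print(policy.explainable_action(Observation(20.0, 4.0, 15.0)))
# -> "hand_over_control"
```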
Related papers
- Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software [9.881955481813465]
Large Language Model (LLM)-based in-application assistants, or copilots, can automate software tasks.
We investigated two automation paradigms by designing and implementing a fully automated copilot and a semi-automated copilot.
GuidedCopilot, the semi-automated copilot, automates trivial steps while offering step-by-step visual guidance.
arXiv Detail & Related papers (2025-04-22T03:11:10Z)
- Explainable deep learning improves human mental models of self-driving cars [12.207001033390226]
The concept-wrapper network (CW-Net) is a method for explaining the behavior of black-box motion planners.
We deploy CW-Net on a real self-driving car and show that the resulting explanations refine the human driver's mental model of the car.
We anticipate our method could be applied to other safety-critical systems with a human in the loop, such as autonomous drones and robotic surgeons.
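As a rough sketch of the general idea (not CW-Net's actual architecture), the code below attaches a concept read-out head to a planner backbone so that internal features can be reported as named, human-interpretable concepts; the concept names, backbone, and layer sizes are all assumptions.

```python
import torch
import torch.nn as nn

class ConceptWrapper(nn.Module):
    """Illustrative concept read-out attached to a planner backbone.

    This is NOT CW-Net's architecture; it only sketches the idea of
    mapping a black-box planner's internal features onto named,
    human-interpretable concepts.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int, concepts: list):
        super().__init__()
        self.backbone = backbone
        self.concepts = concepts
        self.concept_head = nn.Linear(feat_dim, len(concepts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                         # planner features
        return torch.sigmoid(self.concept_head(feats))   # score per concept

# Usage with a stand-in backbone and made-up concept names.
concepts = ["pedestrian_ahead", "stopped_vehicle", "clear_road"]
model = ConceptWrapper(nn.Linear(16, 32), feat_dim=32, concepts=concepts)
scores = model(torch.randn(1, 16))
print(dict(zip(concepts, scores.squeeze(0).tolist())))
```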
arXiv Detail & Related papers (2024-11-27T19:38:43Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable, personalized car-following framework, MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various car-following (CF) events.
We additionally combine Long Short-Term Memory (LSTM) networks and the Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
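The IDM component is a standard, well-documented car-following model, so it can be shown concretely. The snippet below implements the textbook IDM acceleration law; the MAML and LSTM parts of MetaFollower are not reproduced, and the parameter values are conventional defaults, not the paper's.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=30.0,    # desired speed (m/s)
                     T=1.5,      # desired time headway (s)
                     a_max=1.0,  # maximum acceleration (m/s^2)
                     b=1.5,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum gap (m)
                     delta=4.0):
    """Standard Intelligent Driver Model (IDM) acceleration.

    v:   ego speed (m/s)
    gap: bumper-to-bumper distance to the leader (m)
    dv:  approach rate, ego speed minus leader speed (m/s)
    """
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: closing fast on a slower leader 20 m ahead.
print(idm_acceleration(v=25.0, gap=20.0, dv=5.0))  # negative -> braking
```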
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- Incorporating Explanations into Human-Machine Interfaces for Trust and Situation Awareness in Autonomous Vehicles [4.1636282808157254]
We jointly study the roles of explainable AI and human-machine interfaces in building trust in vehicle autonomy.
We present a situation awareness framework for calibrating users' trust in self-driving behavior.
arXiv Detail & Related papers (2024-04-10T23:02:13Z)
- Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis [6.422370188350147]
We present a framework that monitors active vehicles using camera images and state information to determine whether they are autonomous.
Essentially, it builds on cooperation among vehicles, which share data acquired on the road to feed a machine learning model that identifies autonomous cars.
Experiments show it is possible to discriminate between the two behaviors by analyzing video clips with 80% accuracy, improving to 93% when the target vehicle's state information is available.
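A minimal sketch of this kind of behavior-based classification is shown below, using synthetic per-clip features and a random forest; both the feature set and the model are our own stand-ins, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-clip behavior features (all hypothetical):
# [mean speed, speed variance, mean headway, steering jerkiness]
rng = np.random.default_rng(0)
X_human = rng.normal([14.0, 4.0, 18.0, 0.6], 1.0, size=(200, 4))
X_auto = rng.normal([13.0, 1.5, 25.0, 0.2], 1.0, size=(200, 4))
X = np.vstack([X_human, X_auto])
y = np.array([0] * 200 + [1] * 200)  # 0 = human-driven, 1 = autonomous

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[13.2, 1.7, 24.0, 0.25]]))  # likely autonomous -> [1]
```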
arXiv Detail & Related papers (2024-03-14T17:00:29Z)
- Analyze Drivers' Intervention Behavior During Autonomous Driving -- A VR-incorporated Approach [2.7532019227694344]
This work sheds light on human drivers' intervention behavior in the operation of autonomous vehicles.
Experimental environments were implemented in which virtual reality (VR) and traffic micro-simulation are integrated.
Performance indicators, such as the probability of intervention and accident rates, are defined and used to quantify and compare risk levels.
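As a small illustration of how such indicators can be computed, the snippet below aggregates intervention and accident flags over trials; the trial schema is a hypothetical one, not the paper's data format.

```python
def intervention_metrics(trials):
    """Aggregate simple risk indicators from simulated driving trials.

    `trials` is a list of dicts with boolean `intervened` and `accident`
    flags per trial (an illustrative schema, not the paper's format).
    """
    n = len(trials)
    p_intervention = sum(t["intervened"] for t in trials) / n
    accident_rate = sum(t["accident"] for t in trials) / n
    return {"P(intervention)": p_intervention, "accident_rate": accident_rate}

trials = [{"intervened": True, "accident": False},
          {"intervened": False, "accident": False},
          {"intervened": True, "accident": True},
          {"intervened": False, "accident": False}]
print(intervention_metrics(trials))
# {'P(intervention)': 0.5, 'accident_rate': 0.25}
```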
arXiv Detail & Related papers (2023-12-04T06:36:57Z)
- Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving [22.21822829138535]
We propose a new approach using concept bottlenecks as visual features for control command predictions and explanations of user and vehicle behavior.
We learn a human-understandable concept layer that we use to explain sequential driving scenes while learning vehicle control commands.
This approach can then be used to determine whether a change in a preferred gap or in steering commands from a human (or autonomous vehicle) is caused by an external stimulus or by a change in preferences.
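A minimal sketch of a concept bottleneck for control is shown below: scene features are first mapped to named concepts, and the control command is predicted only from those concepts. The concept names, dimensions, and two-dimensional control output are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ConceptBottleneckController(nn.Module):
    """Illustrative concept-bottleneck controller.

    Control depends *only* on the human-readable concept layer, which is
    what makes the bottleneck an explanation of the control command.
    """
    def __init__(self, feat_dim=64,
                 concepts=("gap_ahead", "lead_speed", "curvature")):
        super().__init__()
        self.concepts = concepts
        self.to_concepts = nn.Linear(feat_dim, len(concepts))
        self.to_control = nn.Linear(len(concepts), 2)  # [accel, steering]

    def forward(self, feats: torch.Tensor):
        c = self.to_concepts(feats)  # human-readable bottleneck
        u = self.to_control(c)       # control predicted from concepts only
        return c, u

model = ConceptBottleneckController()
c, u = model(torch.randn(1, 64))
print(dict(zip(model.concepts, c.squeeze(0).tolist())), u.squeeze(0).tolist())
```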
arXiv Detail & Related papers (2023-10-25T13:39:04Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
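As a rough illustration of a compact on-board policy head, the sketch below maps an observation vector to an acceleration and a steering angle with a deliberately small network; the observation size and architecture are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class TinyPlanner(nn.Module):
    """Minimal policy head predicting [acceleration, steering angle].

    A deliberately small network, echoing the idea of a compact module
    for on-board deployment; the sizes here are illustrative.
    """
    def __init__(self, obs_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh())  # outputs scaled to [-1, 1]

    def forward(self, obs: torch.Tensor):
        accel, steer = self.net(obs).unbind(-1)
        return accel, steer

planner = TinyPlanner()
accel, steer = planner(torch.randn(1, 32))
print(float(accel), float(steer))
```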
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
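The sketch below illustrates only the information flow of cross-vehicle fusion, pooling features shared by nearby vehicles into the ego representation; COOPERNAUT's actual point-based aggregation is more involved, and this function is an assumption-laden stand-in.

```python
import torch

def fuse_cross_vehicle(ego_feats: torch.Tensor,
                       neighbor_feats: list) -> torch.Tensor:
    """Illustrative cross-vehicle fusion: pool features shared by nearby
    vehicles and concatenate them with the ego representation.
    """
    if neighbor_feats:
        pooled = torch.stack(neighbor_feats).max(dim=0).values
    else:
        pooled = torch.zeros_like(ego_feats)  # no messages received
    return torch.cat([ego_feats, pooled], dim=-1)

fused = fuse_cross_vehicle(torch.randn(128),
                           [torch.randn(128), torch.randn(128)])
print(fused.shape)  # torch.Size([256])
```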
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
We present control-aware prediction objectives (CAPOs) to evaluate the downstream effect of predictions on control without requiring the planner be differentiable.
We propose two types of importance weights that weight the predictive likelihood: one using an attention model between agents, and another based on control variation when exchanging predicted trajectories for ground truth trajectories.
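As a minimal sketch of the control-aware weighting idea, the snippet below weights per-agent prediction likelihoods by a given control-variation score; the exact attention-based and control-variation weighting schemes in the paper are not reproduced.

```python
import torch

def capo_weighted_nll(pred_log_probs: torch.Tensor,
                      control_variation: torch.Tensor) -> torch.Tensor:
    """Illustrative control-aware objective: per-agent negative
    log-likelihood weighted by how much the planner's control changes
    when that agent's predicted trajectory is swapped for ground truth.

    Shapes: pred_log_probs [N_agents], control_variation [N_agents].
    """
    weights = control_variation / control_variation.sum().clamp_min(1e-8)
    return -(weights * pred_log_probs).sum()

# Agent 2 matters most for control, so its likelihood dominates the loss.
loss = capo_weighted_nll(torch.log(torch.tensor([0.9, 0.4, 0.7])),
                         torch.tensor([0.1, 2.0, 0.3]))
print(float(loss))
```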
arXiv Detail & Related papers (2022-04-28T07:37:21Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
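To illustrate what an occupancy-map target looks like, the snippet below rasterizes sampled future positions into per-cell occupancy frequencies; the paper's model is learned end-to-end, so this sample-based construction is only an illustrative stand-in.

```python
import numpy as np

def occupancy_from_samples(samples_xy: np.ndarray,
                           grid_size: int = 50,
                           cell_m: float = 1.0) -> np.ndarray:
    """Illustrative occupancy map: turn sampled future positions
    (including hypothesized unseen vehicles) into per-cell frequencies.

    samples_xy: [N, 2] array of (x, y) positions in meters, ego-centered.
    """
    grid = np.zeros((grid_size, grid_size))
    half = grid_size * cell_m / 2.0
    for x, y in samples_xy:
        i = int((x + half) / cell_m)
        j = int((y + half) / cell_m)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[i, j] += 1
    return grid / max(len(samples_xy), 1)  # relative frequency per cell

occ = occupancy_from_samples(np.random.randn(500, 2) * 5)
print(occ.sum())  # fraction of samples landing inside the grid
```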
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Building Mental Models through Preview of Autopilot Behaviors [20.664610032249037]
We introduce our framework, called AutoPreview, to enable humans to preview autopilot behaviors prior to direct interaction with the vehicle.
Our results suggest that the AutoPreview framework does, in fact, help users understand autopilot behavior and develop appropriate mental models.
arXiv Detail & Related papers (2021-04-12T13:46:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.