Building Mental Models through Preview of Autopilot Behaviors
- URL: http://arxiv.org/abs/2104.05470v1
- Date: Mon, 12 Apr 2021 13:46:55 GMT
- Title: Building Mental Models through Preview of Autopilot Behaviors
- Authors: Yuan Shen and Niviru Wijayaratne and Katherine Driggs-Campbell
- Abstract summary: We introduce our framework, called AutoPreview, to enable humans to preview autopilot behaviors prior to direct interaction with the vehicle.
Our results suggest that the AutoPreview framework does, in fact, help users understand autopilot behavior and develop appropriate mental models.
- Score: 20.664610032249037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective human-vehicle collaboration requires an appropriate understanding
of vehicle behavior for safety and trust. Improving on our prior work by adding
a future prediction module, we introduce our framework, called AutoPreview, to
enable humans to preview autopilot behaviors prior to direct interaction with
the vehicle. Previewing autopilot behavior can help to ensure smooth
human-vehicle collaboration during the initial exploration stage with the
vehicle. To demonstrate its practicality, we conducted a case study on
human-vehicle collaboration and built a prototype of our framework with the
CARLA simulator. Additionally, we conducted a between-subject control experiment
(n=10) to study whether our AutoPreview framework can provide a deeper
understanding of autopilot behavior compared to direct interaction. Our results
suggest that the AutoPreview framework does, in fact, help users understand
autopilot behavior and develop appropriate mental models.
Related papers
- Explainable deep learning improves human mental models of self-driving cars [12.207001033390226]
The concept-wrapper network (CW-Net) is a method for explaining the behavior of black-box motion planners.
We deploy CW-Net on a real self-driving car and show that the resulting explanations refine the human driver's mental model of the car.
We anticipate our method could be applied to other safety-critical systems with a human in the loop, such as autonomous drones and robotic surgeons.
arXiv Detail & Related papers (2024-11-27T19:38:43Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various car-following (CF) events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
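The Intelligent Driver Model mentioned in this entry is a standard, well-documented car-following model; a minimal sketch of its acceleration equation is below (the parameter values are illustrative defaults, not the ones used in the paper):

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=30.0,    # desired speed (m/s)
                     T=1.5,      # desired time headway (s)
                     a_max=1.5,  # maximum acceleration (m/s^2)
                     b=2.0,      # comfortable deceleration (m/s^2)
                     s0=2.0,     # minimum jam distance (m)
                     delta=4.0): # acceleration exponent
    """Intelligent Driver Model: acceleration of the following vehicle
    given its speed v, the leader's speed v_lead, and the bumper gap."""
    dv = v - v_lead  # closing speed (positive when approaching the leader)
    # Desired dynamic gap: jam distance plus headway and braking terms.
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

With a large gap and a stationary vehicle, the model accelerates at nearly `a_max`; with a short gap to a stopped leader, it brakes hard. The interpretability claimed for the IDM component comes from parameters like `T` and `s0` having direct physical meaning.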
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
- Incorporating Explanations into Human-Machine Interfaces for Trust and Situation Awareness in Autonomous Vehicles [4.1636282808157254]
We study the role of explainable AI and human-machine interface jointly in building trust in vehicle autonomy.
We present a situation awareness framework for calibrating users' trust in self-driving behavior.
arXiv Detail & Related papers (2024-04-10T23:02:13Z)
- Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis [6.422370188350147]
We present a framework that monitors active vehicles using camera images and state information in order to determine whether vehicles are autonomous.
Essentially, it builds on cooperation among vehicles, which share data acquired on the road to feed a machine learning model that identifies autonomous cars.
Experiments show it is possible to discriminate between autonomous and human driving behavior by analyzing video clips with an accuracy of 80%, which improves up to 93% when the target vehicle's state information is available.
arXiv Detail & Related papers (2024-03-14T17:00:29Z)
- Analyze Drivers' Intervention Behavior During Autonomous Driving -- A VR-incorporated Approach [2.7532019227694344]
This work sheds light on understanding human drivers' intervention behavior involved in the operation of autonomous vehicles.
Experiment environments were implemented in which virtual reality (VR) and traffic micro-simulation are integrated.
Performance indicators such as the probability of intervention and accident rates are defined and used to quantify and compare risk levels.
arXiv Detail & Related papers (2023-12-04T06:36:57Z)
- Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving.
We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
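The photometric error named in the second stage is a standard self-supervision signal in monocular depth and ego-motion learning: a neighboring frame is warped into the current view using the predicted depth and pose, and the warped result is compared against the observed frame. A minimal sketch of the comparison step, with the warp itself omitted (this is a generic illustration, not the paper's implementation):

```python
import numpy as np

def photometric_error(target, reconstructed):
    """Mean absolute (L1) difference between the observed target frame and
    the frame reconstructed by warping a neighboring view with the predicted
    depth and ego-motion. Full formulations typically blend L1 with an SSIM
    term; plain L1 keeps the sketch minimal."""
    return float(np.mean(np.abs(np.asarray(target, dtype=np.float64)
                                - np.asarray(reconstructed, dtype=np.float64))))
```

A perfect reconstruction yields zero error, so minimizing this quantity pushes the depth and pose predictions toward geometrically consistent values without any labels.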
arXiv Detail & Related papers (2023-01-03T08:52:49Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Drivers' Manoeuvre Modelling and Prediction for Safe HRI [0.0]
Theory of Mind has been broadly explored for robotics and recently for autonomous and semi-autonomous vehicles.
We explored how to predict human intentions before an action is performed by combining data from human-motion, vehicle-state and human inputs.
arXiv Detail & Related papers (2021-06-03T10:07:55Z)
- Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
arXiv Detail & Related papers (2021-03-20T16:29:01Z)
- AutoPreview: A Framework for Autopilot Behavior Understanding [16.177399201198636]
We propose a simple but effective framework, AutoPreview, to enable consumers to preview a target autopilot's potential actions.
For a given target autopilot, we design a delegate policy that replicates the target autopilot behavior with explainable action representations.
We conduct a pilot study to investigate whether AutoPreview provides a deeper understanding of autopilot behavior when experiencing a new autopilot policy.
arXiv Detail & Related papers (2021-02-25T17:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.