Giving Commands to a Self-Driving Car: How to Deal with Uncertain
Situations?
- URL: http://arxiv.org/abs/2106.04232v1
- Date: Tue, 8 Jun 2021 10:21:11 GMT
- Title: Giving Commands to a Self-Driving Car: How to Deal with Uncertain
Situations?
- Authors: Thierry Deruyttere, Victor Milewski, Marie-Francine Moens
- Abstract summary: This paper proposes a model that detects uncertain situations when a command is given and finds the visual objects causing the uncertainty.
We argue that if the car could explain the objects in a human-like way, passengers could gain more confidence in the car's abilities.
- Score: 21.19657707748505
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current technology for autonomous cars primarily focuses on getting the
passenger from point A to B. Nevertheless, it has been shown that passengers
are afraid of taking a ride in self-driving cars. One way to alleviate this
problem is by allowing the passenger to give natural language commands to the
car. However, the car can misunderstand the issued command or the visual
surroundings, which could lead to uncertain situations. It is desirable that the
self-driving car detects these situations and interacts with the passenger to
solve them. This paper proposes a model that detects uncertain situations when
a command is given and finds the visual objects causing the uncertainty.
Optionally, the system generates a question describing the uncertain objects.
We argue that if the car could explain the objects in a human-like way,
passengers could gain more confidence in the car's abilities. Thus, we
investigate how to (1) detect uncertain situations and their underlying causes,
and (2) generate clarifying questions for the passenger. When evaluating
on the Talk2Car dataset, we show that the proposed pipeline improves
$IoU_{.5}$ compared to not using it. Furthermore, we designed a referring
expression generator (REG) tailored to the self-driving car setting, which
yields relative improvements in METEOR and ROUGE-L over state-of-the-art REG
models and is three times faster.
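
For readers unfamiliar with the $IoU_{.5}$ metric used in the evaluation: a visual grounding prediction counts as correct when the predicted bounding box overlaps the ground-truth box with an intersection-over-union of at least 0.5. Below is a minimal sketch of that computation; the (x, y, w, h) box format, helper names, and example values are illustrative assumptions, not code from the paper or the Talk2Car toolkit.

```python
# Sketch of the IoU_.5 grounding metric: a prediction is a hit when its
# intersection-over-union with the ground-truth box is at least 0.5.
# Boxes are assumed axis-aligned in (x, y, w, h) format.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle corners.
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def iou_at_05(predictions, ground_truths):
    """Fraction of predictions whose IoU with the ground truth is >= 0.5."""
    hits = sum(iou(p, g) >= 0.5 for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

if __name__ == "__main__":
    preds = [(10, 10, 50, 50), (100, 100, 40, 40)]  # hypothetical predictions
    gts   = [(12, 12, 50, 50), (150, 150, 40, 40)]  # hypothetical ground truth
    print(f"IoU_.5 = {iou_at_05(preds, gts):.2f}")  # 0.50: one hit, one miss
```

The improvement reported in the abstract is a gain in this fraction of commands for which the referred object is localized at IoU of at least 0.5.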
Related papers
- Effects of Explanation Specificity on Passengers in Autonomous Driving [9.855051716204002]
We investigate the effects of natural language explanations' specificity on passengers in autonomous driving.
We generated auditory natural language explanations with different levels of specificity (abstract and specific).
Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and the feeling of anxiety.
arXiv Detail & Related papers (2023-07-02T18:40:05Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes the corner-case problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly on CODA, to no more than 12.8% mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- Audiovisual Affect Assessment and Autonomous Automobiles: Applications [0.0]
This contribution aims to anticipate the corresponding challenges and to outline potential avenues towards affect modelling in a multimodal "audiovisual plus x" on-the-road context.
From the technical end, this concerns holistic passenger modelling and reliable diarisation of the individuals in a vehicle.
In conclusion, automated affect analysis has only just matured to the point of applicability in autonomous vehicles, for a first set of selected use cases.
arXiv Detail & Related papers (2022-03-14T20:39:02Z)
- Predicting Physical World Destinations for Commands Given to Self-Driving Cars [19.71691537605694]
We propose an extension in which we annotate the 3D destination that the car needs to reach after executing the given command.
We introduce a model that outperforms the prior works adapted for this particular setting.
arXiv Detail & Related papers (2021-12-10T09:51:16Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- Commands 4 Autonomous Vehicles (C4AV) Workshop Summary [91.92872482200018]
This paper presents the results of the Commands for Autonomous Vehicles (C4AV) challenge based on the recent Talk2Car dataset.
We identify the aspects that render top-performing models successful, and relate them to existing state-of-the-art models for visual grounding.
arXiv Detail & Related papers (2020-09-18T12:33:21Z)
- To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles [26.095533634997786]
We present a self-driving explanation dataset with first-person explanations and associated measures of explanation necessity for 1103 video clips.
Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary.
In particular, people tend to agree on the necessity for near-crash events but hold different opinions on ordinary or anomalous driving situations.
arXiv Detail & Related papers (2020-06-21T00:38:24Z)
- A Baseline for the Commands For Autonomous Vehicles Challenge [7.430057056425165]
The challenge is based on the recent Talk2Car dataset.
This document provides a technical overview of a model that we released to help participants get started in the competition.
arXiv Detail & Related papers (2020-04-20T13:35:47Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.