Predicting Physical World Destinations for Commands Given to
Self-Driving Cars
- URL: http://arxiv.org/abs/2112.05419v1
- Date: Fri, 10 Dec 2021 09:51:16 GMT
- Title: Predicting Physical World Destinations for Commands Given to
Self-Driving Cars
- Authors: Dusan Grujicic, Thierry Deruyttere, Marie-Francine Moens, Matthew
Blaschko
- Abstract summary: We propose an extension in which we annotate the 3D destination that the car needs to reach after executing the given command.
We introduce a model that outperforms the prior works adapted for this particular setting.
- Score: 19.71691537605694
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, we have seen significant steps taken in the development of
self-driving cars. Multiple companies are starting to roll out impressive
systems that work in a variety of settings. These systems can sometimes give
the impression that full self-driving is just around the corner and that we
will soon build cars without even a steering wheel. The increase in the level
of autonomy and control given to an AI provides an opportunity for new modes of
human-vehicle interaction. However, surveys have shown that giving more control
to an AI in self-driving cars is accompanied by a degree of uneasiness among
passengers. In an attempt to alleviate this issue, recent works have taken a
natural language-oriented approach by allowing the passenger to give commands
that refer to specific objects in the visual scene. Nevertheless, this is only
half the task as the car should also understand the physical destination of the
command, which is what we focus on in this paper. We propose an extension in
which we annotate the 3D destination that the car needs to reach after
executing the given command and evaluate multiple different baselines on
predicting this destination location. Additionally, we introduce a model that
outperforms the prior works adapted for this particular setting.
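The paper does not spell out its evaluation protocol here, but a natural way to score a predicted 3D destination against the annotated one is Euclidean distance, optionally thresholded into a success rate. The following is a minimal, hypothetical sketch of such a metric; the function names and the 2 m threshold are illustrative assumptions, not the paper's actual protocol.

```python
import math

def destination_error(pred_xyz, gt_xyz):
    """Euclidean distance in metres between a predicted and an
    annotated 3D destination point (each a 3-tuple of floats)."""
    return math.dist(pred_xyz, gt_xyz)

def success_rate(preds, gts, threshold_m=2.0):
    """Fraction of predictions that land within threshold_m of the
    ground-truth destination (threshold chosen for illustration)."""
    errors = [destination_error(p, g) for p, g in zip(preds, gts)]
    return sum(e <= threshold_m for e in errors) / len(errors)

err = destination_error((1.0, 2.0, 0.0), (4.0, 6.0, 0.0))  # -> 5.0 m
```

A distance-based metric like this is what "predicting this destination location" would typically be evaluated on, with the threshold controlling how strict the success criterion is.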
Related papers
- Pedestrian motion prediction evaluation for urban autonomous driving [0.0]
We analyze selected publications with open-source solutions to assess the value of traditional motion prediction metrics.
This perspective should be valuable to any autonomous driving or robotics engineer looking for the real-world performance of existing state-of-the-art pedestrian motion prediction methods.
arXiv Detail & Related papers (2024-10-22T10:06:50Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis [6.422370188350147]
We present a framework that monitors active vehicles using camera images and state information in order to determine whether vehicles are autonomous.
Essentially, it builds on cooperation among vehicles, which share data acquired on the road to feed a machine learning model that identifies autonomous cars.
Experiments show it is possible to discriminate the two behaviors by analyzing video clips with an accuracy of 80%, which improves up to 93% when the target state information is available.
arXiv Detail & Related papers (2024-03-14T17:00:29Z) - Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving published during the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize the systems developed by major automotive manufacturers from different countries.
Then, a comprehensive overview of computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, is provided.
arXiv Detail & Related papers (2023-11-15T16:41:18Z) - Data and Knowledge for Overtaking Scenarios in Autonomous Driving [0.0]
The overtaking maneuver is one of the most critical actions of driving.
Despite the amount of work available in the literature, only a few studies address overtaking maneuvers.
This work contributes in this area by presenting a new synthetic dataset whose focus is the overtaking maneuver.
arXiv Detail & Related papers (2023-05-30T21:27:05Z) - Policy Pre-training for End-to-end Autonomous Driving via
Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward, fully self-supervised framework for policy pretraining in visuomotor driving.
We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
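The photometric error mentioned above is the standard self-supervision signal in depth/pose learning: a source frame is warped into the target view using the predicted depth and ego-motion, and the per-pixel reconstruction error is minimized. The sketch below shows only the loss term, not PPGeo's warping or networks; the combined L1 + SSIM-style formulation and the alpha weight are common conventions in this literature, assumed rather than taken from the paper, and the SSIM term is computed globally here for brevity (real implementations use local windows).

```python
import numpy as np

def photometric_error(target, warped, alpha=0.85):
    """Photometric loss between a target frame and a source frame warped
    into the target view (both HxWx3 arrays with values in [0, 1])."""
    # L1 reconstruction term.
    l1 = np.abs(target - warped).mean()
    # Simplified global SSIM term (illustrative; real code uses 3x3 windows).
    mu_t, mu_w = target.mean(), warped.mean()
    var_t, var_w = target.var(), warped.var()
    cov = ((target - mu_t) * (warped - mu_w)).mean()
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_t * mu_w + c1) * (2 * cov + c2)) / (
        (mu_t ** 2 + mu_w ** 2 + c1) * (var_t + var_w + c2))
    dssim = np.clip((1.0 - ssim) / 2.0, 0.0, 1.0)
    return alpha * dssim + (1.0 - alpha) * l1

frame = np.random.default_rng(0).random((64, 64, 3))
loss_same = photometric_error(frame, frame)        # near zero
loss_diff = photometric_error(frame, 1.0 - frame)  # strictly positive
```

A loss of this shape lets the visual encoder be trained from raw, uncalibrated video alone, which is what makes pretraining on unlabeled YouTube driving footage feasible.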
arXiv Detail & Related papers (2023-01-03T08:52:49Z) - Audiovisual Affect Assessment and Autonomous Automobiles: Applications [0.0]
This contribution aims to foresee the corresponding challenges and provide potential avenues towards affect modelling in a multimodal "audiovisual plus x" on-the-road context.
From the technical end, this concerns holistic passenger modelling and reliable diarisation of the individuals in a vehicle.
In conclusion, automated affect analysis has only just matured to the point of applicability to autonomous vehicles in a first set of selected use cases.
arXiv Detail & Related papers (2022-03-14T20:39:02Z) - Safety-aware Motion Prediction with Unseen Vehicles for Autonomous
Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
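The key representational move in the summary above is predicting an occupancy map rather than per-vehicle trajectories, so that grid cells can be marked occupied even where no vehicle was observed. The sketch below only illustrates this representation by rasterizing predicted positions into a bird's-eye-view grid; the grid size, cell resolution, and ego-centred layout are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rasterize_occupancy(points_xy, grid_size=64, cell_m=0.5):
    """Mark grid cells covered by predicted (x, y) positions in metres,
    with the ego vehicle at the centre of a grid_size x grid_size grid."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size // 2
    for x, y in points_xy:
        i = int(round(x / cell_m)) + half
        j = int(round(y / cell_m)) + half
        if 0 <= i < grid_size and 0 <= j < grid_size:  # drop out-of-range points
            grid[i, j] = 1.0
    return grid

# Two predicted positions, 0.5 m cells: (0 m, 2 m) and (5 m, 5 m).
occ = rasterize_occupancy([(0.0, 2.0), (5.0, 5.0)])
```

Compared with trajectory lists, a grid like this has a fixed size regardless of the (unknown) number of vehicles, which is what makes reasoning about unseen vehicles tractable.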
arXiv Detail & Related papers (2021-09-03T13:33:33Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Learning Accurate and Human-Like Driving using Semantic Maps and
Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with these maps.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z) - Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate the entering in busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.