Predicting Physical World Destinations for Commands Given to
Self-Driving Cars
- URL: http://arxiv.org/abs/2112.05419v1
- Date: Fri, 10 Dec 2021 09:51:16 GMT
- Title: Predicting Physical World Destinations for Commands Given to
Self-Driving Cars
- Authors: Dusan Grujicic, Thierry Deruyttere, Marie-Francine Moens, Matthew
Blaschko
- Abstract summary: We propose an extension in which we annotate the 3D destination that the car needs to reach after executing the given command.
We introduce a model that outperforms the prior works adapted for this particular setting.
- Score: 19.71691537605694
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, we have seen significant steps taken in the development of
self-driving cars. Multiple companies are starting to roll out impressive
systems that work in a variety of settings. These systems can sometimes give
the impression that full self-driving is just around the corner and that we
would soon build cars without even a steering wheel. The increase in the level
of autonomy and control given to an AI provides an opportunity for new modes of
human-vehicle interaction. However, surveys have shown that giving more control
to an AI in self-driving cars is accompanied by a degree of uneasiness among
passengers. In an attempt to alleviate this issue, recent works have taken a
natural language-oriented approach by allowing the passenger to give commands
that refer to specific objects in the visual scene. Nevertheless, this is only
half the task as the car should also understand the physical destination of the
command, which is what we focus on in this paper. We propose an extension in
which we annotate the 3D destination that the car needs to reach after
executing the given command and evaluate multiple different baselines on
predicting this destination location. Additionally, we introduce a model that
outperforms the prior works adapted for this particular setting.
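The evaluation described in the abstract — scoring predicted 3D destinations against annotated ones — can be sketched as a simple distance metric. The following is a minimal, hypothetical example using mean Euclidean (L2) error; the function name, data layout, and choice of metric are assumptions for illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def mean_destination_error(pred, gt):
    """Mean Euclidean (L2) distance between predicted and annotated
    3D destination points, in whatever units the coordinates use
    (e.g. metres in the ego-vehicle frame)."""
    pred = np.asarray(pred, dtype=float)  # shape (N, 3)
    gt = np.asarray(gt, dtype=float)      # shape (N, 3)
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Hypothetical predictions vs. annotated destinations (x, y, z).
pred = [[1.0, 2.0, 0.0], [4.0, 0.0, 0.0]]
gt   = [[1.0, 2.0, 0.0], [0.0, 3.0, 0.0]]
print(mean_destination_error(pred, gt))  # 2.5
```

A lower mean error means the model places the command's destination closer to the annotated ground truth; per-example distances could also be thresholded to report an accuracy-style score.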
Related papers
- The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey [50.62538723793247]
Driving World Model (DWM) focuses on predicting scene evolution during the driving process.
DWM methods enable autonomous driving systems to better perceive, understand, and interact with dynamic driving environments.
arXiv Detail & Related papers (2025-02-14T18:43:15Z)
- Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint in a Driving Scene [56.73568220959019]
Collaborative autonomous driving (CAV) seems like a promising direction, but collecting data for development is non-trivial.
We introduce a novel surrogate to the rescue, which is to generate realistic perception from different viewpoints in a driving scene.
We present the very first solution, using a combination of simulated collaborative data and real ego-car data.
arXiv Detail & Related papers (2025-02-10T17:07:53Z)
- Pedestrian motion prediction evaluation for urban autonomous driving [0.0]
We analyze selected publications with open-source solutions provided, in order to assess the usefulness of traditional motion prediction metrics.
This perspective should be valuable to any autonomous driving or robotics engineer looking for the real-world performance of existing state-of-the-art pedestrian motion prediction methods.
arXiv Detail & Related papers (2024-10-22T10:06:50Z)
- Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z)
- Are you a robot? Detecting Autonomous Vehicles from Behavior Analysis [6.422370188350147]
We present a framework that monitors active vehicles using camera images and state information in order to determine whether vehicles are autonomous.
Essentially, it builds on cooperation among vehicles, which share data acquired on the road to feed a machine learning model that identifies autonomous cars.
Experiments show it is possible to discriminate the two behaviors by analyzing video clips with an accuracy of 80%, which improves up to 93% when the target state information is available.
arXiv Detail & Related papers (2024-03-14T17:00:29Z)
- Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving published during the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize the systems developed by major automotive manufacturers from different countries.
Then, a comprehensive overview of computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, is provided.
arXiv Detail & Related papers (2023-11-15T16:41:18Z)
- Data and Knowledge for Overtaking Scenarios in Autonomous Driving [0.0]
The overtaking maneuver is one of the most critical actions of driving.
Despite the amount of work available in the literature, only a few studies address overtaking maneuvers.
This work contributes in this area by presenting a new synthetic dataset whose focus is the overtaking maneuver.
arXiv Detail & Related papers (2023-05-30T21:27:05Z)
- Audiovisual Affect Assessment and Autonomous Automobiles: Applications [0.0]
This contribution aims to anticipate the corresponding challenges and outline potential avenues towards affect modelling in a multimodal "audiovisual plus x" on-the-road context.
From the technical end, this concerns holistic passenger modelling and reliable diarisation of the individuals in a vehicle.
In conclusion, automated affect analysis has only just matured to the point of applicability in autonomous vehicles, in a first set of selected use cases.
arXiv Detail & Related papers (2022-03-14T20:39:02Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate the entering in busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.