Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from
Sensing to Tracking
- URL: http://arxiv.org/abs/2002.11669v2
- Date: Mon, 20 Jul 2020 14:42:27 GMT
- Title: Pedestrian Models for Autonomous Driving Part I: Low-Level Models, from
Sensing to Tracking
- Authors: Fanta Camara, Nicola Bellotto, Serhan Cosar, Dimitris Nathanael,
Matthias Althoff, Jingyuan Wu, Johannes Ruenz, André Dietrich and Charles
W. Fox
- Abstract summary: This narrative review article is Part I of a pair, together surveying the current technology stack involved in this process.
This self-contained Part I covers the lower levels of this stack, from sensing, through detection and recognition, up to tracking of pedestrians.
Technologies at these levels are found to be mature and available as foundations for use in high-level systems, such as behaviour modelling, prediction and interaction control.
- Score: 10.789792281963422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicles (AVs) must share space with pedestrians, both in
carriageway cases such as cars at pedestrian crossings and off-carriageway
cases such as delivery vehicles navigating through crowds on pedestrianized
high-streets. Unlike static obstacles, pedestrians are active agents with
complex, interactive motions. Planning AV actions in the presence of
pedestrians thus requires modelling of their probable future behaviour as well
as detecting and tracking them. This narrative review article is Part I of a
pair, together surveying the current technology stack involved in this process,
organising recent research into a hierarchical taxonomy ranging from low-level
image detection to high-level psychology models, from the perspective of an AV
designer. This self-contained Part I covers the lower levels of this stack,
from sensing, through detection and recognition, up to tracking of pedestrians.
Technologies at these levels are found to be mature and available as
foundations for use in high-level systems, such as behaviour modelling,
prediction and interaction control.
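As a schematic illustration of the tracking level of the stack surveyed here (not code from the article), the sketch below implements a minimal constant-velocity Kalman filter that fuses noisy per-frame pedestrian detections into a smoothed track. Class names, noise values, and the toy data are assumptions; real trackers add data association and track management on top of this recursion.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter for one pedestrian track.

    State: [x, y, vx, vy]; measurements are noisy (x, y) detections.
    """

    def __init__(self, x0, y0, dt=0.1, meas_std=0.3, accel_std=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])      # initial state
        self.P = np.diag([1.0, 1.0, 4.0, 4.0])     # state covariance
        self.F = np.array([[1, 0, dt, 0],          # constant-velocity motion model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],           # only position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.R = np.eye(2) * meas_std ** 2         # measurement noise
        self.Q = np.eye(4) * accel_std ** 2 * dt   # crude process noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse a new (x, y) detection into the track."""
        y = np.asarray(z) - self.H @ self.x                # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

# Toy usage: a pedestrian walking along x with noisy detections.
rng = np.random.default_rng(0)
track = ConstantVelocityKalman(x0=0.0, y0=2.0)
for t in range(1, 20):
    detection = np.array([0.12 * t, 2.0]) + rng.normal(0, 0.3, size=2)
    track.predict()
    track.update(detection)
print("estimated state [x, y, vx, vy]:", np.round(track.x, 2))
```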
Related papers
- GPT-4V Takes the Wheel: Promises and Challenges for Pedestrian Behavior
Prediction [12.613528624623514]
This research is the first to conduct both quantitative and qualitative evaluations of Vision Language Models (VLMs) in the context of pedestrian behavior prediction for autonomous driving.
We evaluate GPT-4V on publicly available pedestrian datasets: JAAD and WiDEVIEW.
The model achieves 57% accuracy in a zero-shot setting, which, while promising, still trails state-of-the-art domain-specific models (70%) in predicting pedestrian crossing actions.
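A zero-shot VLM evaluation of crossing behaviour, as described above, essentially amounts to prompting the model once per pedestrian clip and scoring its binary answers against the dataset labels. The sketch below shows only that evaluation loop; `query_vlm`, the prompt wording, and the toy samples are placeholders assumed for illustration, not the paper's actual GPT-4V pipeline.

```python
# Sketch of a zero-shot crossing-prediction evaluation loop.
PROMPT = ("You see a pedestrian near the road. "
          "Will this pedestrian cross in the next few seconds? "
          "Answer with exactly one word: 'crossing' or 'not-crossing'.")

def query_vlm(frames, prompt):
    # Placeholder: a real implementation would send `frames` and `prompt`
    # to the vision-language model and return its text answer.
    return "crossing"

def evaluate_zero_shot(samples):
    """samples: list of (frames, label), label in {'crossing', 'not-crossing'}."""
    correct = 0
    for frames, label in samples:
        answer = query_vlm(frames, PROMPT).strip().lower()
        pred = "not-crossing" if "not" in answer else "crossing"
        correct += int(pred == label)
    return correct / len(samples)

# Toy labels only, to show how the accuracy figure would be computed.
toy_samples = [([], "crossing"), ([], "not-crossing"), ([], "crossing")]
print("zero-shot accuracy:", round(evaluate_zero_shot(toy_samples), 2))
```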
arXiv Detail & Related papers (2023-11-24T18:02:49Z)
- Social Interaction-Aware Dynamical Models and Decision Making for Autonomous Vehicles [20.123965317836106]
Interaction-aware Autonomous Driving (IAAD) is a rapidly growing field of research.
It focuses on the development of autonomous vehicles that are capable of interacting safely and efficiently with human road users.
This is a challenging task, as it requires the autonomous vehicle to be able to understand and predict the behaviour of human road users.
arXiv Detail & Related papers (2023-10-29T03:43:50Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
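"Spatially quantized historical features" suggests aggregating statistics from past traversals of the same area into a coarse spatial grid that the detector can consult at test time. The sketch below shows one plausible form of that idea, using per-cell historical point frequencies as an extra point feature; the grid resolution, feature choice, and function names are assumptions, not the paper's implementation.

```python
import numpy as np

CELL = 0.5    # grid resolution in metres (assumed value)
GRID = 200    # covers a 100 m x 100 m area around the ego vehicle

def quantize(points):
    """Map (N, 2) ground-plane points to integer grid cells, clipped to bounds."""
    idx = np.floor(points / CELL).astype(int) + GRID // 2
    return np.clip(idx, 0, GRID - 1)

def build_historical_feature(past_traversals):
    """Accumulate how often each cell contained LiDAR returns in past drives."""
    hist = np.zeros((GRID, GRID), dtype=np.float32)
    for scan in past_traversals:                        # each scan: (N, 2) array
        cells = quantize(scan)
        np.add.at(hist, (cells[:, 0], cells[:, 1]), 1.0)
    return hist / max(len(past_traversals), 1)          # average count per traversal

def augment_current_scan(scan, hist):
    """Append each point's historical cell statistic as an extra feature channel
    that a LiDAR detector could consume alongside the raw coordinates."""
    cells = quantize(scan)
    feats = hist[cells[:, 0], cells[:, 1]]
    return np.concatenate([scan, feats[:, None]], axis=1)

# Toy usage with random points standing in for LiDAR ground projections.
rng = np.random.default_rng(0)
past = [rng.uniform(-40, 40, size=(500, 2)) for _ in range(5)]
hist = build_historical_feature(past)
current = rng.uniform(-40, 40, size=(300, 2))
print("augmented point features:", augment_current_scan(current, hist).shape)  # (300, 3)
```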
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can derive policies that, if adopted by 5% of vehicles, may boost the energy efficiency of road networks under varying traffic conditions by 15%, using only local observations.
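In its simplest behaviour-cloning form, imitating an expert controller from local observations means fitting a regressor from each vehicle's observation to the expert's acceleration command. The linear least-squares version below is a deliberately minimal stand-in under that assumption (the paper's policies and observation spaces are richer); the expert rule and all quantities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Local observation per vehicle: [own speed, gap to leader, leader speed].
# Synthetic "expert" that drives smoothly (a simple car-following rule).
def expert_accel(obs):
    speed, gap, lead_speed = obs
    return 0.5 * (lead_speed - speed) + 0.1 * (gap - 20.0)

# Collect demonstration (observation, expert action) pairs.
obs = np.column_stack([rng.uniform(0, 30, 2000),     # own speed
                       rng.uniform(5, 60, 2000),     # gap to leader
                       rng.uniform(0, 30, 2000)])    # leader speed
actions = np.array([expert_accel(o) for o in obs])

# Behaviour cloning with a linear policy: action ~= obs @ w + b.
X = np.column_stack([obs, np.ones(len(obs))])
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def learned_policy(o):
    return np.append(o, 1.0) @ w

test = np.array([12.0, 25.0, 15.0])
print("expert:", round(expert_accel(test), 3),
      "imitator:", round(float(learned_policy(test)), 3))
```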
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- Pedestrian Stop and Go Forecasting with Hybrid Feature Fusion [87.77727495366702]
We introduce the new task of pedestrian stop and go forecasting.
Given the lack of suitable existing datasets, we release TRANS, a benchmark for explicitly studying the stop-and-go behaviors of pedestrians in urban traffic.
We build it from several existing datasets annotated with pedestrians' walking motions, so as to cover a variety of scenarios and behaviors.
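Building a stop-and-go benchmark from datasets with per-frame walking/standing annotations largely comes down to locating the frames where that label flips. The helper below shows one plausible way to extract such transition events; the label scheme and duration threshold are assumptions, not the TRANS annotation format.

```python
def find_transitions(motion_labels, min_duration=5):
    """Return (frame, kind) events where a pedestrian switches between
    'walking' and 'standing', keeping only preceding segments of at least
    `min_duration` frames to suppress annotation jitter.

    motion_labels: list of 'walking' / 'standing', one entry per frame.
    kind: 'stop' (walking -> standing) or 'go' (standing -> walking).
    """
    events, seg_start = [], 0
    for t in range(1, len(motion_labels) + 1):
        end = t == len(motion_labels)
        if end or motion_labels[t] != motion_labels[t - 1]:
            if t - seg_start >= min_duration and not end:
                kind = "stop" if motion_labels[t] == "standing" else "go"
                events.append((t, kind))
            seg_start = t
    return events

labels = ["walking"] * 30 + ["standing"] * 12 + ["walking"] * 20
print(find_transitions(labels))   # [(30, 'stop'), (42, 'go')]
```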
arXiv Detail & Related papers (2022-03-04T18:39:31Z)
- TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration [31.908276711898548]
We present a vision-based framework for recognizing secondary driver behaviours based on visual transformers and an augmented feature distribution calibration module.
Our framework consistently leads to better recognition rates, surpassing previous state-of-the-art results on the public Drive&Act benchmark at all annotation levels.
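Latent feature distribution calibration is commonly done by estimating per-class feature statistics and borrowing statistics from better-represented, similar classes to augment under-represented ones before training the classifier head. The sketch below follows that general recipe with Gaussian statistics; it is a generic illustration under those assumptions, not TransDARC's actual calibration module.

```python
import numpy as np

def class_stats(features):
    """Mean and diagonal variance of an (N, D) block of latent features."""
    return features.mean(axis=0), features.var(axis=0) + 1e-6

def calibrate_and_sample(rare_feats, base_stats, k=2, n_samples=50, rng=None):
    """Calibrate a rare class by mixing its statistics with those of its k
    nearest base classes (by mean distance), then sample synthetic latent
    features from the calibrated Gaussian."""
    if rng is None:
        rng = np.random.default_rng(0)
    mu_r, var_r = class_stats(rare_feats)
    dists = [np.linalg.norm(mu_r - mu_b) for mu_b, _ in base_stats]
    nearest = np.argsort(dists)[:k]
    mus = [mu_r] + [base_stats[i][0] for i in nearest]
    vars_ = [var_r] + [base_stats[i][1] for i in nearest]
    mu_c = np.mean(mus, axis=0)                 # calibrated mean
    var_c = np.mean(vars_, axis=0)              # calibrated variance
    return rng.normal(mu_c, np.sqrt(var_c), size=(n_samples, mu_c.size))

# Toy latent space: two frequent behaviours and one rare behaviour.
rng = np.random.default_rng(1)
base = [class_stats(rng.normal(m, 1.0, size=(200, 16))) for m in (0.0, 2.0)]
rare = rng.normal(1.0, 1.0, size=(8, 16))       # only a few real samples
augmented = calibrate_and_sample(rare, base)
print("synthetic features for the rare class:", augmented.shape)
```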
arXiv Detail & Related papers (2022-03-02T08:14:06Z)
- Pedestrian Detection: Domain Generalization, CNNs, Transformers and Beyond [82.37430109152383]
We show that current pedestrian detectors handle even small domain shifts poorly in cross-dataset evaluation.
We attribute the limited generalization to two main factors: the detection method and the current sources of data.
We propose a progressive fine-tuning strategy which improves generalization.
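A "progressive" fine-tuning strategy generally means adapting the detector in stages rather than in one pass, for example keeping most of the network frozen at first and then unfreezing it at a lower learning rate. The runnable toy below shows only that staged schedule on a tiny two-layer classifier standing in for a detector; the stage definitions and learning rates are assumptions, not the paper's recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "detector": a freezable backbone (w1) feeding a head (w2) for a
# binary task standing in for pedestrian vs. background classification.
w1 = rng.normal(size=(8, 4)) * 0.1     # backbone weights
w2 = rng.normal(size=(4,)) * 0.1       # head weights

def forward(X):
    h = np.tanh(X @ w1)
    return 1.0 / (1.0 + np.exp(-(h @ w2))), h

def sgd_step(X, y, lr, update_backbone):
    p, h = forward(X)
    err = p - y                                   # gradient of BCE wrt logits
    w2 -= lr * h.T @ err / len(y)                 # head always updated
    if update_backbone:                           # backbone frozen in stage 1
        dh = np.outer(err, w2) * (1 - h ** 2)
        w1 -= lr * X.T @ dh / len(y)

# Synthetic target-domain data.
X = rng.normal(size=(400, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Progressive schedule: head-only first, then everything at a lower rate.
STAGES = [
    # (epochs, learning rate, update_backbone)
    (20, 1e-1, False),    # stage 1: adapt only the detection head
    (20, 1e-2, True),     # stage 2: unfreeze the backbone, smaller steps
]
for epochs, lr, unfreeze in STAGES:
    for _ in range(epochs):
        sgd_step(X, y, lr, unfreeze)

print("final training accuracy:", ((forward(X)[0] > 0.5) == y).mean())
```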
arXiv Detail & Related papers (2022-01-10T06:00:26Z)
- Pedestrian Trajectory Prediction via Spatial Interaction Transformer Network [7.150832716115448]
In traffic scenes, pedestrians encountering oncoming people may turn suddenly or stop immediately.
Predicting such abrupt trajectories requires insight into the interactions between pedestrians.
We present a novel generative method named Spatial Interaction Transformer (SIT), which learns the correlation of pedestrian trajectories through attention mechanisms.
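Learning the correlation between pedestrian trajectories through attention boils down to scoring each pedestrian's trajectory embedding against its neighbours' and mixing them with the resulting weights. The numpy implementation of scaled dot-product attention below illustrates that core operation on toy trajectory embeddings; the embedding choice and dimensions are assumptions, not SIT's architecture.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: each row of Q attends over the rows of K/V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over neighbours
    return weights @ V, weights

rng = np.random.default_rng(0)

# Toy scene: 4 pedestrians, each with an 8-step observed (x, y) trajectory
# flattened into a 16-dim embedding (a real model would use a learned encoder).
trajectories = rng.normal(size=(4, 8, 2))
emb = trajectories.reshape(4, -1)

# Self-attention between pedestrians: who influences whose future motion.
W_q, W_k, W_v = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
out, attn = scaled_dot_product_attention(emb @ W_q, emb @ W_k, emb @ W_v)

print("interaction-aware features:", out.shape)           # (4, 16)
print("attention of pedestrian 0 over others:", np.round(attn[0], 2))
```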
arXiv Detail & Related papers (2021-12-13T13:08:04Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art deep-learning-based 3D object detectors show promising accuracy but are prone to overfitting to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
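Fine-tuning a model on its own confident target-domain predictions ("pseudo-labels") follows a simple self-training loop: run the source-trained model on unlabelled target data, keep predictions above a confidence threshold as labels, and retrain on them. The runnable toy below shows that loop with a tiny logistic-regression classifier standing in for a 3D detector; the thresholds, rounds, and synthetic data are assumptions, and the paper's use of played-back sequences to refine pseudo-labels is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.5, epochs=200):
    """Tiny logistic-regression 'detector' standing in for a 3D detector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    return 1 / (1 + np.exp(-X @ w))

# Source domain: labelled. Target domain: same task, shifted features, unlabelled.
Xs = rng.normal(size=(500, 3));  ys = (Xs[:, 0] > 0).astype(float)
Xt = rng.normal(size=(500, 3)) + np.array([0.0, 1.5, -1.0])
yt_true = (Xt[:, 0] > 0).astype(float)            # held out, only for evaluation

w = fit_logreg(Xs, ys)                            # source-trained model

# Self-training: keep only confident target predictions as pseudo-labels,
# then fine-tune on the combined data.
for _ in range(3):                                # a few self-training rounds
    p = predict_proba(w, Xt)
    confident = (p > 0.9) | (p < 0.1)             # confidence threshold
    pseudo = (p > 0.5).astype(float)
    w = fit_logreg(np.vstack([Xs, Xt[confident]]),
                   np.concatenate([ys, pseudo[confident]]))

acc = ((predict_proba(w, Xt) > 0.5) == yt_true).mean()
print("target-domain accuracy after pseudo-label fine-tuning:", round(acc, 3))
```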
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as more stable MTL training.
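A multi-task model that jointly detects pedestrians and predicts their attributes typically shares a feature extractor and attaches one head per task, optimizing a weighted sum of the task losses. The small PyTorch sketch below shows that pattern with a binary "pedestrian present" head and a 32-way multi-label attribute head; the architecture and loss weights are illustrative assumptions, not the paper's composite-field model.

```python
import torch
import torch.nn as nn

class PedestrianMTL(nn.Module):
    """Shared backbone with a detection head and a 32-attribute head."""
    def __init__(self, in_dim=128, n_attributes=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      nn.Linear(256, 128), nn.ReLU())
        self.det_head = nn.Linear(128, 1)               # pedestrian present?
        self.attr_head = nn.Linear(128, n_attributes)   # multi-label attributes

    def forward(self, x):
        h = self.backbone(x)
        return self.det_head(h).squeeze(-1), self.attr_head(h)

model = PedestrianMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Toy batch: pooled per-crop features with detection and attribute labels.
x = torch.randn(64, 128)
y_det = torch.randint(0, 2, (64,)).float()
y_attr = torch.randint(0, 2, (64, 32)).float()

for _ in range(10):                                     # a few MTL training steps
    det_logit, attr_logit = model(x)
    loss = bce(det_logit, y_det) + 0.5 * bce(attr_logit, y_attr)  # weighted sum
    opt.zero_grad(); loss.backward(); opt.step()

print("joint loss after a few steps:", float(loss))
```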
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior [12.627716603026391]
Planning autonomous vehicles in the presence of pedestrians requires modelling of their probable future behaviour.
This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour remains an open research question.
arXiv Detail & Related papers (2020-03-26T14:55:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.