Intend-Wait-Perceive-Cross: Exploring the Effects of Perceptual
Limitations on Pedestrian Decision-Making
- URL: http://arxiv.org/abs/2302.03816v1
- Date: Wed, 8 Feb 2023 00:47:51 GMT
- Title: Intend-Wait-Perceive-Cross: Exploring the Effects of Perceptual
Limitations on Pedestrian Decision-Making
- Authors: Iuliia Kotseruba and Amir Rasouli
- Abstract summary: Current research on pedestrian behavior understanding focuses on the dynamics of pedestrians and makes strong assumptions about their perceptual abilities.
We propose an agent-based pedestrian behavior model Intend-Wait-Perceive-Cross with three novel elements.
We investigate the effects of perceptual limitations on safe crossing decisions and demonstrate how they contribute to detectable changes in pedestrian behaviors.
- Score: 10.812772606528172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current research on pedestrian behavior understanding focuses on the dynamics
of pedestrians and makes strong assumptions about their perceptual abilities.
For instance, it is often presumed that pedestrians have an omnidirectional
view of the scene around them. In practice, the human visual system has a
number of limitations, such as a restricted field of view (FoV) and range of
sensing, which consequently affect pedestrians' decision-making and overall
behavior. By explicitly modeling pedestrian perception, we can better
understand its effect on their decision-making. To this end, we propose an
agent-based pedestrian behavior model, Intend-Wait-Perceive-Cross, with three novel elements:
field of vision, working memory, and scanning strategy, all motivated by
findings from behavioral literature. Through extensive experimentation we
investigate the effects of perceptual limitations on safe crossing decisions
and demonstrate how they contribute to detectable changes in pedestrian
behaviors.
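The restricted field of view and sensing range described in the abstract can be sketched as a simple visibility check for an agent in such a model. This is an illustrative sketch only: the function name, parameter values (120-degree FoV, 50 m range), and interface are assumptions for demonstration, not the paper's implementation.

```python
import math

def is_visible(ped_xy, heading_deg, obj_xy, fov_deg=120.0, range_m=50.0):
    """Return True if obj_xy falls inside the pedestrian's field of view.

    ped_xy, obj_xy: (x, y) positions in meters.
    heading_deg: direction the pedestrian is facing, in degrees.
    fov_deg: total angular width of the field of view (assumed value).
    range_m: maximum sensing distance (assumed value).
    """
    dx = obj_xy[0] - ped_xy[0]
    dy = obj_xy[1] - ped_xy[1]
    if math.hypot(dx, dy) > range_m:
        return False  # beyond sensing range
    # Signed angular offset between the heading and the object,
    # wrapped into [-180, 180] degrees.
    angle = math.degrees(math.atan2(dy, dx))
    offset = (angle - heading_deg + 180.0) % 360.0 - 180.0
    return abs(offset) <= fov_deg / 2.0

# A pedestrian facing east (heading 0 deg) perceives a vehicle 30 m ahead,
# but not one directly behind or one 80 m away.
print(is_visible((0, 0), 0.0, (30, 0)))   # True
print(is_visible((0, 0), 0.0, (-30, 0)))  # False: outside the 120-degree FoV
print(is_visible((0, 0), 0.0, (80, 0)))   # False: beyond the 50 m range
```

In an agent-based simulation, a check like this would gate which approaching vehicles enter the pedestrian's working memory, so that crossing decisions are made only on perceived (rather than all) traffic.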
Related papers
- Robust Pedestrian Detection via Constructing Versatile Pedestrian Knowledge Bank [51.66174565170112]
We propose a novel approach to construct a versatile pedestrian knowledge bank.
We extract pedestrian knowledge from a large-scale pretrained model.
We then curate it by quantizing the most representative features and guiding them to be distinguishable from background scenes.
arXiv Detail & Related papers (2024-04-30T07:01:05Z)
- Predicting and Analyzing Pedestrian Crossing Behavior at Unsignalized Crossings [3.373568134827475]
We propose and evaluate machine learning models to predict gap selection in non-zebra scenarios and zebra crossing usage in zebra scenarios.
We discuss how pedestrians' behaviors are influenced by various factors, including pedestrian waiting time, walking speed, the number of unused gaps, the largest missed gap, and the influence of other pedestrians.
arXiv Detail & Related papers (2024-04-15T08:36:40Z)
- Pedestrian crossing decisions can be explained by bounded optimal decision-making under noisy visual perception [27.33595198576784]
Crossing decisions are assumed to be boundedly optimal, with bounds on optimality arising from human cognitive limitations.
We mechanistically model noisy human visual perception and assumed rewards for crossing, and use reinforcement learning to learn a boundedly optimal behaviour policy.
arXiv Detail & Related papers (2024-02-06T20:13:34Z)
- GPT-4V Takes the Wheel: Promises and Challenges for Pedestrian Behavior Prediction [12.613528624623514]
This research is the first to conduct both quantitative and qualitative evaluations of Vision Language Models (VLMs) in the context of pedestrian behavior prediction for autonomous driving.
We evaluate GPT-4V on publicly available pedestrian datasets: JAAD and WiDEVIEW.
The model achieves a 57% accuracy in a zero-shot manner, which, while impressive, is still behind the state-of-the-art domain-specific models (70%) in predicting pedestrian crossing actions.
arXiv Detail & Related papers (2023-11-24T18:02:49Z)
- Integrating Language-Derived Appearance Elements with Visual Cues in Pedestrian Detection [51.66174565170112]
We introduce a novel approach to utilize the strengths of large language models in understanding contextual appearance variations.
We propose to formulate language-derived appearance elements and incorporate them with visual cues in pedestrian detection.
arXiv Detail & Related papers (2023-11-02T06:38:19Z)
- Behavioral Intention Prediction in Driving Scenes: A Survey [70.53285924851767]
Behavioral Intention Prediction (BIP) simulates a human consideration process and fulfills the early prediction of specific behaviors.
This work provides a comprehensive review of BIP from the available datasets, key factors and challenges, pedestrian-centric and vehicle-centric BIP approaches, and BIP-aware applications.
arXiv Detail & Related papers (2022-11-01T11:07:37Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Coupling Intent and Action for Pedestrian Crossing Behavior Prediction [25.54455403877285]
In this work, we follow the neuroscience and psychological literature to define pedestrian crossing behavior as a combination of an unobserved inner will and a set of multi-class actions.
We present a novel multi-task network that predicts future pedestrian actions and uses predicted future action as a prior to detect the present intent and action of the pedestrian.
arXiv Detail & Related papers (2021-05-10T06:26:25Z)
- Pedestrian Intention Prediction: A Multi-task Perspective [83.7135926821794]
In order to be deployed globally, autonomous cars must guarantee the safety of pedestrians.
This work addresses the problem by jointly predicting the intention and visual states of pedestrians.
The method uses a recurrent neural network in a multi-task learning approach.
arXiv Detail & Related papers (2020-10-20T13:42:31Z)
- Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
arXiv Detail & Related papers (2020-07-02T14:24:17Z)
- Pedestrian Action Anticipation using Contextual Feature Fusion in Stacked RNNs [19.13270454742958]
We propose a solution for the problem of pedestrian action anticipation at the point of crossing.
Our approach uses a novel stacked RNN architecture in which information collected from various sources, both scene dynamics and visual features, is gradually fused into the network.
arXiv Detail & Related papers (2020-05-13T20:59:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.