A low complexity contextual stacked ensemble-learning approach for pedestrian intent prediction
- URL: http://arxiv.org/abs/2410.13039v1
- Date: Wed, 16 Oct 2024 21:02:24 GMT
- Title: A low complexity contextual stacked ensemble-learning approach for pedestrian intent prediction
- Authors: Chia-Yen Chiang, Yasmin Fathy, Gregory Slabaugh, Mona Jaber
- Abstract summary: Current research leverages computer vision and machine learning advances to predict near-misses.
This work proposes a low-complexity ensemble-learning approach that employs contextual data for predicting the pedestrian's intent to cross.
Our experiments on different datasets achieve pedestrian intent prediction performance similar to that of state-of-the-art approaches.
- Abstract: Walking as a form of active travel is essential in promoting sustainable transport. It is thus crucial to accurately predict pedestrian crossing intention and avoid collisions, especially with the advent of autonomous and advanced driver-assisted vehicles. Current research leverages computer vision and machine learning advances to predict near-misses; however, this often requires high computation power to yield reliable results. In contrast, this work proposes a low-complexity ensemble-learning approach that employs contextual data for predicting the pedestrian's intent to cross. The pedestrian is first detected, their image is then compressed using skeletonization, and contextual information is added into a stacked ensemble-learning approach. Our experiments on different datasets achieve pedestrian intent prediction performance similar to that of state-of-the-art approaches, with a 99.7% reduction in computational complexity. Our source code and trained models will be released upon paper acceptance.
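The pipeline the abstract describes (detect the pedestrian, compress to a skeleton, fuse contextual features, classify with a stacked ensemble) can be sketched as follows. The feature names, toy base learners, and thresholds below are illustrative assumptions, not the authors' released implementation:

```python
# Hypothetical sketch of a low-complexity stacked-ensemble intent pipeline:
# 1) compress a detected pedestrian image to 2-D skeleton keypoints,
# 2) append contextual features (e.g. distance to curb, body orientation),
# 3) feed the fused vector to level-0 base learners and a level-1 meta-learner.

def build_feature_vector(keypoints, context):
    """Flatten (x, y) skeleton keypoints and append contextual scalars."""
    flat = [coord for point in keypoints for coord in point]
    return flat + list(context)

def stacked_predict(features, base_learners, meta_learner):
    """Level-0 learners each emit a crossing probability;
    the level-1 meta-learner combines them into a final decision."""
    level0 = [learner(features) for learner in base_learners]
    return meta_learner(level0)

# Toy base learners: threshold rules on single features, standing in for
# the trained models a real ensemble would use.
near_curb = lambda f: 1.0 if f[-2] < 1.5 else 0.0    # distance-to-curb < 1.5 m
facing_road = lambda f: 1.0 if f[-1] > 0.5 else 0.0  # orientation score > 0.5

# Toy meta-learner: predict "crossing" when base learners agree on average.
meta = lambda probs: int(sum(probs) / len(probs) >= 0.5)

keypoints = [(0.4, 0.1), (0.5, 0.3), (0.45, 0.6)]  # 3 skeleton joints
context = (1.0, 0.9)                               # (curb distance, orientation)
features = build_feature_vector(keypoints, context)
print(stacked_predict(features, [near_curb, facing_road], meta))  # 1 = crossing
```

The low complexity comes from the skeleton compression: the ensemble operates on a short feature vector rather than raw image pixels.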
Related papers
- PedFormer: Pedestrian Behavior Prediction via Cross-Modal Attention Modulation and Gated Multitask Learning
We propose a novel framework that relies on different data modalities to predict future trajectories and crossing actions of pedestrians from an ego-centric perspective.
We show that our model improves on the state of the art in trajectory and action prediction by up to 22% and 13%, respectively, on various metrics.
arXiv Detail & Related papers (2022-10-14T15:12:00Z)
- Pedestrian 3D Bounding Box Prediction
We focus on 3D bounding boxes, which are reasonable estimates of humans without modeling complex motion details for autonomous vehicles.
We suggest this new problem and present a simple yet effective model for pedestrians' 3D bounding box prediction.
This method follows an encoder-decoder architecture based on recurrent neural networks.
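The encoder-decoder data flow described here can be illustrated with a toy sketch; the recurrence below is a fixed-weight stand-in with an assumed constant-velocity decoder, not the paper's trained RNN:

```python
# Toy encoder-decoder sketch for future 3-D bounding-box prediction.
# Each box is (x, y, z, w, h, d). The "RNN" is an Elman-style update
# with a fixed blend weight, just to show the encode/decode structure.

def step(hidden, box, alpha=0.5):
    """One recurrent step: blend the hidden state with the new input box."""
    return [alpha * h + (1 - alpha) * b for h, b in zip(hidden, box)]

def encode(boxes):
    """Fold the observed box sequence into a single hidden state."""
    hidden = [0.0] * len(boxes[0])
    for box in boxes:
        hidden = step(hidden, box)
    return hidden

def decode(hidden, velocity, horizon):
    """Unroll the decoder: extrapolate the state by an estimated velocity."""
    outputs = []
    for _ in range(horizon):
        hidden = [h + v for h, v in zip(hidden, velocity)]
        outputs.append(list(hidden))
    return outputs

observed = [[0.0, 0.0, 0.0, 0.5, 1.8, 0.5],
            [0.4, 0.0, 0.0, 0.5, 1.8, 0.5]]
# Crude velocity estimate from the last two observed boxes.
velocity = [b2 - b1 for b1, b2 in zip(observed[-2], observed[-1])]
future = decode(encode(observed), velocity, horizon=3)
print(len(future), len(future[0]))  # 3 predicted boxes of 6 values each
```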
arXiv Detail & Related papers (2022-06-28T17:59:45Z)
- Pedestrian Stop and Go Forecasting with Hybrid Feature Fusion
We introduce the new task of pedestrian stop and go forecasting.
Considering the lack of suitable existing datasets for it, we release TRANS, a benchmark for explicitly studying the stop and go behaviors of pedestrians in urban traffic.
We build it from several existing datasets annotated with pedestrians' walking motions, in order to have various scenarios and behaviors.
arXiv Detail & Related papers (2022-03-04T18:39:31Z)
- Pedestrian Trajectory Prediction via Spatial Interaction Transformer Network
In traffic scenes, when encountering oncoming people, pedestrians may make sudden turns or stop immediately.
To predict such abrupt trajectory changes, we can gain insights from the interactions between pedestrians.
We present a novel generative method named Spatial Interaction Transformer (SIT), which learns the correlation of pedestrian trajectories through attention mechanisms.
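The attention mechanism over pedestrian trajectories can be sketched as scaled dot-product attention, where one pedestrian's embedding attends to its neighbours'. This pure-Python toy is an illustrative assumption, not the SIT implementation:

```python
# Scaled dot-product attention sketch: a query pedestrian embedding
# attends over neighbouring pedestrians' embeddings (keys == values here).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, k) / scale for k in keys])
    # Weighted sum of value vectors, dimension by dimension.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query pedestrian attends more to the neighbour moving the same way.
query = [1.0, 0.0]
neighbours = [[1.0, 0.0], [0.0, 1.0]]
context = attend(query, neighbours, neighbours)
print(context)  # first component dominates: the aligned neighbour wins
```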
arXiv Detail & Related papers (2021-12-13T13:08:04Z)
- PSI: A Pedestrian Behavior Dataset for Socially Intelligent Autonomous Car
This paper proposes and shares another benchmark dataset called the IUPUI-CSRC Pedestrian Situated Intent (PSI) data.
The first novel label is the dynamic intent changes for the pedestrians to cross in front of the ego-vehicle, collected from 24 drivers.
The second one is the text-based explanations of the driver reasoning process when estimating pedestrian intents and predicting their behaviors.
arXiv Detail & Related papers (2021-12-05T15:54:57Z)
- Injecting Knowledge in Data-driven Vehicle Trajectory Predictors
Vehicle trajectory prediction tasks have been commonly tackled from two perspectives: knowledge-driven or data-driven.
In this paper, we propose to learn a "Realistic Residual Block" (RRB) which effectively connects these two perspectives.
Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty.
arXiv Detail & Related papers (2021-03-08T16:03:09Z)
- Pedestrian Behavior Prediction for Automated Driving: Requirements, Metrics, and Relevant Features
We analyze the requirements on pedestrian behavior prediction for automated driving via a system-level approach.
Based on human driving behavior we derive appropriate reaction patterns of an automated vehicle.
We present a pedestrian prediction model based on a Variational Conditional Auto-Encoder which incorporates multiple contextual cues.
arXiv Detail & Related papers (2020-12-15T16:52:49Z)
- Pedestrian Intention Prediction: A Multi-task Perspective
In order to be globally deployed, autonomous cars must guarantee the safety of pedestrians.
This work tries to solve this problem by jointly predicting the intention and visual states of pedestrians.
The method is a recurrent neural network trained in a multi-task learning framework.
arXiv Detail & Related papers (2020-10-20T13:42:31Z)
- A Real-Time Predictive Pedestrian Collision Warning Service for Cooperative Intelligent Transportation Systems Using 3D Pose Estimation
We propose a real-time predictive pedestrian collision warning service (P2CWS) for two tasks: pedestrian orientation recognition (100.53 FPS) and intention prediction (35.76 FPS).
Our framework obtains satisfying generalization over multiple sites because of the proposed site-independent features.
The proposed vision framework realizes 89.3% accuracy in the behavior recognition task on the TUD dataset without any training process.
arXiv Detail & Related papers (2020-09-23T00:55:12Z)
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction
Reasoning over visual data is a desirable capability for robotics and vision-based applications.
In this paper, we present a framework on graph to uncover relationships in different objects in the scene for reasoning about pedestrian intent.
Pedestrian intent, defined as the future action of crossing or not-crossing the street, is a very crucial piece of information for autonomous vehicles.
arXiv Detail & Related papers (2020-02-20T18:50:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.