WatchPed: Pedestrian Crossing Intention Prediction Using Embedded
Sensors of Smartwatch
- URL: http://arxiv.org/abs/2208.07441v1
- Date: Mon, 15 Aug 2022 21:27:21 GMT
- Title: WatchPed: Pedestrian Crossing Intention Prediction Using Embedded
Sensors of Smartwatch
- Authors: Jibran Ali Abbasi, Navid Mohammad Imran, Myounggyu Won
- Abstract summary: We design, implement, and evaluate the first pedestrian intention prediction model based on integration of motion sensor data.
A novel machine learning architecture is proposed to effectively incorporate the motion sensor data.
We present the first pedestrian intention prediction dataset integrated with time-synchronized motion sensor data.
- Score: 11.83842808044211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The pedestrian intention prediction problem is to estimate whether or not the
target pedestrian will cross the street. State-of-the-art approaches heavily
rely on visual information collected with the front camera of the ego-vehicle
to make a prediction of the pedestrian's intention. As such, the performance of
existing methods significantly degrades when the visual information is
inaccurate, e.g., when the pedestrian is far from the ego-vehicle or the
lighting conditions are poor. In this paper, we design,
implement, and evaluate the first pedestrian intention prediction model based
on integration of motion sensor data gathered with the smartwatch (or
smartphone) of the pedestrian. A novel machine learning architecture is
proposed that incorporates the motion sensor data to reinforce the visual
information, significantly improving performance in adverse situations where
the visual information may be unreliable. We also conduct a
large-scale data collection and present the first pedestrian intention
prediction dataset integrated with time-synchronized motion sensor data. The
dataset consists of a total of 128 video clips with different distances and
varying levels of lighting conditions. We trained our model using the
widely used JAAD dataset and our own dataset and compared its performance with a
state-of-the-art model. The results demonstrate that our model outperforms the
state-of-the-art method, particularly when the pedestrian is far away (over
70 m) and the lighting conditions are insufficient.
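The core idea of reinforcing unreliable visual information with motion sensor data can be sketched as a simple late-fusion step: the motion branch is up-weighted as visual confidence drops. The function name, feature layout, and linear gating rule below are hypothetical illustrations, not the paper's actual architecture.

```python
def fuse_features(visual_feat, motion_feat, visual_conf):
    """Late-fuse visual and motion-sensor feature vectors.

    visual_conf in [0, 1] reflects how reliable the visual branch is
    (e.g., low at long distance or in poor lighting). The motion branch
    is weighted more heavily as visual confidence drops.
    """
    w_vis = visual_conf
    w_mot = 1.0 - visual_conf
    return [w_vis * v for v in visual_feat] + [w_mot * m for m in motion_feat]

# At night or at long range, the smartwatch IMU features dominate the
# fused vector fed to the downstream classifier.
fused = fuse_features([0.8, 0.2], [0.5, 0.9], visual_conf=0.25)
```

A learned gating network would normally replace the fixed linear weighting, but the scalar-confidence version already captures why the sensor branch helps beyond 70 m.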
Related papers
- Snapshot: Towards Application-centered Models for Pedestrian Trajectory Prediction in Urban Traffic Environments [9.025558624315817]
Snapshot is a feed-forward neural network that outperforms the current state of the art while utilizing significantly less information.
By integrating Snapshot into a modular autonomous driving software stack, we showcase its real-world applicability.
arXiv Detail & Related papers (2024-09-03T15:15:49Z)
- Layout Sequence Prediction From Noisy Mobile Modality [53.49649231056857]
Trajectory prediction plays a vital role in understanding pedestrian movement for applications such as autonomous driving and robotics.
Current trajectory prediction models depend on long, complete, and accurately observed sequences from visual modalities.
We propose LTrajDiff, a novel approach that treats objects obstructed or out of sight as equally important as those with fully visible trajectories.
arXiv Detail & Related papers (2023-10-09T20:32:49Z)
- Pedestrian Environment Model for Automated Driving [54.16257759472116]
We propose an environment model that includes the position of the pedestrians as well as their pose information.
We extract the skeletal information with a neural network human pose estimator from the image.
To obtain the 3D information of the position, we aggregate the data from consecutive frames in conjunction with the vehicle position.
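The aggregation step described above (combining per-frame detections with the vehicle position) can be sketched as a rigid-frame transform followed by averaging. The function names and the flat 2D pose model (x, y, heading) are assumptions for illustration, not the paper's implementation.

```python
import math

def to_world(local_xy, vehicle_pose):
    """Transform a pedestrian position from the vehicle frame to the
    world frame using the vehicle pose (x, y, heading in radians)."""
    lx, ly = local_xy
    vx, vy, yaw = vehicle_pose
    wx = vx + lx * math.cos(yaw) - ly * math.sin(yaw)
    wy = vy + lx * math.sin(yaw) + ly * math.cos(yaw)
    return wx, wy

def aggregate(detections):
    """Average world-frame positions over consecutive frames to smooth
    per-frame depth noise.

    detections: list of (local_xy, vehicle_pose) pairs.
    """
    pts = [to_world(xy, pose) for xy, pose in detections]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

Because each frame is first expressed in a common world frame, a moving ego-vehicle no longer biases the averaged pedestrian position.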
arXiv Detail & Related papers (2023-08-17T16:10:58Z)
- Comparison of Pedestrian Prediction Models from Trajectory and Appearance Data for Autonomous Driving [13.126949982768505]
The ability to anticipate pedestrian motion changes is a critical capability for autonomous vehicles.
In urban environments, pedestrians may enter the road area and create a high risk for driving.
This work presents a comparative evaluation of trajectory-only and appearance-based methods for pedestrian prediction.
arXiv Detail & Related papers (2023-05-25T11:24:38Z)
- Pedestrian Detection: Domain Generalization, CNNs, Transformers and Beyond [82.37430109152383]
We show that current pedestrian detectors poorly handle even small domain shifts in cross-dataset evaluation.
We attribute the limited generalization to two main factors, the method and the current sources of data.
We propose a progressive fine-tuning strategy which improves generalization.
arXiv Detail & Related papers (2022-01-10T06:00:26Z)
- Safety-Oriented Pedestrian Motion and Scene Occupancy Forecasting [91.69900691029908]
We advocate for predicting both the individual motions as well as the scene occupancy map.
We propose a Scene-Actor Graph Neural Network (SA-GNN) which preserves the relative spatial information of pedestrians.
On two large-scale real-world datasets, we showcase that our scene-occupancy predictions are more accurate and better calibrated than those from state-of-the-art motion forecasting methods.
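A scene occupancy map of the kind this summary refers to is just a grid whose cells record whether any predicted pedestrian position lands in them. The sketch below is a minimal binary rasterizer under assumed parameters (grid size, cell width), not SA-GNN's actual output head.

```python
def occupancy_grid(predicted_positions, grid_size=10, cell=1.0):
    """Rasterize predicted pedestrian (x, y) positions into a 2D
    occupancy map. A cell holds 1.0 if any position falls inside it;
    positions outside the grid are ignored."""
    grid = [[0.0] * grid_size for _ in range(grid_size)]
    for x, y in predicted_positions:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            grid[j][i] = 1.0
    return grid
```

A probabilistic model would emit per-cell occupancy probabilities rather than hard 0/1 values, which is what makes calibration a meaningful comparison.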
arXiv Detail & Related papers (2021-01-07T06:08:21Z)
- Attentional-GCNN: Adaptive Pedestrian Trajectory Prediction towards Generic Autonomous Vehicle Use Cases [10.41902340952981]
We propose a novel Graph Convolutional Neural Network (GCNN)-based approach, Attentional-GCNN, which aggregates information of implicit interaction between pedestrians in a crowd by assigning attention weight in edges of the graph.
We show our proposed method achieves an improvement over the state of the art by 10% Average Displacement Error (ADE) and 12% Final Displacement Error (FDE) with fast inference speeds.
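ADE and FDE, the metrics quoted here, are the standard errors for trajectory prediction: the mean and the final point-wise distance between predicted and ground-truth paths. A minimal reference implementation (not taken from any of these papers):

```python
import math

def ade_fde(pred, truth):
    """Average and Final Displacement Error between a predicted and a
    ground-truth trajectory, given as equal-length lists of (x, y)."""
    dists = [math.hypot(px - tx, py - ty)
             for (px, py), (tx, ty) in zip(pred, truth)]
    ade = sum(dists) / len(dists)  # mean error over all timesteps
    fde = dists[-1]                # error at the final timestep only
    return ade, fde
```

A 10% ADE improvement thus means the whole path tracks closer on average, while FDE isolates how well the endpoint (the safety-critical quantity) is predicted.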
arXiv Detail & Related papers (2020-11-23T03:13:26Z)
- Pedestrian Intention Prediction: A Multi-task Perspective [83.7135926821794]
In order to be globally deployed, autonomous cars must guarantee the safety of pedestrians.
This work tries to solve this problem by jointly predicting the intention and visual states of pedestrians.
The method is a recurrent neural network trained in a multi-task learning setting.
arXiv Detail & Related papers (2020-10-20T13:42:31Z)
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction [57.56466850377598]
Reasoning over visual data is a desirable capability for robotics and vision-based applications.
In this paper, we present a graph-based framework to uncover relationships among different objects in the scene for reasoning about pedestrian intent.
Pedestrian intent, defined as the future action of crossing or not crossing the street, is a crucial piece of information for autonomous vehicles.
arXiv Detail & Related papers (2020-02-20T18:50:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.