Pedestrian Motion State Estimation From 2D Pose
- URL: http://arxiv.org/abs/2103.00145v1
- Date: Sat, 27 Feb 2021 07:00:06 GMT
- Title: Pedestrian Motion State Estimation From 2D Pose
- Authors: Fei Li, Shiwei Fan, Pengzhen Chen, and Xiangxu Li
- Abstract summary: Traffic violations and the flexible, changeable nature of pedestrians make it difficult to predict pedestrian behavior or intention.
By combining the pedestrian motion state with other influencing factors, pedestrian intention can be predicted to avoid unnecessary accidents.
The proposed algorithm is verified on the public JAAD dataset, where accuracy improves by 11.6% compared with the existing method.
- Score: 3.189006905282788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic violations and the flexible, changeable nature of pedestrians make
it more difficult to predict pedestrian behavior or intention, which can be a
potential safety hazard on the road. The pedestrian motion state (such as walking
or standing) directly affects or reflects the pedestrian's intention. By combining
the pedestrian motion state with other influencing factors, pedestrian intention
can be predicted so that unnecessary accidents are avoided. In this paper, the
pedestrian is treated as a non-rigid object that can be represented by a set of
two-dimensional key points, and the movement of each key point relative to the
torso is introduced as micro motion. Static and dynamic micro-motion features,
such as position, angle, and distance, together with their differential
calculations in the time domain, are used to describe the pedestrian's motion
pattern. A gated recurrent unit (GRU) based seq2seq model is used to learn how
the motion state transition depends on previous information, and the pedestrian
motion state is finally estimated via a softmax classifier. The proposed method
needs only the previous hidden state of the GRU and the current feature to
evaluate the probability of the current motion state, which makes it
computationally efficient to deploy on vehicles. The proposed algorithm is
verified on the public JAAD dataset, where the accuracy improves by 11.6%
compared with the existing method.
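To make the pipeline above concrete, the sketch below computes a simplified set of micro-motion features (each key point's distance and angle relative to a torso reference, plus their frame-to-frame differences, a subset of the features named in the abstract) and feeds the per-frame feature vectors to a GRU with a softmax head. It is a minimal illustration under assumed choices (torso taken as the key-point centroid, a 17-point pose layout, arbitrary layer sizes), not a reproduction of the paper's exact design.
```python
import torch
import torch.nn as nn

def micro_motion_features(keypoints):
    """Build per-frame micro-motion features from 2D key points.

    keypoints: (T, K, 2) tensor of 2D key-point coordinates over T frames.
    The torso reference is assumed to be the mean of all key points
    (an illustrative choice, not necessarily the paper's definition).
    Returns a (T, 4K) feature tensor: per key point, the static distance
    and angle to the torso plus their temporal differences.
    """
    torso = keypoints.mean(dim=1, keepdim=True)            # (T, 1, 2)
    rel = keypoints - torso                                 # (T, K, 2)
    dist = rel.norm(dim=-1)                                 # (T, K)
    angle = torch.atan2(rel[..., 1], rel[..., 0])           # (T, K)
    # Dynamic features: first-order differences in the time domain.
    d_dist = torch.diff(dist, dim=0, prepend=dist[:1])
    d_angle = torch.diff(angle, dim=0, prepend=angle[:1])
    return torch.cat([dist, angle, d_dist, d_angle], dim=-1)

class MotionStateGRU(nn.Module):
    """GRU encoder with a softmax head over motion states (e.g. walking/standing)."""
    def __init__(self, feat_dim, hidden_dim=64, num_states=2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_states)

    def forward(self, feats, h=None):
        # feats: (B, T, feat_dim). Each step uses only the previous hidden
        # state and the current feature, so online use is cheap.
        out, h = self.gru(feats, h)
        logits = self.head(out)                 # (B, T, num_states)
        return logits.softmax(dim=-1), h

# Toy usage: 17 key points (a common 2D-pose layout), 30 frames.
kps = torch.randn(30, 17, 2)
feats = micro_motion_features(kps).unsqueeze(0)   # (1, 30, 68)
model = MotionStateGRU(feat_dim=feats.shape[-1])
probs, _ = model(feats)
print(probs.shape)  # torch.Size([1, 30, 2])
```
In an online setting the returned hidden state h can be carried across frames, so each new pose requires only one GRU step, which matches the abstract's point about needing only the previous hidden state and the current feature.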
Related papers
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
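As a rough picture of what representing occupancy and flow "implicitly with a single neural network" can look like, the sketch below uses one MLP that maps a continuous spatio-temporal query (x, y, t) to an occupancy probability and a 2D flow vector; the architecture and query format are illustrative assumptions, not the paper's model.
```python
import torch
import torch.nn as nn

class ImplicitOccupancyFlow(nn.Module):
    """One MLP maps a continuous (x, y, t) query to occupancy and 2D flow."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))            # 1 occupancy logit + 2 flow components

    def forward(self, queries):
        # queries: (N, 3) spatio-temporal points; only queried locations are
        # evaluated, so no dense grid has to be rasterized.
        out = self.net(queries)
        occupancy = out[:, :1].sigmoid()
        flow = out[:, 1:]
        return occupancy, flow

occ, flow = ImplicitOccupancyFlow()(torch.rand(10, 3))
print(occ.shape, flow.shape)  # torch.Size([10, 1]) torch.Size([10, 2])
```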
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- ForceFormer: Exploring Social Force and Transformer for Pedestrian Trajectory Prediction [3.5163219821672618]
We propose a new goal-based trajectory predictor called ForceFormer.
We leverage the driving force from the destination to efficiently simulate how the target destination guides a pedestrian.
Our proposed method achieves performance on par with state-of-the-art models as measured by distance errors.
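The "driving force from the destination" echoes the classic social-force formulation, which can be sketched as an acceleration pulling the current velocity toward a desired velocity aimed at the goal. The speed and relaxation-time values below are illustrative assumptions, and ForceFormer's combination of this cue with a Transformer is not reproduced here.
```python
import numpy as np

def goal_driving_force(pos, vel, goal, desired_speed=1.4, tau=0.5):
    """Classic social-force 'driving' term toward a destination.

    pos, vel, goal: (2,) arrays for one pedestrian.
    desired_speed: preferred walking speed in m/s (illustrative value).
    tau: relaxation time in seconds (illustrative value).
    Returns the acceleration pulling the current velocity toward the
    desired velocity that points at the goal.
    """
    direction = goal - pos
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    desired_vel = desired_speed * direction
    return (desired_vel - vel) / tau

# Toy usage: pedestrian at the origin, drifting sideways, goal 10 m ahead.
force = goal_driving_force(np.zeros(2), np.array([0.3, 0.1]), np.array([10.0, 0.0]))
print(force)  # accelerates toward the goal
```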
arXiv Detail & Related papers (2023-02-15T10:54:14Z)
- PREF: Predictability Regularized Neural Motion Fields [68.60019434498703]
Knowing 3D motions in a dynamic scene is essential to many vision applications.
We leverage a neural motion field for estimating the motion of all points in a multiview setting.
We propose to regularize the estimated motion to be predictable.
arXiv Detail & Related papers (2022-09-21T22:32:37Z)
- Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion [88.45326906116165]
We present a new framework to formulate the trajectory prediction task as a reverse process of motion indeterminacy diffusion (MID).
We encode the history behavior information and the social interactions as a state embedding and devise a Transformer-based diffusion model to capture the temporal dependencies of trajectories.
Experiments on the human trajectory prediction benchmarks including the Stanford Drone and ETH/UCY datasets demonstrate the superiority of our method.
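The reverse-diffusion idea can be loosely illustrated with a DDPM-style sampling loop over future (x, y) coordinates, where an untrained placeholder network stands in for the Transformer denoiser conditioned on a history/social state embedding. The schedule, horizon, and network shape are assumptions for illustration only.
```python
import torch
import torch.nn as nn

T_STEPS, HORIZON = 100, 12              # diffusion steps, predicted frames
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder denoiser: predicts the noise added to the future trajectory,
# conditioned on a state embedding of history and social context.
class Denoiser(nn.Module):
    def __init__(self, cond_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(HORIZON * 2 + cond_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, HORIZON * 2))

    def forward(self, traj, cond, t):
        t_feat = torch.full((traj.shape[0], 1), float(t) / T_STEPS)
        inp = torch.cat([traj.flatten(1), cond, t_feat], dim=-1)
        return self.net(inp).view_as(traj)

@torch.no_grad()
def sample(denoiser, cond):
    """DDPM-style reverse process over future (x, y) coordinates."""
    x = torch.randn(cond.shape[0], HORIZON, 2)        # start from pure noise
    for t in reversed(range(T_STEPS)):
        eps = denoiser(x, cond, t)
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:                                      # add noise except at the last step
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

traj = sample(Denoiser(), cond=torch.randn(4, 32))     # 4 pedestrians, 12 future steps each
print(traj.shape)  # torch.Size([4, 12, 2])
```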
arXiv Detail & Related papers (2022-03-25T16:59:08Z)
- Pedestrian Stop and Go Forecasting with Hybrid Feature Fusion [87.77727495366702]
We introduce the new task of pedestrian stop and go forecasting.
Considering the lack of suitable existing datasets for it, we release TRANS, a benchmark for explicitly studying the stop and go behaviors of pedestrians in urban traffic.
We build it from several existing datasets annotated with pedestrians' walking motions, in order to have various scenarios and behaviors.
arXiv Detail & Related papers (2022-03-04T18:39:31Z)
- Pedestrian Trajectory Prediction via Spatial Interaction Transformer Network [7.150832716115448]
In traffic scenes, when encountering oncoming people, pedestrians may make sudden turns or stop immediately.
Predicting such hard-to-anticipate trajectories requires insight into the interactions between pedestrians.
We present a novel generative method named Spatial Interaction Transformer (SIT), which learns the correlation of pedestrian trajectories through attention mechanisms.
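The attention over pedestrian trajectories can be pictured as plain scaled dot-product attention across the pedestrians in a scene, so each pedestrian's representation is updated from its interacting neighbours. This generic sketch (position embeddings plus nn.MultiheadAttention) is an assumption and does not reproduce SIT's actual architecture.
```python
import torch
import torch.nn as nn

class SpatialInteraction(nn.Module):
    """Attention over the pedestrians in a frame (generic sketch)."""
    def __init__(self, embed_dim=32, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(2, embed_dim)               # (x, y) position -> embedding
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, positions):
        # positions: (B, N, 2) -- N pedestrians per scene.
        tokens = self.embed(positions)
        out, weights = self.attn(tokens, tokens, tokens)   # each pedestrian attends to all others
        return out, weights                                # weights expose who influences whom

scene = torch.randn(1, 5, 2)                               # one scene, 5 pedestrians
feat, attn = SpatialInteraction()(scene)
print(feat.shape, attn.shape)  # torch.Size([1, 5, 32]) torch.Size([1, 5, 5])
```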
arXiv Detail & Related papers (2021-12-13T13:08:04Z)
- Safety-Oriented Pedestrian Motion and Scene Occupancy Forecasting [91.69900691029908]
We advocate for predicting both the individual motions and the scene occupancy map.
We propose a Scene-Actor Graph Neural Network (SA-GNN) which preserves the relative spatial information of pedestrians.
On two large-scale real-world datasets, we showcase that our scene-occupancy predictions are more accurate and better calibrated than those from state-of-the-art motion forecasting methods.
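"Preserving relative spatial information" can be pictured as message passing in which each edge message encodes the sender's features together with its (dx, dy) offset to the receiver; the layer below is a generic sketch with assumed sizes and mean aggregation, not SA-GNN's actual design.
```python
import torch
import torch.nn as nn

class RelativeSpatialGNNLayer(nn.Module):
    """Message passing where each message encodes the sender's state and its
    relative (dx, dy) offset to the receiver (generic sketch)."""
    def __init__(self, node_dim=32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(node_dim + 2, node_dim), nn.ReLU())
        self.update = nn.Linear(2 * node_dim, node_dim)

    def forward(self, feats, positions):
        # feats: (N, D) pedestrian features; positions: (N, 2) locations.
        N = feats.shape[0]
        rel = positions.unsqueeze(0) - positions.unsqueeze(1)     # (N, N, 2), sender - receiver
        senders = feats.unsqueeze(0).expand(N, -1, -1)            # (N, N, D)
        messages = self.msg(torch.cat([senders, rel], dim=-1))    # (N, N, D)
        agg = messages.mean(dim=1)                                # aggregate over senders
        return torch.relu(self.update(torch.cat([feats, agg], dim=-1)))

out = RelativeSpatialGNNLayer()(torch.randn(6, 32), torch.randn(6, 2))
print(out.shape)  # torch.Size([6, 32])
```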
arXiv Detail & Related papers (2021-01-07T06:08:21Z)
- PRANK: motion Prediction based on RANKing [4.4861975043227345]
Predicting the motion of agents is one of the most critical problems in the autonomous driving domain.
We introduce the PRANK method, which produces the conditional distribution of an agent's trajectories that are plausible in the given scene.
We evaluate PRANK on the in-house and Argoverse datasets, where it shows competitive results.
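The ranking idea can be sketched as scoring a bank of candidate trajectories against a scene encoding and normalizing the scores into a distribution over candidates. The candidate bank, scorer, and scene features below are illustrative assumptions rather than PRANK's actual components.
```python
import torch
import torch.nn as nn

class TrajectoryRanker(nn.Module):
    """Score candidate trajectories against a scene encoding (illustrative sketch)."""
    def __init__(self, num_candidates=64, horizon=12, scene_dim=32):
        super().__init__()
        # A bank of candidate future trajectories (learned here; could be precomputed).
        self.candidates = nn.Parameter(torch.randn(num_candidates, horizon, 2))
        self.scorer = nn.Sequential(
            nn.Linear(horizon * 2 + scene_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, scene_feat):
        # scene_feat: (B, scene_dim) encoding of the agent's context.
        B, C = scene_feat.shape[0], self.candidates.shape[0]
        cands = self.candidates.flatten(1).unsqueeze(0).expand(B, -1, -1)  # (B, C, horizon*2)
        ctx = scene_feat.unsqueeze(1).expand(-1, C, -1)                    # (B, C, scene_dim)
        scores = self.scorer(torch.cat([cands, ctx], dim=-1)).squeeze(-1)  # (B, C)
        return scores.softmax(dim=-1)    # conditional distribution over candidates

probs = TrajectoryRanker()(torch.randn(2, 32))
print(probs.shape, probs.sum(dim=-1))  # torch.Size([2, 64]) tensor([1., 1.], ...)
```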
arXiv Detail & Related papers (2020-10-22T19:58:02Z)
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction [57.56466850377598]
Reasoning over visual data is a desirable capability for robotics and vision-based applications.
In this paper, we present a graph-based framework to uncover relationships between different objects in the scene for reasoning about pedestrian intent.
Pedestrian intent, defined as the future action of crossing or not-crossing the street, is a very crucial piece of information for autonomous vehicles.
arXiv Detail & Related papers (2020-02-20T18:50:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.