Graphing the Future: Activity and Next Active Object Prediction using
Graph-based Activity Representations
- URL: http://arxiv.org/abs/2209.05194v1
- Date: Mon, 12 Sep 2022 12:32:24 GMT
- Title: Graphing the Future: Activity and Next Active Object Prediction using
Graph-based Activity Representations
- Authors: Victoria Manousaki, Konstantinos Papoutsakis and Antonis Argyros
- Abstract summary: We present a novel approach for the visual prediction of human-object interactions in videos.
We aim at predicting (a) the class of the ongoing human-object interaction and (b) the class of the next active object(s) (NAOs).
High prediction accuracy was obtained for both action prediction and NAO forecasting.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel approach for the visual prediction of human-object
interactions in videos. Rather than forecasting the human and object motion or
the future hand-object contact points, we aim at predicting (a) the class of the
ongoing human-object interaction and (b) the class(es) of the next active
object(s) (NAOs), i.e., the object(s) that will be involved in the interaction
in the near future, as well as the time the interaction will occur. Activities
are represented as graphs, and graph matching relies on the efficient Graph Edit
Distance (GED) method. The
experimental evaluation of the proposed approach was conducted using two
well-established video datasets that contain human-object interactions, namely
the MSR Daily Activities and the CAD120. High prediction accuracy was obtained
for both action prediction and NAO forecasting.
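The abstract leaves the matching step implicit; as a toy illustration, activity prediction can be cast as nearest-neighbour classification of an observed activity graph against labelled reference graphs under Graph Edit Distance. The sketch below uses networkx's exact graph_edit_distance as a stand-in for the paper's efficient GED method; the graph contents, node/edge labels, and reference activities are illustrative assumptions, not the authors' actual representation.

```python
# Minimal sketch: nearest-neighbour activity prediction via Graph Edit Distance (GED).
# Graphs are illustrative stand-ins (nodes ~ hands/objects, edges ~ interactions),
# not the paper's actual activity representation.
import networkx as nx

def activity_graph(triples):
    """Build a small labelled graph from (node, node, interaction) triples."""
    g = nx.Graph()
    for u, v, rel in triples:
        g.add_node(u, label=u)
        g.add_node(v, label=v)
        g.add_edge(u, v, label=rel)
    return g

# Hypothetical labelled reference activities.
references = {
    "drink":     activity_graph([("hand", "cup", "grasp"), ("cup", "mouth", "approach")]),
    "eat_snack": activity_graph([("hand", "snack", "grasp"), ("snack", "mouth", "approach")]),
}

# Partially observed (ongoing) activity.
observed = activity_graph([("hand", "cup", "grasp")])

label_match = lambda a, b: a["label"] == b["label"]

# Predict the class of the reference graph with the smallest edit distance.
distances = {
    name: nx.graph_edit_distance(observed, ref,
                                 node_match=label_match, edge_match=label_match)
    for name, ref in references.items()
}
print(distances, "->", min(distances, key=distances.get))
```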
Related papers
- Human Action Anticipation: A Survey [86.415721659234]
The literature on behavior prediction spans various tasks, including action anticipation, activity forecasting, intent prediction, and goal prediction.
Our survey aims to tie together this fragmented literature, covering recent technical innovations as well as the development of new large-scale datasets for model training and evaluation.
arXiv Detail & Related papers (2024-10-17T21:37:40Z)
- Short-term Object Interaction Anticipation with Disentangled Object Detection @ Ego4D Short Term Object Interaction Anticipation Challenge [11.429137967096935]
Short-term object interaction anticipation is an important task in egocentric video analysis.
Our proposed method, SOIA-DOD, effectively decomposes the task into (1) detecting active objects and (2) classifying interactions and predicting their timing.
Our method first detects all potential active objects in the last frame of the egocentric video by fine-tuning a pre-trained YOLOv9; a minimal detection sketch follows this entry.
arXiv Detail & Related papers (2024-07-08T08:13:16Z)
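As a rough illustration of the first stage described above (detecting candidate active objects in the last observed frame), the sketch below runs a pre-trained YOLOv9 checkpoint through the ultralytics package; the checkpoint name, confidence threshold, and frame path are assumptions, and the egocentric fine-tuning the paper performs is omitted.

```python
# Hedged sketch of stage 1: detect candidate active objects in the last video frame.
# "yolov9c.pt" and "last_frame.jpg" are assumed/hypothetical; SOIA-DOD additionally
# fine-tunes the detector on egocentric data, which is not shown here.
from ultralytics import YOLO

model = YOLO("yolov9c.pt")                     # pre-trained YOLOv9 checkpoint (assumed)
results = model("last_frame.jpg", conf=0.25)   # hypothetical path to the last frame

for box in results[0].boxes:                   # candidate (potentially active) objects
    name = results[0].names[int(box.cls)]
    print(name, float(box.conf), box.xyxy.tolist())
```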
- Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos [22.81433371521832]
We propose Diff-IP2D to forecast future hand trajectories and object affordances concurrently in an iterative non-autoregressive manner.
Our method significantly outperforms the state-of-the-art baselines on both the off-the-shelf metrics and our newly proposed evaluation protocol.
arXiv Detail & Related papers (2024-05-07T14:51:05Z)
- SSL-Interactions: Pretext Tasks for Interactive Trajectory Prediction [4.286256266868156]
We present SSL-Interactions, which proposes pretext tasks to enhance interaction modeling for trajectory prediction.
We introduce four interaction-aware pretext tasks to encapsulate various aspects of agent interactions.
We also propose an approach to curate interaction-heavy scenarios from datasets.
arXiv Detail & Related papers (2024-01-15T14:43:40Z)
- Leveraging Next-Active Objects for Context-Aware Anticipation in Egocentric Videos [31.620555223890626]
We study the problem of short-term object interaction anticipation (STA).
We propose NAOGAT, a multi-modal end-to-end transformer network, to guide the model to predict context-aware future actions.
Our model outperforms existing methods on two separate datasets.
arXiv Detail & Related papers (2023-08-16T12:07:02Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, which consumes the dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects; a minimal fusion sketch follows this entry.
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
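As a loose illustration of the fusion idea (not the authors' HO-GCN), the sketch below concatenates per-node motion features with dynamic descriptors and propagates them over a toy human-object graph using a single mean-aggregation graph convolution; all dimensions, the adjacency, and the node semantics are assumptions.

```python
# Hedged sketch: fuse motion features with per-node dynamic descriptors, then apply
# one mean-aggregation graph convolution over a toy human-object graph.
import torch
import torch.nn as nn

class FusionGraphConv(nn.Module):
    def __init__(self, motion_dim=32, dyn_dim=16, out_dim=64):
        super().__init__()
        self.proj = nn.Linear(motion_dim + dyn_dim, out_dim)

    def forward(self, motion, dyn, adj):
        # motion: (N, motion_dim), dyn: (N, dyn_dim), adj: (N, N) with self-loops
        x = torch.cat([motion, dyn], dim=-1)               # per-node feature fusion
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        x = (adj @ x) / deg                                # mean over neighbours
        return torch.relu(self.proj(x))

# Toy graph: node 0 = human, nodes 1-2 = object parts (illustrative only).
adj = torch.tensor([[1., 1., 1.],
                    [1., 1., 0.],
                    [1., 0., 1.]])
motion = torch.randn(3, 32)   # e.g. skeleton/part motion features
dyn = torch.randn(3, 16)      # e.g. per-node dynamic descriptors
print(FusionGraphConv()(motion, dyn, adj).shape)  # torch.Size([3, 64])
```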
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art methods specifically designed for each of the trajectory and pose forecasting tasks; a minimal masked-attention sketch follows this entry.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
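As a minimal illustration of joint-level attention gated by a visibility indicator (a simplified stand-in, not TRiPOD itself), the sketch below masks invisible joints out of the attention weights in plain PyTorch; the feature dimension, joint count, and visibility pattern are assumptions.

```python
# Hedged sketch: attention over body-joint features where a visibility indicator
# masks out invisible joints before the softmax.
import torch
import torch.nn as nn

class MaskedJointAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, joints, visible):
        # joints: (B, J, dim) per-joint features; visible: (B, J) 1.0 = visible, 0.0 = invisible
        scores = self.q(joints) @ self.k(joints).transpose(1, 2) / joints.shape[-1] ** 0.5
        scores = scores.masked_fill(visible[:, None, :] == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ self.v(joints)

# Toy usage: 2 sequences, 17 joints, 64-dim features; last 7 joints marked invisible.
x = torch.randn(2, 17, 64)
vis = torch.ones(2, 17)
vis[:, 10:] = 0.0
print(MaskedJointAttention()(x, vis).shape)  # torch.Size([2, 17, 64])
```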
- Adversarial Generative Grammars for Human Activity Prediction [141.43526239537502]
We propose an adversarial generative grammar model for future prediction.
Our grammar is designed so that it can learn production rules from the data distribution.
Being able to select multiple production rules during inference leads to different predicted outcomes.
arXiv Detail & Related papers (2020-08-11T17:47:53Z)
- A Graph-based Interactive Reasoning for Human-Object Interaction Detection [71.50535113279551]
We present a novel graph-based interactive reasoning model called Interactive Graph (abbr. in-Graph) to infer HOIs.
We construct a new framework, in-GraphNet, that assembles in-Graph models for detecting HOIs.
Our framework is end-to-end trainable and free from costly annotations like human pose.
arXiv Detail & Related papers (2020-07-14T09:29:03Z)
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction [57.56466850377598]
Reasoning over visual data is a desirable capability for robotics and vision-based applications.
In this paper, we present a graph-based framework to uncover relationships among different objects in the scene for reasoning about pedestrian intent.
Pedestrian intent, defined as the future action of crossing or not crossing the street, is a crucial piece of information for autonomous vehicles.
arXiv Detail & Related papers (2020-02-20T18:50:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.