Leveraging Next-Active Objects for Context-Aware Anticipation in
Egocentric Videos
- URL: http://arxiv.org/abs/2308.08303v3
- Date: Thu, 5 Oct 2023 11:05:42 GMT
- Title: Leveraging Next-Active Objects for Context-Aware Anticipation in
Egocentric Videos
- Authors: Sanket Thakur, Cigdem Beyan, Pietro Morerio, Vittorio Murino, Alessio
Del Bue
- Abstract summary: We study the problem of Short-Term Object interaction anticipation (STA).
We propose NAOGAT, a multi-modal end-to-end transformer network, to guide the model to predict context-aware future actions.
Our model outperforms existing methods on two separate datasets.
- Score: 31.620555223890626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objects are crucial for understanding human-object interactions. By
identifying the relevant objects, one can also predict potential future
interactions or actions that may occur with these objects. In this paper, we
study the problem of Short-Term Object interaction anticipation (STA) and
propose NAOGAT (Next-Active-Object Guided Anticipation Transformer), a
multi-modal end-to-end transformer network, that attends to objects in observed
frames in order to anticipate the next-active-object (NAO) and, eventually, to
guide the model to predict context-aware future actions. The task is
challenging since it requires anticipating the future action, the object with
which the action will occur, and the time after which the interaction will
begin, a.k.a. the time to contact (TTC). Compared to existing video modeling
architectures for action anticipation, NAOGAT captures the relationship between
objects and the global scene context in order to predict detections for the
next active object and anticipate relevant future actions given these
detections, leveraging the objects' dynamics to improve accuracy. One of the
key strengths of our approach, in fact, is its ability to exploit the motion
dynamics of objects within a given clip, which is often ignored by other
models, and to decode object-centric and motion-centric information
separately. Through our experiments, we show that our model outperforms
existing methods on two separate datasets, Ego4D and EpicKitchens-100 ("Unseen
Set"), as measured by additional metrics such as time to contact and
next-active-object localization. The code will be available upon acceptance.
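Since the official code has not yet been released, the following is a minimal PyTorch-style sketch of how an STA head in the spirit of NAOGAT could be organized: object tokens and motion/context tokens are fused by a shared transformer encoder, and two separate decoders produce object-centric outputs (next-active-object box and noun) and motion-centric outputs (future verb and time to contact). All module names, dimensions, class counts, and the fusion scheme below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: an STA head in the spirit of NAOGAT.
# Dimensions, class counts, and token layout are assumptions, not the paper's code.
import torch
import torch.nn as nn


class STAHead(nn.Module):
    def __init__(self, d_model=256, num_nouns=87, num_verbs=74, num_queries=10):
        super().__init__()
        # Shared encoder fuses object tokens (per-frame detections) with
        # motion/context tokens (e.g. clip or optical-flow features).
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)

        # Two separate decoders: one object-centric, one motion-centric.
        self.object_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)
        self.motion_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=2)

        self.object_queries = nn.Embedding(num_queries, d_model)

        # Object-centric outputs: next-active-object box and noun class per query.
        self.box_head = nn.Linear(d_model, 4)               # (cx, cy, w, h), normalized
        self.noun_head = nn.Linear(d_model, num_nouns + 1)  # +1 for "no object"

        # Motion-centric outputs: future verb and time to contact (TTC).
        self.verb_head = nn.Linear(d_model, num_verbs)
        self.ttc_head = nn.Linear(d_model, 1)                # seconds until contact

    def forward(self, object_tokens, motion_tokens):
        # object_tokens: (B, N_obj, d_model) from an object detector on observed frames
        # motion_tokens: (B, N_frames, d_model) from a video / motion backbone
        memory = self.encoder(torch.cat([object_tokens, motion_tokens], dim=1))

        queries = self.object_queries.weight.unsqueeze(0).expand(
            object_tokens.size(0), -1, -1)
        obj_feats = self.object_decoder(queries, memory)   # object-centric path
        mot_feats = self.motion_decoder(queries, memory)   # motion-centric path

        return {
            "boxes": self.box_head(obj_feats).sigmoid(),   # (B, Q, 4)
            "nouns": self.noun_head(obj_feats),            # (B, Q, num_nouns + 1)
            "verbs": self.verb_head(mot_feats),            # (B, Q, num_verbs)
            "ttc": self.ttc_head(mot_feats).squeeze(-1),   # (B, Q)
        }


if __name__ == "__main__":
    head = STAHead()
    out = head(torch.randn(2, 16, 256), torch.randn(2, 8, 256))
    print({k: tuple(v.shape) for k, v in out.items()})
```

In practice the object tokens would come from a per-frame object detector over the observed clip, and training would combine box regression, classification, and TTC regression losses with a matching step between queries and ground-truth next-active objects; those details are omitted in this sketch.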
Related papers
- Short-term Object Interaction Anticipation with Disentangled Object Detection @ Ego4D Short Term Object Interaction Anticipation Challenge [11.429137967096935]
Short-term object interaction anticipation is an important task in egocentric video analysis.
Our proposed method, SOIA-DOD, effectively decomposes it into 1) detecting the active object and 2) classifying the interaction and predicting its timing.
Our method first detects all potential active objects in the last frame of the egocentric video by fine-tuning a pre-trained YOLOv9.
arXiv Detail & Related papers (2024-07-08T08:13:16Z)
- Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z)
- Controllable Human-Object Interaction Synthesis [77.56877961681462]
We propose Controllable Human-Object Interaction Synthesis (CHOIS) to generate synchronized object motion and human motion in 3D scenes.
Here, language descriptions inform style and intent, and waypoints, which can be effectively extracted from high-level planning, ground the motion in the scene.
Our module seamlessly integrates with a path planning module, enabling the generation of long-term interactions in 3D environments.
arXiv Detail & Related papers (2023-12-06T21:14:20Z)
- Object-centric Video Representation for Long-term Action Anticipation [33.115854386196126]
The key motivation is that objects provide important cues to recognize and predict human-object interactions.
We propose to build object-centric video representations by leveraging visual-language pretrained models.
To recognize and predict human-object interactions, we use a Transformer-based neural architecture.
arXiv Detail & Related papers (2023-10-31T22:54:31Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- Enhancing Next Active Object-based Egocentric Action Anticipation with Guided Attention [45.60789439017625]
Short-term action anticipation (STA) in first-person videos is a challenging task.
We propose a novel approach that applies a guided attention mechanism between objects.
Our method, GANO, is a multi-modal, end-to-end, single transformer-based network.
arXiv Detail & Related papers (2023-05-22T11:56:10Z)
- Object-Centric Video Prediction via Decoupling of Object Dynamics and Interactions [27.112210225969733]
We propose a novel framework for the task of object-centric video prediction, i.e., extracting the structure of a video sequence, as well as modeling object dynamics and interactions from visual observations.
With the goal of learning meaningful object representations, we propose two object-centric video predictor (OCVP) transformer modules, which decouple the processing of temporal dynamics and object interactions.
In our experiments, we show how our object-centric prediction framework utilizing our OCVP predictors outperforms object-agnostic video prediction models on two different datasets.
arXiv Detail & Related papers (2023-02-23T08:29:26Z)
- Anticipating Next Active Objects for Egocentric Videos [29.473527958651317]
This paper addresses the problem of anticipating the next-active-object location in the future, for a given egocentric video clip.
We propose a transformer-based self-attention framework to identify and locate the next-active-object in an egocentric clip.
arXiv Detail & Related papers (2023-02-13T13:44:52Z)
- H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding Object Articulations from Interactions [62.510951695174604]
"Hypothesize, Simulate, Act, Update, and Repeat" (H-SAUR) is a probabilistic generative framework that generates hypotheses about how objects articulate given input observations.
We show that the proposed model significantly outperforms the current state-of-the-art articulated object manipulation framework.
We further improve the test-time efficiency of H-SAUR by integrating a learned prior from learning-based vision models.
arXiv Detail & Related papers (2022-10-22T18:39:33Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, which consumes dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects.
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
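To make the idea of the last entry more concrete (fusing human motion with a per-object dynamic descriptor in a graph network), here is a small, self-contained graph-convolution sketch in the same spirit. The node layout, feature sizes, and prediction target are assumptions for illustration and do not reproduce the HO-GCN architecture.

```python
# Minimal sketch: fusing human-joint motion with an object's dynamic descriptor
# through graph convolutions. Node layout, feature sizes, and the prediction
# target are illustrative assumptions, not the HO-GCN design.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (B, N, in_dim) node features; adj: (N, N) adjacency with self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj / deg) @ x))  # mean-aggregate neighbours


class MotionObjectPredictor(nn.Module):
    """Nodes are human joints plus one object node carrying a dynamic descriptor."""

    def __init__(self, joint_dim=3, obj_dim=16, hidden=64):
        super().__init__()
        self.joint_proj = nn.Linear(joint_dim, hidden)
        self.obj_proj = nn.Linear(obj_dim, hidden)
        self.gcn1 = GraphConv(hidden, hidden)
        self.gcn2 = GraphConv(hidden, hidden)
        self.out = nn.Linear(hidden, joint_dim)  # e.g. per-joint future displacement

    def forward(self, joints, obj_desc, adj):
        # joints: (B, J, joint_dim); obj_desc: (B, obj_dim); adj: (J + 1, J + 1)
        obj_node = self.obj_proj(obj_desc).unsqueeze(1)         # (B, 1, hidden)
        x = torch.cat([self.joint_proj(joints), obj_node], 1)   # (B, J + 1, hidden)
        x = self.gcn2(self.gcn1(x, adj), adj)
        return self.out(x)[:, :-1]  # drop the object node, keep joint predictions


if __name__ == "__main__":
    J = 22
    adj = torch.ones(J + 1, J + 1)  # fully connected, for illustration only
    model = MotionObjectPredictor()
    print(model(torch.randn(2, J, 3), torch.randn(2, 16), adj).shape)  # (2, 22, 3)
```

A fully connected adjacency is used here purely for simplicity; a skeleton-based adjacency over the human joints, with additional edges to the object node, would be the more natural choice.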