A-ACT: Action Anticipation through Cycle Transformations
- URL: http://arxiv.org/abs/2204.00942v1
- Date: Sat, 2 Apr 2022 21:50:45 GMT
- Title: A-ACT: Action Anticipation through Cycle Transformations
- Authors: Akash Gupta, Jingen Liu, Liefeng Bo, Amit K. Roy-Chowdhury, Tao Mei
- Abstract summary: We take a step back to analyze how the human capability to anticipate the future can be transferred to machine learning algorithms.
A recent study on human psychology explains that, in anticipating an occurrence, the human brain relies on both systems: predicting from past experience and simulating scenarios from present cues.
In this work, we study the impact of each system for the task of action anticipation and introduce a paradigm to integrate them in a learning framework.
- Score: 89.83027919085289
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While action anticipation has garnered a lot of research interest recently,
most of the works focus on anticipating future action directly through observed
visual cues only. In this work, we take a step back to analyze how the human
capability to anticipate the future can be transferred to machine learning
algorithms. To incorporate this ability into intelligent systems, a question
worth pondering is: how exactly do we anticipate? Is it by anticipating future
actions from past experiences? Or is it by simulating possible scenarios based
on cues from the present? A recent study on human psychology explains that, in
anticipating an occurrence, the human brain counts on both systems. In this
work, we study the impact of each system for the task of action anticipation
and introduce a paradigm to integrate them in a learning framework. We believe
that intelligent systems designed by leveraging the psychological anticipation
models will do a more nuanced job at the task of human action prediction.
Furthermore, we introduce cyclic transformations in the temporal dimension, in
both feature and semantic label space, to instill the human ability to reason
about past actions based on the predicted future. Experiments on the
EPIC-Kitchens, Breakfast, and 50Salads datasets demonstrate that the action anticipation model
learned using a combination of the two systems along with the cycle
transformation performs favorably against various state-of-the-art approaches.
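The abstract does not spell out the cycle transformation, but its core idea can be sketched: map observed past features forward to a predicted future, map that prediction back to the past, and penalize disagreement with the original observation. Below is a minimal illustration with linear stand-ins for the forward (anticipation) and backward (reasoning) modules; all names, shapes, and the linear maps are hypothetical, since the paper's actual modules are learned video networks.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                                   # feature dimension (hypothetical)
W_fwd = rng.normal(size=(D, D)) * 0.1   # past features -> predicted future
W_bwd = rng.normal(size=(D, D)) * 0.1   # predicted future -> reconstructed past

def anticipate(past):
    """Forward pass: predict a future feature from an observed past feature."""
    return past @ W_fwd

def reason_back(future):
    """Backward pass: reconstruct the past feature from a predicted future."""
    return future @ W_bwd

def cycle_loss(past):
    """Cycle transformation: past -> future -> past, penalizing drift."""
    reconstructed = reason_back(anticipate(past))
    return float(np.mean((past - reconstructed) ** 2))

past_feat = rng.normal(size=(4, D))     # a batch of observed clip features
loss = cycle_loss(past_feat)
print(round(loss, 4))
```

Minimizing such a loss jointly with the anticipation objective encourages the predicted future to retain enough information to recover the observed past, which is one plausible reading of the cycle consistency the abstract describes.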
Related papers
- Human Action Anticipation: A Survey [86.415721659234]
The literature on behavior prediction spans various tasks, including action anticipation, activity forecasting, intent prediction, goal prediction, and so on.
Our survey aims to tie together this fragmented literature, covering recent technical innovations as well as the development of new large-scale datasets for model training and evaluation.
arXiv Detail & Related papers (2024-10-17T21:37:40Z)
- A Neural Active Inference Model of Perceptual-Motor Learning [62.39667564455059]
The active inference framework (AIF) is a promising new computational framework grounded in contemporary neuroscience.
In this study, we test the ability for the AIF to capture the role of anticipation in the visual guidance of action in humans.
We present a novel formulation of the prior function that maps a multi-dimensional world-state to a uni-dimensional distribution of free-energy.
arXiv Detail & Related papers (2022-11-16T20:00:38Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast-weights scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias for modeling the character traits of agents and hence improve mind-reading ability.
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several future motions given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Visual Perspective Taking for Opponent Behavior Modeling [22.69165968663182]
We propose an end-to-end long-term visual prediction framework for robots.
We demonstrate our approach in the context of visual hide-and-seek.
We suggest that visual behavior modeling and perspective taking skills will play a critical role in the ability of physical robots to fully integrate into real-world multi-agent activities.
arXiv Detail & Related papers (2021-05-11T16:02:32Z)
- Learning to Anticipate Egocentric Actions by Imagination [60.21323541219304]
We study the egocentric action anticipation task, which predicts future actions seconds before they are performed in egocentric videos.
Our method significantly outperforms previous methods on both the seen test set and the unseen test set of the EPIC Kitchens Action Anticipation Challenge.
arXiv Detail & Related papers (2021-01-13T08:04:10Z)
- From Recognition to Prediction: Analysis of Human Action and Trajectory Prediction in Video [4.163207534602723]
Deciphering human behaviors to predict their future paths/trajectories is important, yet human trajectory prediction remains a challenging task.
A system must be able to detect and analyze human activities as well as scene semantics.
arXiv Detail & Related papers (2020-11-20T22:23:34Z) - Knowledge Distillation for Action Anticipation via Label Smoothing [21.457069042129138]
The human capability to anticipate the near future from visual observations and non-verbal cues is essential for developing intelligent systems.
We implement a multi-modal framework based on long short-term memory (LSTM) networks to summarize past observations and make predictions at different time steps.
Experiments show that label smoothing systematically improves performance of state-of-the-art models for action anticipation.
arXiv Detail & Related papers (2020-04-16T15:38:53Z)
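As a concrete illustration of the label-smoothing idea in that last entry, here is a minimal sketch of one common smoothing scheme: the true action class keeps most of the probability mass and the remainder is spread uniformly over the other classes. The exact variant used by the paper may differ; the `eps` value and the uniform spread are assumptions.

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Soften one-hot action labels: the true class keeps 1 - eps,
    and the eps mass is spread uniformly over the other classes."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + (1.0 - one_hot) * eps / (num_classes - 1)

# Two hypothetical samples with action labels 2 and 0, out of 4 classes.
targets = smooth_labels(np.array([2, 0]), num_classes=4, eps=0.1)
print(targets)
```

Training with a cross-entropy loss against these softened targets discourages overconfident predictions, which is especially relevant in anticipation, where several future actions may be plausible from the same observation.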
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.