A Hybrid Graph Network for Complex Activity Detection in Video
- URL: http://arxiv.org/abs/2310.17493v2
- Date: Mon, 30 Oct 2023 15:47:44 GMT
- Title: A Hybrid Graph Network for Complex Activity Detection in Video
- Authors: Salman Khan, Izzeddin Teeti, Andrew Bradley, Mohamed Elhoseiny, Fabio
Cuzzolin
- Abstract summary: Complex Activity Detection (CompAD) extends analysis to long-term activities.
We propose a hybrid graph neural network which combines attention applied to a graph encoding the local (short-term) dynamic scene with a temporal graph modelling the overall long-duration activity.
- Score: 40.843533889724924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretation and understanding of video presents a challenging computer
vision task in numerous fields - e.g. autonomous driving and sports analytics.
Existing approaches to interpreting the actions taking place within a video
clip are based upon Temporal Action Localisation (TAL), which typically
identifies short-term actions. The emerging field of Complex Activity Detection
(CompAD) extends this analysis to long-term activities, with a deeper
understanding obtained by modelling the internal structure of a complex
activity taking place within the video. We address the CompAD problem using a
hybrid graph neural network which combines attention applied to a graph
encoding the local (short-term) dynamic scene with a temporal graph modelling
the overall long-duration activity. Our approach is as follows: i) First, we
propose a novel feature extraction technique which, for each video snippet,
generates spatiotemporal `tubes' for the active elements (`agents') in the
(local) scene by detecting individual objects, tracking them and then
extracting 3D features from all the agent tubes as well as the overall scene.
ii) Next, we construct a local scene graph where each node (representing either
an agent tube or the scene) is connected to all other nodes. Attention is then
applied to this graph to obtain an overall representation of the local dynamic
scene. iii) Finally, all local scene graph representations are interconnected
via a temporal graph, to estimate the complex activity class together with its
start and end time. The proposed framework outperforms all previous
state-of-the-art methods on all three datasets: ActivityNet-1.3, Thumos-14, and ROAD.
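As a rough illustration of steps ii) and iii), the minimal PyTorch sketch below applies multi-head self-attention to a fully connected local scene graph (agent-tube and scene nodes) and then runs a simple temporal model over the per-snippet representations to produce class and start/end scores. The module names, feature dimensions, mean-pooling readout, and the bidirectional GRU standing in for the temporal graph are all assumptions made for illustration, not the authors' implementation.

```python
# Sketch of the hybrid graph idea: attention over a fully connected local
# scene graph per snippet, then a temporal model over snippet representations.
import torch
import torch.nn as nn


class LocalSceneGraph(nn.Module):
    """Self-attention over a fully connected graph of agent-tube and scene
    nodes for one snippet; the pooled output represents the local scene."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, n_nodes, dim) -- 3D features of agent tubes + scene
        attended, _ = self.attn(nodes, nodes, nodes)
        return self.norm(nodes + attended).mean(dim=1)


class TemporalGraph(nn.Module):
    """Propagates information along the chain of snippet nodes and predicts,
    per snippet, activity-class logits plus start/end logits (a simple
    stand-in for temporal localisation)."""

    def __init__(self, dim: int = 256, num_classes: int = 10):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.cls_head = nn.Linear(2 * dim, num_classes)
        self.boundary_head = nn.Linear(2 * dim, 2)  # start / end logits

    def forward(self, snippets: torch.Tensor):
        # snippets: (batch, n_snippets, dim)
        h, _ = self.rnn(snippets)
        return self.cls_head(h), self.boundary_head(h)


# Toy usage: 2 videos, 8 snippets each, 5 nodes (4 agent tubes + 1 scene).
batch, n_snippets, n_nodes, dim = 2, 8, 5, 256
local, temporal = LocalSceneGraph(dim), TemporalGraph(dim, num_classes=10)
node_feats = torch.randn(batch * n_snippets, n_nodes, dim)
snippet_repr = local(node_feats).view(batch, n_snippets, dim)
cls_logits, boundary_logits = temporal(snippet_repr)
print(cls_logits.shape, boundary_logits.shape)  # (2, 8, 10) and (2, 8, 2)
```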
Related papers
- Open-Vocabulary Octree-Graph for 3D Scene Understanding [54.11828083068082]
Octree-Graph is a novel scene representation for open-vocabulary 3D scene understanding.
An adaptive-octree structure is developed that stores semantics and represents an object's occupancy adjustably according to its shape.
arXiv Detail & Related papers (2024-11-25T10:14:10Z)
- TESGNN: Temporal Equivariant Scene Graph Neural Networks for Efficient and Robust Multi-View 3D Scene Understanding [8.32401190051443]
We present the first implementation of an Equivariant Scene Graph Neural Network (ESGNN) to generate semantic scene graphs from 3D point clouds.
Our combined architecture, termed the Temporal Equivariant Scene Graph Neural Network (TESGNN), not only surpasses existing state-of-the-art methods in scene estimation accuracy but also achieves faster convergence.
arXiv Detail & Related papers (2024-11-15T15:39:04Z)
- Spatiotemporal Event Graphs for Dynamic Scene Understanding [14.735329256577101]
We present a series of frameworks for dynamic scene understanding, starting from an autonomous driving perspective and extending to complex video activity detection.
We propose a hybrid graph neural network that combines attention applied to a graph encoding the local (short-term) dynamic scene with a temporal graph modelling overall long-term activity.
arXiv Detail & Related papers (2023-12-11T22:30:13Z)
- Local-Global Information Interaction Debiasing for Dynamic Scene Graph Generation [51.92419880088668]
We propose a novel DynSGG model based on multi-task learning, DynSGG-MTL, which introduces local interaction information and global human-action interaction information.
Long-term human actions supervise the model to generate multiple scene graphs that conform to the global constraints, so that the model does not fail to learn the tail predicates.
arXiv Detail & Related papers (2023-08-10T01:24:25Z)
- From random-walks to graph-sprints: a low-latency node embedding framework on continuous-time dynamic graphs [4.372841335228306]
We propose a framework for continuous-time dynamic graphs (CTDGs) that has low latency and is competitive with state-of-the-art, higher-latency models.
In our framework, time-aware node embeddings summarizing multi-hop information are computed using only single-hop operations on the incoming edges.
We demonstrate that our graph-sprints features, combined with a machine learning classifier, achieve competitive performance (a toy sketch of the single-hop update idea appears after this list).
arXiv Detail & Related papers (2023-07-17T12:25:52Z)
- Spatial-Temporal Correlation and Topology Learning for Person Re-Identification in Videos [78.45050529204701]
We propose a novel framework to pursue discriminative and robust representation by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and physical connections of human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models spatio-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- Understanding Dynamic Scenes using Graph Convolution Networks [22.022759283770377]
We present a novel framework to model on-road vehicle behaviors from a sequence of temporally ordered frames as captured by a moving camera.
We show a seamless transfer of learning to multiple datasets without resorting to fine-tuning.
Such behavior prediction methods find immediate relevance in a variety of navigation tasks.
arXiv Detail & Related papers (2020-05-09T13:05:06Z)
- Joint Visual-Temporal Embedding for Unsupervised Learning of Actions in Untrimmed Sequences [25.299599341774204]
This paper proposes an approach for the unsupervised learning of actions in untrimmed video sequences based on a joint visual-temporal embedding space.
We show that the proposed approach is able to provide a meaningful visual and temporal embedding out of the visual cues present in contiguous video frames.
arXiv Detail & Related papers (2020-01-29T22:51:06Z)
- Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks [150.5425122989146]
This work proposes a novel attentive graph neural network (AGNN) for zero-shot video object segmentation (ZVOS).
AGNN builds a fully connected graph to efficiently represent frames as nodes, and relations between arbitrary frame pairs as edges.
Experimental results on three video segmentation datasets show that AGNN sets a new state-of-the-art in each case.
arXiv Detail & Related papers (2020-01-19T10:45:27Z)
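The graph-sprints entry above describes time-aware node embeddings that summarise multi-hop information using only single-hop operations on incoming edges. The toy Python sketch below illustrates that principle with a time-decayed streaming update: each incoming edge refreshes the destination node from the source node's current embedding, so multi-hop history accumulates across successive events. The decay constant, blending weights, and data layout are invented for illustration and do not reproduce the paper's actual algorithm.

```python
# Toy single-hop streaming update in the spirit of graph-sprints: the
# destination embedding decays with elapsed time, then blends in the source
# node's current embedding and the edge features.
import math
from collections import defaultdict


class StreamingNodeEmbeddings:
    def __init__(self, dim: int = 4, tau: float = 3600.0):
        self.dim = dim
        self.tau = tau  # time-decay constant in seconds (an assumption)
        self.emb = defaultdict(lambda: [0.0] * dim)
        self.last_seen: dict[int, float] = {}

    def update(self, src: int, dst: int, t: float, feat: list[float]) -> None:
        # Single-hop work per event; multi-hop context arrives implicitly
        # because the source embedding already summarises its own history.
        dt = t - self.last_seen.get(dst, t)
        decay = math.exp(-dt / self.tau)
        self.emb[dst] = [
            decay * d + 0.5 * s + 0.5 * f
            for d, s, f in zip(self.emb[dst], self.emb[src], feat)
        ]
        self.last_seen[dst] = t


store = StreamingNodeEmbeddings(dim=4)
store.update(src=1, dst=2, t=10.0, feat=[1.0, 0.0, 0.0, 0.0])
store.update(src=2, dst=3, t=20.0, feat=[0.0, 1.0, 0.0, 0.0])
print(store.emb[3])  # node 3 reflects the earlier 1->2 edge via single hops
```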