Discovering Dynamic Salient Regions for Spatio-Temporal Graph Neural
Networks
- URL: http://arxiv.org/abs/2009.08427v3
- Date: Tue, 7 Dec 2021 12:41:45 GMT
- Title: Discovering Dynamic Salient Regions for Spatio-Temporal Graph Neural
Networks
- Authors: Iulia Duta and Andrei Nicolicioiu and Marius Leordeanu
- Abstract summary: We propose a graph neural network model that learns nodes that dynamically attach to well-delimited regions.
We show that it discovers regions that are well correlated with objects in the video.
In extensive ablation studies and experiments on two challenging datasets, we show superior performance to previous graph neural network models for video classification.
- Score: 14.040676498310198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks are perfectly suited to capture latent interactions
between various entities in the spatio-temporal domain (e.g. videos). However,
when an explicit structure is not available, it is not obvious what atomic
elements should be represented as nodes. Current works generally use
pre-trained object detectors or fixed, predefined regions to extract graph
nodes. Improving upon this, our proposed model learns nodes that dynamically
attach to well-delimited salient regions, which are relevant for a higher-level
task, without using any object-level supervision. Constructing these localized,
adaptive nodes gives our model inductive bias towards object-centric
representations and we show that it discovers regions that are well correlated
with objects in the video. In extensive ablation studies and experiments on two
challenging datasets, we show superior performance to previous graph neural
network models for video classification.
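As a loose illustration of the core idea only (not the authors' exact architecture), the PyTorch sketch below lets each graph node predict a soft spatial attention map over a frame's feature grid and pool its node feature from that region before simple message passing; the module names, sizes, and the mean-pooled message scheme are all illustrative assumptions.

```python
# Hypothetical sketch: graph nodes that softly attach to salient spatial regions.
# Illustration of the general idea, not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SalientRegionNodes(nn.Module):
    """Pools N node features from a CxHxW feature map via learned soft regions."""

    def __init__(self, in_channels: int, num_nodes: int, node_dim: int):
        super().__init__()
        # One attention logit map per node, predicted from the backbone features.
        self.attn = nn.Conv2d(in_channels, num_nodes, kernel_size=1)
        self.proj = nn.Linear(in_channels, node_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone features for one frame.
        B, C, H, W = feats.shape
        logits = self.attn(feats).flatten(2)        # (B, N, H*W)
        regions = logits.softmax(dim=-1)            # soft, well-delimited regions
        flat = feats.flatten(2).transpose(1, 2)     # (B, H*W, C)
        nodes = torch.bmm(regions, flat)            # (B, N, C) pooled node features
        return self.proj(nodes)                     # (B, N, node_dim)

class SimpleGraphLayer(nn.Module):
    """Fully-connected message passing over the extracted nodes."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (B, N, D); aggregate messages from all nodes by mean pooling.
        messages = self.msg(nodes).mean(dim=1, keepdim=True).expand_as(nodes)
        return F.relu(self.upd(torch.cat([nodes, messages], dim=-1)))

if __name__ == "__main__":
    feats = torch.randn(2, 256, 14, 14)             # e.g. per-frame CNN features
    extractor = SalientRegionNodes(256, num_nodes=8, node_dim=128)
    gnn = SimpleGraphLayer(128)
    out = gnn(extractor(feats))
    print(out.shape)                                # torch.Size([2, 8, 128])
```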
Related papers
- Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective [33.01636846541052]
Graph neural networks (GNNs) have shown superiority in many prediction tasks over graphs.
We investigate the functions of non-linearity in GNNs for node classification tasks.
arXiv Detail & Related papers (2022-07-22T19:36:12Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
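For context, a minimal sketch of the two graph constructions mentioned above (KNN vs. fully-connected), written in plain NumPy; the coordinate-based distance and the toy sizes are assumptions, not the paper's code.

```python
# Hypothetical sketch of the two graph constructions discussed above.
import numpy as np

def knn_edges(coords: np.ndarray, k: int) -> list[tuple[int, int]]:
    """Connect each node to its k nearest neighbors (excluding itself)."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                 # no self-loops
    nbrs = np.argsort(dists, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(coords)) for j in nbrs[i]]

def fully_connected_edges(n: int) -> list[tuple[int, int]]:
    """Every ordered pair of distinct nodes is an edge."""
    return [(i, j) for i in range(n) for j in range(n) if i != j]

if __name__ == "__main__":
    coords = np.random.rand(6, 3)                   # e.g. 3-D atom positions
    print(len(knn_edges(coords, k=2)))              # 12 directed edges
    print(len(fully_connected_edges(6)))            # 30 directed edges
```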
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as-is or simply make them undirected greatly affects the performance of the GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
We also provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model in the real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
arXiv Detail & Related papers (2021-01-06T03:05:37Z)
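A rough PyTorch sketch of the aggregation idea summarized above, scoring neighbors with attention, ordering them by score, and applying a 1D convolution over the ordered sequence; the module name, padding scheme, and sizes are hypothetical rather than the authors' implementation.

```python
# Hypothetical sketch: order a node's neighbors by attention score, then aggregate
# them with a 1-D convolution so each rank gets its own learned weight.
import torch
import torch.nn as nn

class AttentionSortConv(nn.Module):
    def __init__(self, dim: int, max_neighbors: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)              # attention on (center, neighbor)
        self.conv = nn.Conv1d(dim, dim, kernel_size=max_neighbors)
        self.max_neighbors = max_neighbors

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # center: (D,), neighbors: (K, D) with K <= max_neighbors.
        K, D = neighbors.shape
        pairs = torch.cat([center.expand(K, D), neighbors], dim=-1)
        scores = self.score(pairs).squeeze(-1)          # (K,) attention scores
        order = scores.argsort(descending=True)         # rank neighbors by relevance
        seq = neighbors[order]                          # most relevant neighbor first
        if K < self.max_neighbors:                      # pad to a fixed length
            pad = torch.zeros(self.max_neighbors - K, D)
            seq = torch.cat([seq, pad], dim=0)
        # Conv1d expects (batch, channels, length); one output position remains.
        return self.conv(seq.t().unsqueeze(0)).squeeze()  # (D,)

if __name__ == "__main__":
    layer = AttentionSortConv(dim=16, max_neighbors=5)
    out = layer(torch.randn(16), torch.randn(3, 16))
    print(out.shape)                                    # torch.Size([16])
```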
- CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z)
- CAGNN: Cluster-Aware Graph Neural Networks for Unsupervised Graph Representation Learning [19.432449825536423]
Unsupervised graph representation learning aims to learn low-dimensional node embeddings without supervision.
We present a novel cluster-aware graph neural network (CAGNN) model for unsupervised graph representation learning using self-supervised techniques.
arXiv Detail & Related papers (2020-09-03T13:57:18Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, performance typically degrades when many such layers are stacked to enlarge the receptive field; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
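A loose sketch of the kind of mechanism the summary describes: decouple feature transformation from propagation, gather representations from 0 to K hops, and fuse them with learned, node-wise weights; the gating and normalization details here are simplifying assumptions, not the exact DAGNN code.

```python
# Loose sketch: decouple transformation from propagation, then adaptively weight
# information gathered from 0..K hops to use a large receptive field.
import torch
import torch.nn as nn

class AdaptiveDepthGNN(nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int, hops: int):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_classes))
        self.gate = nn.Linear(num_classes, 1)      # per-node, per-hop retainment score
        self.hops = hops

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: (N, F) node features, adj_norm: (N, N) normalized adjacency with self-loops.
        z = self.transform(x)                      # transformation, no propagation yet
        reps = [z]
        for _ in range(self.hops):                 # propagate up to `hops` steps
            reps.append(adj_norm @ reps[-1])
        stack = torch.stack(reps, dim=1)           # (N, hops+1, C)
        scores = torch.sigmoid(self.gate(stack))   # (N, hops+1, 1), adaptive per hop
        return (scores * stack).sum(dim=1)         # (N, C) fused prediction

if __name__ == "__main__":
    N = 5
    adj = (torch.rand(N, N) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()            # toy symmetric graph
    adj.fill_diagonal_(1.0)                        # add self-loops
    deg_inv_sqrt = adj.sum(1).clamp(min=1).pow(-0.5)
    adj_norm = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
    model = AdaptiveDepthGNN(in_dim=8, hidden=16, num_classes=3, hops=4)
    print(model(torch.randn(N, 8), adj_norm).shape)   # torch.Size([5, 3])
```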
- Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose a node labeling function to identify the role of each node in the subgraph.
Based on the extracted features, we utilize gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
arXiv Detail & Related papers (2020-05-15T09:17:08Z)
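A very rough sketch of the pipeline outlined in the entry above (enclosing-subgraph extraction around a target edge, distance-based role labeling, and a GRU over per-snapshot descriptors); everything from the histogram features to the toy graphs is a simplified assumption rather than the paper's implementation.

```python
# Very rough sketch: label nodes in the h-hop enclosing subgraph of a target edge,
# pool a per-snapshot descriptor, then run a GRU over snapshots to score the edge.
import networkx as nx
import torch
import torch.nn as nn

def enclosing_subgraph_descriptor(G: nx.Graph, u: int, v: int, h: int) -> torch.Tensor:
    """One toy feature vector per snapshot: histogram of (dist-to-u, dist-to-v) roles."""
    du = nx.single_source_shortest_path_length(G, u, cutoff=h)
    dv = nx.single_source_shortest_path_length(G, v, cutoff=h)
    nodes = set(du) & set(dv)                       # h-hop enclosing subgraph of (u, v)
    hist = torch.zeros((h + 1) * (h + 1))
    for n in nodes:
        hist[du[n] * (h + 1) + dv[n]] += 1.0        # structural role of node n
    return hist

class TemporalEdgeScorer(nn.Module):
    def __init__(self, feat_dim: int, hidden: int):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, snapshot_feats: torch.Tensor) -> torch.Tensor:
        # snapshot_feats: (1, T, feat_dim) -> anomaly logit for the target edge.
        _, last = self.gru(snapshot_feats)
        return self.head(last[-1]).squeeze(-1)

if __name__ == "__main__":
    h = 2
    snaps = [nx.gnp_random_graph(20, 0.15, seed=s) for s in range(4)]  # graph snapshots
    feats = torch.stack([enclosing_subgraph_descriptor(G, 0, 1, h) for G in snaps])
    scorer = TemporalEdgeScorer(feat_dim=(h + 1) ** 2, hidden=16)
    print(scorer(feats.unsqueeze(0)))               # one anomaly logit for edge (0, 1)
```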