Sequential Graph Convolutional Network for Active Learning
- URL: http://arxiv.org/abs/2006.10219v3
- Date: Thu, 1 Apr 2021 16:18:44 GMT
- Title: Sequential Graph Convolutional Network for Active Learning
- Authors: Razvan Caramalau, Binod Bhattarai, Tae-Kyun Kim
- Abstract summary: We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
- Score: 53.99104862192055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel pool-based Active Learning framework constructed on a
sequential Graph Convolution Network (GCN). Each image's feature from a pool of
data represents a node in the graph and the edges encode their similarities.
With a small number of randomly sampled images as seed labelled examples, we
learn the parameters of the graph to distinguish labelled vs unlabelled nodes
by minimising the binary cross-entropy loss. GCN performs message-passing
operations between the nodes, and hence, induces similar representations of the
strongly associated nodes. We exploit these characteristics of GCN to select
the unlabelled examples which are sufficiently different from labelled ones. To
this end, we utilise the graph node embeddings and their confidence scores and
adapt sampling techniques such as CoreSet and uncertainty-based methods to
query the nodes. We flip the label of newly queried nodes from unlabelled to
labelled, re-train the learner to optimise the downstream task and the graph to
minimise its modified objective. We continue this process within a fixed
budget. We evaluate our method on 6 different benchmarks: 4 real image
classification, 1 depth-based hand pose estimation and 1 synthetic RGB image
classification datasets. Our method outperforms several competitive baselines
such as VAAL, Learning Loss and CoreSet, and attains new state-of-the-art
performance on multiple applications. The implementations can be found here:
https://github.com/razvancaramalau/Sequential-GCN-for-Active-Learning
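
To make the selection step concrete, below is a minimal, self-contained sketch of the idea in plain NumPy: build a similarity graph over the pooled image features, train a small two-layer GCN with a binary cross-entropy objective to tell labelled from unlabelled nodes, then query the unlabelled nodes the GCN scores as most different from the labelled set. The kNN graph construction, layer sizes, hyperparameters and function names are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Simplified sketch of GCN-based active-learning selection (UncertainGCN-style).
# All names and hyperparameters here are assumptions for illustration only.
import numpy as np

def knn_adjacency(feats, k=10):
    """Symmetric, self-looped, degree-normalised adjacency built from feature similarity."""
    sim = feats @ feats.T                      # cosine-like similarity (features assumed L2-normalised)
    n = len(feats)
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(-sim[i])[1:k + 1]    # k most similar nodes, skipping the node itself
        adj[i, nbrs] = 1.0
    adj = np.maximum(adj, adj.T) + np.eye(n)   # symmetrise and add self-loops
    d = adj.sum(1)
    return adj / np.sqrt(np.outer(d, d))       # D^{-1/2} A D^{-1/2}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_gcn(adj, feats, labelled_mask, hidden=64, lr=0.05, epochs=200, seed=0):
    """Two-layer GCN trained with binary cross-entropy to separate labelled (1) from unlabelled (0) nodes."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 0.1, (feats.shape[1], hidden))
    w2 = rng.normal(0.0, 0.1, (hidden, 1))
    y = labelled_mask.astype(float)[:, None]
    for _ in range(epochs):
        h = np.maximum(adj @ feats @ w1, 0.0)  # message passing + ReLU
        p = sigmoid(adj @ h @ w2)              # per-node confidence of being labelled
        grad_logits = (p - y) / len(y)         # BCE gradient w.r.t. the logits
        gw2 = h.T @ (adj.T @ grad_logits)
        gh = (adj.T @ grad_logits) @ w2.T * (h > 0)
        gw1 = feats.T @ (adj.T @ gh)
        w1 -= lr * gw1
        w2 -= lr * gw2
    h = np.maximum(adj @ feats @ w1, 0.0)
    return sigmoid(adj @ h @ w2).ravel(), h    # confidences and node embeddings

def query_most_unlabelled(conf, labelled_mask, budget):
    """Query the unlabelled nodes the GCN is most confident are unlabelled,
    i.e. most different from the labelled set. The paper's UncertainGCN also
    uses a sampling margin; this is a simplification."""
    scores = conf.copy()
    scores[labelled_mask] = np.inf             # never re-query already labelled nodes
    return np.argsort(scores)[:budget]

# Toy usage: random vectors stand in for learner features of the image pool.
rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 32))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labelled = np.zeros(200, dtype=bool)
labelled[rng.choice(200, 20, replace=False)] = True   # small random seed of labelled examples
adj = knn_adjacency(feats)
conf, emb = train_gcn(adj, feats, labelled)
query = query_most_unlabelled(conf, labelled, budget=10)
labelled[query] = True                                 # flip queried nodes to labelled, then re-train
```

In the full method, the returned node embeddings can instead feed a CoreSet-style selection (the paper's CoreGCN), and the cycle of re-training the learner and the GCN repeats until the labelling budget is exhausted.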
Related papers
- SimMatchV2: Semi-Supervised Learning with Graph Consistency [53.31681712576555]
We introduce a new semi-supervised learning algorithm - SimMatchV2.
It formulates various consistency regularizations between labeled and unlabeled data from the graph perspective.
SimMatchV2 has been validated on multiple semi-supervised learning benchmarks.
arXiv Detail & Related papers (2023-08-13T05:56:36Z) - A Simple and Scalable Graph Neural Network for Large Directed Graphs [11.792826520370774]
We investigate various combinations of node representations and edge direction awareness within an input graph.
Based on this, we propose a simple yet holistic classification method, A2DUG.
We demonstrate that A2DUG performs stably well on various datasets and improves accuracy by up to 11.29 compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-06-14T06:24:58Z) - NESS: Node Embeddings from Static SubGraphs [0.0]
We present a framework for learning Node Embeddings from Static Subgraphs (NESS) using a graph autoencoder (GAE) in a transductive setting.
NESS is based on two key ideas: i) Partitioning the training graph into multiple static, sparse subgraphs with non-overlapping edges using random edge split during data pre-processing.
We demonstrate that NESS gives a better node representation for link prediction tasks compared to current autoencoding methods that use either the whole graph or subgraphs.
arXiv Detail & Related papers (2023-03-15T22:14:28Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive
Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z) - Learning Hierarchical Graph Neural Networks for Image Clustering [81.5841862489509]
We propose a hierarchical graph neural network (GNN) model that learns how to cluster a set of images into an unknown number of identities.
Our hierarchical GNN uses a novel approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level.
arXiv Detail & Related papers (2021-07-03T01:28:42Z) - Residual Network and Embedding Usage: New Tricks of Node Classification
with Graph Convolutional Networks [0.38233569758620045]
We first summarize some existing effective tricks used in mini-batch training of GCNs.
Based on this, two novel tricks named GCN_res Framework and Embedding Usage are proposed.
Experiments on Open Graph Benchmark show that, by combining these techniques, the test accuracy of various GCNs increases by 1.21%-2.84%.
arXiv Detail & Related papers (2021-05-18T07:52:51Z) - Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and have shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
arXiv Detail & Related papers (2020-11-19T06:03:35Z) - Inverse Graph Identification: Can We Identify Node Labels Given Graph
Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing with a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.