Deperturbation of Online Social Networks via Bayesian Label Transition
- URL: http://arxiv.org/abs/2010.14121v3
- Date: Tue, 18 Jan 2022 23:55:23 GMT
- Title: Deperturbation of Online Social Networks via Bayesian Label Transition
- Authors: Jun Zhuang, Mohammad Al Hasan
- Abstract summary: Online social networks (OSNs) classify users into different categories based on their online activities and interests.
A small number of users, so-called perturbators, may perform random activities on an OSN, which can significantly degrade the performance of a GCN-based node classification task.
We develop a GCN defense model, namely GraphLT, which uses the concept of label transition.
- Score: 5.037076816350975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online social networks (OSNs) classify users into different categories based on their online activities and interests, a task referred to as node classification. Such a task can be solved effectively using Graph Convolutional Networks (GCNs). However, a small number of users, so-called perturbators, may perform random activities on an OSN, which can significantly degrade the performance of a GCN-based node classification task. Existing works in this direction defend GCNs either by adversarial training or by identifying the attacker nodes and removing them. However, both approaches require that the attack patterns or attacker nodes be identified first, which is difficult when the number of perturbator nodes is very small. In this work, we develop a GCN defense model, namely GraphLT, which uses the concept of label transition. GraphLT assumes that perturbators' random activities degrade the GCN's performance; to overcome this, it applies a novel Bayesian label transition model that takes the GCN's predicted labels and infers label transitions via Gibbs sampling, thereby repairing the GCN's predictions and achieving better node classification. Extensive experiments on seven benchmark datasets show that GraphLT considerably enhances the performance of the node classifier in an unperturbed environment, and that it can successfully repair a perturbed GCN-based node classifier, outperforming several competing methods.
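The mechanics can be sketched compactly. Below is a minimal, illustrative Gibbs sampler for Bayesian label transition, assuming the GCN's softmax outputs serve as the prior over latent clean labels and a Dirichlet-categorical transition matrix links clean labels to the observed noisy predictions; all names and defaults are hypothetical, not the authors' released code.

```python
import numpy as np

def gibbs_label_transition(probs, y_pred, n_iters=200, alpha=1.0, seed=0):
    """Repair noisy predicted labels `y_pred` using the classifier's own
    softmax outputs `probs` (n x K) as a prior over latent clean labels."""
    rng = np.random.default_rng(seed)
    n, K = probs.shape
    z = probs.argmax(1).copy()               # initialize latent labels
    counts = np.full((K, K), alpha)          # Dirichlet pseudo-counts
    np.add.at(counts, (z, y_pred), 1.0)      # transition counts z -> y_pred
    tally = np.zeros((n, K))
    for t in range(n_iters):
        for i in range(n):
            counts[z[i], y_pred[i]] -= 1.0   # remove node i's contribution
            phi_col = counts[:, y_pred[i]] / counts.sum(1)  # P(y_pred[i] | z)
            p = probs[i] * phi_col           # prior x transition likelihood
            p /= p.sum()
            z[i] = rng.choice(K, p=p)        # resample node i's latent label
            counts[z[i], y_pred[i]] += 1.0
        if t >= n_iters // 2:                # collect samples after burn-in
            tally[np.arange(n), z] += 1.0
    return tally.argmax(1)                   # repaired labels
```

A typical call would pass the GCN's softmax matrix computed on the perturbed graph together with `y_pred = probs.argmax(1)`; the returned labels are the repaired predictions.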
Related papers
- Enhancing the Resilience of Graph Neural Networks to Topological Perturbations in Sparse Graphs [9.437128738619563]
We propose a novel label inference framework, TraTopo, which combines topology-driven label propagation, Bayesian label transitions, and link analysis via random walks.
TraTopo significantly surpasses its predecessors on sparse graphs by utilizing random walk sampling, specifically targeting isolated nodes for link prediction.
Empirical evaluations highlight TraTopo's superiority in node classification, significantly exceeding contemporary GCN models in accuracy.
arXiv Detail & Related papers (2024-06-05T09:40:08Z)
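As a rough illustration of the random-walk link analysis mentioned in the TraTopo summary above, the sketch below scores candidate link endpoints by the visit frequency of short random walks from a seed node; the function name, parameters, and the exact treatment of isolated nodes are assumptions, not TraTopo's actual procedure.

```python
import numpy as np

def random_walk_link_scores(adj, start, walk_len=10, n_walks=200, seed=0):
    """Score candidate neighbors of `start` by visit frequency of short
    random walks; `adj` maps node -> list of neighbor nodes."""
    rng = np.random.default_rng(seed)
    visits = {}
    for _ in range(n_walks):
        node = start
        for _ in range(walk_len):
            nbrs = adj.get(node, [])
            if not nbrs:                     # dead end: restart the walk
                break
            node = nbrs[rng.integers(len(nbrs))]
            visits[node] = visits.get(node, 0) + 1
    total = sum(visits.values()) or 1
    return {v: c / total for v, c in visits.items()}  # candidate link scores
```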
- Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision [5.037076816350975]
Graph Convolutional Networks (GCNs) achieve extraordinary accomplishments on the node classification task.
GCNs may be vulnerable to adversarial attacks on label-scarce dynamic graphs.
We propose a novel Bayesian self-supervision model, namely GraphSS, to address the issue.
arXiv Detail & Related papers (2022-03-07T22:57:43Z)
- Label-GCN: An Effective Method for Adding Label Propagation to Graph Convolutional Networks [1.8352113484137624]
We show that a modification of the first layer of a Graph Convolutional Network (GCN) can be used to effectively propagate label information across neighbor nodes.
This is done by selectively eliminating self-loops for the label features during the training phase of a GCN.
We show through several experiments that, depending on how many labels are available during the inference phase, this strategy can lead to a substantial improvement in the model performance.
arXiv Detail & Related papers (2021-04-05T21:02:48Z)
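The self-loop trick described in the Label-GCN summary lends itself to a compact sketch: label features are propagated with a self-loop-free adjacency so a node never sees its own training label, while ordinary node features keep self-loops. This is a hypothetical dense-matrix rendering, not the paper's implementation.

```python
import numpy as np

def label_gcn_first_layer(A, X, Y_onehot, W_x, W_y):
    """First GCN layer with label inputs: node features keep self-loops,
    label features do not, so a node cannot copy its own label."""
    n = A.shape[0]
    def row_norm(M):
        d = M.sum(1, keepdims=True)
        d[d == 0] = 1.0                      # guard isolated nodes
        return M / d
    A_feat = row_norm(A + np.eye(n))         # self-loops kept for features
    A_lab = row_norm(A * (1.0 - np.eye(n)))  # self-loops removed for labels
    H = A_feat @ X @ W_x + A_lab @ Y_onehot @ W_y
    return np.maximum(H, 0.0)                # ReLU
```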
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that coupling is inferior to decoupling, which better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named propagation then training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
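For context, decoupling separates propagation from training: smoothed features are precomputed once and a plain classifier is then trained on them. The sketch below shows only this decoupled propagation step under assumed names; PTA's adaptive pseudo-label weighting is omitted.

```python
import numpy as np

def decoupled_propagation(A, X, k=10):
    """Precompute k-step feature smoothing S^k X with the symmetrically
    normalized adjacency; any plain classifier can then be trained on it."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    H = X.copy()
    for _ in range(k):
        H = S @ H                            # propagation, no training
    return H
```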
- Graph Convolutional Neural Networks with Node Transition Probability-based Message Passing and DropNode Regularization [32.260055351563324]
Graph convolutional neural networks (GCNNs) have received much attention recently, owing to their capability in handling graph-structured data.
This work presents a new method to improve the message passing process based on node transition probabilities.
We also propose a novel regularization method termed DropNode to address the over-fitting and over-smoothing issues simultaneously.
arXiv Detail & Related papers (2020-08-28T10:51:03Z)
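A guess at the DropNode idea from the summary alone: unlike dropout on individual feature entries, entire node rows are zeroed during training, with survivors rescaled to preserve the expected magnitude. Names and the rescaling choice are assumptions, not the paper's exact formulation.

```python
import numpy as np

def drop_node(X, drop_rate=0.1, rng=None):
    """Zero out whole node feature rows at random during training and
    rescale survivors so the expected feature magnitude is unchanged."""
    rng = rng or np.random.default_rng()
    keep = rng.random(X.shape[0]) >= drop_rate   # sample nodes to keep
    return X * keep[:, None] / (1.0 - drop_rate)
```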
- Investigating and Mitigating Degree-Related Biases in Graph Convolutional Networks [62.8504260693664]
Graph Convolutional Networks (GCNs) show promising results for semi-supervised learning tasks on graphs.
In this paper, we analyze GCNs in regard to the node degree distribution.
We develop a novel Self-Supervised Degree-Specific GCN (SL-DSGC) that mitigates the degree biases of GCNs.
arXiv Detail & Related papers (2020-06-28T16:26:47Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolutional Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
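One common way to realize "select the unlabelled examples which are sufficiently different from labelled ones" is greedy farthest-first (core-set style) selection in embedding space, sketched below; the paper's GCN-based selection differs in detail, and all names here are illustrative.

```python
import numpy as np

def select_batch(emb, labelled, unlabelled, budget):
    """Greedy farthest-first selection: repeatedly pick the unlabelled node
    whose embedding is farthest from every labelled (or chosen) node."""
    pool = list(unlabelled)
    L = emb[labelled]                        # labelled embeddings so far
    chosen = []
    for _ in range(budget):
        U = emb[pool]
        d = np.linalg.norm(U[:, None, :] - L[None, :, :], axis=-1).min(1)
        pick = pool.pop(int(d.argmax()))     # farthest node from labelled set
        chosen.append(pick)
        L = np.vstack([L, emb[pick]])        # treat it as labelled from now on
    return chosen
```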
- Understanding and Resolving Performance Degradation in Graph Convolutional Networks [105.14867349802898]
Graph Convolutional Network (GCN) stacks several layers and in each layer performs a PROPagation operation (PROP) and a TRANsformation operation (TRAN) for learning node representations over graph-structured data.
GCNs tend to suffer a performance drop as the model gets deeper.
We study performance degradation of GCNs by experimentally examining how stacking only TRANs or PROPs works.
arXiv Detail & Related papers (2020-06-12T12:12:12Z)
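The PROP/TRAN decomposition described above can be written down directly; a standard GCN layer composes the two, and stacking only one kind, as the paper's experiments do, isolates its contribution. A minimal sketch with assumed names:

```python
import numpy as np

def prop(S, H):
    """PROP: smooth node representations over the normalized adjacency S."""
    return S @ H

def tran(H, W):
    """TRAN: pointwise feature transformation with a nonlinearity."""
    return np.maximum(H @ W, 0.0)

# A standard GCN layer composes the two: H_next = tran(prop(S, H), W).
```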
- AN-GCN: An Anonymous Graph Convolutional Network Defense Against Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
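For reference, plain label propagation (LPA) iteratively diffuses clamped training labels over the graph; the paper unifies this with a GCN and learns edge weights, which this minimal, assumption-laden sketch omits.

```python
import numpy as np

def label_propagation(S, Y_onehot, labelled_mask, n_iters=50):
    """Diffuse labels with the normalized adjacency S, clamping the known
    training labels after every step."""
    Y = Y_onehot * labelled_mask[:, None]    # zero out unlabelled rows
    for _ in range(n_iters):
        Y = S @ Y
        Y[labelled_mask] = Y_onehot[labelled_mask]  # clamp known labels
    return Y.argmax(1)
```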