RF-GNN: Random Forest Boosted Graph Neural Network for Social Bot
Detection
- URL: http://arxiv.org/abs/2304.08239v1
- Date: Fri, 14 Apr 2023 00:57:44 GMT
- Title: RF-GNN: Random Forest Boosted Graph Neural Network for Social Bot
Detection
- Authors: Shuhao Shi, Kai Qiao, Jie Yang, Baojie Song, Jian Chen, Bin Yan
- Abstract summary: The presence of a large number of bots on social media leads to adverse effects.
This paper proposes a Random Forest boosted Graph Neural Network for social bot detection, called RF-GNN.
- Score: 10.690802468726078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presence of a large number of bots on social media leads to adverse
effects. Although the random forest algorithm is widely used in bot detection and
can significantly enhance the performance of weak classifiers, it cannot
exploit the interactions between accounts. This paper proposes a Random Forest
boosted Graph Neural Network for social bot detection, called RF-GNN, which
employs graph neural networks (GNNs) as the base classifiers to construct a
random forest, effectively combining the advantages of ensemble learning and
GNNs to improve the accuracy and robustness of the model. Specifically,
different subgraphs are constructed as different training sets through node
sampling, feature selection, and edge dropout. Then, GNN base classifiers are
trained using various subgraphs, and the remaining features are used to train a
Fully Connected Neural Network (FCN). The outputs of the GNN and FCN are
aligned in each branch. Finally, the outputs of all branches are aggregated to
produce the final result. Moreover, RF-GNN is compatible with various
widely-used GNNs for node classification. Extensive experimental results
demonstrate that the proposed method obtains better performance than other
state-of-the-art methods.
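The subgraph-construction step described in the abstract (node sampling, feature selection, and edge dropout, with the unselected features routed to the FCN branch) can be sketched as follows. This is an illustrative simplification, not the paper's exact pipeline; the sampling fractions and function names are assumptions.

```python
import numpy as np

def make_subgraph(edges, X, rng, node_frac=0.8, feat_frac=0.6, edge_drop=0.2):
    """Build one randomized training view for a base classifier, in the
    random-forest style of diversification the abstract describes:
    node sampling, feature selection, and edge dropout.
    (Sketch only; the fractions here are assumed defaults.)"""
    n, d = X.shape
    # 1) Node sampling: keep a random subset of nodes for this branch.
    kept_nodes = rng.choice(n, size=int(n * node_frac), replace=False)
    keep_mask = np.zeros(n, dtype=bool)
    keep_mask[kept_nodes] = True
    # 2) Feature selection: split features into a subset for the GNN base
    #    classifier and the "remaining" subset for the FCN branch.
    perm = rng.permutation(d)
    n_gnn = int(d * feat_frac)
    gnn_feats, fcn_feats = perm[:n_gnn], perm[n_gnn:]
    # 3) Edge dropout: keep edges whose endpoints both survived node
    #    sampling, then randomly drop a fraction of those.
    surviving = [e for e in edges if keep_mask[e[0]] and keep_mask[e[1]]]
    kept_edges = [e for e in surviving if rng.random() > edge_drop]
    return kept_nodes, gnn_feats, fcn_feats, kept_edges

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8))          # 10 accounts, 8 features each
edges = [(i, (i + 1) % 10) for i in range(10)]
nodes, gnn_f, fcn_f, sub_edges = make_subgraph(edges, X, rng)
```

Each call yields a different (subgraph, feature-subset) pair, so the GNN base classifiers see decorrelated views of the data; their aligned outputs are then aggregated across branches, as in a random forest.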
Related papers
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed wide spread applications in graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push one step forward on the ensemble learning of GNNs with improved accuracy, robustness, and resistance to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z) - Higher-order Sparse Convolutions in Graph Neural Networks [17.647346486710514]
We introduce a new higher-order sparse convolution based on the Sobolev norm of graph signals.
S-SobGNN shows competitive performance in all applications as compared to several state-of-the-art methods.
arXiv Detail & Related papers (2023-02-21T08:08:18Z) - Over-Sampling Strategy in Feature Space for Graphs based
Class-imbalanced Bot Detection [10.882979272768502]
A large number of bots in Online Social Networks (OSN) leads to undesirable social effects.
We propose an over-sampling strategy for GNNs that generates samples for the minority class without edge synthesis.
The framework is evaluated using three real-world bot detection benchmark datasets.
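The over-sampling entry above generates minority-class samples in feature space so that no edges need to be synthesized for the new nodes. A generic sketch of that idea is SMOTE-style interpolation between minority-class node embeddings; the function below is an assumed illustration, not the paper's exact method.

```python
import numpy as np

def oversample_embeddings(Z, y, minority=1, rng=None):
    """Interpolate between minority-class node embeddings until the classes
    are balanced. Synthetic samples live purely in embedding space, so no
    graph edges have to be synthesized for them.
    (Illustrative sketch; names and details are assumptions.)"""
    rng = rng or np.random.default_rng(0)
    minority_idx = np.flatnonzero(y == minority)
    n_needed = int((y != minority).sum() - len(minority_idx))
    synth = []
    for _ in range(n_needed):
        i, j = rng.choice(minority_idx, size=2, replace=True)
        lam = rng.random()
        synth.append(Z[i] + lam * (Z[j] - Z[i]))  # convex combination
    if synth:
        Z = np.vstack([Z, np.array(synth)])
        y = np.concatenate([y, np.full(len(synth), minority)])
    return Z, y
```

The classifier head is then trained on the balanced (Z, y) while message passing still runs on the original, unmodified graph.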
arXiv Detail & Related papers (2023-02-14T08:35:33Z) - Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a class of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing becomes a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z) - Robust Graph Neural Networks using Weighted Graph Laplacian [1.8292714902548342]
Graph neural networks (GNNs) are vulnerable to noise and adversarial attacks in the input data.
We propose a generic framework for robustifying GNNs, known as Weighted Laplacian GNN (RWL-GNN).
arXiv Detail & Related papers (2022-08-03T05:36:35Z) - VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using
Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z) - Neural Network Branch-and-Bound for Neural Network Verification [26.609606492971967]
We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
arXiv Detail & Related papers (2021-07-27T14:42:57Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
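The denoising view in the entry above can be made concrete with the standard objective min_F ||F - X||_F^2 + c * tr(F^T L F), whose closed-form minimizer F* = (I + cL)^{-1} X acts as a low-pass graph filter. The sketch below is one textbook instance of that view under standard definitions, not the paper's specific model; the parameter c is free.

```python
import numpy as np

def laplacian(edges, n):
    """Combinatorial graph Laplacian L = D - A for an undirected edge list."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def denoise(X, edges, n, c=1.0):
    """Closed-form minimizer F* = (I + cL)^{-1} X of the graph denoising
    objective ||F - X||_F^2 + c * tr(F^T L F). Smooths node signals
    toward their neighbors, as aggregation in a GNN layer does."""
    L = laplacian(edges, n)
    return np.linalg.solve(np.eye(n) + c * L, X)
```

For any signal that is not already smooth on the graph (LX != 0), the output has strictly smaller smoothness penalty tr(F^T L F) than the input, which is the sense in which aggregation "denoises".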
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - Towards Deeper Graph Neural Networks with Differentiable Group
Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.