LOSS-GAT: Label Propagation and One-Class Semi-Supervised Graph
Attention Network for Fake News Detection
- URL: http://arxiv.org/abs/2402.08401v1
- Date: Tue, 13 Feb 2024 12:02:37 GMT
- Title: LOSS-GAT: Label Propagation and One-Class Semi-Supervised Graph
Attention Network for Fake News Detection
- Authors: Batool Lakzaei and Mostafa Haghir Chehreghani and Alireza Bagheri
- Abstract summary: LOSS-GAT is a semi-supervised and one-class approach for fake news detection.
We employ a two-step label propagation algorithm to categorize news into two groups: interest (fake) and non-interest (real).
- Score: 2.6396287656676725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of widespread social networks, the rapid dissemination of fake
news has emerged as a significant threat, inflicting detrimental consequences
across various dimensions of people's lives. Machine learning and deep learning
approaches have been extensively employed for identifying fake news. However, a
significant challenge in identifying fake news is the limited availability of
labeled news datasets. Therefore, the One-Class Learning (OCL) approach,
utilizing only a small set of labeled data from the interest class, can be a
suitable approach to address this challenge. On the other hand, representing
data as a graph enables access to diverse content and structural information,
and label propagation methods on graphs can be effective in predicting node
labels. In this paper, we adopt a graph-based model for data representation and
introduce a semi-supervised and one-class approach for fake news detection,
called LOSS-GAT. Initially, we employ a two-step label propagation algorithm,
utilizing Graph Neural Networks (GNNs) as an initial classifier to categorize
news into two groups: interest (fake) and non-interest (real). Subsequently, we
enhance the graph structure using structural augmentation techniques.
Ultimately, we predict the final labels for all unlabeled data using a GNN that
induces randomness within the local neighborhood of nodes through the
aggregation function. We evaluate our proposed method on five common datasets
and compare the results against a set of baseline models, including both OCL
and binary labeled models. The results demonstrate that LOSS-GAT achieves a
notable improvement of more than 10%, while utilizing only a limited set of
labeled fake news. Notably, LOSS-GAT even outperforms the binary labeled models.
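As a rough illustration of the three stages described in the abstract, a minimal NumPy/NetworkX sketch is given below. It is not the authors' implementation; the graph construction, quantile thresholds, cosine-similarity augmentation, and drop probability are all simplifying assumptions.

```python
# Minimal sketch of the three stages described in the abstract. All names,
# thresholds, and the similarity-based augmentation are illustrative
# assumptions, not the paper's exact design.
import numpy as np
import networkx as nx


def two_step_label_propagation(G, fake_seeds, n_iter=20):
    """Stage 1 (sketch): propagate the few fake-news labels over the graph and
    pseudo-label clearly interest/non-interest nodes; the rest stay unlabeled."""
    nodes = list(G.nodes())
    idx = {n: i for i, n in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes) + np.eye(len(nodes))  # self-loops
    P = A / A.sum(axis=1, keepdims=True)                           # row-normalized

    scores = np.zeros(len(nodes))
    seed_idx = [idx[n] for n in fake_seeds]
    scores[seed_idx] = 1.0                                         # only fake news is labeled
    for _ in range(n_iter):
        scores = P @ scores
        scores[seed_idx] = 1.0                                     # clamp the known labels
    hi, lo = np.quantile(scores, [0.8, 0.2])
    pseudo = {n: 1 for n in nodes if scores[idx[n]] >= hi}         # interest (fake)
    pseudo.update({n: 0 for n in nodes if scores[idx[n]] <= lo})   # non-interest (real)
    return pseudo


def augment_structure(G, features, k=2):
    """Stage 2 (sketch): structural augmentation -- connect each node to its k
    most content-similar nodes so the classifier sees a denser neighborhood."""
    nodes = list(G.nodes())
    X = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    sim = X @ X.T
    G_aug = G.copy()
    for i, u in enumerate(nodes):
        for j in np.argsort(-sim[i])[1:k + 1]:                     # skip the node itself
            G_aug.add_edge(u, nodes[j])
    return G_aug


def randomized_neighborhood_mean(G, features, keep_prob=0.7, seed=0):
    """Stage 3 (sketch): an aggregation step that induces randomness in each
    node's local neighborhood by dropping neighbors at random before averaging.
    In the paper this idea sits inside a GAT-style final classifier."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    idx = {n: i for i, n in enumerate(nodes)}
    out = np.zeros_like(features, dtype=float)
    for u in nodes:
        kept = [v for v in G.neighbors(u) if rng.random() < keep_prob]
        rows = [idx[u]] + [idx[v] for v in kept]
        out[idx[u]] = features[rows].mean(axis=0)
    return out
```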
Related papers
- Mitigating Label Noise on Graph via Topological Sample Selection [72.86862597508077]
We propose a Topological Sample Selection (TSS) method that boosts the informative sample selection process in a graph by utilising topological information.
We theoretically prove that our procedure minimizes an upper bound of the expected risk under target clean distribution, and experimentally show the superiority of our method compared with state-of-the-art baselines.
arXiv Detail & Related papers (2024-03-04T11:24:51Z)
- Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first applied to utilize neighbors' information, and the bias mitigation step then explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
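A toy, NumPy-only illustration of the aggregate-then-recenter idea summarized in the FMP entry above; the update rule and the `bias_step` parameter are hypothetical simplifications, not FMP's actual optimization-derived message passing.

```python
import numpy as np


def fair_message_passing_step(A, H, groups, bias_step=0.5):
    """One illustrative round: (1) mean-aggregate neighbor representations,
    then (2) pull each demographic group's representation center toward the
    global center, shrinking the gap between group centers."""
    groups = np.asarray(groups)
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    H = (A_hat / A_hat.sum(axis=1, keepdims=True)) @ H   # step 1: aggregation

    global_center = H.mean(axis=0)                       # step 2: bias mitigation
    for g in np.unique(groups):
        mask = groups == g
        H[mask] = H[mask] + bias_step * (global_center - H[mask].mean(axis=0))
    return H
```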
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
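A rough sketch of the pre-computation idea mentioned in the entry above: one-time neighbor aggregation per relation, compressed with random projections into fixed-width tensors. The relation layout and projection width are assumptions, not RpHGNN's actual scheme.

```python
import numpy as np


def random_projection_precompute(relations, feats, out_dim=64, seed=0):
    """One-time pre-computation (sketch). `relations` maps a source node type to
    an adjacency of shape [n_target, n_source]; all relations are assumed to
    point into the same target node type."""
    rng = np.random.default_rng(seed)
    blocks = []
    for src_type, A in relations.items():
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
        agg = (A @ feats[src_type]) / deg                  # one-time mean aggregation
        R = rng.standard_normal((agg.shape[1], out_dim)) / np.sqrt(out_dim)
        blocks.append(agg @ R)                             # fixed-width random projection
    return np.concatenate(blocks, axis=1)                  # regular-shaped input for an MLP
```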
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D²PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
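A simplified picture of the dual-channel idea in the entry above, written in plain NumPy. The diffusion rule, the kNN construction, and the omission of how the two channels are trained and fused are all simplifying assumptions.

```python
import numpy as np


def dual_channel_diffusion(A_input, X, k=5, alpha=0.1, n_iter=10):
    """Sketch: diffuse node features over (1) the given, possibly incomplete
    graph and (2) a global kNN graph built from feature similarity, giving two
    complementary views of every node."""
    def diffuse(A):
        A_hat = A + np.eye(A.shape[0])
        P = A_hat / A_hat.sum(axis=1, keepdims=True)
        H = X.copy()
        for _ in range(n_iter):
            H = (1 - alpha) * (P @ H) + alpha * X        # personalized-PageRank-style diffusion
        return H

    # Global graph: connect each node to its k most similar nodes by cosine similarity.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-9)
    sim = Xn @ Xn.T
    A_global = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        for j in np.argsort(-sim[i])[1:k + 1]:
            A_global[i, j] = A_global[j, i] = 1.0

    return diffuse(A_input), diffuse(A_global)           # how the channels are fused is omitted
```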
- TGNN: A Joint Semi-supervised Framework for Graph-level Classification [34.300070497510276]
We propose a novel semi-supervised framework called Twin Graph Neural Network (TGNN).
To explore graph structural information from complementary views, our TGNN has a message passing module and a graph kernel module.
We evaluate our TGNN on various public datasets and show that it achieves strong performance.
arXiv Detail & Related papers (2023-04-23T15:42:11Z)
- SMARTQUERY: An Active Learning Framework for Graph Neural Networks through Hybrid Uncertainty Reduction [25.77052028238513]
We propose a framework to learn a graph neural network with very few labeled nodes using a hybrid uncertainty reduction function.
We demonstrate the competitive performance of our method against state-of-the-art baselines using very few labeled data.
arXiv Detail & Related papers (2022-12-02T20:49:38Z)
- Informative Pseudo-Labeling for Graph Neural Networks with Few Labels [12.83841767562179]
Graph Neural Networks (GNNs) have achieved state-of-the-art results for semi-supervised node classification on graphs.
The challenge of how to effectively learn GNNs with very few labels is still under-explored.
We propose a novel informative pseudo-labeling framework, called InfoGNN, to facilitate learning of GNNs with extremely few labels.
arXiv Detail & Related papers (2022-01-20T01:49:30Z)
- Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework, Graph Hallucination Networks (Meta-GHN).
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
- Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
- Active Learning on Attributed Graphs via Graph Cognizant Logistic Regression and Preemptive Query Generation [37.742218733235084]
We propose a novel graph-based active learning algorithm for the task of node classification in attributed graphs.
Our algorithm uses graph cognizant logistic regression, which is equivalent to a linearized graph convolutional neural network (GCN), for the prediction phase, and maximizes the expected error reduction in the query phase.
We conduct experiments on five public benchmark datasets, demonstrating a significant improvement over state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-09T18:00:53Z)
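As a rough illustration of the two phases described in this last entry, the sketch below builds linearized-GCN features and selects a query by brute-force expected error reduction. The seed set, refitting strategy, and risk estimate are simplifying assumptions, not the paper's optimized procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def sgc_features(A, X, k=2):
    """'Graph cognizant' features (sketch): propagate raw features k hops through
    the symmetrically normalized adjacency, as in a linearized GCN."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    P = A_hat / np.sqrt(np.outer(d, d))
    H = X.copy()
    for _ in range(k):
        H = P @ H
    return H


def expected_error_reduction_query(H, y, labeled, candidates):
    """Query phase (brute-force sketch): pick the candidate whose hypothetical
    labeling most reduces the expected error on the remaining candidates.
    Assumes `labeled` already contains at least one node of each class."""
    base = LogisticRegression(max_iter=1000).fit(H[labeled], y[labeled])
    probs = base.predict_proba(H[candidates])
    best_node, best_risk = None, np.inf
    for i, u in enumerate(candidates):
        rest = [v for v in candidates if v != u]
        risk = 0.0
        for c, p_c in zip(base.classes_, probs[i]):
            m = LogisticRegression(max_iter=1000).fit(
                H[labeled + [u]], np.concatenate([y[labeled], [c]]))
            # expected number of errors ~ sum of residual uncertainty on `rest`
            risk += p_c * (1.0 - m.predict_proba(H[rest]).max(axis=1)).sum()
        if risk < best_risk:
            best_node, best_risk = u, risk
    return best_node
```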