Contrastive Token-level Explanations for Graph-based Rumour Detection
- URL: http://arxiv.org/abs/2502.04366v1
- Date: Wed, 05 Feb 2025 07:14:11 GMT
- Title: Contrastive Token-level Explanations for Graph-based Rumour Detection
- Authors: Daniel Wai Kit Chin, Roy Ka-Wei Lee
- Abstract summary: Social media has facilitated the spread of harmful rumours, which can disrupt economies, influence political outcomes, and exacerbate public health crises.
Graph Neural Network (GNN)-based approaches have shown significant promise in automated rumour detection.
Existing graph explainability techniques fall short in addressing the unique challenges posed by the dependencies among feature dimensions in high-dimensional text embeddings.
- Score: 4.626073646852022
- License:
- Abstract: The widespread use of social media has accelerated the dissemination of information, but it has also facilitated the spread of harmful rumours, which can disrupt economies, influence political outcomes, and exacerbate public health crises, such as the COVID-19 pandemic. While Graph Neural Network (GNN)-based approaches have shown significant promise in automated rumour detection, they often lack transparency, making their predictions difficult to interpret. Existing graph explainability techniques fall short in addressing the unique challenges posed by the dependencies among feature dimensions in high-dimensional text embeddings used in GNN-based models. In this paper, we introduce Contrastive Token Layerwise Relevance Propagation (CT-LRP), a novel framework designed to enhance the explainability of GNN-based rumour detection. CT-LRP extends current graph explainability methods by providing token-level explanations that offer greater granularity and interpretability. We evaluate the effectiveness of CT-LRP across multiple GNN models trained on three publicly available rumour detection datasets, demonstrating that it consistently produces high-fidelity, meaningful explanations, paving the way for more robust and trustworthy rumour detection systems.
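CT-LRP builds on Layer-wise Relevance Propagation (LRP), which pushes a prediction's relevance backwards through the network and, here, aggregates it per token rather than per embedding dimension. The following is a generic epsilon-rule sketch of that idea on a toy mean-pooled text classifier, not the authors' CT-LRP implementation; all shapes, names, and the pooling setup are assumptions for illustration.

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of one
    linear layer (z = x @ W + b) using the LRP epsilon rule."""
    z = x @ W + b
    s = R_out / (z + eps * np.sign(z))   # stabilised relevance ratio
    return x * (s @ W.T)                 # relevance per input dimension

# Toy setup: 4 tokens, 8-dim embeddings, mean pooling, linear classifier.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))         # hypothetical token embeddings
W, b = rng.normal(size=(8, 2)), np.zeros(2)

pooled = tokens.mean(axis=0)
logits = pooled @ W + b
R_out = np.where(logits == logits.max(), logits, 0.0)  # seed relevance at predicted class

R_pooled = lrp_epsilon(pooled, W, b, R_out)
# Undo the mean pooling: token i contributed tokens[i] / 4 to pooled,
# so it receives the proportional share of each dimension's relevance.
R_tokens = (tokens / 4) / pooled * R_pooled
token_scores = R_tokens.sum(axis=1)      # collapse embedding dims -> one score per token
```

Collapsing the embedding dimensions into a single per-token score is what makes the explanation readable despite the dependencies among feature dimensions that the abstract highlights; relevance is conserved, so the token scores sum to the relevance assigned to the pooled representation.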
Related papers
- Epidemiology-informed Network for Robust Rumor Detection [59.89351792706995]
We propose a novel Epidemiology-informed Network (EIN) that integrates epidemiological knowledge to enhance performance.
To adapt epidemiology theory to rumor detection, EIN expects each user's stance toward the source information to be annotated.
Our experimental results demonstrate that the proposed EIN not only outperforms state-of-the-art methods on real-world datasets but also exhibits enhanced robustness across varying tree depths.
arXiv Detail & Related papers (2024-11-20T00:43:32Z)
- xAI-Drop: Don't Use What You Cannot Explain [23.33477769275026]
Graph Neural Networks (GNNs) have emerged as the predominant paradigm for learning from graph-structured data.
GNNs face challenges such as lack of generalization and poor interpretability.
We introduce xAI-Drop, a novel topological-level dropping regularizer.
arXiv Detail & Related papers (2024-07-29T14:53:45Z)
- Advanced Financial Fraud Detection Using GNN-CL Model [13.5240775562349]
The innovative GNN-CL model proposed in this paper marks a breakthrough in the field of financial fraud detection.
It combines the advantages of graph neural networks (GNNs), convolutional neural networks (CNNs) and long short-term memory (LSTM) networks.
A key novelty of this paper is the use of multilayer perceptrons (MLPs) to estimate node similarity.
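As a rough illustration of the idea of scoring node similarity with a multilayer perceptron over paired embeddings (a hypothetical toy sketch, not the GNN-CL model; all shapes and weights here are made up):

```python
import numpy as np

def mlp_similarity(h_u, h_v, W1, W2):
    """Score the similarity of two node embeddings with a tiny MLP
    applied to their concatenation; sigmoid keeps the score in (0, 1)."""
    x = np.concatenate([h_u, h_v])
    hidden = np.maximum(0.0, W1 @ x)     # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ hidden)))

d = 8                                    # assumed embedding size
rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 2 * d)) * 0.1  # random (untrained) weights
W2 = rng.normal(size=16) * 0.1

h_u, h_v = rng.normal(size=d), rng.normal(size=d)
score = mlp_similarity(h_u, h_v, W1, W2)
```

In a trained system the weights would be learned jointly with the detector, so the MLP can capture similarity patterns that a fixed metric such as cosine similarity would miss.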
arXiv Detail & Related papers (2024-07-09T03:59:06Z)
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Combating Bilateral Edge Noise for Robust Link Prediction [56.43882298843564]
We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs with diverse noisy scenarios verify the effectiveness of our RGIB instantiations.
arXiv Detail & Related papers (2023-11-02T12:47:49Z)
- Probabilistically Rewired Message-Passing Neural Networks [41.554499944141654]
Message-passing graph neural networks (MPNNs) emerged as powerful tools for processing graph-structured input.
MPNNs operate on a fixed input graph structure, ignoring potential noise and missing information.
We devise probabilistically rewired MPNNs (PR-MPNNs) which learn to add relevant edges while omitting less beneficial ones.
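For context, a single fixed-graph message-passing step, the operation PR-MPNNs wrap with learned edge rewiring, can be sketched as follows (a generic toy illustration with assumed shapes, not the paper's architecture):

```python
import numpy as np

def mpnn_layer(H, A, W):
    """One message-passing step: each node mean-aggregates its
    neighbours' features, adds its own, and applies a shared
    linear transform followed by ReLU."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    msgs = (A @ H) / deg                            # mean over neighbours
    return np.maximum(0.0, (H + msgs) @ W)

# 4-node path graph 0-1-2-3; a PR-MPNN-style rewiring step would
# sample extra edges (e.g. 0-3) and drop weak ones before aggregating.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)        # one-hot node features
W = np.eye(4)        # identity transform, for readability
H1 = mpnn_layer(H, A, W)
```

Because the adjacency matrix A is fixed here, the layer faithfully propagates whatever noise or missing edges the input graph contains, which is exactly the limitation probabilistic rewiring targets.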
arXiv Detail & Related papers (2023-10-03T15:43:59Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying the graph.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.