Perturbation-based Graph Active Learning for Weakly-Supervised Belief Representation Learning
- URL: http://arxiv.org/abs/2410.19176v1
- Date: Thu, 24 Oct 2024 22:11:06 GMT
- Title: Perturbation-based Graph Active Learning for Weakly-Supervised Belief Representation Learning
- Authors: Dachun Sun, Ruijie Wang, Jinning Li, Ruipeng Han, Xinyi Liu, You Lyu, Tarek Abdelzaher
- Abstract summary: The objective is to strategically identify valuable messages on social media graphs that are worth labeling within a constrained budget.
This paper proposes a graph data augmentation-inspired active learning strategy (PerbALGraph) that progressively selects messages for labeling.
- Score: 13.311498341765772
- Abstract: This paper addresses the problem of allocating labeling resources for semi-supervised belief representation learning in social networks. The objective is to strategically identify valuable messages on social media graphs that are worth labeling within a constrained budget, so as to maximize task performance. Despite progress in unsupervised and semi-supervised methods for belief and ideology representation learning on social networks, and the remarkable efficacy of graph learning techniques, high-quality curated labels can still substantially improve performance. Consequently, allocating labeling effort is a critical research problem when labeling resources are limited. This paper proposes a graph data augmentation-inspired, perturbation-based active learning strategy (PerbALGraph) that progressively selects messages for labeling according to an automatic estimator, obviating human guidance. The estimator is based on the principle that messages whose predictions are highly sensitive to structural perturbations of the observed graph act as landmarks that strongly influence the semi-supervised learning process. Concretely, the estimator is the prediction variance under a set of designed graph perturbations, which makes it model-agnostic and application-independent. Extensive experimental results demonstrate the effectiveness of the proposed strategy for belief representation learning tasks.
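To make the selection criterion concrete, below is a minimal sketch of a perturbation-variance acquisition score in the spirit of the abstract. It is not the authors' implementation: the GNN interface `model(x, edge_index)`, the use of random edge dropping as the perturbation, and the hyperparameters `n_perturb` and `drop_prob` are illustrative assumptions; PerbALGraph designs its own set of graph perturbations.

```python
# Minimal sketch (assumptions noted above): score nodes by how much their
# predictions vary when the graph structure is randomly perturbed, then
# select the most perturbation-sensitive unlabeled nodes for labeling.
import torch
import torch.nn.functional as F

@torch.no_grad()
def perturbation_variance_scores(model, x, edge_index, n_perturb=10, drop_prob=0.1):
    """Score each node by the variance of its predictions under random edge drops."""
    model.eval()
    probs = []
    for _ in range(n_perturb):
        # Randomly drop a fraction of edges to perturb the graph structure.
        keep = torch.rand(edge_index.size(1), device=edge_index.device) >= drop_prob
        probs.append(F.softmax(model(x, edge_index[:, keep]), dim=-1))
    probs = torch.stack(probs)              # (n_perturb, n_nodes, n_classes)
    # Variance across perturbations, summed over classes -> one score per node.
    return probs.var(dim=0).sum(dim=-1)

def select_for_labeling(scores, unlabeled_idx, budget):
    """Pick the `budget` unlabeled nodes whose predictions are most perturbation-sensitive."""
    top = scores[unlabeled_idx].topk(min(budget, unlabeled_idx.numel())).indices
    return unlabeled_idx[top]
```

In a progressive active-learning loop, the selected nodes would be labeled, added to the training set, and the model retrained before the scores are recomputed for the next round.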
Related papers
- Debiasing Graph Representation Learning based on Information Bottleneck [18.35405511009332]
We present the design and implementation of GRAFair, a new framework based on a variational graph auto-encoder.
The crux of GRAFair is the Conditional Fairness Bottleneck, where the objective is to capture the trade-off between the utility of representations and sensitive information of interest.
Experiments on various real-world datasets demonstrate the effectiveness of our proposed method in terms of fairness, utility, robustness, and stability.
arXiv Detail & Related papers (2024-09-02T16:45:23Z) - Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - Graph Learning under Distribution Shifts: A Comprehensive Survey on
Domain Adaptation, Out-of-distribution, and Continual Learning [53.81365215811222]
We provide a review and summary of the latest approaches, strategies, and insights that address distribution shifts within the context of graph learning.
We categorize existing graph learning methods into several essential scenarios, including graph domain adaptation learning, graph out-of-distribution learning, and graph continual learning.
We discuss the potential applications and future directions for graph learning under distribution shifts with a systematic analysis of the current state in this field.
arXiv Detail & Related papers (2024-02-26T07:52:40Z) - Logical Bias Learning for Object Relation Prediction [3.724255294816294]
Scene graph generation (SGG) aims to automatically map an image into a semantic structural graph for better scene understanding.
In practice, it faces severe limitations due to biased data and training methods.
We present a more rational and effective strategy based on causal inference for object relation prediction.
arXiv Detail & Related papers (2023-10-01T16:12:00Z) - A Survey of Graph Unlearning [12.86327535559885]
Graph unlearning provides the means to remove sensitive data traces from trained models, upholding the right to be forgotten.
We present the first systematic review of graph unlearning approaches, encompassing a diverse array of methodologies.
We explore the versatility of graph unlearning across various domains, including but not limited to social networks, recommender systems, and resource-constrained environments like the Internet of Things.
arXiv Detail & Related papers (2023-08-23T20:50:52Z) - Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Toward Improved Generalization: Meta Transfer of Self-supervised
Knowledge on Graphs [0.0]
We propose a novel knowledge transfer strategy by integrating meta-learning with self-supervised learning.
Specifically, we perform a self-supervised task on the source domain and apply meta-learning, which strongly improves the generalizability of the model.
We demonstrate that the proposed strategy significantly improves target task performance by increasing the generalizability and transferability of graph-based knowledge.
arXiv Detail & Related papers (2022-12-16T01:10:49Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Fairness-Aware Node Representation Learning [9.850791193881651]
This study addresses fairness issues in graph contrastive learning with fairness-aware graph augmentation designs.
Different fairness notions on graphs are introduced, which serve as guidelines for the proposed graph augmentations.
Experimental results on real social networks are presented to demonstrate that the proposed augmentations can enhance fairness in terms of statistical parity and equal opportunity.
arXiv Detail & Related papers (2021-06-09T21:12:14Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)