Improving Node Representation by Boosting Target-Aware Contrastive Loss
- URL: http://arxiv.org/abs/2410.03901v2
- Date: Fri, 1 Nov 2024 15:19:18 GMT
- Title: Improving Node Representation by Boosting Target-Aware Contrastive Loss
- Authors: Ying-Chun Lin, Jennifer Neville
- Abstract summary: We introduce Target-Aware Contrastive Learning (Target-aware CL) to enhance target task performance.
By minimizing the proposed Target-Aware Contrastive Loss (XTCL), Target-aware CL increases the mutual information between the target task and node representations.
We show experimentally that XTCL significantly improves the performance on two target tasks.
- Score: 10.73390567832967
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graphs model complex relationships between entities, with nodes and edges capturing intricate connections. Node representation learning involves transforming nodes into low-dimensional embeddings. These embeddings are typically used as features for downstream tasks, so their quality has a significant impact on task performance. Existing approaches for node representation learning span (semi-)supervised, unsupervised, and self-supervised paradigms. In graph domains, (semi-)supervised learning often optimizes models based only on class labels, neglecting other abundant graph signals, which limits generalization. While self-supervised or unsupervised learning produces representations that better capture underlying graph signals, the usefulness of these captured signals for downstream target tasks can vary. To bridge this gap, we introduce Target-Aware Contrastive Learning (Target-aware CL), which aims to enhance target task performance by maximizing the mutual information between the target task and node representations through a self-supervised learning process. This is achieved with a sampling function, XGBoost Sampler (XGSampler), which samples proper positive examples for the proposed Target-Aware Contrastive Loss (XTCL). By minimizing XTCL, Target-aware CL increases the mutual information between the target task and node representations, which improves model generalization. Additionally, XGSampler enhances the interpretability of each signal by exposing the weights used for sampling proper positive examples. We show experimentally that XTCL significantly improves performance on two target tasks, node classification and link prediction, compared to state-of-the-art models.
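Reading the abstract operationally, Target-aware CL has two moving parts: an XGBoost-based sampler that scores candidate node pairs as task-consistent positives, and a contrastive loss computed over those sampled positives. The sketch below is a hypothetical reconstruction under those assumptions; the function names, the InfoNCE-style loss form, and the pair-feature setup are illustrative guesses, not the authors' released code.

```python
# Minimal sketch of the Target-aware CL pipeline described in the abstract.
# Names (train_xgsampler, xtcl_loss) and the exact loss form are assumptions.
import torch
import torch.nn.functional as F
from xgboost import XGBClassifier

def train_xgsampler(pair_features, pair_labels):
    """Fit XGSampler: an XGBoost model scoring how likely a candidate node
    pair is a 'proper' positive for the target task. pair_features would
    encode graph signals for each pair (e.g., structural proximity,
    attribute similarity); pair_labels mark task-consistent pairs. The
    learned feature weights expose which signals drive the sampling,
    which matches the interpretability benefit the abstract mentions."""
    sampler = XGBClassifier(n_estimators=100, max_depth=4)
    sampler.fit(pair_features, pair_labels)
    return sampler

def xtcl_loss(z, pos_idx, temperature=0.5):
    """InfoNCE-style contrastive loss over sampler-chosen positives.

    z:       (N, d) node embeddings from the encoder
    pos_idx: (N,) long tensor, index of the sampled positive per anchor
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                       # (N, N) similarities
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))          # exclude self-pairs
    # Cross-entropy pulls each anchor toward its sampled positive and
    # pushes it away from all other in-batch nodes (the negatives).
    return F.cross_entropy(sim, pos_idx)
```

In this reading, minimizing xtcl_loss pulls each node toward positives that XGSampler deems consistent with the target task, which is how the paper connects the loss to the mutual information between representations and the task.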
Related papers
- Self-Supervised Conditional Distribution Learning on Graphs [15.730933577970687]
We present an end-to-end graph representation learning model to align the conditional distributions of weakly and strongly augmented features over the original features.
This alignment effectively reduces the risk of disrupting intrinsic semantic information through graph-structured data augmentation.
arXiv Detail & Related papers (2024-11-20T07:26:36Z)
- Learning a Mini-batch Graph Transformer via Two-stage Interaction Augmentation [34.969019293698885]
Mini-batch Graph Transformer (MGT) has demonstrated significant advantages in semi-supervised node prediction tasks.
The limited number of nodes in each mini-batch restricts the model's capacity to capture the global characteristics of the graph.
We propose LGMformer, a novel MGT model that employs a two-stage augmented interaction strategy.
arXiv Detail & Related papers (2024-07-13T14:42:22Z)
- Mitigating Semantic Confusion from Hostile Neighborhood for Graph Active Learning [38.5372139056485]
Graph Active Learning (GAL) aims to find the most informative nodes in graphs for annotation, in order to maximize Graph Neural Network (GNN) performance.
GAL strategies may introduce semantic confusion into the selected training set, particularly when graphs are noisy.
We present Semantic-aware Active learning framework for Graphs (SAG) to mitigate the semantic confusion problem.
arXiv Detail & Related papers (2023-08-17T07:06:54Z)
- GUESR: A Global Unsupervised Data-Enhancement with Bucket-Cluster Sampling for Sequential Recommendation [58.6450834556133]
We propose graph contrastive learning to enhance item representations with complex associations from a global view.
We extend the CapsNet module with a carefully designed target-attention mechanism to derive users' dynamic preferences.
Our proposed GUESR not only achieves significant improvements but can also serve as a general enhancement strategy.
arXiv Detail & Related papers (2023-03-01T05:46:36Z)
- Self-Supervised Node Representation Learning via Node-to-Neighbourhood Alignment [10.879056662671802]
Self-supervised node representation learning aims to learn node representations from unlabelled graphs that rival their supervised counterparts.
In this work, we present a simple yet effective approach to self-supervised node representation learning that aligns the hidden representations of nodes with those of their neighbourhood.
We learn node representations that achieve promising node classification performance on a set of graph-structured datasets from small- to large-scale.
arXiv Detail & Related papers (2023-02-09T13:21:18Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes GraphCoCo, an effective graph complementary contrastive learning approach.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose a novel unsupervised graph representation model, called GRCCA, that contrasts cluster assignments.
It combines clustering algorithms with contrastive learning to make joint use of local and global information.
GRCCA is strongly competitive across most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes from an implicit distribution to serve as enhanced negative samples.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures (a toy sketch of this idea appears after this list).
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
- Self-supervised Graph Learning for Recommendation [69.98671289138694]
We explore self-supervised learning on the user-item graph for recommendation.
An auxiliary self-supervised task reinforces node representation learning via self-discrimination.
Empirical studies on three benchmark datasets demonstrate the effectiveness of SGL.
arXiv Detail & Related papers (2020-10-21T06:35:26Z)
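As referenced in the Graph Contrastive Learning with Adaptive Augmentation entry above, centrality-guided augmentation can be sketched in a few lines. The log-degree weighting and the p_max cap below are assumptions for illustration, not the paper's exact scheme.

```python
# Hedged sketch of centrality-based adaptive edge dropping.
import torch

def adaptive_edge_drop(edge_index, num_nodes, p_max=0.7):
    """Drop each edge with probability inversely related to the degree
    centrality of its endpoints, so important connective structure
    (high-centrality edges) is more likely to survive augmentation.

    edge_index: (2, E) long tensor of source/target node indices
    """
    src, dst = edge_index
    deg = torch.bincount(torch.cat([src, dst]), minlength=num_nodes).float()
    # Edge centrality: mean endpoint degree, log-scaled to damp hub effects.
    cent = torch.log1p((deg[src] + deg[dst]) / 2)
    # Normalize to [0, 1]; the lowest-centrality edge gets drop prob ~p_max.
    drop_prob = p_max * (cent.max() - cent) / (cent.max() - cent.min() + 1e-9)
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]
```

Two views generated by calling this twice (different random draws, or different p_max) would then feed a standard graph contrastive objective.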
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.