Contrastive Laplacian Eigenmaps
- URL: http://arxiv.org/abs/2201.05493v1
- Date: Fri, 14 Jan 2022 14:59:05 GMT
- Title: Contrastive Laplacian Eigenmaps
- Authors: Hao Zhu, Ke Sun, Piotr Koniusz
- Abstract summary: Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity.
We extend the celebrated Laplacian Eigenmaps with contrastive learning and call them COntrastive Laplacian EigenmapS (COLES).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph contrastive learning attracts/disperses node representations for
similar/dissimilar node pairs under some notion of similarity. It may be
combined with a low-dimensional embedding of nodes to preserve intrinsic and
structural properties of a graph. In this paper, we extend the celebrated
Laplacian Eigenmaps with contrastive learning, and call them COntrastive
Laplacian EigenmapS (COLES). Starting from a GAN-inspired contrastive
formulation, we show that the Jensen-Shannon divergence underlying many
contrastive graph embedding models fails under disjoint positive and negative
distributions, which may naturally emerge during sampling in the contrastive
setting. In contrast, we demonstrate analytically that COLES essentially
minimizes a surrogate of Wasserstein distance, which is known to cope well
under disjoint distributions. Moreover, we show that the loss of COLES belongs
to the family of so-called block-contrastive losses, previously shown to be
superior compared to pair-wise losses typically used by contrastive methods. We
show on popular benchmarks/backbones that COLES offers favourable
accuracy/scalability compared to DeepWalk, GCN, Graph2Gauss, DGI and GRACE
baselines.
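
For a concrete picture of the objective, the following is a minimal NumPy sketch of a COLES-style loss under simplifying assumptions: the attraction term is the classical Laplacian-Eigenmaps smoothness Tr(Z^T L_pos Z) over the observed graph, the repulsion term is the same quantity over a randomly sampled negative graph, and a soft penalty stands in for the orthogonality constraint. The function names, the use of the symmetric normalized Laplacian, and the penalty weight are illustrative choices, not the paper's exact formulation.

import numpy as np

def normalized_laplacian(adj):
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    return np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

def coles_style_loss(Z, adj_pos, adj_neg, ortho_weight=1.0):
    # Attract nodes linked in the observed graph, disperse nodes linked in a
    # randomly sampled negative graph, and softly encourage orthogonal embeddings.
    L_pos = normalized_laplacian(adj_pos)
    L_neg = normalized_laplacian(adj_neg)
    attract = np.trace(Z.T @ L_pos @ Z)    # smoothness on the positive graph
    disperse = np.trace(Z.T @ L_neg @ Z)   # anti-smoothness on the negative graph
    ortho = np.linalg.norm(Z.T @ Z - np.eye(Z.shape[1])) ** 2
    return attract - disperse + ortho_weight * ortho

# Toy usage: 6 nodes with 2-dimensional embeddings and a random negative graph.
rng = np.random.default_rng(0)
adj_pos = np.array([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=float)
adj_neg = np.triu(rng.integers(0, 2, size=adj_pos.shape).astype(float), 1)
adj_neg = adj_neg + adj_neg.T              # symmetric, no self-loops
Z = rng.standard_normal((6, 2))
print(coles_style_loss(Z, adj_pos, adj_neg))

In the actual model the embeddings Z would come from a backbone such as a GCN, and the negative term would typically average over several sampled negative graphs; the sketch only illustrates the attract/disperse structure of the loss.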
Related papers
- Bootstrap Latents of Nodes and Neighbors for Graph Self-Supervised Learning [27.278097015083343]
Contrastive learning requires negative samples to prevent model collapse and learn discriminative representations.
We introduce a cross-attention module to predict the supportiveness score of a neighbor with respect to the anchor node.
Our method mitigates class collision from negative and noisy positive samples, concurrently enhancing intra-class compactness.
arXiv Detail & Related papers (2024-08-09T14:17:52Z)
- Topology Reorganized Graph Contrastive Learning with Mitigating Semantic Drift [28.83750578838018]
Graph contrastive learning (GCL) is an effective paradigm for node representation learning in graphs.
To increase the diversity of the contrastive views, we propose two simple and effective global topological augmentations to complement current GCL methods.
arXiv Detail & Related papers (2024-07-23T13:55:33Z)
- Generation is better than Modification: Combating High Class Homophily Variance in Graph Anomaly Detection [51.11833609431406]
In graph anomaly detection, homophily distribution differences between classes are significantly greater than those in ordinary homophilic and heterophilic graphs.
We introduce a new metric called Class Homophily Variance, which quantitatively describes this phenomenon.
To mitigate its impact, we propose a novel GNN model named Homophily Edge Generation Graph Neural Network (HedGe).
arXiv Detail & Related papers (2024-03-15T14:26:53Z)
- Smoothed Graph Contrastive Learning via Seamless Proximity Integration [35.73306919276754]
Graph contrastive learning (GCL) aligns node representations by classifying node pairs into positives and negatives.
We present a Smoothed Graph Contrastive Learning model (SGCL) that injects proximity information associated with positive/negative pairs into the contrastive loss.
The proposed SGCL adjusts the penalties associated with node pairs in the contrastive loss by incorporating three distinct smoothing techniques.
arXiv Detail & Related papers (2024-02-23T11:32:46Z)
- OrthoReg: Improving Graph-regularized MLPs via Orthogonality Regularization [66.30021126251725]
Graph Neural Networks (GNNs) currently dominate the modeling of graph-structured data.
Graph-regularized MLPs (GR-MLPs) implicitly inject graph structure information into model weights, yet their performance can hardly match that of GNNs on most tasks.
We show that GR-MLPs suffer from dimensional collapse, a phenomenon in which the largest few eigenvalues dominate the embedding space.
We propose OrthoReg, a novel GR-MLP model that uses orthogonality regularization to mitigate the dimensional collapse issue (see the illustrative sketch after this list).
arXiv Detail & Related papers (2023-01-31T21:20:48Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- A Probabilistic Graph Coupling View of Dimension Reduction [5.35952718937799]
We introduce a unifying statistical framework based on the coupling of hidden graphs using cross entropy.
We show that existing pairwise similarity DR methods can be retrieved from our framework with particular choices of priors for the graphs.
Our model is leveraged and extended to address this issue, and new links are drawn with Laplacian eigenmaps and PCA.
arXiv Detail & Related papers (2022-01-31T08:21:55Z)
- Implicit vs Unfolded Graph Neural Networks [18.084842625063082]
Graph neural networks (GNNs) sometimes struggle to maintain a healthy balance between modeling long-range dependencies and avoiding unintended consequences.
Two separate strategies have recently been proposed, namely implicit and unfolded GNNs.
We provide empirical head-to-head comparisons across a variety of synthetic and public real-world benchmarks.
arXiv Detail & Related papers (2021-11-12T07:49:16Z)
- Prototypical Graph Contrastive Learning [141.30842113683775]
We propose a Prototypical Graph Contrastive Learning (PGCL) approach to mitigate the critical sampling bias issue.
Specifically, PGCL models the underlying semantic structure of the graph data via clustering semantically similar graphs into the same group, and simultaneously encourages the clustering consistency for different augmentations of the same graph.
For a query, PGCL further reweights its negative samples based on the distance between their prototypes (cluster centroids) and the query prototype.
arXiv Detail & Related papers (2021-06-17T16:45:31Z)
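
The OrthoReg entry above points to orthogonality regularization as a remedy for dimensional collapse; the sketch referenced there is below. It shows a generic decorrelation penalty of that kind in NumPy, in which the column-normalized Gram matrix of the embeddings is pushed toward the identity. The normalization, the Frobenius-norm form, and the toy comparison are illustrative assumptions rather than the paper's exact regularizer.

import numpy as np

def orthogonality_regularizer(Z, eps=1e-8):
    # Dimensional collapse means a few directions carry almost all the variance;
    # pushing the column-normalized Gram matrix toward the identity discourages that.
    Z = Z - Z.mean(axis=0, keepdims=True)                      # center each dimension
    Z = Z / (np.linalg.norm(Z, axis=0, keepdims=True) + eps)   # unit-norm columns
    gram = Z.T @ Z                                             # d x d correlation matrix
    return np.linalg.norm(gram - np.eye(Z.shape[1])) ** 2

# Toy usage: nearly collapsed (rank-1) embeddings are penalized far more than spread-out ones.
rng = np.random.default_rng(0)
collapsed = np.outer(rng.standard_normal(100), rng.standard_normal(8))
spread = rng.standard_normal((100, 8))
print(orthogonality_regularizer(collapsed), orthogonality_regularizer(spread))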