Rethinking the Setting of Semi-supervised Learning on Graphs
- URL: http://arxiv.org/abs/2205.14403v1
- Date: Sat, 28 May 2022 11:31:19 GMT
- Title: Rethinking the Setting of Semi-supervised Learning on Graphs
- Authors: Ziang Li, Ming Ding, Weikai Li, Zihan Wang, Ziyu Zeng, Yukuo Cen, Jie
Tang
- Abstract summary: We argue that the present setting of semi-supervised learning on graphs may result in unfair comparisons.
We propose ValidUtil, an approach to fully utilize the label information in the validation set.
- Score: 29.5439965223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We argue that the present setting of semi-supervised learning on graphs may
result in unfair comparisons, due to its potential risk of over-tuning
hyper-parameters for models. In this paper, we highlight the significant
influence of hyper-parameter tuning, which leverages the label information in
the validation set to improve performance. To explore the limit of
over-tuning hyper-parameters, we propose ValidUtil, an approach to fully utilize
the label information in the validation set through an extra group of
hyper-parameters. With ValidUtil, even GCN can easily reach an accuracy of
85.8% on Cora.
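To make the over-tuning mechanism concrete, below is a minimal, self-contained sketch (not the authors' code; all names and the toy "model" are illustrative) of how a coordinate-wise search over per-node "hyper-parameters" can recover every validation label:

```python
# Hedged sketch of the leakage behind ValidUtil-style over-tuning: treat one
# pseudo-label per validation node as an extra "hyper-parameter" and tune it
# coordinate-wise against validation accuracy.
import random

random.seed(0)
NUM_CLASSES = 3
true_val_labels = [random.randrange(NUM_CLASSES) for _ in range(5)]

def validation_accuracy(pseudo_labels):
    # Stand-in for "retrain with these pseudo-labels, evaluate on the
    # validation set": a model that memorizes its pseudo-labels scores
    # highest exactly when they equal the true validation labels.
    hits = sum(p == t for p, t in zip(pseudo_labels, true_val_labels))
    return hits / len(true_val_labels)

pseudo = [0] * len(true_val_labels)
for v in range(len(pseudo)):  # one coordinate-wise sweep per validation node
    pseudo[v] = max(
        range(NUM_CLASSES),
        key=lambda c: validation_accuracy(pseudo[:v] + [c] + pseudo[v + 1:]),
    )

assert pseudo == true_val_labels  # every validation label has leaked
print("recovered validation labels:", pseudo)
```

Because each extra hyper-parameter affects only one validation node's prediction, a simple sweep that maximizes validation accuracy is guaranteed to recover that node's label; this is the leakage that ValidUtil pushes to its limit.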
To avoid over-tuning, we merge the training set and the validation set and
construct an i.i.d. graph benchmark (IGB) consisting of 4 datasets. Each
dataset contains 100 i.i.d. graphs sampled from a large graph to reduce the
evaluation variance. Our experiments suggest that IGB is a more stable
benchmark than previous datasets for semi-supervised learning on graphs.
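For contrast, here is a hedged sketch of the IGB-style protocol: accuracy is estimated over many i.i.d. sampled graphs and reported with its spread, so no single split can be over-tuned. `sample_graph` and `accuracy_on` are hypothetical stand-ins, not the paper's pipeline:

```python
# Hedged sketch: evaluate on 100 i.i.d. graphs and report mean +/- std,
# the kind of variance reduction IGB targets.
import random
import statistics

def sample_graph(seed):
    # Stand-in: one i.i.d. evaluation graph drawn from a large source graph.
    rng = random.Random(seed)
    return [rng.random() for _ in range(200)]  # toy per-node "difficulty"

def accuracy_on(graph, skill=0.85):
    # Stand-in: fraction of nodes a fixed model classifies correctly.
    return sum(d < skill for d in graph) / len(graph)

accs = [accuracy_on(sample_graph(s)) for s in range(100)]  # 100 i.i.d. graphs
print(f"accuracy: {statistics.mean(accs):.3f} +/- {statistics.stdev(accs):.3f}")
```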
Related papers
- LOBSTUR: A Local Bootstrap Framework for Tuning Unsupervised Representations in Graph Neural Networks [0.9208007322096533]
Graph Neural Networks (GNNs) are increasingly used in conjunction with unsupervised learning techniques to learn powerful node representations.
We propose a novel framework designed to adapt bootstrapping techniques for unsupervised graph representation learning.
arXiv Detail & Related papers (2025-05-20T19:59:35Z)
- OUI Need to Talk About Weight Decay: A New Perspective on Overfitting Detection [0.5242869847419834]
The Overfitting-Underfitting Indicator (OUI) is a novel tool for monitoring the training dynamics of Deep Neural Networks (DNNs).
OUI indicates whether a model is overfitting or underfitting during training without requiring validation data.
arXiv Detail & Related papers (2025-04-24T00:41:59Z)
- Enhancing Zero-Shot Vision Models by Label-Free Prompt Distribution Learning and Bias Correcting [55.361337202198925]
Vision-language models, such as CLIP, have shown impressive generalization capacities when using appropriate text descriptions.
We propose a label-free prompt distribution learning and bias correction framework, dubbed **Frolic**, which boosts zero-shot performance without the need for labeled data.
arXiv Detail & Related papers (2024-10-25T04:00:45Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- Parameter-tuning-free data entry error unlearning with adaptive selective synaptic dampening [51.34904967046097]
We introduce an extension to the selective synaptic dampening unlearning method that removes the need for parameter tuning.
We demonstrate the performance of this extension, adaptive selective synaptic dampening (ASSD), on various ResNet18 and Vision Transformer unlearning tasks.
The application of this approach is particularly compelling in industrial settings, such as supply chain management.
arXiv Detail & Related papers (2024-02-06T14:04:31Z)
- Diving into Unified Data-Model Sparsity for Class-Imbalanced Graph Representation Learning [30.23894624193583]
Training Graph Neural Networks (GNNs) on non-Euclidean graph data often incurs relatively high time costs.
We develop a unified data-model dynamic sparsity framework named Graph Decantation (GraphDec) to address the challenges of training on massive, class-imbalanced graph data.
arXiv Detail & Related papers (2022-10-01T01:47:00Z)
- Three New Validators and a Large-Scale Benchmark Ranking for Unsupervised Domain Adaptation [37.03614011735927]
We propose three new validators for unsupervised domain adaptation (UDA).
We compare and rank them against five other existing validators on a large dataset of 1,000,000 checkpoints.
We find that two of our proposed validators achieve state-of-the-art performance in various settings.
arXiv Detail & Related papers (2022-08-15T17:55:26Z)
- From Spectral Graph Convolutions to Large Scale Graph Convolutional Networks [0.0]
Graph Convolutional Networks (GCNs) have been shown to be a powerful concept that has been successfully applied to a large variety of tasks.
We study the theory that paved the way to the definition of the GCN, including related parts of classical graph theory; for reference, the standard GCN propagation rule is recalled after this list.
arXiv Detail & Related papers (2022-07-12T16:57:08Z)
- Features Based Adaptive Augmentation for Graph Contrastive Learning [0.0]
Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning.
We introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features.
We improve the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
arXiv Detail & Related papers (2022-07-05T03:41:20Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, this approach exceeds or nearly matches the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S); its smoothing step is sketched after this list.
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
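For reference, as promised in the spectral-convolutions survey entry above: the standard GCN layer (the well-known formulation of Kipf and Welling), in common notation, is

```latex
H^{(l+1)} = \sigma\!\left( \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2}\, H^{(l)}\, W^{(l)} \right),
\qquad \tilde{A} = A + I, \quad \tilde{D}_{ii} = \sum_j \tilde{A}_{ij},
```

where $H^{(l)}$ holds the node representations at layer $l$, $W^{(l)}$ is a learnable weight matrix, and $\sigma$ is a nonlinearity.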
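And for the Correct and Smooth (C&S) entry above, a minimal sketch of its "smooth" step, assuming a toy graph and toy base-model scores (the full method also includes a residual "correct" step, omitted here; this is not the authors' implementation):

```python
# Hedged sketch of C&S smoothing: iteratively propagate base predictions
# over the symmetrically normalized adjacency.
import numpy as np

def smooth(adj, y0, alpha=0.8, iters=50):
    # Y <- (1 - alpha) * Y0 + alpha * S @ Y, with S = D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    s = adj / np.sqrt(np.outer(deg, deg))
    y = y0.copy()
    for _ in range(iters):
        y = (1 - alpha) * y0 + alpha * s @ y
    return y

adj = np.array([[0., 1., 1.],
                [1., 0., 0.],
                [1., 0., 0.]])   # toy 3-node star graph
y0 = np.array([[0.9, 0.1],       # base-model class scores per node
               [0.6, 0.4],
               [0.2, 0.8]])
print(smooth(adj, y0))           # scores pulled toward neighboring nodes
```

The appeal of C&S is that this cheap post-processing of a simple base model can, per the summary above, exceed or nearly match much heavier GNNs.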