GABO: Graph Augmentations with Bi-level Optimization
- URL: http://arxiv.org/abs/2104.00722v1
- Date: Thu, 1 Apr 2021 19:00:17 GMT
- Title: GABO: Graph Augmentations with Bi-level Optimization
- Authors: Heejung W. Chung, Avoy Datta, Chris Waites
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data augmentation refers to a wide range of techniques for improving model
generalization by augmenting training examples. Oftentimes such methods require
domain knowledge about the dataset at hand, spawning a plethora of recent
literature surrounding automated techniques for data augmentation. In this work
we apply one such method, bilevel optimization, to tackle the problem of graph
classification on the ogbg-molhiv dataset. Our best performing augmentation
achieved a test ROC-AUC score of 77.77% with a GIN+virtual classifier, making
it the most effective augmenter for this classifier on the leaderboard.
This framework combines a GIN layer augmentation generator with a bias
transformation and outperforms the same classifier augmented using the
state-of-the-art FLAG augmentation.
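The bilevel structure the abstract describes — an inner loop that trains the classifier on augmented data, and an outer loop that tunes the augmentation to minimize validation loss — can be sketched on a toy regression task. This is an illustrative assumption, not the paper's implementation: the 1-D regression problem, the noise-magnitude augmentation `eps`, and the grid-search outer step (standing in for gradient-based outer updates) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression standing in for the graph classification task.
x_tr = rng.normal(size=64)
y_tr = 2.0 * x_tr + 0.1 * rng.normal(size=64)
x_va = rng.normal(size=64)
y_va = 2.0 * x_va + 0.1 * rng.normal(size=64)

def inner_train(eps, steps=50, lr=0.1):
    """Inner problem: fit weight w on training inputs perturbed by a
    noise augmentation whose magnitude eps is chosen by the outer loop."""
    w = 0.0
    for _ in range(steps):
        x_aug = x_tr + eps * rng.normal(size=x_tr.shape)  # augmentation
        grad = np.mean(2.0 * (w * x_aug - y_tr) * x_aug)  # d/dw of MSE
        w -= lr * grad
    return w

def val_loss(w):
    """Outer objective: loss of the trained model on clean validation data."""
    return float(np.mean((w * x_va - y_va) ** 2))

# Outer problem: pick the augmentation strength minimizing validation loss.
candidates = [0.0, 0.1, 0.3, 1.0]
best_eps = min(candidates, key=lambda e: val_loss(inner_train(e)))
```

On this toy task a large augmentation magnitude acts like heavy regularization and hurts validation loss, so the outer loop settles on a small `eps`; the point is only the nesting of the two optimization problems, which the paper instantiates with a learned GIN-based augmentation generator instead of scalar noise.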
Related papers
- Explanation-Preserving Augmentation for Semi-Supervised Graph Representation Learning [13.494832603509897]
Graph representation learning (GRL) has emerged as an effective technique achieving performance improvements in a wide range of tasks such as node classification and graph classification.
We propose a novel method, Explanation-Preserving Augmentation (EPA), that leverages graph explanation techniques for generating augmented graphs.
EPA first uses a small number of labels to train a graph explainer to infer the sub-structures (explanations) that are most relevant to a graph's semantics.
arXiv Detail & Related papers (2024-10-16T15:18:03Z) - Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z) - Domain Generalization by Rejecting Extreme Augmentations [13.114457707388283]
We show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance.
We propose a simple training procedure: (i) use uniform sampling on standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain; and (iii) devise a new reward function to reject extreme transformations that can harm training.
arXiv Detail & Related papers (2023-10-10T14:46:22Z) - GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose GraphLearner, a graph node clustering framework with fully learnable augmentation.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z) - Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative [126.0985540285981]
We apply the contrastive learning approach from images/graphs (we refer to it as HyperGCL) to improve generalizability of hypergraph neural networks.
We fabricate two schemes to augment hyperedges with higher-order relations encoded, and adopt three augmentation strategies from graph-structured data.
We propose a hypergraph generative model to generate augmented views, and then an end-to-end differentiable pipeline to jointly learn hypergraph augmentations and model parameters.
arXiv Detail & Related papers (2022-10-07T20:12:20Z) - Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
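The FLAG snippet above describes gradient-based adversarial feature augmentation: grow an input perturbation by gradient ascent on the loss while accumulating model gradients, then take one descent step. A minimal sketch of that idea follows, using a toy logistic model as a stand-in for a GNN; the model, step sizes, and the omitted perturbation projection are illustrative assumptions, not FLAG's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy node features and binary labels; a linear model stands in for a GNN.
X = rng.normal(size=(32, 4))
w_true = np.array([1.0, -1.0, 0.5, 0.0])
y = (X @ w_true > 0).astype(float)

def loss_and_grads(w, X, y):
    """Logistic loss; returns loss, gradient wrt weights, gradient wrt inputs."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    d = (p - y) / len(y)
    return loss, X.T @ d, np.outer(d, w)  # dL/dX_ij = d_i * w_j

def flag_step(w, X, y, lr=0.5, ascent_steps=3, alpha=0.01):
    """One FLAG-style update: grow a feature perturbation by gradient
    ascent while accumulating model gradients, then descend once."""
    delta = alpha * rng.uniform(-1.0, 1.0, size=X.shape)
    grad_w_acc = np.zeros_like(w)
    for _ in range(ascent_steps):
        _, gw, gx = loss_and_grads(w, X + delta, y)
        grad_w_acc += gw / ascent_steps          # "free" gradient accumulation
        delta += alpha * np.sign(gx)             # ascend on the perturbation
    return w - lr * grad_w_acc

w = np.zeros(4)
for _ in range(200):
    w = flag_step(w, X, y)

acc = np.mean(((X @ w) > 0) == (y > 0.5))
```

Because the model gradients are accumulated across the ascent steps that would be computed anyway, the adversarial augmentation comes at little extra training cost, which is the "free" in FLAG's name.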
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Data Augmentation for Graph Neural Networks [32.24311481878144]
We study graph data augmentation for graph neural networks (GNNs) in the context of improving semi-supervised node-classification.
Our work shows that neural edge predictors can effectively encode class-homophilic structure to promote intra-class edges and demote inter-class edges in a given graph structure.
Our main contribution introduces the GAug graph data augmentation framework, which leverages these insights to improve performance in GNN-based node classification via edge prediction.
arXiv Detail & Related papers (2020-06-11T21:17:56Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points, as well as consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.