Catastrophic Forgetting in Deep Graph Networks: an Introductory
Benchmark for Graph Classification
- URL: http://arxiv.org/abs/2103.11750v1
- Date: Mon, 22 Mar 2021 12:07:21 GMT
- Title: Catastrophic Forgetting in Deep Graph Networks: an Introductory
Benchmark for Graph Classification
- Authors: Antonio Carta, Andrea Cossu, Federico Errica, Davide Bacciu
- Abstract summary: We study the phenomenon of catastrophic forgetting in the graph representation learning scenario.
We find that replay is the most effective strategy so far, and it also benefits the most from regularization.
- Score: 12.423303337249795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we study the phenomenon of catastrophic forgetting in the graph
representation learning scenario. The primary objective of the analysis is to
understand whether classical continual learning techniques for flat and
sequential data have a tangible impact on performance when applied to graph
data. To do so, we experiment with a structure-agnostic model and a deep graph
network in a robust and controlled environment on three different datasets. The
benchmark is complemented by an investigation on the effect of
structure-preserving regularization techniques on catastrophic forgetting. We
find that replay is the most effective strategy so far, and that it also benefits
the most from regularization. Our findings suggest interesting
future research at the intersection of the continual and graph representation
learning fields. Finally, we provide researchers with a flexible software
framework to reproduce our results and carry out further experiments.
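Code sketch (not from the paper): the abstract identifies experience replay as the most effective continual learning strategy in this benchmark. The snippet below is a minimal, illustrative sketch of such a replay loop for graph classification, assuming PyTorch and PyTorch Geometric are available; the GIN-based classifier, buffer size, and reservoir-style buffer update are assumptions for illustration and do not reproduce the authors' released framework.

```python
# Minimal sketch of experience replay for continual graph classification.
# Assumes PyTorch + PyTorch Geometric; model and hyperparameters are illustrative.
import random
import torch
import torch.nn.functional as F
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GINConv, global_add_pool


class GraphClassifier(torch.nn.Module):
    """Small GIN-style deep graph network used as a stand-in model (assumption)."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        mlp = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden_dim), torch.nn.ReLU(),
            torch.nn.Linear(hidden_dim, hidden_dim),
        )
        self.conv = GINConv(mlp)
        self.readout = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, data):
        h = self.conv(data.x, data.edge_index)      # node embeddings
        h = global_add_pool(h, data.batch)          # graph-level readout
        return self.readout(h)


def train_with_replay(model, tasks, buffer_size=200, replay_batch=32, epochs=1):
    """Train on a sequence of graph-classification tasks, replaying stored graphs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    buffer, seen = [], 0                            # bounded memory of past graphs
    for task in tasks:                              # each task: a list of Data objects
        loader = DataLoader(task, batch_size=32, shuffle=True)
        for _ in range(epochs):
            for batch in loader:
                optimizer.zero_grad()
                loss = F.cross_entropy(model(batch), batch.y)
                if buffer:                          # interleave a replayed mini-batch
                    sample = random.sample(buffer, min(replay_batch, len(buffer)))
                    rbatch = next(iter(DataLoader(sample, batch_size=len(sample))))
                    loss = loss + F.cross_entropy(model(rbatch), rbatch.y)
                loss.backward()
                optimizer.step()
        for g in task:                              # reservoir sampling keeps the
            seen += 1                               # buffer an unbiased sample of
            if len(buffer) < buffer_size:           # everything seen so far
                buffer.append(g)
            elif random.random() < buffer_size / seen:
                buffer[random.randrange(buffer_size)] = g
    return model
```

Structure-preserving regularization, the other axis investigated in the paper, would enter this loop as an additional loss term on the node embeddings; it is omitted here for brevity.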
Related papers
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates the results of six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - A Comprehensive Survey on Graph Summarization with Graph Neural Networks [21.337505372979066]
In the past, most graph summarization techniques sought to capture the most important part of a graph statistically.
Today, the high dimensionality and complexity of modern graph data are making deep learning techniques more popular.
Our investigation includes a review of the current state-of-the-art approaches, including recurrent GNNs, convolutional GNNs, graph autoencoders, and graph attention networks.
arXiv Detail & Related papers (2023-02-13T05:43:24Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Graph-Free Learning in Graph-Structured Data: A More Efficient and
Accurate Spatiotemporal Learning Perspective [11.301939428860404]
This paper proposes a Graph-Free (SGF) learning module based on normalization for capturing spatial correlations in spatiotemporal learning.
A rigorous theoretical proof demonstrates that its time complexity is significantly better than that of the graph convolution operation.
arXiv Detail & Related papers (2023-01-27T14:26:11Z) - Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z) - Latent Augmentation For Better Graph Self-Supervised Learning [20.082614919182692]
We argue that predictive models equipped with latent augmentations and a powerful decoder could achieve comparable or even better representation power than contrastive models.
A novel graph decoder named Wiener Graph Deconvolutional Network is correspondingly designed to perform information reconstruction from augmented latent representations.
arXiv Detail & Related papers (2022-06-26T17:41:59Z) - Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Adaptive Graph Auto-Encoder for General Data Clustering [90.8576971748142]
Graph-based clustering plays an important role in the clustering area.
Recent studies on graph convolutional neural networks have achieved impressive success on graph-structured data.
We propose a graph auto-encoder for general data clustering, which constructs the graph adaptively according to the generative perspective of graphs.
arXiv Detail & Related papers (2020-02-20T10:11:28Z) - Deep Learning for Learning Graph Representations [58.649784596090385]
Mining graph data has become a popular research topic in computer science.
The huge amount of network data has posed great challenges for efficient analysis.
This motivates graph representation learning, which maps a graph into a low-dimensional vector space.
arXiv Detail & Related papers (2020-01-02T02:13:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.