Learning Continually on a Sequence of Graphs -- The Dynamical System Way
- URL: http://arxiv.org/abs/2305.12030v1
- Date: Fri, 19 May 2023 23:00:54 GMT
- Title: Learning Continually on a Sequence of Graphs -- The Dynamical System Way
- Authors: Krishnan Raghavan and Prasanna Balaprakash
- Abstract summary: Continual learning (CL) is a field concerned with learning a series of inter-related tasks, with the tasks typically defined in the sense of either regression or classification.
In this work, we formulate a two-player game between the act of learning new tasks (generalization) and remembering previously learned tasks (forgetting).
- Score: 1.0965065178451106
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Continual learning~(CL) is a field concerned with learning a series of
inter-related tasks, with the tasks typically defined in the sense of either
regression or classification. In recent years, CL has been studied extensively
when these tasks are defined using Euclidean data -- data, such as images, that
can be described by a set of vectors in an n-dimensional real space. However,
the literature is quite sparse when the data corresponding to a CL task is
non-Euclidean -- data, such as graphs, point clouds, or manifolds, for which the
notion of similarity in the sense of a Euclidean metric does not hold. For
instance, a graph is described by a tuple of vertices and edges, and the
similarity between two graphs is not well defined through a Euclidean metric.
Due to this fundamental nature of the data, developing CL for non-Euclidean data
presents several theoretical and methodological challenges. In particular, CL
for graphs requires explicit modelling of the nonstationary behavior of vertices
and edges and their effects on the learning problem. Therefore, in this work,
we develop an adaptive dynamic programming viewpoint for CL with graphs. We
formulate a two-player sequential game between the act of learning new
tasks~(generalization) and remembering previously learned tasks~(forgetting).
We prove mathematically the existence of a solution to the game and demonstrate
convergence to that solution. Finally, we demonstrate the efficacy
of our method on a number of graph benchmarks with a comprehensive ablation
study while establishing state-of-the-art performance.
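The trade-off the abstract describes -- learning a new task while penalizing drift away from what was learned before -- can be sketched as a toy regularized update. This is a hypothetical illustration only, not the paper's adaptive dynamic programming method: `task`, `fit_with_memory`, and the penalty weight `lam` are invented for this sketch, and a simple quadratic penalty toward the previous weights stands in for the "forgetting" player.

```python
import numpy as np

def task(seed):
    # Toy regression task: y = X @ w_true + noise, with a task-specific w_true.
    r = np.random.default_rng(seed)
    w_true = r.normal(size=3)
    X = r.normal(size=(50, 3))
    y = X @ w_true + 0.01 * r.normal(size=50)
    return X, y

def fit_with_memory(X, y, w_prev, lam):
    # "Generalization" player: least-squares fit to the new task.
    # "Forgetting" player: quadratic penalty pulling w back toward w_prev.
    # Closed form of argmin_w ||X w - y||^2 + lam * ||w - w_prev||^2.
    A = X.T @ X + lam * np.eye(X.shape[1])
    b = X.T @ y + lam * w_prev
    return np.linalg.solve(A, b)

w = np.zeros(3)
for t in range(3):
    X, y = task(t)
    w = fit_with_memory(X, y, w, lam=1.0)
    print(f"task {t}: new-task loss = {np.mean((X @ w - y) ** 2):.4f}")
```

Raising `lam` strengthens the memory term (less forgetting, worse fit to the new task); `lam = 0` recovers plain per-task least squares with no memory.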
Related papers
- AGALE: A Graph-Aware Continual Learning Evaluation Framework [7.892731722253387]
We develop a graph-aware evaluation (agale) framework that accommodates both single-labeled and multi-labeled nodes.
In particular, we define new incremental settings and devise algorithms tailored to datasets.
We theoretically analyze agale and provide new insights about the role of homophily in the performance of compared methods.
arXiv Detail & Related papers (2024-06-03T11:50:47Z)
- Continual Learning on Graphs: Challenges, Solutions, and Opportunities [72.7886669278433]
We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare CGL methods with traditional continual learning techniques and analyze the applicability of those techniques to the problem of forgetting.
We will maintain an up-to-date repository featuring a comprehensive list of accessible algorithms.
arXiv Detail & Related papers (2024-02-18T12:24:45Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can address this issue.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Extended Graph Assessment Metrics for Graph Neural Networks [13.49677006107642]
We introduce extended graph assessment metrics (GAMs) for regression tasks and continuous adjacency matrices.
We show the correlation of these metrics with model performance on different medical population graphs and under different learning settings.
arXiv Detail & Related papers (2023-07-13T13:55:57Z)
- Tight and fast generalization error bound of graph embedding in metric space [54.279425319381374]
We show that graph embedding in non-Euclidean metric space can outperform that in Euclidean space with much smaller training data than the existing bound has suggested.
Our new upper bound is significantly tighter and faster than the existing one, which can be exponential in $R$ and $O(\frac{1}{S})$ at the fastest.
arXiv Detail & Related papers (2023-05-13T17:29:18Z)
- Learnable Graph Matching: A Practical Paradigm for Data Association [74.28753343714858]
We propose a general learnable graph matching method to address these issues.
Our method achieves state-of-the-art performance on several MOT datasets.
For image matching, our method outperforms state-of-the-art methods on a popular indoor dataset, ScanNet.
arXiv Detail & Related papers (2023-03-27T17:39:00Z)
- Geometry Contrastive Learning on Heterogeneous Graphs [50.58523799455101]
This paper proposes a novel self-supervised learning method, termed Geometry Contrastive Learning (GCL).
GCL views a heterogeneous graph from Euclidean and hyperbolic perspectives simultaneously, aiming to combine the ability to model rich semantics with the ability to model complex structures.
Extensive experiments on four benchmarks data sets show that the proposed approach outperforms the strong baselines.
arXiv Detail & Related papers (2022-06-25T03:54:53Z)
- Hermitian Symmetric Spaces for Graph Embeddings [0.0]
We learn continuous representations of graphs in spaces of symmetric matrices over C.
These spaces offer a rich geometry that simultaneously admits hyperbolic and Euclidean subspaces.
The proposed models are able to automatically adapt to very dissimilar arrangements without any a priori estimates of graph features.
arXiv Detail & Related papers (2021-05-11T18:14:52Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose Semantic Graph Convolutional Networks (SGCN), which explore implicit semantics by learning latent semantic paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- Graph topology inference benchmarks for machine learning [16.857405938139525]
We introduce several benchmarks specifically designed to reveal the relative merits and limitations of graph inference methods.
We also contrast some of the most prominent techniques in the literature.
arXiv Detail & Related papers (2020-07-16T09:40:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.