Graph schemas as abstractions for transfer learning, inference, and
planning
- URL: http://arxiv.org/abs/2302.07350v2
- Date: Wed, 13 Dec 2023 02:36:37 GMT
- Title: Graph schemas as abstractions for transfer learning, inference, and
planning
- Authors: J. Swaroop Guntupalli, Rajkumar Vasudeva Raju, Shrinu Kushagra, Carter
Wendelken, Danny Sawyer, Ishan Deshpande, Guangyao Zhou, Miguel
Lázaro-Gredilla, Dileep George
- Abstract summary: We propose graph schemas as a mechanism of abstraction for transfer learning.
Latent graph learning is emerging as a new computational model of the hippocampus.
By treating learned latent graphs as prior knowledge, new environments can be quickly learned.
- Score: 5.565347203528707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transferring latent structure from one environment or problem to another is a
mechanism by which humans and animals generalize with very little data.
Inspired by cognitive and neurobiological insights, we propose graph schemas as
a mechanism of abstraction for transfer learning. Graph schemas start with
latent graph learning where perceptually aliased observations are disambiguated
in the latent space using contextual information. Latent graph learning is also
emerging as a new computational model of the hippocampus to explain map
learning and transitive inference. Our insight is that a latent graph can be
treated as a flexible template -- a schema -- that models concepts and
behaviors, with slots that bind groups of latent nodes to the specific
observations or groundings. By treating learned latent graphs (schemas) as
prior knowledge, new environments can be quickly learned as compositions of
schemas and their newly learned bindings. We evaluate graph schemas on two
previously published challenging tasks: the memory & planning game and one-shot
StreetLearn, which are designed to test rapid task solving in novel
environments. Graph schemas can be learned in far fewer episodes than previous
baselines, and can model and plan in a few steps in novel variations of these
tasks. We also demonstrate learning, matching, and reusing graph schemas in
more challenging 2D and 3D environments with extensive perceptual aliasing and
size variations, and show how different schemas can be composed to model larger
and more complex environments. To summarize, our main contribution is a unified
system, inspired and grounded in cognitive science, that facilitates rapid
transfer learning of new environments using schemas via map-induction and
composition that handles perceptual aliasing.
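The schema mechanism described above can be illustrated with a minimal sketch: a latent graph acts as a reusable template whose slots (latent nodes) are bound to environment-specific observations, so that in a new environment only the binding must be re-learned while the transition structure, and planning over it, transfers for free. All names and the BFS planner below are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict, deque

class GraphSchema:
    """A latent graph treated as a template with bindable slots."""

    def __init__(self, edges):
        # edges: list of (latent_node, action, latent_node) transitions
        self.edges = edges
        self.nodes = {n for (u, _, v) in edges for n in (u, v)}

    def bind(self, observations):
        """Learn a grounding: map each latent slot to an observation.
        In a new environment only this binding changes; the latent
        transition structure is reused as prior knowledge."""
        return dict(zip(sorted(self.nodes), observations))

    def plan(self, start, goal):
        """Breadth-first search over the latent graph for an action
        sequence leading from the start slot to the goal slot."""
        adjacency = defaultdict(list)
        for u, action, v in self.edges:
            adjacency[u].append((action, v))
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            node, path = frontier.popleft()
            if node == goal:
                return path
            for action, v in adjacency[node]:
                if v not in visited:
                    visited.add(v)
                    frontier.append((v, path + [action]))
        return None

# A 4-node ring schema: the same structure can model any 4-room loop,
# regardless of what the rooms look like (perceptual grounding varies).
ring = GraphSchema([(0, "fwd", 1), (1, "fwd", 2), (2, "fwd", 3), (3, "fwd", 0)])
grounding = ring.bind(["red door", "hall", "kitchen", "stairs"])
print(ring.plan(0, 2))  # ['fwd', 'fwd']
```

Because planning operates on the latent graph rather than on raw observations, two perceptually aliased rooms bound to different slots remain distinguishable, which is the disambiguation role the abstract attributes to latent graph learning.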
Related papers
- Graph Memory Learning: Imitating Lifelong Remembering and Forgetting of Brain Networks [31.554027786868815]
This paper introduces a new concept of graph memory learning: Brain-inspired Graph Memory Learning (BGML).
BGML incorporates a multi-granular hierarchical progressive learning mechanism rooted in feature graph grain learning to mitigate potential conflict between memorization and forgetting.
In addition, to tackle the issue of unreliable structures in newly added incremental information, the paper introduces an information self-assessment ownership mechanism.
arXiv Detail & Related papers (2024-07-27T05:50:54Z) - A Topology-aware Graph Coarsening Framework for Continual Graph Learning [8.136809136959302]
Continual learning on graphs tackles the problem of training a graph neural network (GNN) where graph data arrive in a streaming fashion.
Traditional continual learning strategies such as Experience Replay can be adapted to streaming graphs.
We propose TA$\mathbb{C}$O, a (t)opology-(a)ware graph (co)arsening and (co)ntinual learning framework.
arXiv Detail & Related papers (2024-01-05T22:22:13Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Cross-view Self-Supervised Learning on Heterogeneous Graph Neural
Network via Bootstrapping [0.0]
Heterogeneous graph neural networks can represent information of heterogeneous graphs with excellent ability.
In this paper, we introduce a method that can generate good representations without generating a large number of pairs.
The proposed model achieved state-of-the-art performance compared with other methods on various real-world datasets.
arXiv Detail & Related papers (2022-01-10T13:36:05Z) - GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z) - Structural Landmarking and Interaction Modelling: on Resolution Dilemmas
in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose "SLIM", an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z) - Tensor Graph Convolutional Networks for Multi-relational and Robust
Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.