Learning Graph Structure With A Finite-State Automaton Layer
- URL: http://arxiv.org/abs/2007.04929v2
- Date: Fri, 6 Nov 2020 18:26:49 GMT
- Title: Learning Graph Structure With A Finite-State Automaton Layer
- Authors: Daniel D. Johnson, Hugo Larochelle, Daniel Tarlow
- Abstract summary: We study the problem of learning to derive abstract relations from the intrinsic graph structure.
We show how to learn these relations end-to-end by relaxing the problem into learning finite-state automata policies.
We demonstrate that this layer can find shortcuts in grid-world graphs and reproduce simple static analyses on Python programs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-based neural network models are producing strong results in a number of
domains, in part because graphs provide flexibility to encode domain knowledge
in the form of relational structure (edges) between nodes in the graph. In
practice, edges are used both to represent intrinsic structure (e.g., abstract
syntax trees of programs) and more abstract relations that aid reasoning for a
downstream task (e.g., results of relevant program analyses). In this work, we
study the problem of learning to derive abstract relations from the intrinsic
graph structure. Motivated by their power in program analyses, we consider
relations defined by paths on the base graph accepted by a finite-state
automaton. We show how to learn these relations end-to-end by relaxing the
problem into learning finite-state automata policies on a graph-based POMDP and
then training these policies using implicit differentiation. The result is a
differentiable Graph Finite-State Automaton (GFSA) layer that adds a new edge
type (expressed as a weighted adjacency matrix) to a base graph. We demonstrate
that this layer can find shortcuts in grid-world graphs and reproduce simple
static analyses on Python programs. Additionally, we combine the GFSA layer
with a larger graph-based model trained end-to-end on the variable misuse
program understanding task, and find that using the GFSA layer leads to better
performance than using hand-engineered semantic edges or other baseline methods
for adding learned edge types.
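The pipeline the abstract describes can be made concrete with a small sketch. The fragment below is a simplified illustration of the GFSA idea, not the authors' implementation: an automaton with S states walks the base graph, and the probability that a walk started at node i eventually halts at node j becomes the weight of the new edge from i to j. The function name, the shapes, and the node-independent `state_policy` are assumptions; in the actual layer the policy is conditioned on per-node observations of the POMDP and learned end-to-end.

```python
import numpy as np

def gfsa_layer(base_adj, state_policy, halt_prob):
    """Turn an automaton policy into a weighted adjacency matrix.

    base_adj:     (N, N) row-stochastic adjacency of the base graph.
    state_policy: (S, S) row-stochastic automaton state transitions
                  (assumed node-independent here for brevity).
    halt_prob:    (S,) probability of halting in each automaton state;
                  must be positive somewhere so walks terminate.
    """
    N, S = base_adj.shape[0], state_policy.shape[0]
    # Product Markov chain over (node, state) pairs: with probability
    # 1 - halt_prob[s] the walk moves to a neighbor and picks a new
    # automaton state; otherwise it absorbs at the current node.
    keep_going = (1.0 - halt_prob)[None, :, None, None]
    T = keep_going * base_adj[:, None, :, None] * state_policy[None, :, None, :]
    T = T.reshape(N * S, N * S)
    # Absorption matrix: R[(i, s), j] = halt_prob[s] if i == j else 0.
    R = np.zeros((N, S, N))
    R[np.arange(N), :, np.arange(N)] = halt_prob[None, :]
    R = R.reshape(N * S, N)
    # Absorption probabilities X = (I - T)^{-1} R; every walk starts
    # in automaton state 0 at its source node.
    X = np.linalg.solve(np.eye(N * S) - T, R)
    return X.reshape(N, S, N)[:, 0, :]
```

Because the output comes from a linear solve, gradients can be propagated through the solve itself (e.g., by writing the same code with jax.numpy), which mirrors the implicit-differentiation step described in the abstract; the returned matrix is exactly a weighted adjacency matrix for a new edge type.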
Related papers
- SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling (arXiv, 2024-04-21)
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
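By way of illustration, a concatenation-based update keeps a node's own features and its neighbourhood aggregate in separate slots before the MLP, which is what makes such updates harder to collapse than plain summation. This is a hedged sketch under assumed shapes, not SPGNN's published update rule:

```python
import numpy as np

def concat_conv(adj, h, w, b):
    """One concatenation-based graph convolution step.

    adj: (N, N) binary adjacency; h: (N, D) node features;
    w: (2 * D, D_out) weights; b: (D_out,) bias.
    """
    neigh = adj @ h                         # sum-aggregate the neighbours
    z = np.concatenate([h, neigh], axis=1)  # self and context stay separate
    return np.maximum(z @ w + b, 0.0)       # one ReLU MLP layer
```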
- GraphEdit: Large Language Models for Graph Structure Learning (arXiv, 2024-02-23)
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data.
Existing GSL methods heavily depend on explicit graph structural information as supervision signals.
We propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data.
- HUGE: Huge Unsupervised Graph Embeddings with TPUs (arXiv, 2023-07-26)
Graph embedding is the process of creating a continuous representation of the nodes in a graph.
A high-performance graph embedding architecture leveraging large amounts of high-bandwidth memory is presented.
We verify the embedding space quality on real and synthetic large-scale datasets.
- GRATIS: Deep Learning Graph Representation with Task-specific Topology and Multi-dimensional Edge Features (arXiv, 2022-11-19)
We propose GRATIS, the first general graph representation learning framework.
It can generate a strong graph representation with a task-specific topology and task-specific multi-dimensional edge features from an arbitrary input.
Our framework is effective, robust, and flexible, and is a plug-and-play module that can be combined with different backbones and Graph Neural Networks (GNNs).
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning (arXiv, 2022-04-11)
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective graph perturbations based on node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
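A minimal sketch of such edit operations, assuming explanation graphs stored as (source, relation, destination) triples; the rewiring operator below is an illustrative assumption, not the paper's exact perturbation:

```python
import random

def perturb_edges(edges, nodes, n_edits=1, seed=0):
    """Return a copy of `edges` with `n_edits` randomly rewired edges."""
    rng = random.Random(seed)
    edited = list(edges)
    for _ in range(n_edits):
        i = rng.randrange(len(edited))
        src, rel, dst = edited[i]
        # Rewire the edge to a different destination node, producing a
        # structurally close but semantically wrong negative example.
        edited[i] = (src, rel, rng.choice([n for n in nodes if n != dst]))
    return edited
```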
- Representing Long-Range Context for Graph Neural Networks with Global Attention (arXiv, 2022-01-21)
We propose the use of Transformer-based self-attention to learn long-range pairwise relationships.
Our method, which we call GraphTrans, applies a permutation-invariant Transformer module after a standard GNN module.
Our results suggest that purely learning-based approaches without graph structure may be suitable for learning high-level, long-range relationships on graphs.
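That architectural shape, local message passing followed by global self-attention, can be sketched as below; the one-layer mean-aggregation GNN and single-head attention are simplifying assumptions, not the paper's exact model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gnn_then_attention(adj, h, w_gnn, wq, wk, wv):
    # Local stage: one round of mean-aggregation message passing.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    h = np.maximum((adj @ h / deg) @ w_gnn, 0.0)
    # Global stage: all-pairs self-attention with no positional
    # encodings, so the block is permutation-invariant over nodes.
    q, k, v = h @ wq, h @ wk, h @ wv
    att = softmax(q @ k.T / np.sqrt(k.shape[1]))
    return att @ v
```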
- Towards Unsupervised Deep Graph Structure Learning (arXiv, 2022-01-17)
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
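As a hedged sketch of the anchor-graph idea: derive a fixed kNN anchor adjacency from the raw features and score the learned adjacency against it. Both the kNN construction and the cross-entropy agreement loss below are illustrative stand-ins for the paper's contrastive objective:

```python
import numpy as np

def knn_anchor(x, k=5):
    """Build a binary kNN anchor adjacency from raw features x: (N, D)."""
    x = np.asarray(x, dtype=float)
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)            # no self-loops
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k nearest nodes
    a = np.zeros_like(d)
    np.put_along_axis(a, nn, 1.0, axis=1)
    return np.maximum(a, a.T)              # symmetrize

def agreement_loss(learned_adj, anchor_adj, eps=1e-9):
    """Cross-entropy between learned edge probabilities and the anchor;
    minimizing it maximizes agreement between the two graphs."""
    p = np.clip(learned_adj, eps, 1 - eps)
    return -(anchor_adj * np.log(p) + (1 - anchor_adj) * np.log(1 - p)).mean()
```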
- Co-embedding of Nodes and Edges with Graph Neural Networks (arXiv, 2020-10-25)
Graph embedding is a way to transform and encode data whose structure lives in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework that embeds both nodes and edges into a latent feature space.
Our approach achieves or matches state-of-the-art performance on four graph learning tasks.
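One way to picture co-embedding, as a hedged sketch rather than CensNet's exact propagation rule, is a pair of alternating updates through the node-edge incidence matrix:

```python
import numpy as np

def co_embed_step(inc, h_node, h_edge, w_node, w_edge):
    """One co-embedding step. inc: (N, E) incidence matrix with
    inc[n, e] = 1 when node n is an endpoint of edge e;
    h_node: (N, D_n) and h_edge: (E, D_e) are current embeddings."""
    new_node = np.maximum((inc @ h_edge) @ w_node, 0.0)    # edges -> nodes
    new_edge = np.maximum((inc.T @ h_node) @ w_edge, 0.0)  # nodes -> edges
    return new_node, new_edge
```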
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting (arXiv, 2020-06-16)
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
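The simplest instance of substructure encoding is appending each node's triangle count to its features before standard message passing; GSN generalizes this to counts of arbitrary small substructures. A minimal sketch:

```python
import numpy as np

def add_triangle_counts(adj, h):
    """adj: (N, N) binary symmetric adjacency; h: (N, D) node features."""
    # diag(A^3)[i] counts closed 3-walks through node i, which is twice
    # the number of triangles containing i in a simple undirected graph.
    tri = np.diag(adj @ adj @ adj) / 2.0
    return np.concatenate([h, tri[:, None]], axis=1)
```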
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning (arXiv, 2020-03-15)
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs that is represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and delivers remarkable SSL performance on protein-to-protein interaction networks.
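A hedged sketch of that tensor formulation, assuming one adjacency slice per relation and a learned mixing weight per slice (an illustration, not TGCN's exact model):

```python
import numpy as np

def tensor_gcn_layer(adj_tensor, h, w, slice_weights):
    """adj_tensor: (K, N, N), one adjacency slice per relation;
    h: (N, D) node features; w: (D, D_out) shared weights;
    slice_weights: (K,) learned per-relation mixing coefficients."""
    out = sum(slice_weights[k] * (adj_tensor[k] @ h @ w)
              for k in range(adj_tensor.shape[0]))
    return np.maximum(out, 0.0)  # ReLU over the mixed propagation
```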
- Unsupervised Graph Embedding via Adaptive Graph Learning (arXiv, 2020-03-10)
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin on node clustering, node classification, and graph visualization tasks.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content and is not responsible for any consequences arising from its use.