Dynamic Vertex Replacement Grammars
- URL: http://arxiv.org/abs/2303.11553v2
- Date: Wed, 22 Mar 2023 01:13:23 GMT
- Title: Dynamic Vertex Replacement Grammars
- Authors: Daniel Gonzalez Cedre, Justus Isaiah Hibshman, Timothy La Fond, Grant Boquet, Tim Weninger
- Abstract summary: We show that DyVeRG grammars can be learned from, and used to generate, real-world dynamic graphs faithfully.
We also demonstrate their ability to forecast by computing dyvergence scores, a novel graph similarity measurement.
- Score: 6.3872634680339635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context-free graph grammars have shown a remarkable ability to model
structures in real-world relational data. However, graph grammars lack the
ability to capture time-changing phenomena since the left-to-right transitions
of a production rule do not represent temporal change. In the present work, we
describe dynamic vertex-replacement grammars (DyVeRG), which generalize vertex
replacement grammars in the time domain by providing a formal framework for
updating a learned graph grammar in accordance with modifications to its
underlying data. We show that DyVeRG grammars can be learned from, and used to
generate, real-world dynamic graphs faithfully while remaining
human-interpretable. We also demonstrate their ability to forecast by computing
dyvergence scores, a novel graph similarity measurement exposed by this
framework.
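
To make the core loop concrete, the following is a minimal, hypothetical Python sketch of the idea described in the abstract: summarize each graph snapshot as a bag of rule-like node signatures, merge counts across snapshots to "update" the grammar, and compare two grammars with a simple distributional distance. The rule abstraction, the function names, and the total-variation distance used here are illustrative assumptions for exposition only, not the paper's DyVeRG implementation or its actual definition of dyvergence.

```python
# Illustrative sketch only: NOT the DyVeRG reference implementation.
# The per-node (internal degree, boundary degree) "rule", the fixed node
# partition, and the total-variation score are all assumptions made here
# to mimic, very loosely, how a vertex-replacement grammar could be
# summarized, updated, and compared across snapshots of a dynamic graph.
from collections import Counter


def extract_rules(edges, partition):
    """Summarize one graph snapshot as a bag of toy 'replacement rules'.

    Each node contributes a rule (internal_degree, boundary_degree) relative
    to the given node -> cluster partition, loosely mirroring how a vertex
    replacement rule records a nonterminal's internal structure and its
    connections to the rest of the graph.
    """
    internal, boundary = Counter(), Counter()
    for u, v in edges:
        if partition[u] == partition[v]:
            internal[u] += 1
            internal[v] += 1
        else:
            boundary[u] += 1
            boundary[v] += 1
    return Counter((internal[n], boundary[n]) for n in partition)


def update_grammar(grammar, new_edges, partition):
    """'Update' a learned grammar for a new snapshot by merging in its rule
    counts, so rules shared across time are reinforced rather than relearned."""
    return grammar + extract_rules(new_edges, partition)


def dyvergence_like_score(grammar_a, grammar_b):
    """Toy dissimilarity between two grammars: total-variation distance
    between their normalized rule-frequency distributions (0 = identical)."""
    za, zb = sum(grammar_a.values()), sum(grammar_b.values())
    keys = set(grammar_a) | set(grammar_b)
    return 0.5 * sum(abs(grammar_a[k] / za - grammar_b[k] / zb) for k in keys)


if __name__ == "__main__":
    # Two snapshots of a 6-node dynamic graph, with a fixed 2-cluster partition.
    partition = {n: (0 if n < 3 else 1) for n in range(6)}
    g_t0 = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)]
    g_t1 = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3), (1, 4)]

    grammar_t0 = extract_rules(g_t0, partition)
    grammar_t1 = update_grammar(grammar_t0, g_t1, partition)
    print("score(t0, t1) =", dyvergence_like_score(grammar_t0, grammar_t1))
```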
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Transformers as Graph-to-Graph Models [13.630495199720423]
We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case.
Our Graph-to-Graph Transformer architecture makes this ability explicit, by inputting graph edges into the attention weight computations and predicting graph edges with attention-like functions.
arXiv Detail & Related papers (2023-10-27T07:21:37Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Can Language Models Capture Graph Semantics? From Graphs to Language Model and Vice-Versa [5.340730281227837]
We conduct a study to examine whether deep learning models can compress a graph and then output the same graph with most of the semantics intact.
Our experiments show that Transformer models are not able to express the full semantics of the input knowledge graph.
arXiv Detail & Related papers (2022-06-18T18:12:20Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- Geometry-Aware Supertagging with Heterogeneous Dynamic Convolutions [0.7868449549351486]
We revisit constructive supertagging from a graph-theoretic perspective.
We propose a framework based on heterogeneous dynamic graph convolutions.
We test our approach on a number of categorial grammar datasets spanning different languages.
arXiv Detail & Related papers (2022-03-23T07:07:11Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Polynomial Graph Parsing with Non-Structural Reentrancies [0.2867517731896504]
Graph-based semantic representations are valuable in natural language processing.
We introduce graph extension grammar, which generates graphs with non-structural reentrancies.
We provide a parsing algorithm for graph extension grammars, which is proved to be correct and to run in polynomial time.
arXiv Detail & Related papers (2021-05-05T13:05:01Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions, such as denoising scaffolds, to enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- Scene Graph Modification Based on Natural Language Commands [90.0662899539489]
Structured representations like graphs and parse trees play a crucial role in many Natural Language Processing systems.
In this paper, we explore the novel problem of graph modification, where the systems need to learn how to update an existing graph given a new command from the user.
arXiv Detail & Related papers (2020-10-06T10:01:19Z)