Can Language Models Capture Graph Semantics? From Graphs to Language
Model and Vice-Versa
- URL: http://arxiv.org/abs/2206.09259v1
- Date: Sat, 18 Jun 2022 18:12:20 GMT
- Title: Can Language Models Capture Graph Semantics? From Graphs to Language
Model and Vice-Versa
- Authors: Tarun Garg, Kaushik Roy, Amit Sheth
- Abstract summary: We conduct a study to examine whether a deep learning model can compress a graph and then output the same graph with most of the semantics intact.
Our experiments show that Transformer models are not able to express the full semantics of the input knowledge graph.
- Score: 5.340730281227837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge Graphs are a great resource for capturing semantic knowledge in terms
of entities and the relationships between them. However, current deep
learning models take distributed representations, or vectors, as input. Thus,
the graph is compressed into a vectorized representation. We conduct a study to
examine whether a deep learning model can compress a graph and then output the
same graph with most of the semantics intact. Our experiments show that
Transformer models are not able to express the full semantics of the input
knowledge graph. We find that this is due to the disparity between the
directed, relation- and type-based information contained in a Knowledge
Graph and the fully connected, undirected token-token graphical interpretation
of the Transformer attention matrix.
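To make that disparity concrete, the following is a minimal illustrative sketch (not the paper's code; the triples, serialization, and embedding sizes are invented) contrasting a directed, relation-typed knowledge-graph adjacency with the dense, undirected token-token matrix produced by softmax attention over the serialized triples:

```python
# Minimal sketch: directed, typed KG edges vs. a dense attention matrix.
# Everything here (triples, dimensions) is a toy assumption for illustration.
import numpy as np

triples = [("Paris", "capital_of", "France"),
           ("France", "located_in", "Europe")]

# Serialize the graph into a flat token sequence, as a sequence model would see it.
tokens = [tok for triple in triples for tok in triple]

# Directed, relation-typed adjacency: order matters and every edge carries a label.
entity_index = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
typed_edges = {(entity_index[h], entity_index[t]): r for h, r, t in triples}

# Toy self-attention over random token embeddings: a fully connected,
# direction-free token-token matrix with no notion of relation types.
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(tokens), 8))
scores = emb @ emb.T / np.sqrt(emb.shape[1])
attention = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

print("typed directed edges:", typed_edges)
print("attention connects every token pair:", attention.shape)
```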
Related papers
- Language Independent Neuro-Symbolic Semantic Parsing for Form
Understanding [11.042088913869462]
We propose a unique entity-relation graph parsing method for scanned forms called LAGNN.
Our model parses a form into a word-relation graph in order to identify entities and relations jointly.
Our model relies only on the relative spacing between bounding boxes from layout information, which facilitates easy transfer across languages.
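As a rough illustration of that layout-only idea (a hypothetical sketch, not LAGNN's actual feature set; the box coordinates are invented), pairwise features can be built purely from relative spacing between bounding boxes, so nothing depends on the language inside the boxes:

```python
# Hypothetical sketch: language-independent layout features between two boxes.
def relative_spacing(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max); return center offsets and gaps."""
    ax, ay = (box_a[0] + box_a[2]) / 2.0, (box_a[1] + box_a[3]) / 2.0
    bx, by = (box_b[0] + box_b[2]) / 2.0, (box_b[1] + box_b[3]) / 2.0
    horizontal_gap = max(box_b[0] - box_a[2], box_a[0] - box_b[2], 0.0)
    vertical_gap = max(box_b[1] - box_a[3], box_a[1] - box_b[3], 0.0)
    return (bx - ax, by - ay, horizontal_gap, vertical_gap)

# Example: a field label and the value box to its right on a scanned form.
print(relative_spacing((10, 10, 60, 25), (70, 10, 140, 25)))
```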
arXiv Detail & Related papers (2023-05-08T05:03:07Z)
- Probing Graph Representations [77.7361299039905]
We use a probing framework to quantify the amount of meaningful information captured in graph representations.
Our findings on molecular datasets show the potential of probing for understanding the inductive biases of graph-based models.
We advocate for probing as a useful diagnostic tool for evaluating graph-based models.
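The probing recipe can be pictured with a minimal sketch (an assumed setup, not the paper's framework; the embeddings and labels below are random stand-ins): freeze the graph encoder, train a simple classifier on its outputs, and read its accuracy as a proxy for how much of the target property the representation encodes:

```python
# Hypothetical probing sketch: a linear probe over frozen graph embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
graph_embeddings = rng.normal(size=(500, 64))               # stand-in for frozen encoder outputs
property_labels = (graph_embeddings[:, 0] > 0).astype(int)  # stand-in target property

X_tr, X_te, y_tr, y_te = train_test_split(
    graph_embeddings, property_labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```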
arXiv Detail & Related papers (2023-03-07T14:58:18Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Relphormer: Relational Graph Transformer for Knowledge Graph Representations [25.40961076988176]
We propose a new variant of the Transformer for knowledge graph representations, dubbed Relphormer.
We propose a novel structure-enhanced self-attention mechanism to encode the relational information and keep the semantic information within entities and relations.
Experimental results on six datasets show that Relphormer can obtain better performance compared with baselines.
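One common way to realize structure-enhanced self-attention, sketched below as an assumption rather than Relphormer's actual mechanism, is to add a learned per-relation bias to the attention logits of token pairs that are linked in the knowledge graph:

```python
# Hypothetical sketch: attention logits plus a per-relation additive bias.
import numpy as np

def structure_biased_attention(q, k, relation_ids, relation_bias):
    """q, k: (n, d); relation_ids: (n, n) ints with -1 where no relation exists."""
    logits = q @ k.T / np.sqrt(q.shape[1])
    bias = np.where(relation_ids >= 0, relation_bias[relation_ids], 0.0)
    logits = logits + bias
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, d, num_relations = 4, 8, 3
relation_ids = np.full((n, n), -1)
relation_ids[0, 1] = 2  # a typed edge between tokens 0 and 1
attention = structure_biased_attention(rng.normal(size=(n, d)),
                                       rng.normal(size=(n, d)),
                                       relation_ids,
                                       rng.normal(size=num_relations))
print(attention.round(2))
```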
arXiv Detail & Related papers (2022-05-22T15:30:18Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
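The perturbation idea can be sketched as follows (a hypothetical illustration of node and edge edits in general, not the paper's exact operations; the example graph is invented), for instance to produce negatives in a contrastive objective:

```python
# Hypothetical sketch: corrupt an explanation graph with a single node or edge edit.
import random

def perturb_graph(nodes, edges, rng=None):
    """nodes: list of labels; edges: list of (head, relation, tail) triples."""
    rng = rng or random.Random(0)
    nodes, edges = list(nodes), list(edges)
    op = rng.choice(["delete_edge", "flip_edge", "relabel_node"])
    if op == "delete_edge" and edges:
        edges.pop(rng.randrange(len(edges)))
    elif op == "flip_edge" and edges:
        h, r, t = edges.pop(rng.randrange(len(edges)))
        edges.append((t, r, h))  # reverse the edge direction
    elif op == "relabel_node" and nodes:
        old = nodes[rng.randrange(len(nodes))]
        new = old + " (corrupted)"
        nodes = [new if n == old else n for n in nodes]
        edges = [(new if h == old else h, r, new if t == old else t) for h, r, t in edges]
    return nodes, edges

nodes = ["rain", "wet ground", "slippery road"]
edges = [("rain", "causes", "wet ground"), ("wet ground", "causes", "slippery road")]
print(perturb_graph(nodes, edges))
```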
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
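A minimal sketch of the latent population-graph idea (an assumption in the spirit of the summary above, not the GiG architecture; the embeddings are random stand-ins) connects each input sample to its nearest neighbours in embedding space so that later layers can exploit sample-to-sample relationships:

```python
# Hypothetical sketch: a kNN latent graph over per-sample embeddings.
import numpy as np

def knn_latent_graph(sample_embeddings, k=3):
    """Return a boolean adjacency linking each sample to its k nearest neighbours."""
    dists = np.linalg.norm(sample_embeddings[:, None] - sample_embeddings[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbours = np.argsort(dists, axis=1)[:, :k]
    adjacency = np.zeros(dists.shape, dtype=bool)
    rows = np.repeat(np.arange(len(sample_embeddings)), k)
    adjacency[rows, neighbours.ravel()] = True
    return adjacency

rng = np.random.default_rng(0)
print(knn_latent_graph(rng.normal(size=(6, 4))).astype(int))
```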
arXiv Detail & Related papers (2022-04-01T10:01:37Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
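The discrepancy idea can be illustrated with a minimal sketch (an assumed stand-in objective, not D-SLA's actual loss; all tensors below are random): rather than only pulling positives together and pushing negatives apart, the embedding distance between an original and a perturbed graph is regressed onto the known number of edits that produced the perturbation:

```python
# Hypothetical sketch: tie embedding distance to the true amount of perturbation.
import numpy as np

def discrepancy_loss(emb_original, emb_perturbed, num_edits, scale=1.0):
    """Penalize mismatch between embedding distance and the known edit count."""
    distance = np.linalg.norm(emb_original - emb_perturbed, axis=-1)
    return float(np.mean((distance - scale * num_edits) ** 2))

rng = np.random.default_rng(0)
emb_orig = rng.normal(size=(8, 16))                  # stand-in graph encoder outputs
emb_pert = emb_orig + 0.1 * rng.normal(size=(8, 16))
edits = rng.integers(1, 4, size=8)                   # edits that created each perturbation
print("loss:", discrepancy_loss(emb_orig, emb_pert, edits))
```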
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Generating a Doppelganger Graph: Resembling but Distinct [5.618335078130568]
We propose an approach to generating a doppelganger graph that resembles a given one in many graph properties.
The approach is an orchestration of graph representation learning, generative adversarial networks, and graph realization algorithms.
arXiv Detail & Related papers (2021-01-23T22:08:27Z)
- Auto-decoding Graphs [91.3755431537592]
The generative model is an auto-decoder that learns to synthesize graphs from latent codes.
Graphs are synthesized using self-attention modules that are trained to identify likely connectivity patterns.
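A minimal sketch of the auto-decoding idea (a much-simplified stand-in built from the summary, using an inner-product decoder instead of the self-attention modules the summary mentions; dimensions and weights are random) maps a per-graph latent code to edge probabilities, so graphs are synthesized directly from the code:

```python
# Hypothetical sketch: decode a latent code into adjacency (edge) probabilities.
import numpy as np

def decode_graph(latent_code, weights, num_nodes):
    """Project the latent code to per-node features, then score every node pair."""
    node_features = (latent_code @ weights).reshape(num_nodes, -1)
    edge_logits = node_features @ node_features.T
    return 1.0 / (1.0 + np.exp(-edge_logits))  # edge probabilities

rng = np.random.default_rng(0)
num_nodes, latent_dim, node_dim = 5, 16, 4
weights = rng.normal(size=(latent_dim, num_nodes * node_dim)) * 0.1
latent_code = rng.normal(size=latent_dim)
print(decode_graph(latent_code, weights, num_nodes).round(2))
```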
arXiv Detail & Related papers (2020-06-04T14:23:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.