Edge: Enriching Knowledge Graph Embeddings with External Text
- URL: http://arxiv.org/abs/2104.04909v1
- Date: Sun, 11 Apr 2021 03:47:06 GMT
- Title: Edge: Enriching Knowledge Graph Embeddings with External Text
- Authors: Saed Rezayi, Handong Zhao, Sungchul Kim, Ryan A. Rossi, Nedim Lipka,
Sheng Li
- Abstract summary: We propose a knowledge graph enrichment and embedding framework named Edge.
Given an original knowledge graph, we first generate a rich but noisy augmented graph using external texts at both the semantic and structural levels.
To distill the relevant knowledge and suppress the introduced noise, we design a graph alignment term in a shared embedding space between the original and augmented graphs.
- Score: 32.01476220906261
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge graphs suffer from sparsity which degrades the quality of
representations generated by various methods. While there is an abundance of
textual information throughout the web and many existing knowledge bases,
aligning information across these diverse data sources remains a challenge in
the literature. Previous work has partially addressed this issue by enriching
knowledge graph entities based on "hard" co-occurrence of words present in the
entities of the knowledge graphs and external text, while we achieve "soft"
augmentation by proposing a knowledge graph enrichment and embedding framework
named Edge. Given an original knowledge graph, we first generate a rich but
noisy augmented graph using external texts at both the semantic and structural
levels. To
distill the relevant knowledge and suppress the introduced noise, we design a
graph alignment term in a shared embedding space between the original graph and
augmented graph. To enhance the embedding learning on the augmented graph, we
further regularize the locality relationship of the target entity based on negative
sampling. Experimental results on four benchmark datasets demonstrate the
robustness and effectiveness of Edge in link prediction and node
classification.
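The two objectives described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the array shapes, loss forms, and variable names are illustrative assumptions only (an L2 alignment term between the two graphs' embeddings, plus a logistic negative-sampling term on the augmented graph):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for entities shared by the original graph G and the
# augmented graph G'. Sizes are illustrative, not from the paper.
n_entities, dim = 5, 8
E_orig = rng.normal(size=(n_entities, dim))  # learned on the original graph
E_aug = rng.normal(size=(n_entities, dim))   # learned on the augmented graph

def alignment_loss(E1, E2):
    """Pull each entity's two embeddings together in the shared space
    (mean squared L2 distance over entities)."""
    return float(np.mean(np.sum((E1 - E2) ** 2, axis=1)))

def negative_sampling_loss(E, target, pos, negs):
    """Encourage the target entity to score higher with a positive
    neighbor than with sampled negatives (logistic loss on dot products)."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    loss = -np.log(sigmoid(E[target] @ E[pos]))
    for n in negs:
        loss -= np.log(sigmoid(-(E[target] @ E[n])))
    return float(loss)

# A combined objective in the spirit of the abstract: align the two graphs
# while regularizing locality on the augmented graph.
total = alignment_loss(E_orig, E_aug) + negative_sampling_loss(E_aug, 0, 1, [2, 3])
```

In a real training loop both terms would be minimized jointly over learnable embeddings; here they are just evaluated on fixed random vectors.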
Related papers
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z) - GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models [33.3678293782131]
This work studies self-supervised graph learning for text-attributed graphs (TAGs).
We aim to improve view generation through language supervision.
This is driven by the prevalence of textual attributes in real applications, which complement graph structures with rich semantic information.
arXiv Detail & Related papers (2024-06-17T17:49:19Z) - Enhancing Dialogue Generation via Dynamic Graph Knowledge Aggregation [23.54754465832362]
In conventional graph neural networks (GNNs), message passing on a graph is independent of the text.
This training regime leads to a semantic gap between graph knowledge and text.
We propose a novel framework for knowledge graph enhanced dialogue generation.
arXiv Detail & Related papers (2023-06-28T13:21:00Z) - ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
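The batch-wise CLIP-inspired objective mentioned in this summary can be sketched as a symmetric InfoNCE loss. This is a generic illustration, not ConGraT's actual code; the batch size, dimensionality, and temperature are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch: paired text and node embeddings for the same graph nodes.
batch, dim = 4, 16
text_emb = rng.normal(size=(batch, dim))  # stand-in for LM outputs
node_emb = rng.normal(size=(batch, dim))  # stand-in for GNN outputs

def clip_style_loss(T, N, temperature=0.07):
    """Symmetric InfoNCE: matching (text_i, node_i) pairs are positives,
    all other pairs in the batch serve as negatives."""
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)
    logits = (T @ N.T) / temperature  # (batch, batch) cosine similarities

    def xent(lg):
        # cross-entropy with the diagonal as the correct class
        lg = lg - lg.max(axis=1, keepdims=True)
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the text-to-node and node-to-text directions.
    return float((xent(logits) + xent(logits.T)) / 2)

loss = clip_style_loss(text_emb, node_emb)
```

Minimizing this loss pulls each text embedding toward its paired node embedding in the common latent space while pushing it away from the other nodes in the batch.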
arXiv Detail & Related papers (2023-05-23T17:53:30Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - GCNBoost: Artwork Classification by Label Propagation through a
Knowledge Graph [32.129005474301735]
Contextual information is often the key to structuring such real-world data, and we propose to use it in the form of a knowledge graph.
We propose a novel use of a knowledge graph that is constructed from annotated and pseudo-labeled data.
With label propagation, we boost artwork classification by training a model using a graph convolutional network.
arXiv Detail & Related papers (2021-05-25T11:50:05Z) - ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge may exceed what is needed, since the output description typically covers only the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z) - Structure-Augmented Text Representation Learning for Efficient Knowledge
Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, rely on the text of graph triples and triple-level contextualized representations.
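The graph-embedding paradigm contrasted in this summary can be illustrated with TransE's translation-based scoring. The vectors below are toy values rather than trained embeddings, and the distance-based score is the standard TransE formulation:

```python
import numpy as np

# TransE models a valid triple (head, relation, tail) as a translation
# in embedding space: h + r ≈ t.
h = np.array([1.0, 0.0, 2.0])         # head entity embedding
r = np.array([0.5, 1.0, -1.0])        # relation embedding
t_true = np.array([1.5, 1.0, 1.0])    # tail consistent with h + r
t_false = np.array([5.0, -3.0, 0.0])  # an unrelated entity

def transe_score(h, r, t):
    """Distance between translated head and tail; lower means a more
    plausible triple."""
    return float(np.linalg.norm(h + r - t))

score_true = transe_score(h, r, t_true)
score_false = transe_score(h, r, t_false)
```

Completion methods in this family rank candidate tails by this score; textual encoders like KG-BERT instead feed the triple's surface text through a language model.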
arXiv Detail & Related papers (2020-04-30T13:50:34Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation
Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z) - Knowledge Graphs [43.06435841693428]
We motivate and contrast various graph-based data models and query languages that are used for knowledge graphs.
We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques.
We conclude with high-level future research directions for knowledge graphs.
arXiv Detail & Related papers (2020-03-04T20:20:32Z) - Bridging Knowledge Graphs to Generate Scene Graphs [49.69377653925448]
We propose a novel graph-based neural network that iteratively propagates information between the two graphs, as well as within each of them.
Our Graph Bridging Network, GB-Net, successively infers edges and nodes, allowing it to simultaneously exploit and refine the rich, heterogeneous structure of the interconnected scene and commonsense graphs.
arXiv Detail & Related papers (2020-01-07T23:35:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.