Integrating Knowledge Graph embedding and pretrained Language Models in
Hypercomplex Spaces
- URL: http://arxiv.org/abs/2208.02743v3
- Date: Wed, 16 Aug 2023 21:03:42 GMT
- Authors: Mojtaba Nayyeri, Zihao Wang, Mst. Mahfuja Akter, Mirza Mohtashim Alam,
Md Rashad Al Hasan Rony, Jens Lehmann, Steffen Staab
- Abstract summary: We build on existing strong representations of single modalities and use hypercomplex algebra to represent both (i) single-modality embeddings and (ii) the interactions between different modalities.
More specifically, we suggest Dihedron and Quaternion representations of 4D hypercomplex numbers to integrate four modalities, namely structural knowledge graph embeddings and word-level, sentence-level, and document-level representations.
Our unified vector representation scores the plausibility of labelled edges via Hamilton and Dihedron products, thus modeling pairwise interactions between different modalities.
- Score: 29.71047521165222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge Graphs, such as Wikidata, represent knowledge through both
structural and textual modalities. For each of the two modalities, dedicated
approaches, graph embedding models and language models respectively, learn
patterns that allow for predicting novel structural knowledge. Few approaches
have integrated learning and inference over both modalities, and those that
exist could only partially exploit the interaction of structural and textual
knowledge. In our approach, we build on existing strong representations of the
single modalities and use hypercomplex algebra to represent both (i)
single-modality embeddings and (ii) the interaction between different
modalities and their complementary means of knowledge representation. More
specifically, we suggest Dihedron and Quaternion representations of 4D
hypercomplex numbers to integrate four modalities, namely structural knowledge
graph embeddings, word-level representations (e.g.\ Word2vec, Fasttext),
sentence-level representations (Sentence transformer), and document-level
representations (sentence transformer, Doc2vec). Our unified vector
representation scores the plausibility of labelled edges via Hamilton and
Dihedron products, thus modeling pairwise interactions between different
modalities. Extensive experimental evaluation on standard benchmark datasets
shows that our two new models, which exploit abundant textual information
alongside sparse structural knowledge, outperform prior approaches on link
prediction tasks.
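The Hamilton product used for scoring can be sketched in a few lines. The snippet below is a minimal illustration in the style of quaternion knowledge-graph embedding models (e.g. QuatE): each entity and relation is a vector of quaternions, the head is rotated by the unit-normalized relation via the Hamilton product, and plausibility is the inner product with the tail. Function names and the exact scoring form are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of quaternion arrays of shape (..., 4),
    where the last axis holds (a, b, c, d) of a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = np.moveaxis(q1, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(q2, -1, 0)
    return np.stack([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i component
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j component
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k component
    ], axis=-1)

def score_triple(head, relation, tail):
    """Illustrative QuatE-style plausibility score for a labelled edge:
    rotate the head embedding by the unit-normalized relation, then take
    the inner product with the tail. Embeddings have shape (dim, 4)."""
    rel = relation / np.linalg.norm(relation, axis=-1, keepdims=True)
    return float(np.sum(hamilton_product(head, rel) * tail))
```

The non-commutativity of the Hamilton product (i*j = k but j*i = -k) is what lets such models distinguish head from tail roles in a relation.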
Related papers
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z)
- CADGE: Context-Aware Dialogue Generation Enhanced with Graph-Structured Knowledge Aggregation [25.56539617837482]
A novel context-aware graph-attention model (Context-aware GAT) is proposed.
It assimilates global features from relevant knowledge graphs through a context-enhanced knowledge aggregation mechanism.
Empirical results demonstrate that our framework outperforms conventional GNN-based language models in terms of performance.
arXiv Detail & Related papers (2023-05-10T16:31:35Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion [14.537509860565706]
We propose a plug-and-play framework VEM2L over sparse Knowledge Graphs to fuse knowledge extracted from text and structure messages into a unity.
Specifically, we partition the knowledge acquired by the models into two non-overlapping parts.
We also propose a new fusion strategy, justified via the Variational EM algorithm, to fuse the generalization abilities of the models.
arXiv Detail & Related papers (2022-07-04T15:50:21Z)
- Building Knowledge-Grounded Dialogue Systems with Graph-Based Semantic Modeling [43.0554223015728]
The knowledge-grounded dialogue task aims to generate responses that convey information from given knowledge documents.
We propose a novel graph structure, Grounded Graph, that models the semantic structure of both dialogue and knowledge.
We also propose a Grounded Graph Aware Transformer to enhance knowledge-grounded response generation.
arXiv Detail & Related papers (2022-04-27T03:31:46Z)
- Learning Representations of Entities and Relations [0.0]
This thesis focuses on improving knowledge graph representation with the aim of tackling the link prediction task.
The first contribution is HypER, a convolutional model that simplifies prior convolutional architectures while improving link prediction performance.
The second contribution is TuckER, a relatively straightforward linear model, which, at the time of its introduction, obtained state-of-the-art link prediction performance.
The third contribution is MuRP, the first multi-relational graph representation model embedded in hyperbolic space.
arXiv Detail & Related papers (2022-01-31T09:24:43Z)
- Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
arXiv Detail & Related papers (2020-11-18T20:21:19Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to the text of graph triples and triple-level contextualized representations.
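The translational intuition behind TransE can be stated in one line: a triple (h, r, t) is plausible when the translated head h + r lies close to the tail t in embedding space. The sketch below illustrates that scoring rule; the function name and the choice of L1 norm are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def transe_score(head, relation, tail, ord=1):
    """Illustrative TransE plausibility score: the negative distance
    between the translated head (head + relation) and the tail, so
    higher (closer to zero) means more plausible."""
    return -np.linalg.norm(head + relation - tail, ord=ord)
```

A perfectly consistent triple scores 0; implausible triples score increasingly negative as the translation misses the tail.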
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.