Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
- URL: http://arxiv.org/abs/2001.02332v1
- Date: Wed, 8 Jan 2020 01:19:08 GMT
- Title: Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
- Authors: Pengda Qin, Xin Wang, Wenhu Chen, Chunyun Zhang, Weiran Xu, William Yang Wang
- Abstract summary: We consider a novel formulation, zero-shot learning, to free this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
- Score: 96.73259297063619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale knowledge graphs (KGs) are increasingly important in
current information systems. To expand the coverage of KGs, previous studies on
knowledge graph completion need to collect adequate training instances for
newly-added relations. In this paper, we consider a novel formulation,
zero-shot learning, to free this cumbersome curation. For newly-added
relations, we attempt to learn their semantic features from their text
descriptions and hence recognize the facts of unseen relations with no examples
being seen. For this purpose, we leverage Generative Adversarial Networks
(GANs) to establish the connection between the text and knowledge graph
domains: the generator learns to generate reasonable relation embeddings merely
from noisy text descriptions. Under this setting, zero-shot learning is
naturally converted to a traditional supervised classification task.
Empirically, our method is model-agnostic and can potentially be applied to any
version of KG embeddings; it consistently yields performance improvements on
the NELL and Wiki datasets.
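The mechanism described in the abstract is concrete enough to sketch: a generator maps a text-description embedding plus random noise to a candidate relation embedding; a discriminator learns to tell generated relation embeddings apart from real ones obtained for seen relations; and at test time the generated embedding for an unseen relation plugs into an ordinary triple scorer. The following is a minimal PyTorch sketch, not the authors' code; the dimensions, architectures, and TransE-style scorer are illustrative assumptions.

```python
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, REL_DIM = 768, 64, 200  # assumed sizes

class Generator(nn.Module):
    """Maps a text-description embedding + noise to a relation embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM + NOISE_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, REL_DIM),
        )
    def forward(self, text_emb, noise):
        return self.net(torch.cat([text_emb, noise], dim=-1))

class Discriminator(nn.Module):
    """Scores whether a relation embedding looks 'real' (from seen relations)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(REL_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, rel_emb):
        return self.net(rel_emb)

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

# One adversarial step on a toy batch (real embeddings would come from a
# pre-trained KG embedding model over *seen* relations).
text_emb = torch.randn(32, TEXT_DIM)
real_rel = torch.randn(32, REL_DIM)

# Discriminator step: distinguish real from generated embeddings.
fake_rel = G(text_emb, torch.randn(32, NOISE_DIM)).detach()
loss_d = bce(D(real_rel), torch.ones(32, 1)) + bce(D(fake_rel), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator.
fake_rel = G(text_emb, torch.randn(32, NOISE_DIM))
loss_g = bce(D(fake_rel), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# At test time, an unseen relation's generated embedding can score candidate
# tails for a head entity, e.g. with a TransE-style distance:
def score(head_emb, rel_emb, tail_emb):
    return -(head_emb + rel_emb - tail_emb).norm(dim=-1)
```

Once a plausible embedding exists for an unseen relation, recognizing its facts reduces to the same scoring used for seen relations, which is how the paper converts zero-shot learning into a supervised classification task.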
Related papers
- Inference over Unseen Entities, Relations and Literals on Knowledge Graphs [1.7474352892977463]
Knowledge graph embedding models have been successfully applied in the transductive setting to tackle various challenging tasks.
We propose the attentive byte-pair encoding layer (BytE) to construct a triple embedding from a sequence of byte-pair encoded subword units of entities and relations.
BytE leads to massive feature reuse via weight tying, since it forces a knowledge graph embedding model to learn embeddings for subword units instead of entities and relations directly (a sketch of this idea follows the list below).
arXiv Detail & Related papers (2024-10-09T10:20:54Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities for conversational recommender systems.
KERL uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Rule-Guided Joint Embedding Learning over Knowledge Graphs [6.831227021234669]
This paper introduces a novel model that incorporates both contextual and literal information into entity and relation embeddings.
For contextual information, we assess its significance through confidence and relatedness metrics.
We validate our model performance with thorough experiments on two established benchmark datasets.
arXiv Detail & Related papers (2023-12-01T19:58:31Z)
- KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction [0.0]
We introduce a new entity/relation embedding layer that learns to differentiate distinctive entity and relation types.
We show that further pre-training the language models with this additional embedding layer on triples extracted from the knowledge graph, followed by the standard fine-tuning phase, sets a new state of the art for the link prediction task on the benchmark datasets.
arXiv Detail & Related papers (2022-11-04T20:38:12Z)
- I Know What You Do Not Know: Knowledge Graph Embedding via Co-distillation Learning [16.723470319188102]
Knowledge graph embedding seeks to learn vector representations for entities and relations.
Recent studies have used pre-trained language models to learn embeddings based on the textual information of entities and relations.
We propose CoLE, a Co-distillation Learning method for KG Embedding that exploits the complementarity of graph structures and text information (see the sketch after this list).
arXiv Detail & Related papers (2022-08-21T07:34:37Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graphs (KGs) play an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)
- Knowledge Graph Completion with Text-aided Regularization [2.8361571014635407]
Knowledge graph completion is the task of expanding a knowledge graph/base by estimating possible entities.
Traditional approaches mainly focus on using the existing graphical information that is intrinsic to the graph.
We try numerous ways of using extracted or raw textual information to help existing KG embedding frameworks reach better prediction results (one such text regularizer is sketched after this list).
arXiv Detail & Related papers (2021-01-22T06:10:09Z)
- JAKET: Joint Pre-training of Knowledge Graph and Language Understanding [73.43768772121985]
We propose a novel joint pre-training framework, JAKET, to model both the knowledge graph and language.
The knowledge module and language module provide essential information to mutually assist each other.
Our design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains.
arXiv Detail & Related papers (2020-10-02T05:53:36Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme (sketched after this list).
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
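Sketches of selected techniques
The BytE entry above is easy to picture in code: instead of one embedding row per entity or relation, embeddings are kept only for byte-pair subword units, and an entity's vector is composed from its subword vectors, so even unseen surface forms receive embeddings. A hedged sketch; the tokenization, mean pooling (the paper's layer is attentive), and DistMult-style scoring are simplifying assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SubwordKGE(nn.Module):
    """Composes entity/relation embeddings from shared subword-unit embeddings
    (weight tying across all entities and relations), BytE-style."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        # One table for *subword units*, not one row per entity/relation.
        self.subword_emb = nn.Embedding(vocab_size, dim, padding_idx=0)

    def embed(self, subword_ids):  # (batch, max_units) of padded ids
        vecs = self.subword_emb(subword_ids)             # (B, U, D)
        mask = (subword_ids != 0).unsqueeze(-1).float()  # ignore padding
        return (vecs * mask).sum(1) / mask.sum(1).clamp(min=1)  # mean pool

    def score(self, head_ids, rel_ids, tail_ids):
        # DistMult-style triple score over the composed embeddings.
        h, r, t = self.embed(head_ids), self.embed(rel_ids), self.embed(tail_ids)
        return (h * r * t).sum(-1)

# Toy usage with a hypothetical BPE vocabulary of 10k units.
model = SubwordKGE(vocab_size=10_000)
h = torch.randint(1, 10_000, (4, 6))  # 4 heads, up to 6 subword units each
r = torch.randint(1, 10_000, (4, 3))
t = torch.randint(1, 10_000, (4, 6))
print(model.score(h, r, t).shape)  # torch.Size([4])
```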
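CoLE's co-distillation can be illustrated as two models teaching each other: a graph-structure model and a text-based model each produce a distribution over candidate tail entities, and each is regularized toward the other's detached, temperature-softened predictions in addition to its own cross-entropy loss. The sketch below shows only the mutual-distillation loss pattern under assumed shapes; all names and hyperparameters are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def co_distill_losses(logits_struct, logits_text, labels, temp=2.0, alpha=0.5):
    """Mutual distillation: each model fits the labels plus the other's
    softened predictions (detached, so gradients do not cross over)."""
    ce_s = F.cross_entropy(logits_struct, labels)
    ce_t = F.cross_entropy(logits_text, labels)
    log_p_s = F.log_softmax(logits_struct / temp, dim=-1)
    log_p_t = F.log_softmax(logits_text / temp, dim=-1)
    kl_s = F.kl_div(log_p_s, F.softmax(logits_text.detach() / temp, dim=-1),
                    reduction="batchmean") * temp ** 2
    kl_t = F.kl_div(log_p_t, F.softmax(logits_struct.detach() / temp, dim=-1),
                    reduction="batchmean") * temp ** 2
    return (1 - alpha) * ce_s + alpha * kl_s, (1 - alpha) * ce_t + alpha * kl_t

# Toy batch: 8 queries, 100 candidate tail entities.
logits_struct = torch.randn(8, 100, requires_grad=True)
logits_text = torch.randn(8, 100, requires_grad=True)
labels = torch.randint(0, 100, (8,))
loss_s, loss_t = co_distill_losses(logits_struct, logits_text, labels)
(loss_s + loss_t).backward()
```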
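The text-aided regularization entry suggests a simple recipe worth sketching: train a standard structural objective (TransE here, as one common choice) and add a penalty keeping each entity embedding close to a projection of a pre-computed text embedding of its description. The regularizer form and weights below are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

n_entities, n_relations, dim, text_dim = 1000, 50, 100, 768
ent = nn.Embedding(n_entities, dim)
rel = nn.Embedding(n_relations, dim)
proj = nn.Linear(text_dim, dim)  # maps text embeddings into KG space

# Pre-computed description embeddings (random stand-ins here).
text_emb = torch.randn(n_entities, text_dim)

def transe_margin_loss(h, r, t, t_neg, margin=1.0):
    pos = (ent(h) + rel(r) - ent(t)).norm(dim=-1)
    neg = (ent(h) + rel(r) - ent(t_neg)).norm(dim=-1)
    return torch.relu(margin + pos - neg).mean()

def text_reg(entity_ids, weight=0.1):
    # Pull entity embeddings toward their projected text embeddings.
    target = proj(text_emb[entity_ids])
    return weight * (ent(entity_ids) - target).pow(2).sum(-1).mean()

h = torch.randint(0, n_entities, (32,))
r = torch.randint(0, n_relations, (32,))
t = torch.randint(0, n_entities, (32,))
t_neg = torch.randint(0, n_entities, (32,))
loss = transe_margin_loss(h, r, t, t_neg) + text_reg(torch.cat([h, t]))
loss.backward()
```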
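Finally, the entity masking scheme in the last entry contrasts with ordinary random-token masking: tokens are masked span-by-span wherever a knowledge graph locates an entity mention, so the model must reconstruct whole entities from context. A sketch of just the masking step; the mask id, span source, and -100 label convention are assumptions borrowed from common BERT-style pipelines, not necessarily the paper's exact scheme.

```python
import torch

MASK_ID = 103  # e.g. BERT's [MASK]; an assumption for illustration

def mask_entity_spans(token_ids, entity_spans, mask_prob=0.5):
    """Mask whole entity mentions (given as (start, end) index pairs,
    end exclusive) instead of independent random tokens."""
    masked = token_ids.clone()
    labels = torch.full_like(token_ids, -100)  # -100 = ignored by CE loss
    for start, end in entity_spans:
        if torch.rand(()) < mask_prob:
            labels[start:end] = token_ids[start:end]  # predict the span
            masked[start:end] = MASK_ID
    return masked, labels

tokens = torch.arange(1, 13)  # a toy 12-token sentence
spans = [(2, 4), (7, 10)]     # entity mentions located via a KG
masked, labels = mask_entity_spans(tokens, spans, mask_prob=1.0)
print(masked)   # entity positions replaced by MASK_ID
print(labels)   # original ids at masked positions, -100 elsewhere
```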