V-Coder: Adaptive AutoEncoder for Semantic Disclosure in Knowledge
Graphs
- URL: http://arxiv.org/abs/2208.01735v1
- Date: Fri, 22 Jul 2022 14:51:46 GMT
- Title: V-Coder: Adaptive AutoEncoder for Semantic Disclosure in Knowledge
Graphs
- Authors: Christian M.M. Frey, Matthias Schubert
- Abstract summary: We propose a new adaptive AutoEncoder, called V-Coder, to identify relations inherently connecting entities from different domains.
The evaluation on real-world datasets shows that the V-Coder is able to recover links from corrupted data.
- Score: 4.493174773769076
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Semantic Web and Knowledge Graphs (KGs) have emerged as one of
the most important information sources for intelligent systems that require
access to structured knowledge. One of the major challenges is the extraction
and processing of unambiguous information from textual data. For humans,
overlapping semantic linkages between two named entities become clear through
common-sense knowledge of the context a relationship lives in; this is not the
case for an automatically driven machine process. In this work, we are
interested in the problem of Relational Resolution within the scope of KGs,
i.e., we investigate the inherent semantics of relationships between entities
within a network. We propose a new adaptive AutoEncoder, called V-Coder, to
identify relations that inherently connect entities from different domains.
Such relations can be considered ambiguous and are
candidates for disentanglement. Similar to Adaptive Resonance Theory (ART), our
model learns new patterns from the KG by adding units to a competitive layer
without discarding previously observed patterns, while learning the quality of
each relation separately. The evaluation on the real-world datasets Freebase,
Yago and NELL shows that the V-Coder is not only able to recover links from
corrupted input data, but also that the semantic disclosure of relations in a
KG tends to improve link prediction. A semantic evaluation rounds off the
experiments.
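The ART-like growth mechanism described above can be pictured with a short
sketch: a competitive layer adds a new prototype unit whenever no existing unit
matches an incoming relation embedding closely enough, so previously learned
patterns are never overwritten. The snippet below is a minimal, hypothetical
illustration with an assumed vigilance threshold, learning rate, and embedding
dimension; it is not the authors' V-Coder implementation.

```python
# Minimal, hypothetical sketch of an ART-like competitive layer that grows new
# units instead of overwriting previously learned relation patterns.
# The vigilance threshold and learning rate are illustrative assumptions.
import numpy as np

class GrowingCompetitiveLayer:
    def __init__(self, vigilance=0.8, lr=0.1):
        self.vigilance = vigilance   # minimum similarity to accept a match
        self.lr = lr                 # how strongly a matched unit adapts
        self.units = []              # prototype vectors, one per learned pattern

    def observe(self, x):
        """Assign a relation embedding x to a unit, growing the layer if needed."""
        x = x / (np.linalg.norm(x) + 1e-9)
        if self.units:
            sims = np.array([u @ x for u in self.units])
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:
                # Resonance: refine the winning prototype; other units stay intact.
                self.units[best] = (1 - self.lr) * self.units[best] + self.lr * x
                self.units[best] /= np.linalg.norm(self.units[best]) + 1e-9
                return best
        # No unit matches well enough: add a new one, preserving old patterns.
        self.units.append(x.copy())
        return len(self.units) - 1

layer = GrowingCompetitiveLayer()
for emb in np.random.randn(100, 64):   # stand-in for relation embeddings from a KG
    layer.observe(emb)
print(f"learned {len(layer.units)} prototype units")
```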
Related papers
- Type-based Neural Link Prediction Adapter for Complex Query Answering [2.1098688291287475]
We propose TypE-based Neural Link Prediction Adapter (TENLPA), a novel model that constructs type-based entity-relation graphs.
In order to effectively combine type information with complex logical queries, an adaptive learning mechanism is introduced.
Experiments on 3 standard datasets show that TENLPA achieves state-of-the-art performance on complex query answering.
arXiv Detail & Related papers (2024-01-29T10:54:28Z)
- Relation-Aware Language-Graph Transformer for Question Answering [21.244992938222246]
We propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations.
Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations.
We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE.
arXiv Detail & Related papers (2022-12-02T05:10:10Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach for harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
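To make the LM-harvesting idea concrete, the sketch below lets a masked LM
propose tail entities for a hand-written relation prompt over a small candidate
list of heads; the prompt template, the candidates, and the scoring are
illustrative assumptions, not the BertNet search procedure, which explores the
vast entity-pair space far more efficiently.

```python
# Simplified, hypothetical sketch of harvesting triples for one relation with a
# pretrained masked LM (not the BertNet search algorithm itself).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# A relation "definition" expressed as a prompt template (illustrative assumption).
template = "{head} is the capital of [MASK]."
candidate_heads = ["Paris", "Berlin", "Sydney"]

for head in candidate_heads:
    # The LM's top completions act as candidate tail entities; their scores serve
    # as a crude confidence for the harvested triple (head, relation, tail).
    for pred in fill(template.format(head=head), top_k=3):
        print(head, "->", pred["token_str"], round(pred["score"], 3))
```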
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z)
- R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation-of-relation classification task.
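The global-plus-local encoding can be sketched as a BERT encoder paired with a
1-D CNN over the token embeddings, as below; the kernel size, pooling, and
classifier head are assumptions rather than the R2-Net configuration, and the
self-supervised relation-of-relation head is omitted for brevity.

```python
# Rough sketch of the global/local encoding idea: BERT for a global sentence view,
# a 1-D CNN over token embeddings for local keyword and phrase cues (assumed details).
import torch
import torch.nn as nn
from transformers import AutoModel

class GlobalLocalEncoder(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.cnn = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.classifier = nn.Linear(hidden + 128, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        global_vec = out.last_hidden_state[:, 0]           # [CLS] token as global view
        local = self.cnn(out.last_hidden_state.transpose(1, 2))
        local_vec = local.max(dim=-1).values               # max-pool local features
        return self.classifier(torch.cat([global_vec, local_vec], dim=-1))
```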
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
- Adaptive Attentional Network for Few-Shot Knowledge Graph Completion [16.722373937828117]
Few-shot Knowledge Graph (KG) completion is a focus of current research, where each task aims at querying unseen facts of a relation given its few-shot reference entity pairs.
Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties.
This work proposes an adaptive attentional network for few-shot KG completion by learning adaptive entity and reference representations.
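One hedged reading of "adaptive reference representations" is a
query-conditioned attention over the few-shot reference pairs, sketched below
with assumed dimensions and a dot-product scorer; it is an illustration of the
idea, not the proposed network.

```python
# Hedged sketch: attention over few-shot reference-pair embeddings, conditioned on
# the query, yields an adaptive relation vector (dimensions and scoring assumed).
import torch
import torch.nn as nn

class ReferenceAttention(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)

    def forward(self, query_pair, reference_pairs):
        # query_pair: (dim,) embedding of the queried entity pair
        # reference_pairs: (K, dim) embeddings of the few-shot reference pairs
        scores = self.attn(torch.cat(
            [reference_pairs, query_pair.expand_as(reference_pairs)], dim=-1))
        weights = torch.softmax(scores, dim=0)         # adapt weights to this query
        return (weights * reference_pairs).sum(dim=0)  # adaptive relation vector

dim = 100
refs = torch.randn(3, dim)            # few-shot reference entity-pair embeddings
query = torch.randn(dim)              # embedding of the queried entity pair
relation_vec = ReferenceAttention(dim)(query, refs)
score = torch.dot(relation_vec, query)   # higher score = more plausible fact
```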
arXiv Detail & Related papers (2020-10-19T16:27:48Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
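A minimal sketch of this training signal is given below, with assumed layer
sizes, augmentation count, and loss weighting: an auxiliary classifier has to
recover which augmentation was applied from the latent codes of an input and
its augmented version.

```python
# Hedged sketch: a small VAE whose latent codes are additionally trained so that an
# auxiliary network can predict which augmentation was applied (assumed setup).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, latent), nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar, z

vae = TinyVAE()
aux = nn.Linear(2 * 32, 4)  # predicts which of 4 assumed augmentations was applied

def loss_fn(x, x_aug, aug_label):
    recon, mu, logvar, z = vae(x)
    recon_aug, _, _, z_aug = vae(x_aug)
    rec = ((recon - x) ** 2).mean() + ((recon_aug - x_aug) ** 2).mean()
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Auxiliary term: the pair of latent codes should reveal the transformation.
    aug_pred = aux(torch.cat([z, z_aug], dim=-1))
    return rec + kld + nn.functional.cross_entropy(aug_pred, aug_label)
```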
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- ConsNet: Learning Consistency Graph for Zero-Shot Human-Object Interaction Detection [101.56529337489417]
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images.
We argue that multi-level consistencies among objects, actions and interactions are strong cues for generating semantic representations of rare or previously unseen HOIs.
Our model takes visual features of candidate human-object pairs and word embeddings of HOI labels as inputs, maps them into visual-semantic joint embedding space and obtains detection results by measuring their similarities.
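The visual-semantic matching step can be sketched as two projection heads and a
cosine-similarity score, as below; the feature dimensions and projection layers
are placeholder assumptions rather than the ConsNet architecture.

```python
# Placeholder sketch of visual-semantic matching: project visual pair features and
# HOI-label word embeddings into a joint space and score them by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbeddingScorer(nn.Module):
    def __init__(self, visual_dim=2048, text_dim=300, joint_dim=512):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, joint_dim)
        self.text_proj = nn.Linear(text_dim, joint_dim)

    def forward(self, pair_features, label_embeddings):
        v = F.normalize(self.visual_proj(pair_features), dim=-1)   # (P, joint_dim)
        t = F.normalize(self.text_proj(label_embeddings), dim=-1)  # (L, joint_dim)
        return v @ t.T   # similarity of each human-object pair to every HOI label

scorer = JointEmbeddingScorer()
pairs = torch.randn(8, 2048)     # stand-in visual features of candidate pairs
labels = torch.randn(600, 300)   # stand-in word embeddings of HOI labels
scores = scorer(pairs, labels)   # unseen HOI labels can be scored the same way
```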
arXiv Detail & Related papers (2020-08-14T09:11:18Z)
- Knowledge Graphs and Knowledge Networks: The Story in Brief [0.1933681537640272]
Knowledge Graphs (KGs) represent real-world noisy raw information in a structured form, capturing relationships between entities.
For dynamic real-world applications such as social networks, recommender systems, and computational biology, relational knowledge representation has emerged as a challenging research problem.
This article attempts to summarize the journey of KG for AI.
arXiv Detail & Related papers (2020-03-07T18:09:18Z)
- End-to-End Entity Linking and Disambiguation leveraging Word and Knowledge Graph Embeddings [20.4826750211045]
We propose the first end-to-end neural network approach that employs KG embeddings as well as word embeddings to perform joint relation and entity classification of simple questions.
An empirical evaluation shows that the proposed approach achieves a performance comparable to state-of-the-art entity linking.
arXiv Detail & Related papers (2020-02-25T19:07:54Z)
- Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free relation learning from the cumbersome curation of training instances for newly added relations.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
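A generic way to picture this text-to-KG connection is a conditional generator
that maps text-description features plus noise to a relation embedding, with a
discriminator judging realism given the text; the sketch below uses assumed
dimensions and is not the paper's exact model.

```python
# Generic conditional-GAN sketch for mapping the text description of a new relation
# to a KG-style relation embedding (all dimensions are assumptions).
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, REL_DIM = 768, 64, 200

generator = nn.Sequential(
    nn.Linear(TEXT_DIM + NOISE_DIM, 512), nn.ReLU(),
    nn.Linear(512, REL_DIM),               # synthetic relation embedding
)
discriminator = nn.Sequential(
    nn.Linear(TEXT_DIM + REL_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),                     # real vs. generated, conditioned on text
)

text_feat = torch.randn(16, TEXT_DIM)      # encoded relation descriptions
noise = torch.randn(16, NOISE_DIM)
fake_rel = generator(torch.cat([text_feat, noise], dim=-1))
score = discriminator(torch.cat([text_feat, fake_rel], dim=-1))
# At test time the generator alone embeds an unseen relation from its description,
# so link prediction can score triples without any training instances for it.
```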
arXiv Detail & Related papers (2020-01-08T01:19:08Z)