Universal Representation Learning of Knowledge Bases by Jointly
Embedding Instances and Ontological Concepts
- URL: http://arxiv.org/abs/2103.08115v1
- Date: Mon, 15 Mar 2021 03:24:37 GMT
- Title: Universal Representation Learning of Knowledge Bases by Jointly
Embedding Instances and Ontological Concepts
- Authors: Junheng Hao, Muhao Chen, Wenchao Yu, Yizhou Sun, Wei Wang
- Abstract summary: We propose a novel two-view KG embedding model, JOIE, with the goal of producing better knowledge embeddings.
JOIE employs cross-view and intra-view modeling to learn from multiple facets of the knowledge base.
Our model is trained on large-scale knowledge bases that consist of massive instances and their corresponding ontological concepts connected via a (small) set of cross-view links.
- Score: 39.99087114075884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many large-scale knowledge bases simultaneously represent two views of
knowledge graphs (KGs): an ontology view for abstract and commonsense concepts,
and an instance view for specific entities that are instantiated from
ontological concepts. Existing KG embedding models, however, merely focus on
representing one of the two views alone. In this paper, we propose a novel
two-view KG embedding model, JOIE, with the goal of producing better knowledge
embeddings and enabling new applications that rely on multi-view knowledge. JOIE
employs both cross-view and intra-view modeling that learn from multiple facets
of the knowledge base. The cross-view association model is learned to bridge
the embeddings of ontological concepts and their corresponding instance-view
entities. The intra-view models are trained to capture the structured knowledge
of instance and ontology views in separate embedding spaces, with a
hierarchy-aware encoding technique enabled for ontologies with hierarchies. We
explore multiple representation techniques for the two model components and
investigate with nine variants of JOIE. Our model is trained on large-scale
knowledge bases that consist of massive instances and their corresponding
ontological concepts connected via a (small) set of cross-view links.
Experimental results on public datasets show that the best variant of JOIE
significantly outperforms previous models on the instance-view triple prediction
task as well as ontology population on the ontology-view KG. In addition, our
model successfully extends the use of KG embeddings to entity typing with
promising performance.
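To make the cross-view idea concrete, the following is a minimal sketch (not JOIE's actual implementation) of a cross-view association model: instances and concepts live in separate embedding spaces, and an affine map projects an instance embedding into the concept space so that linked pairs end up close together. All names, dimensions, and the plain-gradient training loop here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy two-view KB: instance-view entities and ontology-view concepts,
# each with its own embedding space (the intra-view separation).
instances = {"Paris": 0, "France": 1}
concepts = {"City": 0, "Country": 1}
inst_emb = rng.normal(size=(len(instances), DIM))
conc_emb = rng.normal(size=(len(concepts), DIM))

# Cross-view association: an affine map f projects an instance
# embedding into the concept space; a cross-view link (e, type, c)
# is plausible when f(e) lies close to c.
W = rng.normal(size=(DIM, DIM)) * 0.1
b = np.zeros(DIM)

def cross_view_distance(inst_id, conc_id):
    """Distance between the projected instance and its concept."""
    return np.linalg.norm(W @ inst_emb[inst_id] + b - conc_emb[conc_id])

def train_step(inst_id, conc_id, lr=0.01):
    """One gradient step on 0.5 * ||f(e) - c||^2 for a single link."""
    global W, b
    e, c = inst_emb[inst_id], conc_emb[conc_id]
    diff = (W @ e + b) - c
    W -= lr * np.outer(diff, e)  # gradient of the squared distance w.r.t. W
    b -= lr * diff
    return np.linalg.norm(diff)

# Fitting the map on the link ("Paris", type, "City") pulls f(Paris)
# toward the City concept embedding.
losses = [train_step(0, 0) for _ in range(300)]
```

The full JOIE model additionally trains intra-view relational models on each view's triples and a hierarchy-aware encoder for the ontology view; this sketch isolates only the cross-view mapping component.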
Related papers
- Mining Frequent Structures in Conceptual Models [2.841785306638839]
We propose a general approach to the problem of discovering frequent structures in conceptual modeling languages.
We use the combination of a frequent subgraph mining algorithm and graph manipulation techniques.
The primary objective is to offer a support facility for language engineers.
arXiv Detail & Related papers (2024-06-11T10:24:02Z)
- Knowledge Graph Embedding: An Overview [42.16033541753744]
We make a comprehensive overview of the current state of research in Knowledge Graph completion.
We focus on two main branches of KG embedding (KGE) design: 1) distance-based methods and 2) semantic matching-based methods.
Next, we delve into CompoundE and CompoundE3D, which draw inspiration from 2D and 3D affine operations.
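The two KGE design branches named above can be illustrated with representative score functions; the sketch below uses TransE (distance-based) and DistMult (semantic matching) as hedged, toy examples with made-up 4-dimensional embeddings, not the survey's own code.

```python
import numpy as np

rng = np.random.default_rng(1)
h, r, t = rng.normal(size=(3, 4))  # toy head, relation, tail embeddings

# Distance-based scoring (e.g., TransE): the relation acts as a
# translation, so a plausible triple has h + r close to t.
def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)

# Semantic-matching scoring (e.g., DistMult): the relation reweights
# each dimension; plausibility is a trilinear dot product.
def distmult_score(h, r, t):
    return float(np.sum(h * r * t))

# A tail constructed to satisfy the translation assumption receives
# the maximal (zero) TransE score.
t_good = h + r
```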
arXiv Detail & Related papers (2023-09-21T21:52:42Z)
- CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models [30.63276809199399]
We present CommonsenseVIS, a visual explanatory system that utilizes external commonsense knowledge bases to contextualize model behavior for commonsense question-answering.
Our system features multi-level visualization and interactive model probing and editing for different concepts and their underlying relations.
arXiv Detail & Related papers (2023-07-23T17:16:13Z)
- Concept2Box: Joint Geometric Embeddings for Learning Two-View Knowledge Graphs [77.10299848546717]
Concept2Box is a novel approach that jointly embeds the two views of a KG.
Box embeddings capture the hierarchy structure and complex relations among concepts, such as overlap and disjointness.
We propose a novel vector-to-box distance metric and learn both embeddings jointly.
arXiv Detail & Related papers (2023-07-04T21:37:39Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks [53.09649785009528]
In this paper, we explore a paradigm that does not require training to obtain new models.
Much as CNNs were inspired by receptive fields in the biological visual system, we propose Model Disassembling and Assembling.
For model assembling, we present the alignment padding strategy and parameter scaling strategy to construct a new model tailored for a specific task.
arXiv Detail & Related papers (2022-03-25T05:27:28Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Ontology-based n-ball Concept Embeddings Informing Few-shot Image Classification [5.247029505708008]
ViOCE integrates symbolic knowledge in the form of $n$-ball concept embeddings into a neural network based vision architecture.
We evaluate ViOCE using the task of few-shot image classification, where it demonstrates superior performance on two standard benchmarks.
arXiv Detail & Related papers (2021-09-19T05:35:43Z)
- Bowtie Networks: Generative Modeling for Joint Few-Shot Recognition and Novel-View Synthesis [39.53519330457627]
We propose a novel task of joint few-shot recognition and novel-view synthesis.
We aim to simultaneously learn an object classifier and generate images of that type of object from new viewpoints.
We focus on the interaction and cooperation between a generative model and a discriminative model.
arXiv Detail & Related papers (2020-08-16T19:40:56Z)
- Look-into-Object: Self-supervised Structure Modeling for Object Recognition [71.68524003173219]
We propose to "look into object" (explicitly yet intrinsically model the object structure) through incorporating self-supervisions.
We show the recognition backbone can be substantially enhanced for more robust representation learning.
Our approach achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft).
arXiv Detail & Related papers (2020-03-31T12:22:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.