Concept2Box: Joint Geometric Embeddings for Learning Two-View Knowledge Graphs
- URL: http://arxiv.org/abs/2307.01933v1
- Date: Tue, 4 Jul 2023 21:37:39 GMT
- Title: Concept2Box: Joint Geometric Embeddings for Learning Two-View Knowledge Graphs
- Authors: Zijie Huang, Daheng Wang, Binxuan Huang, Chenwei Zhang, Jingbo Shang,
Yan Liang, Zhengyang Wang, Xian Li, Christos Faloutsos, Yizhou Sun, Wei Wang
- Abstract summary: Concept2Box is a novel approach that jointly embeds the two views of a KG.
Box embeddings capture the concept hierarchy and complex relations among concepts, such as overlap and disjointness.
We propose a novel vector-to-box distance metric and learn both embeddings jointly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge graph embeddings (KGE) have been extensively studied to embed
large-scale relational data for many real-world applications. Existing methods
have long ignored the fact that many KGs contain two fundamentally different
views: high-level ontology-view concepts and fine-grained instance-view entities. They
usually embed all nodes as vectors in one latent space. However, a single
geometric representation fails to capture the structural differences between
two views and lacks probabilistic semantics towards concepts' granularity. We
propose Concept2Box, a novel approach that jointly embeds the two views of a KG
using dual geometric representations. We model concepts with box embeddings,
which learn the hierarchy structure and complex relations among concepts, such
as overlap and disjointness. Box volumes can be interpreted as concepts'
granularity. Unlike concepts, we model entities as vectors. To bridge the gap
between concept box embeddings and entity vector embeddings, we propose a novel
vector-to-box distance metric and learn both embeddings jointly. Experiments on
both the public DBpedia KG and a newly-created industrial KG showed the
effectiveness of Concept2Box.
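The abstract's two key geometric ideas, box volume as concept granularity and a vector-to-box distance between entity vectors and concept boxes, can be sketched as below. This is a generic illustration under the assumption of axis-aligned boxes given by min/max corners, not the paper's exact formula; the function names and the `alpha` weight are illustrative.

```python
import numpy as np

def box_volume(box_min, box_max):
    """Volume of an axis-aligned box; interpretable as a concept's granularity."""
    return float(np.prod(np.maximum(box_max - box_min, 0.0)))

def vector_to_box_distance(v, box_min, box_max, alpha=0.5):
    """Distance from an entity vector v to a concept box (a generic sketch,
    not Concept2Box's exact metric).

    Outside part: how far v lies beyond the box boundary (zero if inside).
    Inside part, down-weighted by alpha: how far the point's projection onto
    the box sits from the box center, so entities deep inside a concept's
    box still score best.
    """
    center = (box_min + box_max) / 2.0
    # Per-dimension overshoot beyond either face of the box, clamped at zero.
    outside = np.maximum(np.maximum(v - box_max, box_min - v), 0.0)
    # Projection of v onto the box, measured from the box center.
    inside = np.minimum(np.maximum(v, box_min), box_max) - center
    return float(np.linalg.norm(outside, 1) + alpha * np.linalg.norm(inside, 1))
```

With this shape, an entity vector at a box's center has distance zero, and the distance grows as the vector leaves the box, which is the behavior a joint concept-box / entity-vector objective needs.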
Related papers
- How to Blend Concepts in Diffusion Models [48.68800153838679]
Recent methods exploit multiple latent representations and their connections, making this research question even more entangled.
Our goal is to understand how operations in the latent space affect the underlying concepts.
Our conclusion is that concept blending through space manipulation is possible, although the best strategy depends on the context of the blend.
arXiv Detail & Related papers (2024-07-19T13:05:57Z)
- Knowledge Graph Embedding: An Overview [42.16033541753744]
We make a comprehensive overview of the current state of research in Knowledge Graph completion.
We focus on two main branches of KG embedding (KGE) design: 1) distance-based methods and 2) semantic matching-based methods.
Next, we delve into CompoundE and CompoundE3D, which draw inspiration from 2D and 3D affine operations.
arXiv Detail & Related papers (2023-09-21T21:52:42Z)
- Dual-Geometric Space Embedding Model for Two-View Knowledge Graphs [32.47146018135465]
Two-view knowledge graphs (KGs) jointly represent two components: an ontology view for abstract and commonsense concepts, and an instance view for specific entities.
Most recent works on embedding KGs assume that the entire KG belongs to only one of the two views but not both simultaneously.
We construct a dual-geometric space embedding model (DGS) that models two-view KGs using a complex non-Euclidean geometric space.
arXiv Detail & Related papers (2022-09-19T05:11:10Z)
- FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic Descriptions, and Conceptual Relations [99.54048050189971]
We present a framework for learning new visual concepts quickly, guided by multiple naturally occurring data streams.
The learned concepts support downstream applications, such as answering questions by reasoning about unseen images.
We demonstrate the effectiveness of our model on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-03-30T19:45:00Z)
- Unsupervised Learning of Compositional Energy Concepts [70.11673173291426]
We propose COMET, which discovers and represents concepts as separate energy functions.
COMET represents both global concepts and objects under a unified framework.
arXiv Detail & Related papers (2021-11-04T17:46:12Z)
- Universal Representation Learning of Knowledge Bases by Jointly Embedding Instances and Ontological Concepts [39.99087114075884]
We propose JOIE, a novel two-view KG embedding model, with the goal of producing better knowledge embeddings.
JOIE employs cross-view and intra-view modeling that learns from multiple facets of the knowledge base.
Our model is trained on large-scale knowledge bases that consist of massive instances and their corresponding ontological concepts connected via a (small) set of cross-view links.
arXiv Detail & Related papers (2021-03-15T03:24:37Z)
- On the Role of Conceptualization in Commonsense Knowledge Graph Construction [59.39512925793171]
Commonsense knowledge graphs (CKGs) like Atomic and ASER are substantially different from conventional KGs.
We introduce conceptualization into CKG construction methods, viewing entities mentioned in text as instances of specific concepts, or vice versa.
Our methods can effectively identify plausible triples and expand the KG by triples of both new nodes and edges of high diversity and novelty.
arXiv Detail & Related papers (2020-03-06T14:35:20Z)
- Visual Concept-Metaconcept Learning [101.62725114966211]
We propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs.
Knowing that red and green describe the same property of objects, we generalize to the fact that cube and sphere also describe the same property of objects.
arXiv Detail & Related papers (2020-02-04T18:42:30Z)
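The KGE overview entry above distinguishes distance-based from semantic-matching methods. The distance-based branch can be illustrated with a minimal TransE-style scorer, a well-known representative of that family; this is a generic sketch, and the function name is illustrative.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score for a (head, relation, tail) triple:
    a smaller ||h + r - t|| (here the L1 norm) means the relation vector r
    translates the head embedding closer to the tail embedding, i.e. a more
    plausible triple."""
    return float(np.linalg.norm(h + r - t, 1))
```

Training such a model pushes scores of observed triples below those of corrupted ones; semantic-matching methods instead score triples via similarity products rather than translation distances.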