BoxE: A Box Embedding Model for Knowledge Base Completion
- URL: http://arxiv.org/abs/2007.06267v2
- Date: Thu, 29 Oct 2020 09:48:19 GMT
- Title: BoxE: A Box Embedding Model for Knowledge Base Completion
- Authors: Ralph Abboud, İsmail İlkan Ceylan, Thomas Lukasiewicz, Tommaso
Salvatori
- Abstract summary: Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models are subject to at least one of the following limitations.
BoxE embeds entities as points, and relations as a set of hyper-rectangles (or boxes).
- Score: 53.57588201197374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge base completion (KBC) aims to automatically infer missing facts by
exploiting information already present in a knowledge base (KB). A promising
approach for KBC is to embed knowledge into latent spaces and make predictions
from learned embeddings. However, existing embedding models are subject to at
least one of the following limitations: (1) theoretical inexpressivity, (2)
lack of support for prominent inference patterns (e.g., hierarchies), (3) lack
of support for KBC over higher-arity relations, and (4) lack of support for
incorporating logical rules. Here, we propose a spatio-translational embedding
model, called BoxE, that simultaneously addresses all these limitations. BoxE
embeds entities as points, and relations as a set of hyper-rectangles (or
boxes), which spatially characterize basic logical properties. This seemingly
simple abstraction yields a fully expressive model offering a natural encoding
for many desired logical properties. BoxE can both capture and inject rules
from rich classes of rule languages, going well beyond individual inference
patterns. By design, BoxE naturally applies to higher-arity KBs. We conduct a
detailed experimental analysis, and show that BoxE achieves state-of-the-art
performance, both on benchmark knowledge graphs and on more general KBs, and we
empirically show the power of integrating logical rules.
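The abstract's core idea, entities as points and relations as boxes, can be sketched in a few lines. The toy NumPy code below is an illustration of the box-membership criterion only, not the paper's actual scoring function: the distance-based score, learned box parameters, and higher-arity support are all omitted, and every name and dimension here is made up for the example. In BoxE, each entity has a base point and a translational "bump"; a fact r(h, t) is satisfied when each entity's final position (its own point plus the other entity's bump) lands inside the box the relation defines for that argument position.

```python
import numpy as np

dim = 4
rng = np.random.default_rng(0)

# Entity embeddings: a base point and a bump vector per entity (illustrative values).
entities = {
    "alice": (rng.normal(size=dim), rng.normal(size=dim)),
    "bob": (rng.normal(size=dim), rng.normal(size=dim)),
}

def make_box(lower, upper):
    """An axis-aligned hyper-rectangle given scalar lower/upper bounds."""
    return np.full(dim, lower), np.full(dim, upper)

# A binary relation owns one box per argument position (head box, tail box).
relation = [make_box(-5.0, 5.0), make_box(-5.0, 5.0)]

def final_positions(head, tail):
    # Each entity's final position is its point bumped by the *other* entity.
    (h_pt, h_bump), (t_pt, t_bump) = entities[head], entities[tail]
    return h_pt + t_bump, t_pt + h_bump

def in_box(point, box):
    lower, upper = box
    return bool(np.all((point >= lower) & (point <= upper)))

def fact_holds(head, rel, tail):
    """A fact is satisfied iff both final positions fall inside their boxes."""
    p1, p2 = final_positions(head, tail)
    return in_box(p1, rel[0]) and in_box(p2, rel[1])

print(fact_holds("alice", relation, "bob"))
```

Because box membership is checked per argument position, widening or narrowing a relation's boxes directly controls which entity pairs satisfy it, which is what lets BoxE spatially encode properties like symmetry or hierarchies.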
Related papers
- ConstraintChecker: A Plugin for Large Language Models to Reason on
Commonsense Knowledge Bases [53.29427395419317]
Reasoning over Commonsense Knowledge Bases (CSKB) has been explored as a way to acquire new commonsense knowledge.
We propose **ConstraintChecker**, a plugin over prompting techniques to provide and check explicit constraints.
arXiv Detail & Related papers (2024-01-25T08:03:38Z) - CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning [45.62134354858683]
CANDLE is a framework that iteratively performs conceptualization and instantiation over commonsense knowledge bases.
By applying CANDLE to ATOMIC, we construct a comprehensive knowledge base comprising six million conceptualizations and instantiated commonsense knowledge triples.
arXiv Detail & Related papers (2024-01-14T13:24:30Z) - CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense
Question Answering [56.592385613002584]
We propose Conceptualization-Augmented Reasoner (CAR) to tackle the task of zero-shot commonsense question answering.
CAR abstracts a commonsense knowledge triple to many higher-level instances, which increases the coverage of CommonSense Knowledge Bases.
CAR more robustly generalizes to answering questions about zero-shot commonsense scenarios than existing methods.
arXiv Detail & Related papers (2023-05-24T08:21:31Z) - Adapting Knowledge for Few-shot Table-to-Text Generation [35.59842534346997]
We propose a novel framework: Adapt-Knowledge-to-Generate (AKG)
AKG adapts unlabeled domain-specific knowledge into the model, which brings at least three benefits.
Our model achieves superior performance in terms of both fluency and accuracy as judged by human and automatic evaluations.
arXiv Detail & Related papers (2023-02-24T05:48:53Z) - RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called **RulE** (Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing **entities**, **relations**, and **logical rules** in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z) - TIARA: Multi-grained Retrieval for Robust Question Answering over Large
Knowledge Bases [20.751369684593985]
TIARA outperforms previous SOTA, including those using PLMs or oracle entity annotations, by at least 4.1 and 1.1 F1 points on GrailQA and WebQuestionsSP.
arXiv Detail & Related papers (2022-10-24T02:41:10Z) - Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization [49.00409552570441]
We study the role of conceptualization in commonsense reasoning, and formulate a framework to replicate human conceptual induction.
We apply the framework to ATOMIC, a large-scale human-annotated CKG, aided by the taxonomy Probase.
arXiv Detail & Related papers (2022-06-03T12:24:49Z) - Box Embeddings for the Description Logic EL++ [21.89072991669119]
We present BoxEL, a geometric KB embedding approach that allows for better capturing logical structure.
We show theoretical guarantees (soundness) of BoxEL for preserving logical structure.
Experimental results on subsumption reasoning and a real-world application (protein-protein prediction) show that BoxEL outperforms traditional knowledge graph embedding methods.
arXiv Detail & Related papers (2022-01-24T19:24:22Z) - Combining Rules and Embeddings via Neuro-Symbolic AI for Knowledge Base
Completion [59.093293389123424]
We show that not all rule-based Knowledge Base Completion models are the same.
We propose two distinct approaches that learn in one case: 1) a mixture of relations and the other 2) a mixture of paths.
When implemented on top of neuro-symbolic AI, which learns rules by extending Boolean logic to real-valued logic, the latter model leads to superior KBC accuracy outperforming state-of-the-art rule-based KBC by 2-10% in terms of mean reciprocal rank.
arXiv Detail & Related papers (2021-09-16T17:54:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.