Box Embeddings for the Description Logic EL++
- URL: http://arxiv.org/abs/2201.09919v1
- Date: Mon, 24 Jan 2022 19:24:22 GMT
- Title: Box Embeddings for the Description Logic EL++
- Authors: Bo Xiong, Nico Potyka, Trung-Kien Tran, Mojtaba Nayyeri, Steffen Staab
- Abstract summary: We present BoxEL, a geometric KB embedding approach that allows for better capturing logical structure.
We show theoretical guarantees (soundness) of BoxEL for preserving logical structure.
Experimental results on subsumption reasoning and a real-world application (protein-protein prediction) show that BoxEL outperforms traditional knowledge graph embedding methods.
- Score: 21.89072991669119
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, various methods for representation learning on Knowledge Bases
(KBs) have been developed. However, these approaches either only focus on
learning the embeddings of the data-level knowledge (ABox) or exhibit inherent
limitations when dealing with the concept-level knowledge (TBox), e.g., not
properly modelling the structure of the logical knowledge. We present BoxEL, a
geometric KB embedding approach that allows for better capturing logical
structure expressed in the theories of Description Logic EL++. BoxEL models
concepts in a KB as axis-parallel boxes exhibiting the advantage of
intersectional closure, entities as points inside boxes, and relations between
concepts/entities as affine transformations. We show theoretical guarantees
(soundness) of BoxEL for preserving logical structure. Namely, the trained
model of BoxEL embedding with loss 0 is a (logical) model of the KB.
Experimental results on subsumption reasoning and a real-world
application (protein-protein prediction) show that BoxEL outperforms traditional
knowledge graph embedding methods as well as state-of-the-art EL++ embedding
approaches.
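The geometric ideas in the abstract (concepts as axis-parallel boxes that are closed under intersection, entities as points inside boxes) can be sketched in a few lines. This is an illustrative reconstruction under assumed conventions, not the authors' implementation; the function names and the volume-ratio subsumption loss are assumptions.

```python
import numpy as np

def box_volume(lower, upper):
    # volume of an axis-parallel box; empty/degenerate boxes get volume 0
    return float(np.prod(np.maximum(upper - lower, 0.0)))

def box_intersection(l1, u1, l2, u2):
    # axis-parallel boxes are closed under intersection: the result
    # is again an axis-parallel box (possibly empty)
    return np.maximum(l1, l2), np.minimum(u1, u2)

def entity_in_concept(point, lower, upper):
    # an entity is a point; membership means the point lies inside
    # the concept's box
    return bool(np.all((lower <= point) & (point <= upper)))

def subsumption_loss(l_c, u_c, l_d, u_d):
    # C subsumed by D holds geometrically when Box(C) is contained in
    # Box(D); the loss is 0 exactly when the intersection covers Box(C)
    li, ui = box_intersection(l_c, u_c, l_d, u_d)
    vol_c = box_volume(l_c, u_c)
    return 1.0 - (box_volume(li, ui) / vol_c if vol_c > 0 else 0.0)
```

A loss of 0 mirrors the soundness claim in the abstract: a trained embedding with zero loss geometrically satisfies the axiom.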
Related papers
- Semantic Objective Functions: A distribution-aware method for adding logical constraints in deep learning [4.854297874710511]
Constrained Learning and Knowledge Distillation techniques have shown promising results.
We propose a loss-based method that embeds knowledge and enforces logical constraints in a machine learning model.
We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints.
arXiv Detail & Related papers (2024-05-03T19:21:47Z)
- CLIP-QDA: An Explainable Concept Bottleneck Model [3.570403495760109]
We introduce an explainable algorithm designed from a multi-modal foundation model, that performs fast and explainable image classification.
Our explanations compete with existing XAI methods while being faster to compute.
arXiv Detail & Related papers (2023-11-30T18:19:47Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models [89.98762327725112]
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems.
For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder.
Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs.
arXiv Detail & Related papers (2022-05-04T01:27:36Z)
- A Closer Look at Knowledge Distillation with Features, Logits, and Gradients [81.39206923719455]
Knowledge distillation (KD) is a substantial strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
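The logit-based criterion discussed here is commonly instantiated as a temperature-softened KL divergence between teacher and student outputs (the classic Hinton-style distillation loss). The sketch below shows that generic formulation, not this paper's specific derivation; the function names and the T^2 scaling convention are assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    e = np.exp(z - z.max())  # shift by the max for numerical stability
    return e / e.sum()

def kd_logit_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * temperature ** 2
```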
arXiv Detail & Related papers (2022-03-18T21:26:55Z)
- Description Logic EL++ Embeddings with Intersectional Closure [10.570100236658705]
We develop EL Embedding (ELBE) to learn Description Logic EL++ embeddings using axis-parallel boxes.
We report extensive experimental results on three datasets and present a case study to demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-02-28T18:37:14Z) - Probabilistic Case-based Reasoning for Open-World Knowledge Graph
Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving 'cases' that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- BoxE: A Box Embedding Model for Knowledge Base Completion [53.57588201197374]
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models are subject to at least one of several inherent limitations.
BoxE embeds entities as points, and relations as sets of hyper-rectangles (or boxes).
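In BoxE's model, a binary fact r(h, t) is modelled as true when each entity's point, translated by the other entity's "bump" vector, lands inside the relation's box for that argument position. A minimal distance-based sketch of this idea follows; the names and the exact scoring function are assumptions, not BoxE's published formulation.

```python
import numpy as np

def point_to_box_distance(point, lower, upper):
    # 0 when the point lies inside the axis-parallel box, otherwise
    # the Euclidean distance to the nearest point of the box
    outside = np.maximum(lower - point, 0.0) + np.maximum(point - upper, 0.0)
    return float(np.linalg.norm(outside))

def boxe_score(head, tail, head_box, tail_box, head_bump, tail_bump):
    # each entity point is shifted by the *other* entity's bump, then
    # checked against the relation's box for its argument position
    (lh, uh), (lt, ut) = head_box, tail_box
    h_final = head + tail_bump
    t_final = tail + head_bump
    return (point_to_box_distance(h_final, lh, uh)
            + point_to_box_distance(t_final, lt, ut))
```

Under this convention a score of 0 means the fact is modelled as true; larger scores indicate greater violation.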
arXiv Detail & Related papers (2020-07-13T09:40:49Z)
- Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers [54.417299589288184]
We investigate models for complementing the distributional knowledge of BERT with conceptual knowledge from ConceptNet and its corresponding Open Mind Common Sense (OMCS) corpus.
Our adapter-based models substantially outperform BERT on inference tasks that require the type of conceptual knowledge explicitly present in ConceptNet and OMCS.
arXiv Detail & Related papers (2020-05-24T15:49:57Z)
- Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inference on that Knowledge Base with linear programming.
It identifies the decisive features responsible for a classification as explanations, and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.
arXiv Detail & Related papers (2020-05-05T11:39:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.