EvoLearner: Learning Description Logics with Evolutionary Algorithms
- URL: http://arxiv.org/abs/2111.04879v1
- Date: Mon, 8 Nov 2021 23:47:39 GMT
- Title: EvoLearner: Learning Description Logics with Evolutionary Algorithms
- Authors: Stefan Heindorf, Lukas Blübaum, Nick Düsterhus, Till Werner, Varun Nandkumar Golani, Caglar Demir, Axel-Cyrille Ngonga Ngomo
- Abstract summary: Classifying nodes in knowledge graphs is an important task, e.g., predicting missing types of entities, predicting which molecules cause cancer, or predicting which drugs are promising treatment candidates.
We propose EvoLearner - an evolutionary approach to learn description logic concepts from positive and negative examples.
- Score: 2.0096667731426976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classifying nodes in knowledge graphs is an important task, e.g., predicting
missing types of entities, predicting which molecules cause cancer, or
predicting which drugs are promising treatment candidates. While black-box
models often achieve high predictive performance, they are only post-hoc and
locally explainable and do not allow the learned model to be easily enriched
with domain knowledge. Towards this end, learning description logic concepts
from positive and negative examples has been proposed. However, learning such
concepts often takes a long time and state-of-the-art approaches provide
limited support for literal data values, although they are crucial for many
applications. In this paper, we propose EvoLearner - an evolutionary approach
to learn ALCQ(D), which is the attributive language with complement (ALC)
paired with qualified cardinality restrictions (Q) and data properties (D). We
contribute a novel initialization method for the initial population: starting
from positive examples (nodes in the knowledge graph), we perform biased random
walks and translate them to description logic concepts. Moreover, we improve
support for data properties by maximizing information gain when deciding where
to split the data. We show that our approach significantly outperforms the
state of the art on the benchmarking framework SML-Bench for structured machine
learning. Our ablation study confirms that this is due to our novel
initialization method and support for data properties.
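To make the target language concrete, here is an illustrative ALCQ(D) concept of the kind such approaches learn. The class and property names (Compound, hasBond, hasAtom, charge) are hypothetical stand-ins, not taken from the paper:

```latex
% Hypothetical ALCQ(D) concept: compounds with at least three bonds
% and some atom whose numeric charge (a data property) is at least 0.2
\[
\mathit{Compound} \sqcap (\geq 3\ \mathit{hasBond}.\top)
  \sqcap \exists\, \mathit{hasAtom}.(\exists\, \mathit{charge}.[\geq 0.2])
\]
```

Reading left to right: a named class (ALC), a qualified cardinality restriction (Q), and a restriction on a numeric data property (D).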
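A minimal sketch of the initialization idea from the abstract: start a random walk at a positive example and translate the traversed properties into a description logic concept. This is not the authors' implementation; the toy graph, the type table, and the helper names are invented for illustration, and the walk below is uniform where the paper's is biased:

```python
import random

# Toy knowledge graph as adjacency: node -> list of (property, neighbor)
GRAPH = {
    "compound1": [("hasAtom", "atom1"), ("hasBond", "bond1")],
    "atom1": [("inBond", "bond1")],
    "bond1": [],
}
TYPES = {"compound1": "Compound", "atom1": "Nitrogen", "bond1": "SingleBond"}

def random_walk(start, max_len=2):
    """Walk from a positive example, recording traversed edges."""
    path, node = [], start
    for _ in range(max_len):
        edges = GRAPH.get(node, [])
        if not edges:
            break
        # A biased walk would weight edges, e.g., toward informative properties.
        prop, node = random.choice(edges)
        path.append((prop, node))
    return path

def walk_to_concept(start, path):
    """Translate a walk into a string form of a DL concept."""
    if not path:
        return TYPES.get(start, "Thing")
    # Start from the final node's type and nest existential restrictions backwards.
    concept = TYPES.get(path[-1][1], "Thing")
    for prop, _ in reversed(path):
        concept = f"(EXISTS {prop}.{concept})"
    return f"{TYPES.get(start, 'Thing')} AND {concept}"

walk = random_walk("compound1")
print(walk_to_concept("compound1", walk))
# e.g. Compound AND (EXISTS hasAtom.(EXISTS inBond.SingleBond))
```

Concepts built this way are grounded in the positive examples by construction, which is what makes them plausible starting points for the evolutionary search.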
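Likewise, a minimal sketch of the second contribution, choosing a split threshold for a numeric data property by maximizing information gain over the positive/negative labels. This is the standard entropy-based split criterion, shown under the assumption that it matches the abstract's description; it is not the paper's code:

```python
from math import log2

def entropy(pos, neg):
    """Binary entropy of a set with pos positive and neg negative examples."""
    total = pos + neg
    if total == 0 or pos == 0 or neg == 0:
        return 0.0
    p = pos / total
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def best_split(values):
    """values: list of (numeric_value, is_positive). Returns (threshold, gain)."""
    values = sorted(values)
    pos = sum(1 for _, y in values if y)
    neg = len(values) - pos
    base = entropy(pos, neg)
    best_gain, best_threshold = 0.0, None
    lp = ln = 0  # positives/negatives left of the candidate split
    for i in range(len(values) - 1):
        lp += values[i][1]
        ln += 1 - values[i][1]
        threshold = (values[i][0] + values[i + 1][0]) / 2
        left = lp + ln
        right = len(values) - left
        gain = base - (left / len(values)) * entropy(lp, ln) \
                    - (right / len(values)) * entropy(pos - lp, neg - ln)
        if gain > best_gain:
            best_gain, best_threshold = gain, threshold
    return best_threshold, best_gain

# Toy charge values labeled by class; the best split lands near 0.0
data = [(-0.4, False), (-0.2, False), (0.1, True), (0.3, True)]
print(best_split(data))  # -> (-0.05, 1.0)
```

The chosen threshold can then appear in a data-property restriction such as the charge facet in the ALCQ(D) example above.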
Related papers
- Graph Residual based Method for Molecular Property Prediction [0.7499722271664147]
This manuscript provides a detailed description of the novel GRU-based methodology, ECRGNN, used to map the inputs.
A detailed description of the Variational Autoencoder (VAE) and the end-to-end learning method used for multi-class multi-label property prediction has been provided as well.
arXiv Detail & Related papers (2024-07-27T09:01:36Z)
- Conditional Prototype Rectification Prompt Learning [32.533844163120875]
We propose a Conditional Prototype Rectification Prompt Learning (CPR) method to correct the bias of base examples and augment limited data in an effective way.
CPR achieves state-of-the-art performance on both few-shot classification and base-to-new generalization tasks.
arXiv Detail & Related papers (2024-04-15T15:43:52Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Tree-based local explanations of machine learning model predictions, AraucanaXAI [2.9660372210786563]
A tradeoff between performance and intelligibility often has to be faced, especially in high-stakes applications like medicine.
We propose a novel methodological approach for generating explanations of the predictions of a generic ML model.
arXiv Detail & Related papers (2021-10-15T17:39:19Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Layer-wise Analysis of a Self-supervised Speech Representation Model [26.727775920272205]
Self-supervised learning approaches have been successful for pre-training speech representation models.
However, little is known about the type or extent of information encoded in the pre-trained representations themselves.
arXiv Detail & Related papers (2021-07-10T02:13:25Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.