Prediction of concept lengths for fast concept learning in description logics
- URL: http://arxiv.org/abs/2107.04911v1
- Date: Sat, 10 Jul 2021 21:00:48 GMT
- Title: Prediction of concept lengths for fast concept learning in description logics
- Authors: N'Dah Jean Kouagou, Stefan Heindorf, Caglar Demir, Axel-Cyrille Ngonga Ngomo
- Abstract summary: Concept learning approaches based on refinement operators explore partially ordered solution spaces to compute concepts.
We propose a supervised machine learning approach for learning concept lengths, which allows predicting the length of the target concept.
We show that integrating our concept length predictor into the CELOE algorithm improves CELOE's runtime by a factor of up to 13.4 without any significant changes to the quality of the results it generates.
- Score: 2.0474076605741036
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Concept learning approaches based on refinement operators explore partially
ordered solution spaces to compute concepts, which are used as binary
classification models for individuals. However, the refinement trees spanned by
these approaches can easily grow to millions of nodes for complex learning
problems. This leads to refinement-based approaches often failing to detect
optimal concepts efficiently. In this paper, we propose a supervised machine
learning approach for learning concept lengths, which allows predicting the
length of the target concept and therefore facilitates the reduction of the
search space during concept learning. To achieve this goal, we compare four
neural architectures and evaluate them on four benchmark knowledge
graphs--Carcinogenesis, Mutagenesis, Semantic Bible, Family Benchmark. Our
evaluation results suggest that recurrent neural network architectures perform
best at concept length prediction with an F-measure of up to 92%. We show that
integrating our concept length predictor into the CELOE (Class Expression
Learner for Ontology Engineering) algorithm improves CELOE's runtime by a
factor of up to 13.4 without any significant changes to the quality of the
results it generates. For reproducibility, we provide our implementation in the
public GitHub repository at
https://github.com/ConceptLengthLearner/ReproducibilityRepo
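As a rough illustration of the idea (not the authors' implementation from the repository above), a recurrent concept-length predictor can be sketched as follows. The sketch assumes each learning problem is represented as a sequence of embedding vectors for its positive and negative examples, casts length prediction as classification up to a hypothetical maximum length, and uses a GRU; the constants MAX_LEN, EMB_DIM and the toy input are invented for illustration.

import torch
import torch.nn as nn

MAX_LEN = 11      # hypothetical upper bound on the target concept length
EMB_DIM = 40      # hypothetical dimensionality of the example embeddings

class ConceptLengthPredictor(nn.Module):
    """Maps a learning problem (sequence of example embeddings) to a length class."""
    def __init__(self, emb_dim=EMB_DIM, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, MAX_LEN + 1)   # classify length 0..MAX_LEN

    def forward(self, examples):                     # examples: (batch, n_examples, emb_dim)
        _, h = self.rnn(examples)                    # final hidden state: (1, batch, hidden)
        return self.head(h.squeeze(0))               # logits over candidate lengths

# Toy usage: embeddings of the positive/negative examples of one learning problem.
problem = torch.randn(1, 20, EMB_DIM)
predicted_length = ConceptLengthPredictor()(problem).argmax(dim=-1).item()
# A refinement-based learner such as CELOE can then skip refinements whose
# length exceeds predicted_length, which is how the search space shrinks.
print("prune refinements longer than", predicted_length)

A predictor of this kind is trained in a supervised fashion on pairs of learning problems and target concept lengths derived from the knowledge graph, which is what allows the length bound to be available before the actual concept search starts.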
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Restyling Unsupervised Concept Based Interpretable Networks with Generative Models [14.604305230535026]
We propose a novel method that relies on mapping the concept features to the latent space of a pretrained generative model.
We quantitatively ascertain the efficacy of our method in terms of accuracy of the interpretable prediction network, fidelity of reconstruction, as well as faithfulness and consistency of learnt concepts.
arXiv Detail & Related papers (2024-07-01T14:39:41Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- On the performance of deep learning for numerical optimization: an application to protein structure prediction [0.0]
We present a study on the performance of deep learning models on global optimization problems.
The proposed approach adopts the idea of the neural architecture search (NAS) to generate efficient neural networks.
Experiments reveal that the generated learning models can achieve competitive results when compared to hand-designed algorithms.
arXiv Detail & Related papers (2020-12-17T17:01:30Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task; a generic sketch of this family of methods appears after this list.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
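The last two entries above rely on optimization-based (gradient-based) meta-learning, as referenced in the MetaSDF summary. For orientation only, the following is a minimal Reptile-style sketch of that family of methods on a toy regression task; it is not the algorithm of either paper, and the task sampler, network, and learning rates are invented for illustration.

import torch
import torch.nn as nn

def sample_task():
    # Hypothetical task distribution: regress y = a * sin(x + b) with random a, b.
    a, b = torch.rand(1) * 2, torch.rand(1) * 3
    x = torch.linspace(-3, 3, 32).unsqueeze(1)
    return x, a * torch.sin(x + b)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for _ in range(100):                                   # meta-iterations
    x, y = sample_task()
    init = [p.clone() for p in model.parameters()]     # snapshot the initialization
    inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                        # adapt to the sampled task
        inner_opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        inner_opt.step()
    with torch.no_grad():                               # meta-update: move the
        for p, p0 in zip(model.parameters(), init):     # initialization toward the
            p.copy_(p0 + meta_lr * (p - p0))            # task-adapted weights

The meta-learned initialization is the output: at test time, a few inner-loop steps on a new task's support set are expected to suffice, which is the property the few-shot papers above exploit.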
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and accepts no responsibility for any consequences arising from its use.