SAT-Based Bounded Fitting for the Description Logic ALC
- URL: http://arxiv.org/abs/2507.21752v1
- Date: Tue, 29 Jul 2025 12:32:16 GMT
- Title: SAT-Based Bounded Fitting for the Description Logic ALC
- Authors: Maurice Funk, Jean Christoph Jung, Tom Voellmer
- Abstract summary: We investigate bounded fitting for the description logic ALC and its syntactic fragments. We show that the underlying size-restricted fitting problem is NP-complete for all studied fragments. We present an implementation of bounded fitting in ALC and its fragments based on a SAT solver.
- Score: 7.193373053157516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bounded fitting is a general paradigm for learning logical formulas from positive and negative data examples that has recently received considerable interest. We investigate bounded fitting for the description logic ALC and its syntactic fragments. We show that the underlying size-restricted fitting problem is NP-complete for all studied fragments, even in the special case of a single positive and a single negative example. By design, bounded fitting comes with probabilistic guarantees in Valiant's PAC learning framework. In contrast, we show that other classes of algorithms for learning ALC concepts do not provide such guarantees. Finally, we present an implementation of bounded fitting in ALC and its fragments based on a SAT solver. We discuss optimizations and compare our implementation to other concept learning tools.
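The bounded-fitting loop described in the abstract can be illustrated on a simplified propositional analogue: hypotheses are conjunctions of literals rather than ALC concepts, and a brute-force search stands in for the SAT solver that the paper's implementation uses for the (NP-complete) size-restricted fitting step. All names and the hypothesis class below are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

# Hedged sketch of bounded fitting on a propositional analogue.
# Examples are (bit-vector, label-implied) pairs split into pos/neg lists;
# a hypothesis is a conjunction of literals (index, required_value).

def satisfies(term, example):
    # A conjunction holds iff every literal matches the example's bit.
    return all(bool(example[i]) == pol for i, pol in term)

def consistent(term, pos, neg):
    # Fits the data: covers all positives, rejects all negatives.
    return (all(satisfies(term, e) for e in pos)
            and not any(satisfies(term, e) for e in neg))

def size_restricted_fit(n, d, pos, neg):
    # Search for a fitting hypothesis of size <= n. This is the step that
    # is NP-complete in general and that bounded fitting delegates to a
    # SAT solver; here it is exhaustive enumeration for clarity.
    literals = [(i, b) for i in range(d) for b in (True, False)]
    for k in range(n + 1):
        for term in combinations(literals, k):
            if consistent(term, pos, neg):
                return term
    return None

def bounded_fitting(d, pos, neg, max_size=10):
    # Increase the size bound until some hypothesis fits. Returning a
    # smallest fitting hypothesis is what yields Occam-style PAC
    # guarantees for this paradigm.
    for n in range(max_size + 1):
        term = size_restricted_fit(n, d, pos, neg)
        if term is not None:
            return term
    return None

# Example: positives (1,1,0) and (1,0,1), negative (0,1,0) are separated
# by the single literal x0 = True, which the loop finds at bound n = 1.
result = bounded_fitting(3, [(1, 1, 0), (1, 0, 1)], [(0, 1, 0)])
print(result)
```

The outer loop is the generic paradigm; swapping the enumeration for a SAT encoding of "there exists a fitting concept of size at most n" recovers the scheme the paper implements for ALC.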
Related papers
- Nearly Optimal Sample Complexity for Learning with Label Proportions [54.67830198790247]
We investigate Learning from Label Proportions (LLP), a partial information setting where examples in a training set are grouped into bags. Despite the partial observability, the goal is still to achieve small regret at the level of individual examples. We give results on the sample complexity of LLP under square loss, showing that our sample complexity is essentially optimal.
arXiv Detail & Related papers (2025-05-08T15:45:23Z) - Minimum Description Length and Generalization Guarantees for Representation Learning [16.2444595840653]
This paper presents a framework that allows us to derive upper bounds on the generalization error of a representation learning algorithm.
Rather than the mutual information between the encoder's input and the representation, our new bounds involve the "multi-letter" relative entropy.
To the best knowledge of the authors, the established generalization bounds are the first of their kind for Information Bottleneck (IB) type encoders and representation learning.
arXiv Detail & Related papers (2024-02-05T18:12:28Z) - Nonparametric active learning for cost-sensitive classification [2.1756081703276]
We design a generic nonparametric active learning algorithm for cost-sensitive classification.
We prove the near-optimality of obtained upper bounds by providing matching (up to logarithmic factor) lower bounds.
arXiv Detail & Related papers (2023-09-30T22:19:21Z) - SAT-Based PAC Learning of Description Logic Concepts [18.851061569487616]
We propose bounded fitting as a scheme for learning description logic concepts in the presence of ontologies.
We present the system SPELL which implements bounded fitting for the description logic $\mathcal{ELH}^r$ based on a SAT solver, and compare its performance to a state-of-the-art learner.
arXiv Detail & Related papers (2023-05-15T10:20:31Z) - Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z) - Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works for the original QAP without compromising on the matching constraints.
The proposed method is evaluated on three popular benchmarks (Pascal VOC, Willow Object and SPair-71k) and outperforms all previous state-of-the-art methods on all of them.
arXiv Detail & Related papers (2022-01-05T13:37:27Z) - On Guaranteed Optimal Robust Explanations for NLP Models [16.358394218953833]
We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models.
We present two solution algorithms, respectively based on implicit hitting sets and maximum universal subsets.
We evaluate our framework on three widely used sentiment analysis tasks and texts of up to 100 words from the SST, Twitter and IMDB datasets.
arXiv Detail & Related papers (2021-05-08T08:44:48Z) - Reducing Confusion in Active Learning for Part-Of-Speech Tagging [100.08742107682264]
Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost.
We study the problem of selecting instances which maximally reduce the confusion between particular pairs of output tags.
Our proposed AL strategy outperforms other AL strategies by a significant margin.
arXiv Detail & Related papers (2020-11-02T06:24:58Z) - Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
arXiv Detail & Related papers (2020-09-30T16:24:21Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.