Minimal Model Reasoning in Description Logics: Don't Try This at Home!
- URL: http://arxiv.org/abs/2508.05350v1
- Date: Thu, 07 Aug 2025 12:56:15 GMT
- Title: Minimal Model Reasoning in Description Logics: Don't Try This at Home!
- Authors: Federica Di Stefano, Quentin Manière, Magdalena Ortiz, Mantas Šimkus
- Abstract summary: We show that concept satisfiability in minimal models is undecidable already for $\mathcal{EL}$. To regain decidability, we impose acyclicity conditions on the TBox that bring the worst-case complexity below double exponential time. We conclude with a brief excursion to the DL-Lite family, where a positive result was known for DL-Lite$_{\text{core}}$.
- Score: 4.387337528923525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reasoning with minimal models has always been at the core of many knowledge representation techniques, but we still have only a limited understanding of this problem in Description Logics (DLs). Minimization of some selected predicates, letting the remaining predicates vary or be fixed, as proposed in circumscription, has been explored and exhibits high complexity. The case of `pure' minimal models, where the extension of all predicates must be minimal, has remained largely uncharted. We address this problem in popular DLs and obtain surprisingly negative results: concept satisfiability in minimal models is undecidable already for $\mathcal{EL}$. This undecidability also extends to a very restricted fragment of tuple-generating dependencies. To regain decidability, we impose acyclicity conditions on the TBox that bring the worst-case complexity below double exponential time and allow us to establish a connection with the recently studied pointwise circumscription; we also derive results in data complexity. We conclude with a brief excursion to the DL-Lite family, where a positive result was known for DL-Lite$_{\text{core}}$, but our investigation establishes ExpSpace-hardness already for its extension DL-Lite$_{\text{horn}}$.
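To make the `pure' minimal model semantics concrete, here is a small illustrative example (ours, for orientation only; the paper's precise definitions may differ in details such as how domains are compared). Under one standard definition, $\mathcal{J} \preceq \mathcal{I}$ holds if both interpretations share the same domain and $X^{\mathcal{J}} \subseteq X^{\mathcal{I}}$ for every concept and role name $X$; a model $\mathcal{I}$ of a TBox $\mathcal{T}$ is minimal if no model $\mathcal{J} \neq \mathcal{I}$ of $\mathcal{T}$ satisfies $\mathcal{J} \preceq \mathcal{I}$. For the $\mathcal{EL}$ TBox $\mathcal{T} = \{ \top \sqsubseteq \exists r.B \}$ and the one-element domain $\{d\}$, every model must contain $(d,d)$ in $r^{\mathcal{I}}$ and $d$ in $B^{\mathcal{I}}$, so the model with exactly these extensions is minimal. Concept satisfiability in minimal models then asks whether a given concept has a nonempty extension in some minimal model of $\mathcal{T}$; per the abstract, this problem is undecidable already for $\mathcal{EL}$.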
Related papers
- A Fine-Grained Complexity View on Propositional Abduction -- Algorithms and Lower Bounds [6.6362553223890535]
We analyze the complexity of intractable abduction problems under the seemingly overlooked but natural parameter n. We obtain several positive results for $\Sigma^p_2$- as well as NP- and coNP-complete fragments. We complement this with lower bounds and, for many fragments, rule out improvements under the (strong) exponential-time hypothesis.
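As background for readers unfamiliar with the setting (a standard formulation, not taken from this abstract): an instance of propositional abduction consists of a propositional theory $T$, a set of hypotheses $H$, and a manifestation $m$; a solution is a set $E \subseteq H$ such that $T \cup E$ is satisfiable and $T \cup E \models m$. We take the parameter $n$ above to be the number of propositional variables, though the snippet does not say so explicitly.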
arXiv Detail & Related papers (2025-05-15T11:56:19Z)
- Syzygy of Thoughts: Improving LLM CoT with the Minimal Free Resolution [59.39066657300045]
Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps. We propose Syzygy of Thoughts (SoT), a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths. SoT captures deeper logical dependencies, enabling more robust and structured problem-solving.
arXiv Detail & Related papers (2025-04-13T13:35:41Z)
- Identifying General Mechanism Shifts in Linear Causal Representations [58.6238439611389]
We consider the linear causal representation learning setting where we observe a linear mixing of $d$ unknown latent factors.
Recent work has shown that it is possible to recover the latent factors as well as the underlying structural causal model over them.
We provide a surprising identifiability result: under some very mild standard assumptions, it is indeed possible to identify the set of shifted nodes.
arXiv Detail & Related papers (2024-10-31T15:56:50Z)
- Lower Bounds for Learning in Revealing POMDPs [88.23337313766355]
This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting.
For multi-step revealing POMDPs, we show that the latent state-space dependence is at least $\Omega(S^{1.5})$ in the sample complexity.
arXiv Detail & Related papers (2023-02-02T18:59:30Z)
- Expected Worst Case Regret via Stochastic Sequential Covering [14.834625066344582]
We introduce a notion of expected worst-case minimax regret that generalizes and encompasses previously known minimax regrets.
For such minimax regrets we establish tight bounds via a novel concept of upper global sequential covering.
We demonstrate the effectiveness of our approach by establishing tight bounds on the expected worst-case minimax regrets for logarithmic loss and general mixable losses.
arXiv Detail & Related papers (2022-09-09T17:31:46Z)
- Near-Optimal Sample Complexity Bounds for Constrained MDPs [25.509556551558834]
We provide minimax upper and lower bounds on the sample complexity for learning near-optimal policies in a discounted CMDP.
We show that learning CMDPs is as easy as MDPs when small constraint violations are allowed, but inherently more difficult when we demand zero constraint violation.
arXiv Detail & Related papers (2022-06-13T15:58:14Z)
- Expressivity of Planning with Horn Description Logic Ontologies (Technical Report) [12.448670165713652]
We address planning under open-world state constraints formalized by a description logic (DL) ontology.
We propose a novel compilation scheme into standard PDDL with derived predicates.
We show that our approach can outperform previous work on existing benchmarks for planning with DL.
arXiv Detail & Related papers (2022-03-17T14:50:06Z)
- A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs [117.82903457289584]
We derive a novel problem-dependent lower bound for regret in finite-horizon Markov Decision Processes (MDPs).
We show that our lower bound is considerably smaller than in the general case and that it does not scale with the minimum action gap at all.
We show that this last result is attainable (up to $\mathrm{poly}(H)$ terms, where $H$ is the horizon) by providing a regret upper bound based on policy gaps for an optimistic algorithm.
arXiv Detail & Related papers (2021-06-24T13:46:09Z)
- Bilinear Classes: A Structural Framework for Provable Generalization in RL [119.42509700822484]
Bilinear Classes is a new structural framework that permits generalization in reinforcement learning.
The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable.
Our main result provides an RL algorithm with polynomial sample complexity for Bilinear Classes.
arXiv Detail & Related papers (2021-03-19T16:34:20Z)
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose a doubly growing epoch schedule and estimate the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z)