Quantification and Aggregation over Concepts of the Ontology
- URL: http://arxiv.org/abs/2202.00898v4
- Date: Wed, 30 Aug 2023 09:06:11 GMT
- Title: Quantification and Aggregation over Concepts of the Ontology
- Authors: Pierre Carbonnelle (KU Leuven, Leuven, Belgium), Matthias Van der
Hallen (KU Leuven, Leuven, Belgium), Marc Denecker (KU Leuven, Leuven,
Belgium)
- Abstract summary: We argue that in some KR applications, we want to quantify over sets of concepts formally represented by symbols in the vocabulary.
We present an extension of first-order logic to support such abstractions, and show that it allows writing expressions of knowledge that are elaboration tolerant.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We argue that in some KR applications, we want to quantify over sets of
concepts formally represented by symbols in the vocabulary. We show that this
quantification should be distinguished from second-order quantification and
meta-programming quantification. We also investigate the relationship with
concepts in intensional logic.
We present an extension of first-order logic to support such abstractions,
and show that it allows writing expressions of knowledge that are elaboration
tolerant. To avoid nonsensical sentences in this formalism, we refine the
concept of well-formed sentences, and propose a method to verify
well-formedness with a complexity that is linear with the number of tokens in
the formula.
We have extended FO(.), a Knowledge Representation language, and IDP-Z3, a
reasoning engine for FO(.), accordingly. We show that this extension was
essential in accurately modelling various problem domains in an
elaboration-tolerant way, i.e., without reification.
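The core idea can be illustrated with a minimal sketch in plain Python rather than FO(.) syntax (the vocabulary, interpretations, and names below are invented for illustration): concept symbols are treated as first-class values that can be quantified and aggregated over, instead of being reified as extra data.

```python
# Minimal sketch in plain Python (not FO(.) syntax; names are illustrative):
# quantify and aggregate over the concepts of a vocabulary by treating
# concept symbols as first-class values.

# Unary concepts: each symbol maps to its interpretation (a set of individuals).
concepts = {
    "Red":   {"apple", "firetruck"},
    "Round": {"apple", "ball"},
    "Loud":  {"firetruck"},
}

def holds(concept_symbol, individual):
    """Apply the concept denoted by a symbol to an individual."""
    return individual in concepts[concept_symbol]

# Aggregation over concepts: count how many concepts apply to "apple".
n_apple = sum(1 for c in concepts if holds(c, "apple"))

# Quantification over concepts: is there a concept covering everything
# that "Loud" covers? (Trivially yes: "Loud" itself.)
exists_cover = any(concepts["Loud"] <= concepts[c] for c in concepts)

assert n_apple == 2   # Red and Round
assert exists_cover
```

Note the quantified variable ranges only over the symbols of the (finite) vocabulary, which is what distinguishes this from full second-order quantification over arbitrary sets.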
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Nonparametric Partial Disentanglement via Mechanism Sparsity: Sparse Actions, Interventions and Sparse Temporal Dependencies [58.179981892921056]

This work introduces a novel principle for disentanglement we call mechanism sparsity regularization.
We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors.
We show that the latent factors can be recovered by regularizing the learned causal graph to be sparse.
arXiv Detail & Related papers (2024-01-10T02:38:21Z)
- A Unified View on Forgetting and Strong Equivalence Notions in Answer Set Programming [14.342696862884704]
We introduce a novel relativized equivalence notion, which is able to capture all related notions from the literature.
We then introduce an operator that combines projection and a relaxation of (SP)-forgetting to obtain the relativized simplifications.
arXiv Detail & Related papers (2023-12-13T09:05:48Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- Enriching Disentanglement: From Logical Definitions to Quantitative Metrics [59.12308034729482]
Disentangling the explanatory factors in complex data is a promising approach for data-efficient representation learning.
We establish relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics.
We empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.
arXiv Detail & Related papers (2023-05-19T08:22:23Z)
- Dual Box Embeddings for the Description Logic EL++ [16.70961576041243]
Like Knowledge Graphs (KGs), ontologies are often incomplete, and maintaining and constructing them has proved challenging.
As with KGs, a promising approach is to learn embeddings in a latent vector space, while additionally ensuring they adhere to the semantics of the underlying Description Logic (DL).
We propose a novel ontology embedding method named Box$2$EL for the DL EL++, which represents both concepts and roles as boxes.
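The representational idea can be sketched in a few lines of plain Python (the 2-D boxes and concept names below are hypothetical, and Box2EL itself learns box parameters rather than fixing them by hand): each concept is an axis-aligned box, and subsumption is tested as box containment.

```python
# Toy sketch of the box idea (illustrative only, not the paper's Box2EL code):
# each concept is an axis-aligned box; subsumption C ⊑ D is box containment.
def subsumed_by(inner, outer):
    (ilo, ihi), (olo, ohi) = inner, outer
    return (all(o <= i for o, i in zip(olo, ilo)) and
            all(i <= o for i, o in zip(ihi, ohi)))

# Hypothetical 2-D embeddings: (lower corner, upper corner).
animal = ([0.0, 0.0], [10.0, 10.0])
dog    = ([2.0, 2.0], [4.0, 4.0])

assert subsumed_by(dog, animal)        # Dog ⊑ Animal holds
assert not subsumed_by(animal, dog)    # the converse does not
```

Boxes, unlike points, are closed under intersection, which is what makes them a natural fit for conjunctions of DL concepts.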
arXiv Detail & Related papers (2023-01-26T14:13:37Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Attribute Selection using Contranominal Scales [0.09668407688201358]
Formal Concept Analysis (FCA) makes it possible to analyze binary data by deriving concepts and ordering them in lattices.
The size of such a lattice depends on the number of subcontexts of the corresponding formal context that are isomorphic to contranominal scales.
We propose the algorithm ContraFinder that enables the computation of all contranominal scales of a given formal context.
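The FCA derivation operators behind such lattices can be sketched in plain Python (the toy context below is invented for illustration, and the paper's ContraFinder algorithm itself is not reproduced here): every formal concept arises by closing an attribute subset.

```python
from itertools import combinations

# Minimal FCA sketch on a toy formal context (illustrative only):
# enumerate all formal concepts by closing every attribute subset.
objects = {"o1": {"a", "b"}, "o2": {"b", "c"}, "o3": {"a", "c"}}
attributes = {"a", "b", "c"}

def extent(attrs):
    """All objects that have every attribute in attrs."""
    return {o for o, has in objects.items() if attrs <= has}

def intent(objs):
    """All attributes shared by every object in objs."""
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

# Each pair (extent(B), intent(extent(B))) is a formal concept,
# and every formal concept arises this way.
concepts = {(frozenset(e), frozenset(intent(e)))
            for r in range(len(attributes) + 1)
            for attrs in combinations(sorted(attributes), r)
            for e in [extent(set(attrs))]}

# Ordered by extent inclusion, these pairs form the concept lattice.
assert len(concepts) == 8
```

This brute-force closure is exponential in the number of attributes; the point of contranominal scales is precisely that they are the subcontexts driving that blow-up.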
arXiv Detail & Related papers (2021-06-21T10:53:50Z)
- Reasoning with Contextual Knowledge and Influence Diagrams [4.111899441919165]
Influence diagrams (IDs) are well-known formalisms extending Bayesian networks to model decision situations under uncertainty.
We complement IDs with the lightweight description logic (DL) EL to overcome their limitations in representing contextual knowledge.
arXiv Detail & Related papers (2020-07-01T15:57:48Z)
- Plausible Reasoning about EL-Ontologies using Concept Interpolation [27.314325986689752]
We propose an inductive mechanism which is based on a clear model-theoretic semantics, and can thus be tightly integrated with standard deductive reasoning.
We focus on interpolation, a powerful commonsense reasoning mechanism which is closely related to cognitive models of category-based induction.
arXiv Detail & Related papers (2020-06-25T14:19:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.