Non-monotonic Extensions to Formal Concept Analysis via Object Preferences
- URL: http://arxiv.org/abs/2410.04184v1
- Date: Sat, 5 Oct 2024 15:01:00 GMT
- Title: Non-monotonic Extensions to Formal Concept Analysis via Object Preferences
- Authors: Lucas Carr, Nicholas Leisegang, Thomas Meyer, Sebastian Rudolph
- Abstract summary: We introduce a non-monotonic conditional between sets of attributes, which assumes a preference over the set of objects.
We show that this conditional gives rise to a consequence relation consistent with the postulates for non-monotonicity.
The resulting notion of typical concepts is a further introduction of KLM-style typicality into Formal Concept Analysis.
- Score: 3.4873708563238277
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Formal Concept Analysis (FCA) is an approach to creating a conceptual hierarchy in which a \textit{concept lattice} is generated from a \textit{formal context}: a triple consisting of a set of objects, $G$, a set of attributes, $M$, and an incidence relation $I$ on $G \times M$. A \textit{concept} is then modelled as a pair consisting of a set of objects (the \textit{extent}) and a set of shared attributes (the \textit{intent}). Implications in FCA describe how one set of attributes follows from another. The semantics of these implications closely resemble those of logical consequence in classical logic; in that sense, they describe a monotonic conditional. The contributions of this paper are two-fold. First, we introduce a non-monotonic conditional between sets of attributes, which assumes a preference over the set of objects. We show that this conditional gives rise to a consequence relation that is consistent with the postulates for non-monotonicity proposed by Kraus, Lehmann, and Magidor (commonly referred to as the KLM postulates). We argue that this contribution establishes a strong characterisation of non-monotonicity in FCA. Second, we introduce \textit{typical concepts}: concepts whose intent aligns with expectations from the extent, allowing for an exception-tolerant view of concepts. To this end, we show that the set of all typical concepts is a meet semi-lattice of the original concept lattice. This notion of typical concepts is a further introduction of KLM-style typicality into FCA, and is foundational towards developing an algebraic structure representing a concept lattice of prototypical concepts.
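The formal context and derivation operators described in the abstract can be illustrated with a small sketch. This is a generic FCA example (the toy context of birds and a mammal, and the brute-force concept enumeration, are illustrative assumptions, not part of the paper, which concerns the non-monotonic extension):

```python
from itertools import chain, combinations

# Toy formal context (G, M, I) -- hypothetical example, not from the paper.
G = {"duck", "eagle", "dog"}                      # objects
M = {"flies", "bird", "mammal"}                   # attributes
I = {("duck", "flies"), ("duck", "bird"),
     ("eagle", "flies"), ("eagle", "bird"),
     ("dog", "mammal")}                           # incidence relation on G x M

def up(objs):
    """Derivation A': attributes shared by every object in A."""
    return frozenset(m for m in M if all((g, m) in I for g in objs))

def down(attrs):
    """Derivation B': objects possessing every attribute in B."""
    return frozenset(g for g in G if all((g, m) in I for m in attrs))

def concepts():
    """All concepts (extent, intent), i.e. pairs with A' = B and B' = A."""
    found = set()
    subsets = chain.from_iterable(
        combinations(sorted(G), r) for r in range(len(G) + 1))
    for xs in subsets:
        b = up(frozenset(xs))   # candidate intent
        a = down(b)             # closing the extent
        found.add((a, b))
    return found

cs = concepts()
for extent, intent in sorted(cs, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```

For this context the enumeration yields four concepts, including ({duck, eagle}, {flies, bird}); ordered by inclusion of extents, they form the concept lattice. Brute-force enumeration over subsets is exponential and only suitable for tiny contexts; practical FCA implementations use dedicated algorithms such as NextClosure.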
Related papers
- Rational Inference in Formal Concept Analysis [0.4499833362998487]
Defeasible conditionals are a form of non-monotonic inference.
The KLM framework defines a semantics for defeasible conditionals in the propositional case.
arXiv Detail & Related papers (2025-04-07T20:15:20Z)
- Interaction Asymmetry: A General Principle for Learning Composable Abstractions [27.749478197803256]
We show that interaction asymmetry enables both disentanglement and compositional generalization.
We propose an implementation of these criteria using a flexible Transformer-based VAE, with a novel regularizer on the attention weights of the decoder.
arXiv Detail & Related papers (2024-11-12T13:33:26Z)
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- The Geometry of Categorical and Hierarchical Concepts in Large Language Models [15.126806053878855]
We show how to extend the formalization of the linear representation hypothesis to represent features (e.g., is_animal) as vectors.
We use the formalization to prove a relationship between the hierarchical structure of concepts and the geometry of their representations.
We validate these theoretical results on the Gemma and LLaMA-3 large language models, estimating representations for 900+ hierarchically related concepts using data from WordNet.
arXiv Detail & Related papers (2024-06-03T16:34:01Z)
- A Geometric Notion of Causal Probing [85.49839090913515]
The linear subspace hypothesis states that, in a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that, for at least one concept across two language models, the concept subspace can be used to manipulate the concept value of the generated word with precision.
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
- Concept Activation Regions: A Generalized Framework For Concept-Based Explanations [95.94432031144716]
Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the deep neural network's latent space.
In this work, we propose allowing concept examples to be scattered across different clusters in the DNN's latent space.
This concept activation region (CAR) formalism yields global concept-based explanations and local concept-based feature importance.
arXiv Detail & Related papers (2022-09-22T17:59:03Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrate that CG outperforms CAV on both toy examples and real-world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z)
- FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic Descriptions, and Conceptual Relations [99.54048050189971]
We present a framework for learning new visual concepts quickly, guided by multiple naturally occurring data streams.
The learned concepts support downstream applications, such as answering questions by reasoning about unseen images.
We demonstrate the effectiveness of our model on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-03-30T19:45:00Z)
- Quantification and Aggregation over Concepts of the Ontology [0.0]
We argue that in some KR applications, we want to quantify over sets of concepts formally represented by symbols in the vocabulary.
We present an extension of first-order logic to support such abstractions, and show that it allows writing expressions of knowledge that are elaboration tolerant.
arXiv Detail & Related papers (2022-02-02T07:49:23Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning is the task of learning composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve these two difficulties in a unified framework.
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- Reasoning about Typicality and Probabilities in Preferential Description Logics [0.15749416770494704]
We describe preferential Description Logics of typicality, a nonmonotonic extension of standard Description Logics by means of a typicality operator T.
This extension is based on a minimal model semantics corresponding to a notion of rational closure.
We also consider strengthenings of the rational closure semantics and construction that avoid the so-called blocking of property inheritance problem.
arXiv Detail & Related papers (2020-04-20T14:50:31Z)
- The ERA of FOLE: Foundation [0.0]
This paper continues the discussion of the representation and interpretation of the first-order logical environment FOLE.
The formalism and semantics of (many-sorted) first-order logic can be developed in both a classification form and an interpretation form.
In general, the FOLE representation uses a conceptual approach that is completely compatible with the theory of institutions, formal concept analysis, and information flow.
arXiv Detail & Related papers (2015-12-23T11:00:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.