On rough mereology and VC-dimension in treatment of decision prediction for open world decision systems
- URL: http://arxiv.org/abs/2406.13329v1
- Date: Wed, 19 Jun 2024 08:22:51 GMT
- Title: On rough mereology and VC-dimension in treatment of decision prediction for open world decision systems
- Authors: Lech T. Polkowski
- Abstract summary: Decision prediction for new objects is crucial for online learning, where each new object must have a predicted decision value.
The approach we propose is founded on the theory of rough mereology and requires a theory of sets/concepts.
We apply the resulting notion of a part to a degree in a procedure to select a decision for new, yet unseen objects.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given raw knowledge in the form of a data table/a decision system, one faces two possible avenues. One is to treat the system as closed, i.e., its universe does not admit new objects; the other, to the contrary, is to treat its universe as open to the admittance of new objects. In particular, one may obtain new objects whose sets of feature values are new to the system. In this case the problem is to assign a decision value to any such new object. This problem is to some extent resolved in rough set theory, e.g., on the basis of the similarity of the value set of a new object to the value sets of objects already assigned a decision value. This is crucial for online learning, where each new object must have a predicted decision value. There is a vast literature on methods for decision prediction for new, yet unseen objects. The approach we propose is founded on the theory of rough mereology and requires a theory of sets/concepts; we root our theory in the classical set theory of Syllogistic, within which we recall the theory of parts known as Mereology. Then we recall our theory of Rough Mereology along with the theory of weight assignment to the Tarski algebra of Mereology. This allows us to introduce the notion of a part to a degree. Once we have defined the basics of Mereology and Rough Mereology, we recall our theory of weight assignment to elements of the Boolean algebra within Mereology; this allows us to define the relation of being a part to a degree, and we apply this notion in a procedure to select a decision for new, yet unseen objects. In selecting a plausible candidate which would pass its decision value to the new object, we employ the notion of the Vapnik-Chervonenkis dimension: at the first stage we select the candidate with the largest VC-dimension of the family of its $\varepsilon$-components for some choice of $\varepsilon$.
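The abstract outlines a concrete prediction procedure: measure the degree to which one value set is a part of another (rough inclusion), form $\varepsilon$-components of training objects, and pick the candidate with the largest VC-dimension of the family of its $\varepsilon$-components, which then passes its decision value to the new object. The sketch below is one illustrative reading of that procedure, not the paper's algorithm: it assumes the standard rough inclusion $\mu(X,Y)=|X\cap Y|/|X|$, takes an object's $\varepsilon$-component to be the set of training objects whose value sets are parts of it to degree at least $\varepsilon$, and replaces the exact VC-dimension with a small brute-force shattering estimate. All names (TABLE, rough_inclusion, eps_component, vc_dim_estimate, predict_decision), the toy data, and the tie-breaking rule are hypothetical.

```python
from itertools import combinations

# Toy decision system: object id -> (set of (attribute, value) pairs, decision).
# Layout, data, and parameter names are illustrative only.
TABLE = {
    "u1": ({("color", "red"), ("size", "big"), ("shape", "round")}, "yes"),
    "u2": ({("color", "red"), ("size", "small"), ("shape", "round")}, "yes"),
    "u3": ({("color", "blue"), ("size", "big"), ("shape", "square")}, "no"),
    "u4": ({("color", "blue"), ("size", "small"), ("shape", "round")}, "no"),
}

def rough_inclusion(x, y):
    """Degree mu(x, y) to which value set x is a part of value set y;
    here the standard choice |x intersect y| / |x|."""
    return len(x & y) / len(x) if x else 0.0

def eps_component(center, eps):
    """Training objects whose value sets are parts of `center`'s value set
    to degree at least eps."""
    cx = TABLE[center][0]
    return frozenset(u for u, (vals, _) in TABLE.items()
                     if rough_inclusion(vals, cx) >= eps)

def vc_dim_estimate(family, points, max_d=3):
    """Largest d <= max_d such that some d-element subset of `points` is
    shattered by `family` -- a crude brute-force stand-in for VC-dimension."""
    best = 0
    for d in range(1, min(max_d, len(points)) + 1):
        shattered = any(
            len({tuple(p in s for p in sample) for s in family}) == 2 ** d
            for sample in combinations(points, d))
        if not shattered:
            break
        best = d
    return best

def predict_decision(new_values, eps=0.5):
    """One reading of the selection step: among training objects containing the
    new object's value set to degree >= eps, pick the candidate whose family of
    eps-components (over its eps-neighbourhood) has the largest estimated
    VC-dimension, breaking ties by rough inclusion, and pass on its decision."""
    candidates = [u for u, (vals, _) in TABLE.items()
                  if rough_inclusion(new_values, vals) >= eps]
    if not candidates:
        return None
    def score(u):
        neighbourhood = eps_component(u, eps)
        family = [eps_component(v, eps) for v in neighbourhood]
        return (vc_dim_estimate(family, neighbourhood),
                rough_inclusion(new_values, TABLE[u][0]))
    best = max(candidates, key=score)
    return TABLE[best][1]

print(predict_decision({("color", "red"), ("size", "big"), ("shape", "square")}))
```

Under these assumptions, the paper-specific choices are the rough inclusion measure, the value of $\varepsilon$, and how ties between equally scored candidates are broken; each could be swapped without changing the overall shape of the procedure.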
Related papers
- ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference [69.24516189971929]
In this paper, we introduce a new type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE).
While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network.
arXiv Detail & Related papers (2024-03-16T02:07:45Z)
- Fundamental Components of Deep Learning: A category-theoretic approach [0.0]
This thesis develops a novel mathematical foundation for deep learning based on the language of category theory.
We also systematise many existing approaches, placing many existing constructions and concepts under the same umbrella.
arXiv Detail & Related papers (2024-03-13T01:29:40Z)
- A Geometric Notion of Causal Probing [91.14470073637236]
In a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that LEACE returns a one-dimensional subspace containing roughly half of total concept information.
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Logic for Explainable AI [11.358487655918676]
A central quest in explainable AI relates to understanding the decisions made by (learned) classifiers.
We discuss in this tutorial a comprehensive, semantical and computational theory of explainability along these dimensions.
arXiv Detail & Related papers (2023-05-09T04:53:57Z)
- Exact learning for infinite families of concepts [0.0]
We study arbitrary infinite families of concepts, each of which consists of an infinite set of elements and an infinite set of subsets of this set, called concepts.
We consider the notion of a problem over a family of concepts that is described by a finite number of elements.
arXiv Detail & Related papers (2022-01-13T07:32:47Z)
- Exact learning and test theory [0.0]
We study arbitrary infinite binary information systems each of which consists of an infinite set of elements.
We consider the notion of a problem over an information system, which is described by a finite number of attributes.
arXiv Detail & Related papers (2022-01-12T15:10:22Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- On Exploiting Hitting Sets for Model Reconciliation [53.81101846598925]
In human-aware planning, a planning agent may need to provide an explanation to a human user on why its plan is optimal.
A popular approach to do this is called model reconciliation, where the agent tries to reconcile the differences between its model and the human's model.
We present a logic-based framework for model reconciliation that extends beyond the realm of planning.
arXiv Detail & Related papers (2020-12-16T21:25:53Z)
- CURI: A Benchmark for Productive Concept Learning Under Uncertainty [33.83721664338612]
We introduce a new few-shot, meta-learning benchmark, Compositional Reasoning Under Uncertainty (CURI).
CURI evaluates different aspects of productive and systematic generalization, including abstract understandings of disentangling, productive generalization, learning operations, variable binding, etc.
It also defines a model-independent "compositionality gap" to evaluate the difficulty of generalizing out-of-distribution along each of these axes.
arXiv Detail & Related papers (2020-10-06T16:23:17Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework that seamlessly provides key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
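Since the Logical Neural Networks entry above hinges on neurons that compute components of formulas in a weighted real-valued logic, here is a minimal sketch of the kind of Łukasiewicz-style weighted conjunction such a neuron might evaluate; the parameterization (a bias beta plus per-input weights) is illustrative and not necessarily the paper's exact formulation.

```python
def weighted_and(truths, weights, beta=1.0):
    """Lukasiewicz-style weighted real-valued conjunction on [0, 1] truth values:
    each weight controls how strongly a low input pulls the conjunction down.
    Illustrative of a 'neuron as part of a logical formula'; not the paper's
    exact parameterization."""
    shortfall = sum(w * (1.0 - t) for t, w in zip(truths, weights))
    return max(0.0, min(1.0, beta - shortfall))

# A neuron standing for "A AND B", with B weighted more heavily than A.
print(weighted_and([0.9, 0.6], weights=[1.0, 2.0]))  # about 0.1
```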