An Algebraic Framework for Hierarchical Probabilistic Abstraction
- URL: http://arxiv.org/abs/2502.21216v1
- Date: Fri, 28 Feb 2025 16:47:42 GMT
- Title: An Algebraic Framework for Hierarchical Probabilistic Abstraction
- Authors: Nijesh Upreti, Vaishak Belle
- Abstract summary: We introduce a hierarchical probabilistic abstraction framework aimed at addressing challenges by extending a measure-theoretic foundation for hierarchical abstraction. This approach bridges high-level conceptualization with low-level perceptual data, enhancing interpretability and allowing layered analysis. Our framework provides a robust foundation for abstraction analysis across AI subfields, particularly in aligning System 1 and System 2 thinking.
- Score: 5.455744338342196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Abstraction is essential for reducing the complexity of systems across diverse fields, yet designing effective abstraction methodologies for probabilistic models is inherently challenging due to stochastic behaviors and uncertainties. Current approaches often distill detailed probabilistic data into higher-level summaries to support tractable and interpretable analyses, though they typically struggle to fully represent the relational and probabilistic hierarchies through single-layered abstractions. We introduce a hierarchical probabilistic abstraction framework aimed at addressing these challenges by extending a measure-theoretic foundation for hierarchical abstraction. The framework enables modular problem-solving via layered mappings, facilitating both detailed layer-specific analysis and a cohesive system-wide understanding. This approach bridges high-level conceptualization with low-level perceptual data, enhancing interpretability and allowing layered analysis. Our framework provides a robust foundation for abstraction analysis across AI subfields, particularly in aligning System 1 and System 2 thinking, thereby supporting the development of diverse abstraction methodologies.
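To illustrate the kind of layered mapping the abstract describes, here is a minimal sketch (not the authors' code; all names and the layer structure are illustrative assumptions) of hierarchical abstraction over a discrete probability measure: each layer coarsens the state space via a surjective map, and the abstract distribution is the pushforward of the one below it.

```python
# Hypothetical sketch: hierarchical abstraction as successive pushforwards
# of a discrete probability measure under layer-specific state maps.
from collections import defaultdict

def pushforward(dist, mapping):
    """Push a discrete distribution forward through an abstraction map:
    the probability of an abstract state is the sum over its preimage."""
    abstract = defaultdict(float)
    for state, p in dist.items():
        abstract[mapping[state]] += p
    return dict(abstract)

# Layer 0: fine-grained ("perceptual") states with probabilities.
fine = {"s0": 0.1, "s1": 0.2, "s2": 0.3, "s3": 0.4}

# Illustrative layer maps: each layer coarsens the previous one.
layer1_map = {"s0": "a", "s1": "a", "s2": "b", "s3": "b"}
layer2_map = {"a": "high", "b": "high"}  # top layer collapses everything

# Build the hierarchy bottom-up; each entry is one layer's distribution.
layers = [fine]
for m in (layer1_map, layer2_map):
    layers.append(pushforward(layers[-1], m))
```

Because each layer is a pushforward, total probability mass is preserved at every level, which is what lets layer-specific analyses remain mutually consistent.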
Related papers
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage. Models may behave unreliably due to poorly explored failure modes. Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z) - Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features [19.15360328688008]
We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features.
The framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable.
arXiv Detail & Related papers (2024-08-30T10:52:18Z) - Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning [83.41487567765871]
Skipper is a model-based reinforcement learning framework.
It automatically generalizes the given task into smaller, more manageable subtasks.
It enables sparse decision-making and focused abstractions on the relevant parts of the environment.
arXiv Detail & Related papers (2023-09-30T02:25:18Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - Hierarchical State Abstraction Based on Structural Information Principles [70.24495170921075]
We propose a novel mathematical Structural Information principles-based State Abstraction framework, namely SISA, from the information-theoretic perspective.
SISA is a general framework that can be flexibly integrated with different representation-learning objectives to improve their performances further.
arXiv Detail & Related papers (2023-04-24T11:06:52Z) - Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations [62.65877150123775]
Causal abstraction is a promising theoretical framework for explainable artificial intelligence.
Existing causal abstraction methods require a brute-force search over alignments between the high-level model and the low-level one.
We present distributed alignment search (DAS), which overcomes these limitations.
arXiv Detail & Related papers (2023-03-05T00:57:49Z) - Multi-Resolution Online Deterministic Annealing: A Hierarchical and Progressive Learning Architecture [0.0]
We introduce a general-purpose hierarchical learning architecture that is based on the progressive partitioning of a possibly multi-resolution data space.
We show that the solution of each optimization problem can be estimated online using gradient-free approximation updates.
Asymptotic convergence analysis and experimental results are provided for supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-12-15T23:21:49Z) - A Direct Approximation of AIXI Using Logical State Abstractions [6.570488724773507]
We propose a practical integration of logical state abstraction with AIXI, a Bayesian optimality notion for reinforcement learning agents.
We address the problem of selecting the right subset of features to form state abstractions.
Exact Bayesian model learning is then achieved using a suitable generalisation of Context Tree Weighting over abstract state sequences.
arXiv Detail & Related papers (2022-10-13T11:30:56Z) - Abstract Interpretation for Generalized Heuristic Search in Model-Based Planning [50.96320003643406]
Domain-general model-based planners often derive their generality by constructing search heuristics through the relaxation of symbolic world models.
We illustrate how abstract interpretation can serve as a unifying framework for these abstractions, extending the reach of search to richer world models.
These abstractions can also be integrated with learning, allowing agents to jumpstart planning in novel world models via abstraction-derived information.
arXiv Detail & Related papers (2022-08-05T00:22:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.