Symbolic Abstractions From Data: A PAC Learning Approach
- URL: http://arxiv.org/abs/2104.13901v1
- Date: Wed, 28 Apr 2021 17:34:28 GMT
- Title: Symbolic Abstractions From Data: A PAC Learning Approach
- Authors: Alex Devonport, Adnane Saoud, and Murat Arcak
- Abstract summary: Symbolic control techniques aim to satisfy complex logic specifications.
The methods used to compute symbolic abstractions require knowledge of an accurate closed-form model.
We present a new data-driven approach that does not require closed-form dynamics.
- Score: 0.42603120588176624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Symbolic control techniques aim to satisfy complex logic specifications. A
critical step in these techniques is the construction of a symbolic (discrete)
abstraction, a finite-state system whose behaviour mimics that of a given
continuous-state system. The methods used to compute symbolic abstractions,
however, require knowledge of an accurate closed-form model. To generalize them
to systems with unknown dynamics, we present a new data-driven approach that
does not require closed-form dynamics, instead relying only on the ability to
evaluate successors of each state under given inputs. To provide guarantees for
the learned abstraction, we use the Probably Approximately Correct (PAC)
statistical framework. We first introduce a PAC-style behavioural relationship
and an appropriate refinement procedure. We then show how the symbolic
abstraction can be constructed to satisfy this new behavioural relationship.
Moreover, we provide PAC bounds that dictate the amount of data required to
guarantee a prescribed level of accuracy and confidence. Finally, we present an
illustrative example.
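As a concrete illustration of this data requirement, the sketch below combines a standard one-sided, scenario-style sample bound, N ≥ ln(1/δ)/ε, with black-box successor sampling over a 1-D interval partition. All function names and the exact form of the bound are illustrative assumptions, not the paper's construction.

```python
import math
import random

def pac_sample_size(epsilon: float, delta: float) -> int:
    # One-sided scenario-style bound: with N >= ln(1/delta) / epsilon i.i.d.
    # samples, the probability mass of successor behaviour never observed is
    # at most epsilon, with confidence at least 1 - delta. Illustrative
    # stand-in; the paper's exact bound may differ in form.
    return math.ceil(math.log(1.0 / delta) / epsilon)

def black_box_successor(x: float, u: float) -> float:
    # Stand-in for the unknown dynamics: we may evaluate it, never inspect it.
    return 0.9 * x + 0.1 * u

def learn_abstraction(num_cells=10, inputs=(0.0, 0.5, 1.0),
                      epsilon=0.05, delta=1e-3, lo=0.0, hi=1.0):
    # Partition [lo, hi] into interval cells and estimate, per (cell, input),
    # the set of successor cells from sampled evaluations alone.
    width = (hi - lo) / num_cells
    def cell_of(x):
        return min(num_cells - 1, max(0, int((x - lo) / width)))
    n = pac_sample_size(epsilon, delta)
    transitions = set()  # (cell, input index, successor cell)
    for q in range(num_cells):
        for i, u in enumerate(inputs):
            for _ in range(n):
                x = random.uniform(lo + q * width, lo + (q + 1) * width)
                transitions.add((q, i, cell_of(black_box_successor(x, u))))
    return transitions

print(f"{pac_sample_size(0.05, 1e-3)} samples per (cell, input) pair")
print(f"{len(learn_abstraction())} abstract transitions learned")
```

With confidence at least 1 − δ, each (cell, input) pair then misses at most an ε fraction of successor behaviour; this is the flavour of guarantee that the paper's PAC-style behavioural relationship makes precise.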
Related papers
- How to discretize continuous state-action spaces in Q-learning: A symbolic control approach [0.0]
The paper presents a systematic analysis that highlights a major drawback in space discretization methods.
To address this challenge, the paper proposes a symbolic model that represents behavioral relations.
This relation allows the controller synthesized on the abstraction to be applied seamlessly to the original system.
arXiv Detail & Related papers (2024-06-03T17:30:42Z)
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite, state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
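For orientation, the following is a generic textbook partition-refinement computation of a coarsest bisimulation, not the learning technique of the paper above; it shows the fixed point that such methods compute or approximate.

```python
def coarsest_bisimulation(states, transitions, labels):
    # Naive partition refinement for the coarsest bisimulation of a finite
    # labelled transition system. `transitions[s]` is the set of successors
    # of s; `labels[s]` is its observation.
    block_of = {s: labels[s] for s in states}  # initial partition: by label
    while True:
        new_ids, new_block_of = {}, {}
        for s in states:
            # Two states stay together iff they share a block and agree on
            # the set of blocks reachable in one step.
            sig = (block_of[s], frozenset(block_of[t] for t in transitions[s]))
            new_block_of[s] = new_ids.setdefault(sig, len(new_ids))
        if len(new_ids) == len(set(block_of.values())):
            return new_block_of  # no block was split: fixed point reached
        block_of = new_block_of

# Tiny example: a and b are bisimilar, as are c and d.
states = ["a", "b", "c", "d"]
transitions = {"a": {"c"}, "b": {"d"}, "c": set(), "d": set()}
labels = {"a": 0, "b": 0, "c": 1, "d": 1}
print(coarsest_bisimulation(states, transitions, labels))
```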
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
arXiv Detail & Related papers (2024-02-02T20:33:14Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised manner rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- Hierarchical State Abstraction Based on Structural Information Principles [70.24495170921075]
We propose SISA, a novel state abstraction framework based on mathematical structural information principles, developed from an information-theoretic perspective.
SISA is a general framework that can be flexibly integrated with different representation-learning objectives to further improve their performance.
arXiv Detail & Related papers (2023-04-24T11:06:52Z)
- Graph state-space models [19.88814714919019]
State-space models are used to describe time series and operate by maintaining an updated representation of the system state from which predictions are made.
The manuscript aims, for the first time, to fill this gap by learning the functional graph capturing latent dependencies directly from data, while allowing it to change over time.
An encoder-decoder architecture is proposed to learn the state-space model end-to-end on a downstream task.
arXiv Detail & Related papers (2023-01-04T18:15:07Z)
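As a toy illustration of the state-space viewpoint described above (not the paper's encoder-decoder model), the following sketch updates node states through a fixed random dependency graph; in the paper's setting, that graph is learned from data and may vary over time.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes, state_dim = 5, 3
# Latent dependency graph; fixed and random here purely for illustration.
adjacency = (rng.random((num_nodes, num_nodes)) < 0.4).astype(float)
W_self = rng.normal(scale=0.1, size=(state_dim, state_dim))
W_neigh = rng.normal(scale=0.1, size=(state_dim, state_dim))

def step(states, inputs):
    # One update: each node mixes its own state with the mean of its
    # neighbours' states, then applies a nonlinearity.
    degree = np.maximum(adjacency.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (adjacency @ states) / degree
    return np.tanh(states @ W_self.T + neigh_mean @ W_neigh.T + inputs)

states = np.zeros((num_nodes, state_dim))
for t in range(10):
    states = step(states, rng.normal(scale=0.1, size=(num_nodes, state_dim)))
print(states.round(3))
```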
- Relational Action Bases: Formalization, Effective Safety Verification, and Invariants (Extended Version) [67.99023219822564]
We introduce the general framework of relational action bases (RABs).
RABs generalize existing models by lifting both restrictions.
We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes.
arXiv Detail & Related papers (2022-08-12T17:03:50Z)
- Linear-Time Verification of Data-Aware Dynamic Systems with Arithmetic [8.914271888521652]
We introduce a new semantic property of "finite summary", which guarantees the existence of a faithful finite-state abstraction.
Several decidability conditions studied in formal methods and database theory can be seen as concrete, checkable instances of this property.
Our results allow us to analyze systems that were out of reach in earlier approaches.
arXiv Detail & Related papers (2022-03-15T15:14:25Z)
- Learning Markov State Abstractions for Deep Reinforcement Learning [17.34529517221924]
We introduce a novel set of conditions and prove that they are sufficient for learning a Markov abstract state representation.
We then describe a practical training procedure that combines inverse model estimation and temporal contrastive learning.
Our approach learns representations that capture the underlying structure of the domain and lead to improved sample efficiency.
arXiv Detail & Related papers (2021-06-08T14:12:36Z)
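A generic rendering of how inverse-model estimation and temporal contrastive learning can be combined into one training loss is sketched below; the encoder, heads, and dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, latent_dim, num_actions, batch = 16, 8, 4, 32
phi = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
inverse_head = nn.Linear(2 * latent_dim, num_actions)  # predict a from (z, z')
contrast_head = nn.Linear(2 * latent_dim, 1)           # real vs. shuffled pair

def abstraction_loss(obs, next_obs, actions):
    z, z_next = phi(obs), phi(next_obs)
    # Inverse-model term: recover the action from consecutive latent states.
    inv_loss = F.cross_entropy(inverse_head(torch.cat([z, z_next], -1)), actions)
    # Temporal contrastive term: genuine (z, z') pairs vs. shuffled negatives.
    neg = z_next[torch.randperm(z_next.size(0))]
    logits = torch.cat([contrast_head(torch.cat([z, z_next], -1)),
                        contrast_head(torch.cat([z, neg], -1))]).squeeze(-1)
    targets = torch.cat([torch.ones(batch), torch.zeros(batch)])
    return inv_loss + F.binary_cross_entropy_with_logits(logits, targets)

loss = abstraction_loss(torch.randn(batch, obs_dim),
                        torch.randn(batch, obs_dim),
                        torch.randint(0, num_actions, (batch,)))
print(loss.item())
```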
- Towards a Mathematical Theory of Abstraction [0.0]
We provide a precise characterisation of what an abstraction is and, perhaps more importantly, suggest how abstractions can be learnt directly from data.
Our results have deep implications for statistical inference and machine learning and could be used to develop explicit methods for learning precise kinds of abstractions directly from data.
arXiv Detail & Related papers (2021-06-03T13:23:49Z)
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs a semantic loss, which captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)
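As a minimal illustration of semantic loss in the sense used above, the sketch below computes the negative log probability mass that independent Bernoulli predictions assign to the satisfying assignments of a constraint; the constraint and probabilities are made-up examples.

```python
import math
from itertools import product

def semantic_loss(probs, constraint):
    # Semantic loss: negative log of the probability mass that the model's
    # independent Bernoulli outputs assign to constraint-satisfying
    # assignments. Brute-force enumeration; fine for a few variables.
    mass = 0.0
    for assignment in product([0, 1], repeat=len(probs)):
        if constraint(assignment):
            p = 1.0
            for bit, q in zip(assignment, probs):
                p *= q if bit else (1.0 - q)
            mass += p
    return -math.log(mass)

# Exactly-one-true constraint over three variables: confident, valid
# predictions incur low loss; mass on invalid assignments is penalized.
exactly_one = lambda a: sum(a) == 1
print(semantic_loss([0.8, 0.1, 0.1], exactly_one))  # ~0.38
print(semantic_loss([0.5, 0.5, 0.5], exactly_one))  # ~0.98
```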