Learning Interpretable Models in the Property Specification Language
- URL: http://arxiv.org/abs/2002.03668v1
- Date: Mon, 10 Feb 2020 11:42:50 GMT
- Title: Learning Interpretable Models in the Property Specification Language
- Authors: Rajarshi Roy, Dana Fisman and Daniel Neider
- Abstract summary: We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
- Score: 6.875312133832079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of learning human-interpretable descriptions of a
complex system from a finite set of positive and negative examples of its
behavior. In contrast to most of the recent work in this area, which focuses on
descriptions expressed in Linear Temporal Logic (LTL), we develop a learning
algorithm for formulas in the IEEE standard temporal logic PSL (Property
Specification Language). Our work is motivated by the fact that many natural
properties, such as an event happening at every n-th point in time, cannot be
expressed in LTL, whereas it is easy to express such properties in PSL.
Moreover, formulas in PSL can be more succinct and easier to interpret (due to
the use of regular expressions in PSL formulas) than formulas in LTL.
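For intuition, consider the paper's motivating example. In one common PSL syntax (a hedged sketch; trigger notation varies slightly across dialects), the property "p holds at every second point in time" can be written with a regular-expression trigger:

```
{ (true ; true)[*] ; true } |-> p
```

The regular expression matches exactly the prefixes of odd length, so the trigger forces p at positions 0, 2, 4, and so on; by a classical result of Wolper, no LTL formula can express this property.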
Our learning algorithm builds on top of an existing algorithm for learning
LTL formulas. Roughly speaking, our algorithm reduces the learning task to a
constraint satisfaction problem in propositional logic and then uses a SAT
solver to search for a solution in an incremental fashion. We have implemented
our algorithm and performed a comparative study between the proposed method and
the existing LTL learning algorithm. Our results illustrate the effectiveness
of the proposed approach to provide succinct human-interpretable descriptions
from examples.
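The incremental search can be pictured with the following minimal Python sketch (our own illustration, not the authors' implementation; `encode` and `decode` are hypothetical stand-ins for the paper's propositional encoding, and PySAT supplies an off-the-shelf SAT solver):

```python
# Minimal sketch of the incremental SAT-based learning loop (not the
# authors' code). `encode` and `decode` are hypothetical stand-ins for
# the paper's propositional encoding of "some PSL formula of size n
# classifies all positive/negative example words correctly".
from pysat.solvers import Glucose3

def learn_formula(encode, decode, positives, negatives, max_size=20):
    for n in range(1, max_size + 1):               # grow the target size
        clauses = encode(n, positives, negatives)  # CNF constraints for size n
        with Glucose3(bootstrap_with=clauses) as solver:
            if solver.solve():                     # a consistent formula exists
                return decode(solver.get_model(), n)  # read it off the model
    return None  # no consistent formula within the size budget
```

Growing the size bound one step at a time guarantees that the first formula found is a smallest one consistent with the examples.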
Related papers
- DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications [59.01527054553122]
Linear temporal logic (LTL) has recently been adopted as a powerful formalism for specifying complex, temporally extended tasks in reinforcement learning (RL).
Existing approaches suffer from several shortcomings: they are often only applicable to finite-horizon fragments, are restricted to suboptimal solutions, and do not adequately handle safety constraints.
In this work, we propose a novel learning approach to address these concerns.
Our method leverages the structure of Büchi automata, which explicitly represent the semantics of LTL specifications, to learn policies conditioned on sequences of truth assignments that lead to satisfying the desired formulae.
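To make the automaton structure concrete, here is a toy Python sketch (our illustration, not DeepLTL's code): a state-based Büchi automaton for the formula GF p, whose current state, together with the truth assignments still needed to reach acceptance, can condition a policy:

```python
# Toy sketch (not DeepLTL's implementation): a state-based Buchi automaton
# for the LTL formula "GF p" over one proposition p. An RL policy can be
# conditioned on the automaton state and on the truth assignments that
# would move the run toward the accepting state.
BUCHI_GF_P = {
    # (state, p holds?) -> next state; "acc" must recur infinitely often
    ("rej", True): "acc",
    ("rej", False): "rej",
    ("acc", True): "acc",
    ("acc", False): "rej",
}

def track(assignments, state="rej"):
    """Follow a finite prefix of truth assignments through the automaton."""
    for p in assignments:
        state = BUCHI_GF_P[(state, p)]
    return state  # a satisfying run visits "acc" infinitely often

print(track([False, True, False, True]))  # -> "acc"
```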
arXiv Detail & Related papers (2024-10-06T21:30:38Z)
- On the Design and Analysis of LLM-Based Algorithms [74.7126776018275]
Large language models (LLMs) are used as sub-routines in algorithms.
LLMs have achieved remarkable empirical success.
Our proposed framework holds promise for advancing LLM-based algorithms.
arXiv Detail & Related papers (2024-07-20T07:39:07Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by LSP (the proposed LLM-based Symbolic Programs) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and other LLMs.
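That division of labor might look like the following toy sketch (our illustration, not the paper's LSP code; llm_concept is a placeholder for a prompted LLM call):

```python
# Toy sketch of the LLM + symbolic-program split (our own illustration,
# not the paper's LSP code). An LLM turns raw text into named concepts;
# a small symbolic rule, readable by humans, combines them.
def llm_concept(question: str, text: str) -> bool:
    """Placeholder for a prompted LLM call returning a yes/no concept."""
    # Stand-in heuristic so the sketch runs; a real system would query an LLM.
    return any(phrase in text.lower() for phrase in question.lower().split("|"))

def classify_review(text: str) -> str:
    wants_refund = llm_concept("refund|money back", text)
    positive_tone = llm_concept("great|love|excellent", text)
    # Interpretable symbolic rule over LLM-extracted concepts.
    return "complaint" if wants_refund and not positive_tone else "feedback"

print(classify_review("I want my money back, this broke in a day"))  # complaint
```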
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- TLINet: Differentiable Neural Network Temporal Logic Inference [10.36033062385604]
This paper introduces TLINet, a neural-symbolic framework for learning STL formulas.
In contrast to existing approaches, we introduce approximation methods for the max operator designed specifically for temporal-logic-based gradient techniques.
Our framework not only learns the structure but also the parameters of STL formulas, allowing flexible combinations of operators and various logical structures.
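The need for such approximations is easy to see: STL robustness semantics uses max/min over time, which have uninformative gradients. A generic smooth surrogate (our sketch; TLINet's exact operators may differ) is the log-sum-exp soft max:

```python
# A standard smooth approximation of max used in gradient-based temporal
# logic learning (a generic sketch; TLINet's exact operators may differ).
import numpy as np

def soft_max(values, temperature=10.0):
    """Log-sum-exp approximation: approaches max() as temperature grows."""
    v = np.asarray(values, dtype=float)
    return np.log(np.sum(np.exp(temperature * v))) / temperature

# Example: the robustness of "eventually p" is a max over time steps.
robustness_over_time = [-0.5, 0.2, -0.1]
print(soft_max(robustness_over_time))  # ~0.205, close to 0.2 but differentiable
```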
arXiv Detail & Related papers (2024-05-03T16:38:14Z)
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP [81.00800920928621]
We study representation learning in partially observable Markov decision processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SatLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
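The declarative paradigm is easy to illustrate with an off-the-shelf solver (a generic sketch of the idea, not SatLM's pipeline; in SatLM the constraints would be emitted by an LLM rather than written by hand):

```python
# Toy illustration of declarative problem solving (a generic sketch, not
# SatLM's code): constraints go to a solver, which derives the answer.
from z3 import Int, Solver, sat

# "Alice is 3 years older than Bob; their ages sum to 21. How old is Bob?"
alice, bob = Int("alice"), Int("bob")
s = Solver()
s.add(alice == bob + 3, alice + bob == 21)
if s.check() == sat:
    print(s.model()[bob])  # -> 9
```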
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
- Learning Interpretable Temporal Properties from Positive Examples Only [27.929058359327186]
We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models.
We rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas.
Our motivation is that negative examples are generally difficult to observe, in particular, from black-box systems.
arXiv Detail & Related papers (2022-09-06T17:04:09Z)
- Exploring Viable Algorithmic Options for Learning from Demonstration (LfD): A Parameterized Complexity Approach [0.0]
In this paper, we show how such a systematic exploration of algorithmic options can be done using parameterized complexity analysis.
We show that none of our problems can be solved efficiently either in general or relative to a number of (often simultaneous) restrictions on environments, demonstrations, and policies.
arXiv Detail & Related papers (2022-05-10T15:54:06Z)
- Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods.
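For reference, one common form of a log-exp (soft) mean is sketched below; this is our own assumption for illustration, and ANML's exact definition may differ:

```python
# A generic log-exp ("soft") mean (an assumption for illustration; ANML's
# exact definition may differ). Large positive lam biases the mean toward
# the max, negative lam toward the min, and lam -> 0 recovers the
# arithmetic mean.
import numpy as np

def log_exp_mean(values, lam=1.0):
    v = np.asarray(values, dtype=float)
    return np.log(np.mean(np.exp(lam * v))) / lam

print(log_exp_mean([1.0, 2.0, 3.0], lam=5.0))   # ~2.78, biased toward max
print(log_exp_mean([1.0, 2.0, 3.0], lam=-5.0))  # ~1.22, biased toward min
```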
arXiv Detail & Related papers (2022-01-20T17:26:37Z)
- Scalable Anytime Algorithms for Learning Formulas in Linear Temporal Logic [2.631744051718347]
We consider the problem of learning linear temporal logic (LTL) formulas for classifying traces.
Existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result.
We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime.
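The "anytime" property means the search can be interrupted at any point and still return its best formula so far. A generic sketch of that control flow (our illustration, not the paper's system):

```python
# Generic shape of an anytime learner (a sketch, not the paper's system):
# it can be stopped at any time and still returns the best formula found
# so far, rather than exhausting resources without any output.
import time

def learn_anytime(candidates, score, deadline_s=10.0):
    start, best, best_score = time.monotonic(), None, float("-inf")
    for formula in candidates:                    # e.g., enumerated by size
        if time.monotonic() - start > deadline_s:
            break                                 # interrupted: keep best so far
        s = score(formula)                        # e.g., classification accuracy
        if s > best_score:
            best, best_score = formula, s
    return best
```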
arXiv Detail & Related papers (2021-10-13T13:57:31Z)
- Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach [22.46055650237819]
We devise two algorithms for inferring concise LTL formulas even in the presence of noise.
Our first algorithm infers minimal formulas by reducing the inference problem to a problem in maximum satisfiability (MaxSAT).
Our second learning algorithm builds on the first to derive a decision tree over formulas, using a standard decision-tree learning algorithm.
arXiv Detail & Related papers (2021-04-30T16:06:03Z)
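The reduction idea can be illustrated with PySAT's RC2 MaxSAT solver (a generic sketch, not the paper's encoding): structural constraints become hard clauses, each example contributes a soft clause, and a noisy minority of examples can be left unsatisfied at minimal cost:

```python
# Minimal MaxSAT illustration with PySAT's RC2 (a generic sketch of the
# reduction idea, not the paper's encoding): hard clauses fix structure,
# one soft clause per example, so conflicting (noisy) examples can be
# sacrificed while maximizing the total weight of satisfied examples.
from pysat.examples.rc2 import RC2
from pysat.formula import WCNF

wcnf = WCNF()
wcnf.append([1, 2])            # hard structural constraint: x1 or x2
wcnf.append([1], weight=1)     # soft: example 1 prefers x1 true
wcnf.append([-1], weight=1)    # soft: a conflicting (noisy) example
wcnf.append([2], weight=1)     # soft: example 3 prefers x2 true
with RC2(wcnf) as rc2:
    model = rc2.compute()      # maximizes the weight of satisfied softs
    print(model, rc2.cost)     # cost = total weight of violated softs
```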