Learning Temporal Logic Properties: an Overview of Two Recent Methods
- URL: http://arxiv.org/abs/2212.00916v1
- Date: Fri, 2 Dec 2022 00:32:09 GMT
- Title: Learning Temporal Logic Properties: an Overview of Two Recent Methods
- Authors: Jean-Raphaël Gaglione, Rajarshi Roy, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
- Abstract summary: Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.
We propose two methods to learn formulas from examples in two different problem settings.
- Score: 27.929058359327186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning linear temporal logic (LTL) formulas from examples labeled as
positive or negative has found applications in inferring descriptions of system
behavior. We summarize two methods to learn LTL formulas from examples in two
different problem settings. The first method assumes noise in the labeling of
the examples. For that, it defines the problem of inferring an LTL formula
that must be consistent with most, but not necessarily all, of the examples. The second
method considers the other problem of inferring meaningful LTL formulas in the
case where only positive examples are given. Hence, the first method addresses
the robustness to noise, and the second method addresses the balance between
conciseness and specificity (i.e., language minimality) of the inferred
formula. The summarized methods propose different algorithms to solve the
aforementioned problems, as well as to infer other descriptions of temporal
properties, such as signal temporal logic or deterministic finite automata.
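As an illustrative sketch of the underlying learning task (the tuple encoding and threshold below are generic assumptions for exposition, not the papers' actual algorithms), an LTL formula can be evaluated over finite traces recursively, and a candidate formula scored by the fraction of labeled examples it classifies correctly; under noisy labels, one accepts formulas whose score exceeds a threshold instead of demanding perfect consistency:

```python
def holds(phi, trace, t=0):
    """Evaluate an LTL formula phi on a finite trace at position t.

    phi is a nested tuple, e.g. ('G', ('ap', 'p')) for "globally p".
    trace is a list of sets of atomic propositions.
    Finite-trace (LTLf-style) semantics are assumed here.
    """
    op = phi[0]
    if op == 'ap':
        return t < len(trace) and phi[1] in trace[t]
    if op == 'not':
        return not holds(phi[1], trace, t)
    if op == 'and':
        return holds(phi[1], trace, t) and holds(phi[2], trace, t)
    if op == 'or':
        return holds(phi[1], trace, t) or holds(phi[2], trace, t)
    if op == 'X':  # next
        return t + 1 < len(trace) and holds(phi[1], trace, t + 1)
    if op == 'F':  # eventually
        return any(holds(phi[1], trace, u) for u in range(t, len(trace)))
    if op == 'G':  # globally
        return all(holds(phi[1], trace, u) for u in range(t, len(trace)))
    if op == 'U':  # until
        return any(holds(phi[2], trace, u) and
                   all(holds(phi[1], trace, v) for v in range(t, u))
                   for u in range(t, len(trace)))
    raise ValueError(f"unknown operator {op!r}")


def consistency(phi, examples):
    """Fraction of labeled traces (trace, label) that phi classifies correctly.

    With noisy labels, a learner searches for a formula whose consistency
    exceeds a chosen threshold (e.g. 0.9) rather than exactly 1.0.
    """
    correct = sum(holds(phi, tr) == lab for tr, lab in examples)
    return correct / len(examples)
```

For example, `('G', ('ap', 'p'))` has consistency 1.0 on the noise-free set `[([{'p'}, {'p'}], True), ([{'p'}, set()], False)]`.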
Related papers
- Learning Interpretable Temporal Properties from Positive Examples Only [27.929058359327186]
We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models.
We rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas.
Our motivation is that negative examples are generally difficult to observe, in particular, from black-box systems.
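A minimal sketch of the DFA side of this setting (the class and the language-size proxy below are illustrative assumptions, not the paper's algorithm): with positive examples only, one seeks a DFA that accepts all of them while keeping its language small, since the universal language is trivially consistent but says nothing about the system:

```python
from itertools import product

class DFA:
    """A deterministic finite automaton over a fixed alphabet."""
    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accepting = delta, start, accepting

    def accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]
        return q in self.accepting

def consistent(dfa, positives):
    """All positive examples must be accepted (no negatives are given)."""
    return all(dfa.accepts(w) for w in positives)

def language_size(dfa, max_len):
    """Count accepted words up to max_len: a crude proxy for specificity.

    Among DFAs consistent with the positive examples, preferring a small
    count steers the search away from trivial, overly permissive answers.
    """
    return sum(dfa.accepts(w)
               for n in range(max_len + 1)
               for w in product(dfa.alphabet, repeat=n))
```

A search over candidate DFAs would then minimize size and `language_size` subject to `consistent` holding.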
arXiv Detail & Related papers (2022-09-06T17:04:09Z) - A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
arXiv Detail & Related papers (2022-02-08T19:18:49Z) - Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML)
ANML can be used to learn both the linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods.
arXiv Detail & Related papers (2022-01-20T17:26:37Z) - Scalable Anytime Algorithms for Learning Formulas in Linear Temporal
Logic [2.631744051718347]
We consider the problem of learning formulas for classifying traces.
Existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result.
We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime.
arXiv Detail & Related papers (2021-10-13T13:57:31Z) - Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on convex-independent deep ADMM.
Parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z) - Uncertainty-Aware Signal Temporal logic [21.626420725274208]
Existing temporal logic inference methods mostly neglect uncertainties in the data.
We propose two uncertainty-aware signal temporal logic (STL) inference approaches to classify the undesired behaviors and desired behaviors of a system.
arXiv Detail & Related papers (2021-05-24T21:26:57Z) - Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach [22.46055650237819]
We devise two algorithms for inferring concise formulas even in the presence of noise.
Our first algorithm infers minimal formulas by reducing the inference problem to a problem in maximum satisfiability (MaxSAT).
Our second algorithm uses the first as a subroutine to learn a decision tree over formulas.
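To illustrate the MaxSAT idea itself (this generic brute-force solver is an assumption for exposition, not the paper's encoding), hard clauses constrain the search while each labeled example contributes a weighted soft clause, so a noisy example can be left unsatisfied at a bounded cost:

```python
from itertools import product

def maxsat(n_vars, hard, soft):
    """Brute-force weighted partial MaxSAT over variables 1..n_vars.

    hard: list of clauses (tuples of nonzero ints, negative = negated);
          every hard clause must be satisfied.
    soft: list of (weight, clause); the solver maximizes the total weight
          of satisfied soft clauses. In an LTL-learning encoding, hard
          clauses would describe well-formed syntax trees and each soft
          clause would assert consistency with one labeled example.
    Returns (best_assignment, best_weight), or (None, -1) if unsatisfiable.
    """
    def sat(clause, assign):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    best, best_w = None, -1
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if not all(sat(c, assign) for c in hard):
            continue
        w = sum(wt for wt, c in soft if sat(c, assign))
        if w > best_w:
            best, best_w = assign, w
    return best, best_w
```

Real encodings use off-the-shelf MaxSAT solvers rather than enumeration, which is exponential in the number of variables.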
arXiv Detail & Related papers (2021-04-30T16:06:03Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
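A minimal TD(0) sketch with linear function approximation (the two-state chain and all constants below are hypothetical, chosen only for illustration) shows the kind of projected fixed-point computation such results characterize:

```python
import random

def td0(phi, P, R, gamma=0.9, alpha=0.05, steps=30000, seed=0):
    """TD(0) with linear function approximation.

    Searches the low-dimensional subspace spanned by the features phi
    for an approximate fixed point of the Bellman operator.
    phi: state -> feature vector (list of floats)
    P:   row-stochastic transition matrix (list of lists)
    R:   per-state expected reward
    """
    rng = random.Random(seed)
    w = [0.0] * len(phi(0))
    s = 0
    for _ in range(steps):
        s2 = rng.choices(range(len(P[s])), weights=P[s])[0]
        v_s = sum(wi * fi for wi, fi in zip(w, phi(s)))
        v_s2 = sum(wi * fi for wi, fi in zip(w, phi(s2)))
        delta = R[s] + gamma * v_s2 - v_s   # temporal-difference error
        w = [wi + alpha * delta * fi for wi, fi in zip(w, phi(s))]
        s = s2
    return w
```

With one-hot features on a uniform two-state chain with rewards (1, 0) and gamma = 0.9, the true values are 5.5 and 4.5, and the iterate approaches them.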
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Learning explanations that are hard to vary [75.30552491694066]
We show that averaging across examples can favor memorization and 'patchwork' solutions that sew together different strategies.
We then propose and experimentally validate a simple alternative algorithm based on a logical AND.
arXiv Detail & Related papers (2020-09-01T10:17:48Z) - Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
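For context, the base conditional-gradient (Frank-Wolfe) step looks as follows; this sketch omits the paper's per-iteration constraint sampling, smoothing, and variance reduction, and the simplex domain is an illustrative assumption:

```python
def frank_wolfe_simplex(grad, x0, iters=1000):
    """Conditional-gradient (Frank-Wolfe) iterations on the probability simplex.

    Each iteration solves a linear minimization over the feasible set
    (for the simplex: pick the coordinate with the smallest gradient
    entry, i.e. the vertex e_i) and moves toward that vertex with the
    standard 2/(k+2) step size, so iterates stay feasible by convexity.
    """
    x = list(x0)
    for k in range(iters):
        g = grad(x)
        i = min(range(len(x)), key=lambda j: g[j])  # linear minimization oracle
        step = 2.0 / (k + 2)
        x = [(1 - step) * xj for xj in x]
        x[i] += step
    return x
```

For instance, minimizing the squared distance to an interior point of the simplex drives the iterate toward that point at the method's O(1/k) rate.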
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Learning Interpretable Models in the Property Specification Language [6.875312133832079]
We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
arXiv Detail & Related papers (2020-02-10T11:42:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.