Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach
- URL: http://arxiv.org/abs/2104.15083v1
- Date: Fri, 30 Apr 2021 16:06:03 GMT
- Title: Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach
- Authors: Jean-Raphaël Gaglione, Daniel Neider, Rajarshi Roy, Ufuk Topcu and
Zhe Xu
- Abstract summary: We devise two algorithms for inferring concise formulas even in the presence of noise.
Our first algorithm infers minimal formulas by reducing the inference problem to a problem in maximum satisfiability (MaxSAT).
Our second algorithm uses the first as a subroutine to learn a decision tree over formulas.
- Score: 22.46055650237819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of inferring descriptions of system behavior using
Linear Temporal Logic (LTL) from a finite set of positive and negative
examples. Most of the existing approaches for solving such a task rely on
predefined templates for guiding the structure of the inferred formula. The
approaches that can infer arbitrary LTL formulas, on the other hand, are not
robust to noise in the data. To alleviate such limitations, we devise two
algorithms for inferring concise LTL formulas even in the presence of noise.
Our first algorithm infers minimal LTL formulas by reducing the inference
problem to a problem in maximum satisfiability and then using off-the-shelf
MaxSAT solvers to find a solution. To the best of our knowledge, we are the
first to incorporate the usage of MaxSAT solvers for inferring formulas in LTL.
Our second learning algorithm relies on the first algorithm to derive a
decision tree over LTL formulas based on a decision tree learning algorithm. We
have implemented both our algorithms and verified that our algorithms are
efficient in extracting concise LTL descriptions even in the presence of noise.
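The paper's actual encoding of LTL syntax trees is more involved, but the core soft-clause idea behind noise tolerance can be sketched with a toy brute-force weighted MaxSAT solver (illustrative only; the paper uses off-the-shelf MaxSAT solvers, and real use would call one, e.g. via the PySAT library). Here each labeled example contributes a soft clause demanding it be classified correctly, while hard clauses encode structural constraints; with a noisy example, the optimum violates the cheapest soft clause instead of failing outright.

```python
from itertools import product

def max_sat(num_vars, hard, soft):
    """Brute-force weighted partial MaxSAT: all `hard` clauses must hold;
    maximize the total weight of satisfied `soft` clauses.
    A clause is a list of ints: positive literal k means variable k is true,
    negative literal -k means variable k is false (variables are 1-indexed)."""
    def satisfied(clause, assign):
        return any((lit > 0) == assign[abs(lit) - 1] for lit in clause)

    best_weight, best_assign = -1, None
    for assign in product([False, True], repeat=num_vars):
        if not all(satisfied(c, assign) for c in hard):
            continue  # hard constraints are inviolable
        weight = sum(w for w, c in soft if satisfied(c, assign))
        if weight > best_weight:
            best_weight, best_assign = weight, assign
    return best_weight, best_assign

# Hypothetical toy instance: one hard structural constraint and three
# unit-weight soft clauses, one per labeled example. The three soft clauses
# are jointly unsatisfiable given the hard clause, so the best solution
# sacrifices exactly one of them -- the MaxSAT analogue of tolerating
# one mislabeled (noisy) example.
hard = [[1, 2]]                               # x1 or x2 must hold
soft = [(1, [1]), (1, [2]), (1, [-1, -2])]    # three conflicting examples
best_weight, model = max_sat(2, hard, soft)
```

Because the soft clauses conflict, every hard-feasible assignment satisfies at most two of the three, so `best_weight` is 2: the solver returns a formula consistent with all but one example rather than reporting unsatisfiability.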
Related papers
- Learning Optimal Signal Temporal Logic Decision Trees for Classification: A Max-Flow MILP Formulation [5.924780594614676]
This paper presents a novel framework for inferring timed temporal logic properties from data.
We formulate the inference process as a mixed integer linear programming optimization problem.
Applying a max-flow algorithm on the resultant tree transforms the problem into a global optimization challenge.
We conduct three case studies involving two-class, multi-class, and complex formula classification scenarios.
arXiv Detail & Related papers (2024-07-30T16:56:21Z)
- Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) [14.163888405810635]
We build on a recently proposed method for stepwise explaining solutions of Constraint Satisfaction Problems.
An explanation here is a sequence of simple inference steps where simplicity is quantified using a cost function.
arXiv Detail & Related papers (2023-03-21T10:03:36Z)
- Learning Temporal Logic Properties: an Overview of Two Recent Methods [27.929058359327186]
Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.
We propose two methods to learn formulas from examples in two different problem settings.
arXiv Detail & Related papers (2022-12-02T00:32:09Z)
- Scalable Anytime Algorithms for Learning Formulas in Linear Temporal Logic [2.631744051718347]
We consider the problem of learning formulas for classifying traces.
Existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result.
We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime.
arXiv Detail & Related papers (2021-10-13T13:57:31Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve the complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
- Second-order Conditional Gradient Sliding [79.66739383117232]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
- Learning Interpretable Models in the Property Specification Language [6.875312133832079]
We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
arXiv Detail & Related papers (2020-02-10T11:42:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.