Uncertainty-Aware Signal Temporal Logic
- URL: http://arxiv.org/abs/2105.11545v1
- Date: Mon, 24 May 2021 21:26:57 GMT
- Title: Uncertainty-Aware Signal Temporal Logic
- Authors: Nasim Baharisangari, Jean-Rapha\"el Gaglione, Daniel Neider, Ufuk
Topcu, Zhe Xu
- Abstract summary: Existing temporal logic inference methods mostly neglect uncertainties in the data.
We propose two uncertainty-aware signal temporal logic (STL) inference approaches to classify the desired and undesired behaviors of a system.
- Score: 21.626420725274208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal logic inference is the process of extracting formal descriptions of
system behaviors from data in the form of temporal logic formulas. Existing
temporal logic inference methods mostly neglect uncertainties in the data,
which limits their applicability in real-world
deployments. In this paper, we first investigate the uncertainties associated
with trajectories of a system and represent such uncertainties in the form of
interval trajectories. We then propose two uncertainty-aware signal temporal
logic (STL) inference approaches to classify the desired and undesired
behaviors of a system. Instead of classifying finitely many
trajectories, we classify infinitely many trajectories within the interval
trajectories. In the first approach, we incorporate robust semantics of STL
formulas with respect to an interval trajectory to quantify the margin at which
an STL formula is satisfied or violated by the interval trajectory. The second
approach builds on the first and uses decision trees to infer STL formulas that
classify the behaviors of a given system. The proposed
approaches also work for non-separable data by optimizing the worst-case
robustness in inferring an STL formula. Finally, we evaluate the proposed
algorithms in two case studies, where they reduce the computation time by up to
four orders of magnitude in comparison with the sampling-based baseline
algorithms (for a dataset with 800 sampled trajectories in total).
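To make the robust-semantics idea above concrete, here is a minimal sketch (not the authors' implementation) of worst-case STL robustness over an interval trajectory. It assumes discrete-time semantics, a scalar signal, and simple predicates of the form x > c or x < c; the function names and numbers are illustrative. A formula is guaranteed to hold for every trajectory inside the interval trajectory exactly when its worst-case robustness is positive.

```python
import numpy as np

def worst_case_rho_gt(lo, c):
    # Worst-case robustness of the predicate x > c over an interval signal:
    # an adversarial trajectory always picks the lower bound lo[t], so the
    # guaranteed margin at each time step is lo[t] - c.
    return lo - c

def worst_case_rho_lt(hi, c):
    # Worst-case robustness of x < c: the adversary picks the upper bound.
    return c - hi

def rho_globally(pointwise_rho, t0, t1):
    # Discrete-time robustness of G_[t0,t1] phi: minimum over the window.
    return np.min(pointwise_rho[t0:t1 + 1])

def rho_eventually(pointwise_rho, t0, t1):
    # Discrete-time robustness of F_[t0,t1] phi: maximum over the window.
    return np.max(pointwise_rho[t0:t1 + 1])

# An interval trajectory given by lower/upper bounds on a scalar signal x.
lo = np.array([0.5, 0.8, 1.2, 1.1, 0.9])
hi = np.array([0.7, 1.0, 1.5, 1.4, 1.2])

# G_[0,4](x > 0.4): a positive worst-case robustness means every trajectory
# inside the interval trajectory satisfies the formula.
print(rho_globally(worst_case_rho_gt(lo, 0.4), 0, 4))    # ≈ 0.1

# F_[0,4](x < 1.3): also satisfied in the worst case.
print(rho_eventually(worst_case_rho_lt(hi, 1.3), 0, 4))  # ≈ 0.6
```

Nested temporal operators and Boolean connectives compose in the usual way for STL robust semantics: negation flips the sign, conjunction takes the minimum, and disjunction takes the maximum.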
Related papers
- Learning Optimal Signal Temporal Logic Decision Trees for Classification: A Max-Flow MILP Formulation [5.924780594614676]
This paper presents a novel framework for inferring timed temporal logic properties from data.
We formulate the inference process as a mixed integer linear programming optimization problem.
Applying a max-flow algorithm on the resultant tree transforms the problem into a global optimization challenge.
We conduct three case studies involving two-class, multi-class, and complex formula classification scenarios.
arXiv Detail & Related papers (2024-07-30T16:56:21Z) - Learning Temporal Logic Predicates from Data with Statistical Guarantees [0.0]
We present a novel method to learn temporal logic predicates from data with finite-sample correctness guarantees.
Our approach leverages expression optimization and conformal prediction to learn predicates that correctly describe future trajectories.
arXiv Detail & Related papers (2024-06-15T00:07:36Z) - Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning [50.84938730450622]
We propose a trajectory-based method, TV score, which uses trajectory volatility for OOD detection in mathematical reasoning.
Our method outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios.
Our method can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
arXiv Detail & Related papers (2024-05-22T22:22:25Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Learning Temporal Logic Properties: an Overview of Two Recent Methods [27.929058359327186]
Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.
We propose two methods to learn formulas from examples in two different problem settings.
arXiv Detail & Related papers (2022-12-02T00:32:09Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Statistical Inference for the Dynamic Time Warping Distance, with
Application to Abnormal Time-Series Detection [29.195884642878422]
We study statistical inference on the similarity/distance between two time-series in an uncertain environment.
We propose to employ the conditional selective inference framework, which enables us to derive a valid inference method on the DTW distance.
We evaluate the performance of the proposed inference method on both synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-14T10:28:51Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point per step.
Our results are expressed in the form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z) - Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z) - Data-Driven Verification under Signal Temporal Logic Constraints [0.0]
We consider systems under uncertainty whose dynamics are partially unknown.
Our aim is to study satisfaction of temporal logic properties by trajectories of such systems.
We employ Bayesian inference techniques to associate a confidence value with the satisfaction of the property.
arXiv Detail & Related papers (2020-05-08T08:32:30Z)