Learning Temporal Logic Predicates from Data with Statistical Guarantees
- URL: http://arxiv.org/abs/2406.10449v2
- Date: Tue, 03 Dec 2024 19:52:27 GMT
- Title: Learning Temporal Logic Predicates from Data with Statistical Guarantees
- Authors: Emi Soroka, Rohan Sinha, Sanjay Lall
- Abstract summary: We present a novel method to learn temporal logic predicates from data with finite-sample correctness guarantees.
Our approach leverages expression optimization and conformal prediction to learn predicates that correctly describe future trajectories.
- Abstract: Temporal logic rules are often used in control and robotics to provide structured, human-interpretable descriptions of high-dimensional trajectory data. These rules have numerous applications including safety validation using formal methods, constraining motion planning among autonomous agents, and classifying data. However, existing methods for learning temporal logic predicates from data do not provide assurances about the correctness of the resulting predicate. We present a novel method to learn temporal logic predicates from data with finite-sample correctness guarantees. Our approach leverages expression optimization and conformal prediction to learn predicates that correctly describe future trajectories under mild assumptions. We provide experimental results showing the performance of our approach on a simulated trajectory dataset and perform ablation studies to understand how each component of our algorithm contributes to its performance.
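As a rough illustration of the conformal-prediction ingredient, the sketch below shows how a split-conformal threshold with a finite-sample coverage guarantee can be attached to a real-valued robustness score of a learned predicate. The function `conformal_threshold` and the toy score `rho` are hypothetical stand-ins, not the authors' implementation.
```python
import numpy as np

def conformal_threshold(calibration_scores, alpha=0.1):
    """Return t such that, under exchangeability, a fresh trajectory's score
    is >= t with probability at least 1 - alpha (marginally)."""
    n = len(calibration_scores)
    # Finite-sample correction: take the k-th smallest score, k = floor((n+1)*alpha).
    k = int(np.floor((n + 1) * alpha))
    if k < 1:
        raise ValueError("alpha too small for this many calibration trajectories")
    return np.sort(calibration_scores)[k - 1]

# Hypothetical usage: rho() stands in for the robustness of a learned temporal
# logic predicate evaluated on a trajectory (predicate holds when rho >= 0).
rng = np.random.default_rng(0)
calib_trajs = rng.normal(size=(200, 50))      # 200 held-out calibration trajectories
rho = lambda traj: traj.min() + 2.0           # toy robustness score
scores = np.array([rho(traj) for traj in calib_trajs])
t = conformal_threshold(scores, alpha=0.1)
# The relaxed predicate "rho(traj) >= t" then holds on at least 90% of future
# trajectories, assuming they are exchangeable with the calibration set.
```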
Related papers
- Capturing the Temporal Dependence of Training Data Influence [100.91355498124527]
We formalize the concept of trajectory-specific leave-one-out influence, which quantifies the impact of removing a data point during training.
We propose data value embedding, a novel technique enabling efficient approximation of trajectory-specific LOO.
As data value embedding captures training data ordering, it offers valuable insights into model training dynamics.
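For intuition only, a brute-force sketch of leave-one-out influence on a ridge-regression model is shown below; the paper's data value embedding instead approximates trajectory-specific (training-step-aware) influence efficiently, which this sketch does not attempt. The names `train_ridge` and `loo_influence` are hypothetical.
```python
import numpy as np

def train_ridge(X, y, lam=1e-2):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def loo_influence(X, y, X_test, y_test, lam=1e-2):
    """Influence of each training point = change in test loss when it is removed."""
    base_w = train_ridge(X, y, lam)
    base_loss = np.mean((X_test @ base_w - y_test) ** 2)
    influence = np.empty(len(X))
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        w_i = train_ridge(X[keep], y[keep], lam)
        influence[i] = np.mean((X_test @ w_i - y_test) ** 2) - base_loss
    return influence  # positive: removing point i hurts test performance

rng = np.random.default_rng(1)
w_true = rng.normal(size=5)
X = rng.normal(size=(100, 5)); y = X @ w_true + 0.1 * rng.normal(size=100)
X_te = rng.normal(size=(20, 5)); y_te = X_te @ w_true + 0.1 * rng.normal(size=20)
print(loo_influence(X, y, X_te, y_te)[:5])
```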
arXiv Detail & Related papers (2024-12-12T18:28:55Z) - Structured Prediction in Online Learning [66.36004256710824]
We study a theoretical and algorithmic framework for structured prediction in the online learning setting.
We show that our algorithm is a generalisation of optimal algorithms from the supervised learning setting.
We consider a second algorithm designed especially for non-stationary data distributions, including adversarial data.
arXiv Detail & Related papers (2024-06-18T07:45:02Z) - TLINet: Differentiable Neural Network Temporal Logic Inference [10.36033062385604]
This paper introduces TLINet, a neural-symbolic framework for learning STL formulas.
In contrast to existing approaches, we introduce approximation methods for the max operator, designed specifically for temporal logic-based gradient techniques.
Our framework not only learns the structure but also the parameters of STL formulas, allowing flexible combinations of operators and various logical structures.
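A minimal sketch of the general trick such differentiable temporal logic methods rely on, assuming a log-sum-exp surrogate: replace the non-differentiable max/min in robustness semantics with a smooth approximation so formula parameters can be learned by gradient descent. This is an illustration of the general technique, not TLINet's specific operator.
```python
import numpy as np

def soft_max(x, beta=10.0):
    """Smooth approximation of max(x); approaches the true max as beta grows."""
    return np.log(np.sum(np.exp(beta * x))) / beta

def soft_min(x, beta=10.0):
    return -soft_max(-x, beta)

# Robustness of "eventually (signal > c)" is max_t (signal[t] - c); the smooth
# surrogate is differentiable in both the signal and the learnable threshold c.
signal = np.array([0.2, -0.1, 0.5, 0.3])
c = 0.4
print(np.max(signal - c), soft_max(signal - c, beta=50.0))
```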
arXiv Detail & Related papers (2024-05-03T16:38:14Z) - Safe Active Learning for Time-Series Modeling with Gaussian Processes [7.505622158856545]
Learning time-series models is useful for many applications, such as simulation and forecasting.
In this study, we consider the problem of actively learning time-series models while taking given safety constraints into account.
The proposed approach generates data appropriate for time-series model learning, i.e., input and output trajectories, by dynamically exploring the input space.
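A hedged sketch of safety-constrained active learning with a Gaussian process, in the spirit of the summary above rather than the paper's algorithm: among candidate inputs, query the most uncertain one whose predicted output is safe with high probability. The toy system `f` and the safety limit are assumptions for illustration.
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
f = lambda x: np.sin(3 * x)                   # toy stand-in for the unknown system
X_obs = rng.uniform(-1, 1, size=(5, 1))       # a few initial, known-safe inputs
y_obs = f(X_obs).ravel()
safety_limit = 0.8                            # outputs above this are "unsafe"

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
for _ in range(10):
    gp.fit(X_obs, y_obs)
    candidates = rng.uniform(-2, 2, size=(200, 1))
    mean, std = gp.predict(candidates, return_std=True)
    safe = mean + 2.0 * std < safety_limit    # one-sided high-probability safety check
    if not safe.any():
        break
    # Query the safe candidate with the largest predictive uncertainty.
    x_next = candidates[safe][np.argmax(std[safe])].reshape(1, -1)
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, f(x_next).ravel())
```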
arXiv Detail & Related papers (2024-02-09T09:40:33Z) - Graph state-space models [19.88814714919019]
State-space models are used to describe time series and operate by maintaining an updated representation of the system state from which predictions are made.
The manuscript aims, for the first time, at filling this gap by matching state-space modeling with spatio-temporal data, where the functional graph capturing latent dependencies is learned directly from data and is allowed to change over time.
An encoder-decoder architecture is proposed to learn the state-space model end-to-end on a downstream task.
arXiv Detail & Related papers (2023-01-04T18:15:07Z) - Seminaive Materialisation in DatalogMTL [10.850687097496373]
DatalogMTL is an extension of Datalog with metric temporal operators.
We propose a materialisation-based procedure to minimise redundant computation.
Our experiments show that our optimised seminaive strategy for DatalogMTL is able to significantly reduce materialisation times.
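For readers unfamiliar with seminaive evaluation, the sketch below shows the idea for plain (non-temporal) Datalog on a single transitive-closure rule: each round joins only the newly derived facts (the delta) against the rules. DatalogMTL's procedure adds metric temporal operators on top of this scheme; the code is an illustration, not the paper's implementation.
```python
def seminaive_transitive_closure(edges):
    """Materialise reach(x, y) from edge(x, y) with seminaive evaluation."""
    reach = set(edges)   # all facts derived so far
    delta = set(edges)   # facts derived in the previous round
    while delta:
        new = set()
        # Rule: reach(x, z) :- reach(x, y), edge(y, z); join only the delta with edge.
        for (x, y) in delta:
            for (y2, z) in edges:
                if y == y2 and (x, z) not in reach:
                    new.add((x, z))
        reach |= new
        delta = new
    return reach

print(sorted(seminaive_transitive_closure({(1, 2), (2, 3), (3, 4)})))
```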
arXiv Detail & Related papers (2022-08-15T10:04:44Z) - MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z) - Uncertainty-Aware Signal Temporal logic [21.626420725274208]
Existing temporal logic inference methods mostly neglect uncertainties in the data.
We propose two uncertainty-aware signal temporal logic (STL) inference approaches to classify the undesired and desired behaviors of a system.
arXiv Detail & Related papers (2021-05-24T21:26:57Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Learning Implicitly with Noisy Data in Linear Arithmetic [94.66549436482306]
We extend implicit learning in PAC-Semantics to handle intervals and threshold uncertainty in the language of linear arithmetic.
We show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
arXiv Detail & Related papers (2020-10-23T19:08:46Z) - Representation Learning for Sequence Data with Deep Autoencoding Predictive Components [96.42805872177067]
We propose a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space.
We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step.
We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
arXiv Detail & Related papers (2020-10-07T03:34:01Z)
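The "predictive information" objective mentioned above is the mutual information between past and future latent windows. A small sketch of such an estimate under a Gaussian assumption is given below; the paper maximises a different estimator inside a training loop, which this illustration does not reproduce.
```python
import numpy as np

def gaussian_predictive_information(z, window=5):
    """Estimate I(past; future) for a latent sequence z of shape (T, d),
    using sample covariances and the Gaussian mutual-information formula."""
    T, d = z.shape
    k = window * d
    pairs = np.array([
        np.concatenate([z[t - window:t].ravel(), z[t:t + window].ravel()])
        for t in range(window, T - window + 1)
    ])
    cov = np.cov(pairs, rowvar=False) + 1e-6 * np.eye(2 * k)
    _, logdet_joint = np.linalg.slogdet(cov)
    _, logdet_past = np.linalg.slogdet(cov[:k, :k])
    _, logdet_future = np.linalg.slogdet(cov[k:, k:])
    return 0.5 * (logdet_past + logdet_future - logdet_joint)

rng = np.random.default_rng(3)
ar = np.zeros((500, 2))
for t in range(1, 500):                       # latent sequence with temporal structure
    ar[t] = 0.9 * ar[t - 1] + 0.1 * rng.normal(size=2)
noise = rng.normal(size=(500, 2))             # structureless baseline
print(gaussian_predictive_information(ar), gaussian_predictive_information(noise))
```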
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.