The Complexity of Learning Linear Temporal Formulas from Examples
- URL: http://arxiv.org/abs/2102.00876v1
- Date: Mon, 1 Feb 2021 14:34:46 GMT
- Title: The Complexity of Learning Linear Temporal Formulas from Examples
- Authors: Nathanaël Fijalkow and Guillaume Lagarde
- Abstract summary: We construct approximation algorithms for fragments of LTL and prove hardness results.
In particular we obtain tight bounds for approximation of the fragment containing only the next operator and conjunctions, and prove NP-completeness results for many fragments.
- Score: 2.766648389933265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we initiate the study of the computational complexity of
learning linear temporal logic (LTL) formulas from examples. We construct
approximation algorithms for fragments of LTL and prove hardness results; in
particular we obtain tight bounds for approximation of the fragment containing
only the next operator and conjunctions, and prove NP-completeness results for
many fragments.
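To make the learning problem concrete, here is a minimal illustrative sketch (not the paper's algorithm) of the fragment containing only the next operator (X) and conjunctions: formulas are evaluated over finite traces, and a candidate formula is consistent with a sample if it holds on every positive trace and on no negative trace. All names and the trace encoding below are hypothetical.

```python
def holds(formula, trace, pos=0):
    """Check whether `formula` holds on `trace` at position `pos`.

    Formulas are nested tuples:
      ('ap', p)      -- atomic proposition p holds at the current position
      ('X', f)       -- f holds at the next position
      ('and', f, g)  -- both f and g hold at the current position
    A trace is a list of sets of atomic propositions, one set per time step.
    """
    kind = formula[0]
    if kind == 'ap':
        return pos < len(trace) and formula[1] in trace[pos]
    if kind == 'X':
        return holds(formula[1], trace, pos + 1)
    if kind == 'and':
        return holds(formula[1], trace, pos) and holds(formula[2], trace, pos)
    raise ValueError(f"unknown operator: {kind}")

def separates(formula, positives, negatives):
    """True iff `formula` holds on every positive trace and no negative trace."""
    return (all(holds(formula, t) for t in positives)
            and not any(holds(formula, t) for t in negatives))

# Example sample: two positive traces and one negative trace.
positives = [[{'a'}, {'b'}], [{'a', 'b'}, {'b'}]]
negatives = [[{'b'}, {'a'}]]

# Candidate formula from the X-and-conjunction fragment: a ∧ X b
phi = ('and', ('ap', 'a'), ('X', ('ap', 'b')))
```

The learning problem studied in the paper asks for a small formula in this fragment that separates the positives from the negatives; the brute-force search over candidate formulas is what the paper's hardness and approximation results bound.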
Related papers
- Timeline-based Sentence Decomposition with In-Context Learning for Temporal Fact Extraction [19.96263282146533]
In this paper, we specifically address the extraction of temporal facts from natural language text.
We propose a timeline-based sentence decomposition strategy using large language models (LLMs) with in-context learning.
Our experiments show that TSDRE achieves state-of-the-art results on both HyperRED-Temporal and ComplexTRED datasets.
arXiv Detail & Related papers (2024-05-16T17:48:21Z) - Learning temporal formulas from examples is hard [2.8851756275902476]
We study the problem of learning linear temporal logic (LTL) formulas from examples.
We show that the learning problem is NP-complete, both for the full logic and for almost all of its fragments.
arXiv Detail & Related papers (2023-12-26T21:16:19Z) - Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z) - An Optimal Algorithm for the Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit [65.268245109828]
We study the real-valued pure exploration problem in the multi-armed bandit (R-CPE-MAB)
Existing methods in the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits.
We propose an algorithm named the gap-based exploration (CombGapE) algorithm, whose sample complexity matches the lower bound.
arXiv Detail & Related papers (2023-06-15T15:37:31Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z) - Sublinear Least-Squares Value Iteration via Locality Sensitive Hashing [49.73889315176884]
We present the first provable Least-Squares Value Iteration (LSVI) algorithms that have runtime complexity sublinear in the number of actions.
We build the connections between the theory of approximate maximum inner product search and the regret analysis of reinforcement learning.
arXiv Detail & Related papers (2021-05-18T05:23:53Z) - Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers [112.23981192818721]
We propose to use backward mode linear relaxation based analysis (LiRPA) to replace Linear Programming (LP) during the BaB process.
Unlike LP, LiRPA when applied naively can produce much weaker bounds and even cannot check certain conflicts of sub-domains during splitting.
We demonstrate an order of magnitude speedup compared to existing LP-based approaches.
arXiv Detail & Related papers (2020-11-27T16:42:12Z) - Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations [6.950510860295866]
We present a method for learning multi-stage tasks from demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula.
The learner is given successful but potentially suboptimal demonstrations, where the demonstrator is optimizing a cost function while satisfying the formula, and the cost function is uncertain to the learner.
Our algorithm uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations together with a counter-example-guided falsification strategy to learn the atomic proposition parameters.
arXiv Detail & Related papers (2020-06-03T17:40:14Z) - Learning Interpretable Models in the Property Specification Language [6.875312133832079]
We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
arXiv Detail & Related papers (2020-02-10T11:42:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.