Learning temporal formulas from examples is hard
- URL: http://arxiv.org/abs/2312.16336v1
- Date: Tue, 26 Dec 2023 21:16:19 GMT
- Title: Learning temporal formulas from examples is hard
- Authors: Corto Mascle, Nathanaël Fijalkow, Guillaume Lagarde
- Abstract summary: We study the problem of learning linear temporal logic (LTL) formulas from examples.
We show that the learning problem is NP-complete, both for the full logic and for almost all of its fragments.
- Score: 2.8851756275902476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of learning linear temporal logic (LTL) formulas from
examples, as a first step towards expressing a property separating positive and
negative instances in a way that is comprehensible for humans. In this paper we
initiate the study of the computational complexity of the problem. Our main
results are hardness results: we show that the LTL learning problem is
NP-complete, both for the full logic and for almost all of its fragments. This
motivates the search for efficient heuristics, and highlights the complexity of
expressing separating properties in concise natural language.
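To make the learning setup concrete, here is a minimal sketch (an illustration, not code from the paper): it evaluates an LTL formula over a finite trace and checks whether the formula separates the positive from the negative example traces. The finite-trace semantics, the tuple-based formula encoding, and all names used here are simplifying assumptions made for this sketch; the learning problem then asks whether some separating formula of at most a given size exists, and the paper shows that deciding this is NP-complete.

```python
# Sketch only (not from the paper): finite-trace LTL evaluation and a
# separation check. Traces are lists of sets of atomic propositions;
# formulas are nested tuples such as ("F", ("ap", "p")).

def holds(phi, trace, i=0):
    """Check whether formula phi holds on trace at position i (finite-trace semantics)."""
    op = phi[0]
    if op == "ap":                        # atomic proposition
        return i < len(trace) and phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "or":
        return holds(phi[1], trace, i) or holds(phi[2], trace, i)
    if op == "X":                         # next (strong: requires a next position)
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "F":                         # eventually
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                         # globally
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                         # until
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

def separates(phi, positives, negatives):
    """phi is a solution iff it holds on every positive trace and on no negative one."""
    return (all(holds(phi, t) for t in positives) and
            not any(holds(phi, t) for t in negatives))

# Toy instance: accept traces where p eventually holds, reject the others.
positives = [[{"q"}, {"p"}], [{"p"}, set()]]
negatives = [[{"q"}, {"q"}], [set()]]
print(separates(("F", ("ap", "p")), positives, negatives))   # True
print(separates(("G", ("ap", "q")), positives, negatives))   # False
```

A naive learner enumerates candidate formulas by increasing size and calls separates on each; the NP-completeness result indicates that, in the worst case, one should not expect to do much better than such combinatorial search unless P = NP, which is why the abstract points to heuristics.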
Related papers
- Divide and Translate: Compositional First-Order Logic Translation and Verification for Complex Logical Reasoning [28.111458981621105]
Complex logical reasoning tasks require long reasoning sequences, on which large language models (LLMs) with chain-of-thought prompting still fall short.
We propose a Compositional First-Order Logic Translation to capture logical semantics hidden in the natural language during translation.
We evaluate the proposed method, dubbed CLOVER, on seven logical reasoning benchmarks and show that it outperforms the previous neurosymbolic approaches.
arXiv Detail & Related papers (2024-10-10T15:42:39Z) - Data Complexity in Expressive Description Logics With Path Expressions [7.832189413179361]
We investigate the data complexity of the satisfiability problem for the expressive description logic ZOIQ (a.k.a. ALCHb Self reg OIQ) over quasi-forests.
This completes the data complexity landscape for decidable fragments of ZOIQ, and reproves known results on decidable fragments of OWL2 (SR family).
arXiv Detail & Related papers (2024-06-11T09:37:51Z) - Do Language Models Exhibit the Same Cognitive Biases in Problem Solving as Human Learners? [140.9751389452011]
We study the biases of large language models (LLMs) in relation to those known in children when solving arithmetic word problems.
We generate a novel set of word problems for each of these tests, using a neuro-symbolic approach that enables fine-grained control over the problem features.
arXiv Detail & Related papers (2024-01-31T18:48:20Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose complexity-impacted reasoning score (CIRS) to measure correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - The Complexity of Learning Linear Temporal Formulas from Examples [2.766648389933265]
We construct approximation algorithms for fragments of LTL and prove hardness results.
In particular we obtain tight bounds for approximation of the fragment containing only the next operator and conjunctions (a brief illustration of this fragment follows the list below), and prove NP-completeness results for many fragments.
arXiv Detail & Related papers (2021-02-01T14:34:46Z) - The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise [67.27523616312428]
We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent PAC model.
We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem.
arXiv Detail & Related papers (2020-07-30T04:18:51Z) - A tetrachotomy of ontology-mediated queries with a covering axiom [1.749935196721634]
Our concern is the problem of efficiently determining the data complexity of answering queries mediated by description logic ontologies and constructing their optimal rewritings to standard database queries.
We focus on Boolean conjunctive queries mediated by a covering axiom, called disjunctive sirups (or d-sirups).
Some d-sirups only have exponential-size resolution proofs, some only double-exponential-size positive existential rewritings and single-exponential-size nonrecursive datalog rewritings.
arXiv Detail & Related papers (2020-06-07T14:47:07Z) - Learning Interpretable Models in the Property Specification Language [6.875312133832079]
We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
arXiv Detail & Related papers (2020-02-10T11:42:50Z)
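As referenced above for "The Complexity of Learning Linear Temporal Formulas from Examples", a brief illustration of the fragment with only the next operator and conjunctions (a worked example added here, not taken from that paper's abstract): the formula X p ∧ XX q holds on a trace exactly when p holds at the second position and q at the third; since the next operator distributes over conjunction, every formula of this fragment is equivalent to such a conjunction of fixed-position constraints on the beginning of a trace.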