TLINet: Differentiable Neural Network Temporal Logic Inference
- URL: http://arxiv.org/abs/2405.06670v2
- Date: Tue, 14 May 2024 18:30:52 GMT
- Title: TLINet: Differentiable Neural Network Temporal Logic Inference
- Authors: Danyang Li, Mingyu Cai, Cristian-Ioan Vasile, Roberto Tron
- Abstract summary: This paper introduces TLINet, a neural-symbolic framework for learning STL formulas.
In contrast to existing approaches, we introduce approximation methods for the max operator designed specifically for temporal logic-based gradient techniques.
Our framework not only learns the structure but also the parameters of STL formulas, allowing flexible combinations of operators and various logical structures.
- Score: 10.36033062385604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been a growing interest in extracting formal descriptions of the system behaviors from data. Signal Temporal Logic (STL) is an expressive formal language used to describe spatial-temporal properties with interpretability. This paper introduces TLINet, a neural-symbolic framework for learning STL formulas. The computation in TLINet is differentiable, enabling the use of off-the-shelf gradient-based tools during the learning process. In contrast to existing approaches, we introduce approximation methods for the max operator designed specifically for temporal logic-based gradient techniques, ensuring the correctness of STL satisfaction evaluation. Our framework not only learns the structure but also the parameters of STL formulas, allowing flexible combinations of operators and various logical structures. We validate TLINet against state-of-the-art baselines, demonstrating that our approach outperforms these baselines in terms of interpretability, compactness, rich expressibility, and computational efficiency.
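The key technical point above is that the max operator in STL's quantitative semantics is non-smooth and must be approximated for gradient-based learning. As a hedged illustration only (not TLINet's actual operator), the PyTorch sketch below uses a log-sum-exp smooth max; note that log-sum-exp strictly over-approximates the true max, which is exactly the kind of soundness gap the paper's tailored approximations are designed to close.

```python
# Illustrative sketch: differentiable STL robustness via a log-sum-exp
# smooth max. This is a common generic choice, NOT TLINet's own method.
import torch

def smooth_max(x: torch.Tensor, dim: int = -1, beta: float = 10.0) -> torch.Tensor:
    # logsumexp(beta * x) / beta -> max(x) as beta -> infinity, yet stays smooth.
    # Caveat: it over-approximates the true max by up to log(n) / beta.
    return torch.logsumexp(beta * x, dim=dim) / beta

def smooth_min(x: torch.Tensor, dim: int = -1, beta: float = 10.0) -> torch.Tensor:
    return -smooth_max(-x, dim=dim, beta=beta)

# Quantitative semantics of STL on a sampled signal s[0..T-1]:
#   rho(s > c)          = s - c             (predicate robustness)
#   rho(eventually phi) = max_t rho_phi(t)  ~ smooth_max
#   rho(always phi)     = min_t rho_phi(t)  ~ smooth_min
signal = torch.tensor([0.2, 0.8, 1.5, 0.4], requires_grad=True)
rho_pred = signal - 1.0                  # robustness of "s > 1" at each step
rho_eventually = smooth_max(rho_pred)    # approx. robustness of "eventually s > 1"
rho_eventually.backward()                # gradients flow back to the signal
print(rho_eventually.item(), signal.grad)
```

Because the sign of the approximate robustness decides satisfaction, an over- or under-approximating max can flip the verdict near zero; this is the correctness concern the abstract refers to.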
Related papers
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- Standpoint Linear Temporal Logic [2.552459629685159]
We present standpoint linear temporal logic (SLTL), a new logic that combines the temporal features of LTL with the multi-perspective modelling capacity of standpoint logic (SL).
We define the logic SLTL, its syntax, and its semantics, establish its decidability and complexity, and provide a terminating tableau calculus to automate SLTL reasoning.
arXiv Detail & Related papers (2023-04-27T15:03:38Z)
- Mastering Symbolic Operations: Augmenting Language Models with Compiled Neural Networks [48.14324895100478]
"Neural architecture" integrates compiled neural networks (CoNNs) into a standard transformer.
CoNNs are neural modules designed to explicitly encode rules through artificially generated attention weights.
Experiments demonstrate the superiority of our approach over existing techniques in terms of length generalization, efficiency, and interpretability for symbolic operations.
arXiv Detail & Related papers (2023-04-04T09:50:07Z)
- Learning Signal Temporal Logic through Neural Network for Interpretable Classification [13.829082181692872]
We propose an explainable neural-symbolic framework for the classification of time-series behaviors.
We demonstrate the computational efficiency, compactness, and interpretability of the proposed method through driving scenarios and naval surveillance case studies.
arXiv Detail & Related papers (2022-10-04T21:11:54Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
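As a rough sketch of the fixed-point idea (a generic one-step re-attachment under simplifying assumptions, not necessarily the paper's exact scheme):

```python
# Illustrative sketch: differentiate through a fixed point z* = f(z*, x) by
# iterating without gradient tracking, then re-attaching a single step so
# autograd computes df/dx at the fixed point (the one-step approximation
# of the implicit-function-theorem gradient).
import torch

def fixed_point(f, x, z0, iters=50):
    z = z0
    with torch.no_grad():            # forward iterations are not tracked
        for _ in range(iters):
            z = f(z, x)
    return f(z, x)                   # one differentiable step at the fixed point

x = torch.tensor(2.0, requires_grad=True)
f = lambda z, x: 0.5 * (z + x / z)   # Babylonian iteration: z* = sqrt(x)
z_star = fixed_point(f, x, torch.tensor(1.0))
z_star.backward()
print(z_star.item(), x.grad.item())  # ~1.4142, ~0.3536 (= 1 / (2 * sqrt(2)))
```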
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Linear Temporal Logic Modulo Theories over Finite Traces (Extended Version) [72.38188258853155]
This paper studies Linear Temporal Logic over Finite Traces (LTLf) where proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
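For intuition, LNN-style frameworks replace Boolean connectives with differentiable real-valued ones; the minimal Lukasiewicz-style sketch below is generic (LNN's actual weighted, bounds-based semantics is richer):

```python
# Illustrative sketch of real-valued logical connectives, NOT the LNN API.
import torch

def fuzzy_and(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return torch.clamp(x + y - 1.0, min=0.0)    # Lukasiewicz t-norm

def fuzzy_or(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return torch.clamp(x + y, max=1.0)          # dual t-conorm

a = torch.tensor(0.9, requires_grad=True)       # truth value of atom A
b = torch.tensor(0.7, requires_grad=True)       # truth value of atom B
out = fuzzy_and(a, b)                           # 0.6; agrees with Boolean AND at {0, 1}
out.backward()                                  # gradients reach both truth values
```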
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- Backpropagation through Signal Temporal Logic Specifications: Infusing Logical Structure into Gradient-Based Methods [28.72161643908351]
This paper presents a technique, named STLCG, to compute the quantitative semantics of Signal Temporal Logic (STL) formulas using computation graphs.
STL is a powerful and expressive formal language that can specify spatial and temporal properties of signals generated by both continuous and hybrid systems.
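A minimal sketch of the computation-graph idea (illustrative names, not the STLCG library's actual API): exact robustness is assembled from min/max operations, and autograd backpropagates through them.

```python
# Illustrative sketch: robustness of "always_[a,b] (s > c)" as a computation
# graph built from exact (subdifferentiable) min operations.
import torch

def always(rho: torch.Tensor, a: int, b: int) -> torch.Tensor:
    # rho: per-time robustness of the subformula, shape (T,).
    # "always_[a,b] phi" at time t = min of rho over the window [t+a, t+b].
    windows = rho[a:].unfold(0, b - a + 1, 1)   # sliding windows of width b-a+1
    return windows.min(dim=1).values            # min is subdifferentiable

signal = torch.tensor([1.2, 0.9, 1.1, 1.4, 0.8], requires_grad=True)
rho_p = signal - 1.0                            # robustness of "s > 1"
rho_always = always(rho_p, 0, 2)                # robustness at each start time t
rho_always[0].backward()                        # gradient w.r.t. the signal
print(rho_always.detach(), signal.grad)
```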
arXiv Detail & Related papers (2020-07-31T22:01:39Z)
- Explaining Multi-stage Tasks by Learning Temporal Logic Formulas from Suboptimal Demonstrations [6.950510860295866]
We present a method for learning multi-stage tasks from demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula.
The learner is given successful but potentially suboptimal demonstrations, where the demonstrator is optimizing a cost function while satisfying the formula, and the cost function is uncertain to the learner.
Our algorithm uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations together with a counter-example-guided falsification strategy to learn the atomic proposition parameters.
arXiv Detail & Related papers (2020-06-03T17:40:14Z)
- Learning Interpretable Models in the Property Specification Language [6.875312133832079]
We develop a learning algorithm for formulas in the IEEE standard temporal logic PSL.
Our work is motivated by the fact that many natural properties, such as an event happening at every n-th point in time, cannot be expressed in LTL.
arXiv Detail & Related papers (2020-02-10T11:42:50Z)