Towards Interpretable Concept Learning over Time Series via Temporal Logic Semantics
- URL: http://arxiv.org/abs/2508.03269v1
- Date: Tue, 05 Aug 2025 09:50:55 GMT
- Title: Towards Interpretable Concept Learning over Time Series via Temporal Logic Semantics
- Authors: Irene Ferfoglia, Simone Silvetti, Gaia Saveri, Laura Nenzi, Luca Bortolussi
- Abstract summary: We propose a neuro-symbolic framework that unifies classification and explanation through direct embedding of trajectories. By introducing a novel STL-inspired kernel that maps raw time series to their alignment with predefined STL formulae, our model jointly optimises for accuracy and interpretability. Early results show competitive performance while offering high-quality logical justifications for model decisions.
- Score: 0.5551869906060036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Time series classification is a task of paramount importance, as this kind of data often arises in safety-critical applications. However, it is typically tackled with black-box deep learning methods, making it hard for humans to understand the rationale behind their output. To take on this challenge, we propose a neuro-symbolic framework that unifies classification and explanation through direct embedding of trajectories into a space of Signal Temporal Logic (STL) concepts. By introducing a novel STL-inspired kernel that maps raw time series to their alignment with predefined STL formulae, our model jointly optimises for accuracy and interpretability, as each prediction is accompanied by the most relevant logical concepts that characterise it. This enables classification grounded in human-interpretable temporal patterns and produces both local and global symbolic explanations. Early results show competitive performance while offering high-quality logical justifications for model decisions.
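The core idea of the abstract, embedding a raw trajectory via its alignment with predefined STL formulae, can be illustrated with standard quantitative STL semantics. The following is a minimal, hypothetical sketch (not the authors' implementation): each concept formula is evaluated for its robustness on a discretised 1-D signal, and the resulting vector of robustness values serves as the trajectory's coordinates in concept space.

```python
import numpy as np

def rob_gt(x, c):
    """Robustness of the atomic predicate x_t > c at every time step."""
    return x - c

def rob_always(rho, a, b):
    """Robustness of G[a,b] phi at t=0: min of phi's robustness over [a, b]."""
    return np.min(rho[a:b + 1])

def rob_eventually(rho, a, b):
    """Robustness of F[a,b] phi at t=0: max of phi's robustness over [a, b]."""
    return np.max(rho[a:b + 1])

def stl_embedding(x, formulae):
    """Map a 1-D signal to the vector of its robustness values."""
    return np.array([phi(x) for phi in formulae])

# Two toy concept formulae over a signal sampled at unit time steps:
concepts = [
    lambda x: rob_always(rob_gt(x, 0.5), 0, 9),      # G[0,9](x > 0.5)
    lambda x: rob_eventually(rob_gt(x, 2.0), 0, 9),  # F[0,9](x > 2.0)
]

signal = np.array([1.0, 1.2, 0.8, 2.5, 1.1, 0.9, 1.3, 0.7, 1.0, 1.1])
print(stl_embedding(signal, concepts))  # approx. [0.2, 0.5]
```

A positive robustness value means the formula is satisfied with margin; the embedding is thus directly interpretable, since each coordinate names the temporal concept it measures. The kernel described in the paper would operate on top of such alignment scores.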
Related papers
- Can Large Language Models Adequately Perform Symbolic Reasoning Over Time Series?
We introduce SymbolBench, a benchmark to assess symbolic reasoning over real-world time series. Unlike prior efforts, SymbolBench spans a diverse set of symbolic forms with varying complexity. We propose a unified framework that integrates Large Language Models with genetic programming to form a closed-loop symbolic reasoning system.
arXiv Detail & Related papers (2025-08-05T22:58:54Z)
- Towards Foundation Model on Temporal Knowledge Graph Reasoning
Temporal Knowledge Graphs (TKGs) store temporal facts in quadruple format (s, p, o, t). The new model employs sinusoidal positional encodings to capture fine-grained temporal patterns. PostRA demonstrates strong zero-shot performance on unseen temporal knowledge graphs.
arXiv Detail & Related papers (2025-06-04T09:19:49Z)
- FreRA: A Frequency-Refined Augmentation for Contrastive Learning on Time Series Classification
We present a novel perspective from the frequency domain and identify three advantages for downstream classification: global, independent, and compact. We propose the lightweight yet effective Frequency Refined Augmentation (FreRA), tailored for time series contrastive learning on classification tasks. FreRA consistently outperforms ten leading baselines on time series classification, anomaly detection, and transfer learning tasks.
arXiv Detail & Related papers (2025-05-29T07:18:28Z)
- ECATS: Explainable-by-design concept-based anomaly detection for time series
We propose ECATS, a concept-based neuro-symbolic architecture where concepts are represented as Signal Temporal Logic (STL) formulae.
We show that our model achieves strong classification performance while ensuring local interpretability.
arXiv Detail & Related papers (2024-05-17T08:12:53Z)
- Multi-class Temporal Logic Neural Networks
Time-series data can represent the behaviors of autonomous systems, such as drones and self-driving cars.
We propose a method that combines neural networks representing STL specifications for multi-class classification of time-series data.
We evaluate our method on two datasets and compare it with state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-17T00:22:29Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- On the Consistency and Robustness of Saliency Explanations for Time Series Classification
Saliency maps have been applied to interpret time series windows as images.
This paper extensively analyzes the consistency and robustness of saliency maps for time series features and temporal attribution.
arXiv Detail & Related papers (2023-09-04T09:08:22Z)
- OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z)
- Generic Temporal Reasoning with Differential Analysis and Explanation
We introduce a novel task named TODAY that bridges the gap with temporal differential analysis.
TODAY evaluates whether systems can correctly understand the effect of incremental changes.
We show that TODAY's supervision style and explanation annotations can be used in joint learning.
arXiv Detail & Related papers (2022-12-20T17:40:03Z)
- Linear Temporal Logic Modulo Theories over Finite Traces (Extended Version)
This paper studies Linear Temporal Logic over Finite Traces (LTLf) in which proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z)
- Interpretable Time-series Representation Learning With Multi-Level Disentanglement
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.