DeepDFA: Injecting Temporal Logic in Deep Learning for Sequential Subsymbolic Applications
- URL: http://arxiv.org/abs/2602.03486v1
- Date: Tue, 03 Feb 2026 12:59:47 GMT
- Title: DeepDFA: Injecting Temporal Logic in Deep Learning for Sequential Subsymbolic Applications
- Authors: Elena Umili, Francesco Argenziano, Roberto Capobianco
- Abstract summary: DeepDFA is a neurosymbolic framework that integrates high-level temporal logic into neural architectures. We show how DeepDFA can be used in two key settings: (i) static image sequence classification, and (ii) policy learning in interactive non-Markovian environments.
- Score: 2.5234156040689233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrating logical knowledge into deep neural network training is still a hard challenge, especially for sequential or temporally extended domains involving subsymbolic observations. To address this problem, we propose DeepDFA, a neurosymbolic framework that integrates high-level temporal logic - expressed as Deterministic Finite Automata (DFA) or Moore Machines - into neural architectures. DeepDFA models temporal rules as continuous, differentiable layers, enabling symbolic knowledge injection into subsymbolic domains. We demonstrate how DeepDFA can be used in two key settings: (i) static image sequence classification, and (ii) policy learning in interactive non-Markovian environments. Across extensive experiments, DeepDFA outperforms traditional deep learning models (e.g., LSTMs, GRUs, Transformers) and novel neurosymbolic systems, achieving state-of-the-art results in temporal knowledge integration. These results highlight the potential of DeepDFA to bridge subsymbolic learning and symbolic reasoning in sequential tasks.
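To make the core mechanism concrete, below is a minimal sketch (not the authors' released code; all names and hyperparameters are illustrative) of how a DFA can be relaxed into a differentiable layer: the hard transition table becomes a row-stochastic tensor, and the crisp automaton state becomes a belief vector updated by matrix products over the symbol distributions produced by a perception network.

```python
# Minimal sketch of a probabilistically relaxed DFA layer (illustrative,
# not the DeepDFA reference implementation). Transitions are learnable
# row-stochastic matrices; the crisp automaton state becomes a belief
# vector, so the whole layer is differentiable end to end.
import torch
import torch.nn as nn

class DifferentiableDFA(nn.Module):
    def __init__(self, n_states: int, n_symbols: int):
        super().__init__()
        # Learnable logits over next states, one matrix per input symbol.
        self.trans_logits = nn.Parameter(torch.randn(n_symbols, n_states, n_states))
        self.accept_logits = nn.Parameter(torch.randn(n_states))

    def forward(self, symbol_probs: torch.Tensor) -> torch.Tensor:
        # symbol_probs: (batch, seq_len, n_symbols), e.g. softmax outputs
        # of a CNN that grounds raw images into symbol distributions.
        batch, seq_len, _ = symbol_probs.shape
        n_states = self.accept_logits.shape[0]
        trans = torch.softmax(self.trans_logits, dim=-1)  # (A, S, S), rows sum to 1
        belief = torch.zeros(batch, n_states, device=symbol_probs.device)
        belief[:, 0] = 1.0                                # start in state 0
        for t in range(seq_len):
            # Expected transition matrix under the current symbol distribution.
            step = torch.einsum('ba,ast->bst', symbol_probs[:, t], trans)
            belief = torch.einsum('bs,bst->bt', belief, step)
        # Probability mass assigned to accepting states.
        return (belief * torch.sigmoid(self.accept_logits)).sum(-1)

layer = DifferentiableDFA(n_states=4, n_symbols=3)
seq = torch.softmax(torch.randn(2, 10, 3), dim=-1)  # 2 sequences of length 10
print(layer(seq).shape)                             # torch.Size([2])
```

A layer of this kind can also be initialized from a known automaton (near-one-hot transition rows), which is presumably how prior temporal knowledge gets injected in the setting the abstract describes.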
Related papers
- T-ILR: a Neurosymbolic Integration for LTLf [47.316620315732024]
We propose a neurosymbolic framework to incorporate temporal logic specifications directly into deep learning architectures for sequence-based tasks. We name this method Temporal Iterative Local Refinement (T-ILR).
arXiv Detail & Related papers (2025-08-21T20:24:20Z)
- LTLZinc: a Benchmarking Framework for Continual Learning and Neuro-Symbolic Temporal Reasoning [12.599235808369112]
Continual learning concerns agents that expand their knowledge over time, improving their skills while avoiding forgetting previously learned concepts. Most existing approaches to neuro-symbolic artificial intelligence apply to static scenarios only. We introduce LTLZinc, a benchmarking framework that can be used to generate datasets covering a variety of different problems.
arXiv Detail & Related papers (2025-07-23T13:04:13Z)
- Neural Networks as Universal Finite-State Machines: A Constructive Deterministic Finite Automaton Theory [0.0]
We establish feedforward neural networks as universal finite-state machines (N-FSMs). Our results prove that finite-depth ReLU and threshold networks can exactly simulate deterministic finite automata (DFAs). We formalize the expressivity boundary: fixed-depth feedforward networks cannot recognize non-regular languages requiring memory.
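As a concrete, hand-rolled instance of this claim (my own construction, not necessarily the one in the paper), one DFA step is computable by two layers of threshold units: an AND layer over (state, symbol) pairs and an OR layer routing each pair to its successor state.

```python
# Hand-rolled illustration of a DFA step as a two-layer threshold network
# (illustrative construction). States and symbols are one-hot; weights
# are derived from the transition table.
import numpy as np

def heaviside(x):
    return (x > 0).astype(float)

def dfa_step_network(delta, state, symbol):
    """delta[s, a] = next state; state (S,) and symbol (A,) are one-hot."""
    S, A = delta.shape
    # Layer 1: one AND unit per (state, symbol) pair.
    pair = heaviside(state[:, None] + symbol[None, :] - 1.5).reshape(-1)
    # Layer 2: OR over all pairs whose successor is state s'.
    W = np.zeros((S, S * A))
    for s in range(S):
        for a in range(A):
            W[delta[s, a], s * A + a] = 1.0
    return heaviside(W @ pair - 0.5)

# Example DFA over {0, 1}: state 0 = even number of 1s seen, state 1 = odd.
delta = np.array([[0, 1],
                  [1, 0]])
state = np.array([1.0, 0.0])          # start in state 0
for bit in [1, 1, 0, 1]:
    state = dfa_step_network(delta, state, np.eye(2)[bit])
print(state)                          # [0. 1.]: three 1s read, so state "odd"
```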
arXiv Detail & Related papers (2025-05-16T21:01:34Z)
- Neural Reward Machines [2.0755366440393743]
Non-Markovian Reinforcement Learning (RL) tasks are hard to solve because agents must consider the entire history of state-action pairs to act rationally in the environment.
We define Neural Reward Machines (NRM), an automata-based neurosymbolic framework that can be used for both reasoning and learning in non-symbolic RL domains.
We show that NRMs can exploit high-level symbolic knowledge in non-symbolic environments without any knowledge of the symbol grounding (SG) function, outperforming deep RL methods that cannot incorporate prior knowledge.
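For intuition, the symbolic half of such a machine can be sketched as a Moore machine that emits a reward per abstract state. The toy snippet below is illustrative only, not the NRM architecture: NRMs additionally have to learn the grounding from raw observations, which is assumed given here.

```python
# Toy reward machine for a non-Markovian task (illustrative; symbols are
# assumed already grounded). The machine state summarizes the relevant
# history, so a policy conditioned on (observation, machine state) faces
# a Markovian problem.
from dataclasses import dataclass

@dataclass
class RewardMachine:
    delta: dict      # (state, symbol) -> next state; missing pairs self-loop
    reward: dict     # state -> reward emitted on entering it
    state: int = 0

    def step(self, symbol: str) -> float:
        self.state = self.delta.get((self.state, symbol), self.state)
        return self.reward.get(self.state, 0.0)

# Task "visit A, then B": reward only when B is reached after A.
rm = RewardMachine(delta={(0, "A"): 1, (1, "B"): 2}, reward={2: 1.0})
for sym in ["B", "A", "B"]:
    print(sym, rm.step(sym))   # B 0.0 / A 0.0 / B 1.0
```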
arXiv Detail & Related papers (2024-08-16T11:44:27Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling [54.97005925277638]
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
It remains a challenging task for state-of-the-art spiking neural networks (SNNs) to establish long-term temporal dependency between distant cues.
We propose a novel biologically inspired Two-Compartment Leaky Integrate-and-Fire spiking neuron model, dubbed TC-LIF.
arXiv Detail & Related papers (2023-08-25T08:54:41Z)
- Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF.
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
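The shared two-compartment idea behind TC-LIF and LSTM-LIF can be caricatured in a few lines (a deliberately simplified reading, not either paper's exact dynamics): a slowly decaying dendritic compartment buffers input over long spans, while a faster somatic compartment integrates it and spikes.

```python
# Simplified two-compartment leaky integrate-and-fire neuron (a caricature
# of the TC-LIF / LSTM-LIF idea; all constants are made up). The slow
# dendritic trace lets an input pulse influence spiking many steps later.
def two_compartment_lif(inputs, beta_d=0.95, beta_s=0.8, couple=0.5, threshold=1.0):
    u_d = u_s = 0.0
    spikes = []
    for x in inputs:
        u_d = beta_d * u_d + x               # slow dendritic compartment
        u_s = beta_s * u_s + couple * u_d    # soma driven by the dendrite
        spike = float(u_s >= threshold)
        u_s -= spike * threshold             # soft reset after a spike
        spikes.append(spike)
    return spikes

# A single pulse at t=0 produces a spike only a few steps later.
print(two_compartment_lif([1.0, 0.0, 0.0, 0.0]))   # [0.0, 0.0, 1.0, 0.0]
```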
arXiv Detail & Related papers (2023-07-14T08:51:03Z)
- Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search [63.3745291252038]
We propose DiffSES, a novel symbolic learning approach that discovers discrete symbolic policies.
By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions.
Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods.
arXiv Detail & Related papers (2022-12-30T17:50:54Z)
- DeepSITH: Efficient Learning via Decomposition of What and When Across Time Scales [0.0]
Neural networks either suffer from the exploding/vanishing gradient problem, as in recurrent neural networks (RNNs), or must adjust their parameters to learn the relevant time scales, as in LSTMs.
This paper introduces DeepSITH, a network comprising biologically inspired Scale-Invariant Temporal History (SITH) modules in series with dense connections between layers.
SITH modules respond to their inputs with a geometrically-spaced set of time constants, enabling the DeepSITH network to learn problems along a continuum of time-scales.
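A toy version of the geometric time-constant idea (illustrative only: SITH proper is built on a Laplace-transform representation and its inverse, which a plain bank of leaky integrators only approximates in spirit) might look like this:

```python
# Bank of leaky integrators with geometrically spaced time constants
# (a toy stand-in for a SITH module; constants are arbitrary). Each
# output channel summarizes the input at a different time scale.
import numpy as np

def decay_bank(signal, tau_min=1.0, tau_max=64.0, n_taus=7):
    taus = np.geomspace(tau_min, tau_max, n_taus)    # geometric spacing
    alphas = np.exp(-1.0 / taus)                     # per-channel decay
    state = np.zeros(n_taus)
    out = []
    for x in signal:
        state = alphas * state + (1.0 - alphas) * x  # leaky integration
        out.append(state.copy())
    return np.stack(out)                             # (T, n_taus)

features = decay_bank(np.random.randn(100))
print(features.shape)                                # (100, 7)
```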
arXiv Detail & Related papers (2021-04-09T23:58:14Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)