LTLZinc: a Benchmarking Framework for Continual Learning and Neuro-Symbolic Temporal Reasoning
- URL: http://arxiv.org/abs/2507.17482v1
- Date: Wed, 23 Jul 2025 13:04:13 GMT
- Title: LTLZinc: a Benchmarking Framework for Continual Learning and Neuro-Symbolic Temporal Reasoning
- Authors: Luca Salvatore Lorello, Nikolaos Manginas, Marco Lippi, Stefano Melacci
- Abstract summary: Continual learning concerns agents that expand their knowledge over time, improving their skills while avoiding forgetting previously learned concepts. Most existing approaches for neuro-symbolic artificial intelligence are applied to static scenarios only. We introduce LTLZinc, a benchmarking framework that can be used to generate datasets covering a variety of different problems.
- Score: 12.599235808369112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuro-symbolic artificial intelligence aims to combine neural architectures with symbolic approaches that can represent knowledge in a human-interpretable formalism. Continual learning concerns agents that expand their knowledge over time, improving their skills while avoiding forgetting previously learned concepts. Most of the existing approaches for neuro-symbolic artificial intelligence are applied to static scenarios only, and the challenging setting where reasoning along the temporal dimension is necessary has seldom been explored. In this work we introduce LTLZinc, a benchmarking framework that can be used to generate datasets covering a variety of different problems, against which neuro-symbolic and continual learning methods can be evaluated along the temporal and constraint-driven dimensions. Our framework generates expressive temporal reasoning and continual learning tasks from a linear temporal logic specification over MiniZinc constraints, and arbitrary image classification datasets. Fine-grained annotations allow multiple neural and neuro-symbolic training settings on the same generated datasets. Experiments on six neuro-symbolic sequence classification and four class-continual learning tasks generated by LTLZinc demonstrate the challenging nature of temporal learning and reasoning, and highlight limitations of current state-of-the-art methods. We release the LTLZinc generator and ten ready-to-use tasks to the neuro-symbolic and continual learning communities, in the hope of fostering research towards unified temporal learning and reasoning frameworks.
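As a rough illustration of what such a specification might look like, the sketch below pairs an LTL formula over named MiniZinc constraints with image-classification domains. All names, the dictionary layout, and the generator behaviour described in the comments are assumptions for illustration, not the actual LTLZinc API.

```python
# Hypothetical LTLZinc-style task specification (illustrative only;
# the real generator's input format may differ).

# Each symbolic variable is grounded in an image-classification domain:
# a neural perception module must map raw images to symbolic values.
domains = {
    "x": "mnist",  # a digit 0-9, perceived from an MNIST image
    "y": "mnist",
}

# Named constraints, expressed as MiniZinc snippets over the variables.
constraints = {
    "p": "constraint x + y = 10;",  # p holds iff the two digits sum to 10
    "q": "constraint x < y;",       # q holds iff the first digit is smaller
}

# A linear temporal logic formula over the constraint names:
# "p must hold at every step until q eventually holds".
ltl_specification = "p U q"

# A generator would unroll the formula over time, solve the per-step
# MiniZinc problems to label image sequences as accepted or rejected,
# and emit a sequence-classification dataset with per-step annotations.
task = {
    "domains": domains,
    "constraints": constraints,
    "ltl": ltl_specification,
    "sequence_length": 8,
}
print(task)
```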
Related papers
- Temporal Chunking Enhances Recognition of Implicit Sequential Patterns [11.298233331771975]
We propose a neuro-inspired approach that compresses temporal sequences into context-tagged chunks. These tags are generated during an offline sleep phase and serve as compact references to past experience. We evaluate this idea in a controlled synthetic environment designed to reveal the limitations of traditional neural-network-based sequence learners.
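A very rough sketch of the chunking idea, assuming a fixed chunk size and a simple codebook (both are assumptions for illustration, not the paper's mechanism):

```python
# Compress a symbol sequence into context-tagged chunks: repeated
# chunks collapse to one tag, and the codebook acts as a compact
# reference to past experience. Illustrative sketch only.
from collections import OrderedDict

def chunk_and_tag(sequence, chunk_size=4):
    """Replace repeated chunks with short tags; return tags and a codebook."""
    codebook = OrderedDict()  # tag -> chunk ("past experience")
    tagged = []
    for i in range(0, len(sequence), chunk_size):
        chunk = tuple(sequence[i:i + chunk_size])
        if chunk not in codebook.values():
            codebook[f"C{len(codebook)}"] = chunk
        tag = next(t for t, c in codebook.items() if c == chunk)
        tagged.append(tag)
    return tagged, codebook

tags, book = chunk_and_tag("ababcdcdabab")
print(tags)  # ['C0', 'C1', 'C0'] -- the repeated chunk compresses to one tag
print(book)
```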
arXiv Detail & Related papers (2025-05-31T14:51:08Z)
- The Road to Generalizable Neuro-Symbolic Learning Should be Paved with Foundation Models [18.699014321422023]
Neuro-symbolic learning was proposed to address challenges in training neural networks for complex reasoning tasks. We highlight three pitfalls of traditional neuro-symbolic learning, with respect to compute, data, and programs, that lead to generalization problems.
arXiv Detail & Related papers (2025-05-30T17:59:46Z)
- A Neuro-Symbolic Framework for Sequence Classification with Relational and Temporal Knowledge [13.698216735270767]
One of the goals of neuro-symbolic artificial intelligence is to exploit background knowledge to improve the performance of learning tasks. In this work we consider the much more challenging problem of knowledge-driven sequence classification, where different portions of knowledge must be employed at different timesteps. Results demonstrate the challenging nature of this novel setting, and also highlight under-explored shortcomings of neuro-symbolic methods.
arXiv Detail & Related papers (2025-05-08T10:10:00Z)
- IID Relaxation by Logical Expressivity: A Research Agenda for Fitting Logics to Neurosymbolic Requirements [50.57072342894621]
We discuss the benefits of exploiting known data dependencies and distribution constraints for Neurosymbolic use cases.
This opens a new research agenda with general questions about Neurosymbolic background knowledge and the expressivity required of its logic.
arXiv Detail & Related papers (2024-04-30T12:09:53Z)
- Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF.
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
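To give a sense of the neuron model involved, here is a minimal two-compartment leaky integrate-and-fire sketch; the coupling scheme, time constants, and reset rule are generic textbook assumptions, not the exact LSTM-LIF formulation from the paper.

```python
import numpy as np

def two_compartment_lif(inputs, tau_d=20.0, tau_s=10.0, dt=1.0,
                        coupling=0.5, threshold=1.0):
    """Simulate one neuron over a 1-D array of input currents."""
    u_d, u_s = 0.0, 0.0  # dendritic and somatic membrane potentials
    spikes = np.zeros_like(inputs)
    for t, i_t in enumerate(inputs):
        # Dendritic compartment integrates the input with a slow leak,
        # acting as a longer-term memory trace.
        u_d += dt / tau_d * (-u_d + i_t)
        # Somatic compartment leaks faster and is driven by the dendrite.
        u_s += dt / tau_s * (-u_s + coupling * u_d)
        if u_s >= threshold:  # fire and reset the soma
            spikes[t] = 1.0
            u_s = 0.0
    return spikes

spikes = two_compartment_lif(np.random.rand(100))
print(int(spikes.sum()), "spikes")
```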
arXiv Detail & Related papers (2023-07-14T08:51:03Z)
- Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal [26.999987105646966]
We introduce Neuro-Symbolic Continual Learning, where a model has to solve a sequence of neuro-symbolic tasks.
Our key observation is that neuro-symbolic tasks, although different, often share concepts whose semantics remains stable over time.
We show that leveraging prior knowledge by combining neuro-symbolic architectures with continual strategies does help avoid catastrophic forgetting.
arXiv Detail & Related papers (2023-02-02T17:24:43Z)
- Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z)
- CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
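The effect is easy to reproduce. In the toy sketch below, a linear classifier trained on task A and then naively fine-tuned on a conflicting task B loses most of its task A accuracy; tasks, model, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    """Binary task: separate two Gaussian blobs around +/- center."""
    X = np.vstack([rng.normal(center, 0.5, (100, 2)),
                   rng.normal(-center, 0.5, (100, 2))])
    y = np.hstack([np.ones(100), -np.ones(100)])
    return X, y

def train(w, X, y, lr=0.1, epochs=50):
    for _ in range(epochs):
        margins = y * (X @ w)
        # Hinge-loss gradient, accumulated over margin violators only.
        grad = -(X * (y * (margins < 1))[:, None]).sum(0)
        w = w - lr * grad / len(X)
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

task_a = make_task(np.array([2.0, 2.0]))
task_b = make_task(np.array([2.0, -2.0]))  # conflicting decision boundary

w = train(np.zeros(2), *task_a)
print("task A after A:", accuracy(w, *task_a))
w = train(w, *task_b)  # sequential training, no replay or regularization
print("task A after B:", accuracy(w, *task_a))  # typically drops toward chance
```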
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Spiking Associative Memory for Spatio-Temporal Patterns [0.21094707683348418]
Spike Timing Dependent Plasticity is a form of learning that has been demonstrated in real cortical tissue.
We develop a simple learning rule called cyclic STDP that can extract patterns in the precise spiking times of neurons.
We show that a population of neurons endowed with this learning rule can act as an effective short-term associative memory.
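For background, the sketch below shows a textbook-style pairwise STDP update with exponential spike traces; the paper's cyclic STDP rule is a different variant, so treat this as generic context rather than their exact rule.

```python
import numpy as np

def stdp_update(pre_spikes, post_spikes, w, a_plus=0.01, a_minus=0.012,
                tau=20.0, dt=1.0):
    """Update one synaptic weight from two binary spike trains."""
    x_pre, x_post = 0.0, 0.0  # low-pass traces of pre/post spiking
    for s_pre, s_post in zip(pre_spikes, post_spikes):
        x_pre += -dt / tau * x_pre + s_pre
        x_post += -dt / tau * x_post + s_post
        if s_post:  # pre-before-post pairing: potentiate
            w += a_plus * x_pre
        if s_pre:   # post-before-pre pairing: depress
            w -= a_minus * x_post
    return w

pre = (np.random.rand(200) < 0.05).astype(float)
post = (np.random.rand(200) < 0.05).astype(float)
print(stdp_update(pre, post, w=0.5))
```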
arXiv Detail & Related papers (2020-06-30T11:08:31Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)