A Neuro-Symbolic Framework for Sequence Classification with Relational and Temporal Knowledge
- URL: http://arxiv.org/abs/2505.05106v1
- Date: Thu, 08 May 2025 10:10:00 GMT
- Title: A Neuro-Symbolic Framework for Sequence Classification with Relational and Temporal Knowledge
- Authors: Luca Salvatore Lorello, Marco Lippi, Stefano Melacci
- Abstract summary: One of the goals of neuro-symbolic artificial intelligence is to exploit background knowledge to improve the performance of learning tasks. In this work we consider the much more challenging problem of knowledge-driven sequence classification, where different portions of knowledge must be employed at different timesteps. Results demonstrate the challenging nature of this novel setting, and also highlight under-explored shortcomings of neuro-symbolic methods.
- Score: 13.698216735270767
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: One of the goals of neuro-symbolic artificial intelligence is to exploit background knowledge to improve the performance of learning tasks. However, most of the existing frameworks focus on the simplified scenario where knowledge does not change over time and does not cover the temporal dimension. In this work we consider the much more challenging problem of knowledge-driven sequence classification, where different portions of knowledge must be employed at different timesteps, and temporal relations are available. Our experimental evaluation compares multi-stage neuro-symbolic and neural-only architectures, and it is conducted on a newly-introduced benchmarking framework. Results demonstrate the challenging nature of this novel setting, and also highlight under-explored shortcomings of neuro-symbolic methods, providing a valuable reference for future research.
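The core setting — applying different portions of knowledge at different timesteps — can be illustrated with a deliberately tiny sketch. This is a hypothetical toy, not the paper's actual framework: `neural_scores` stands in for a learned classifier, and `KNOWLEDGE` is an invented map from timestep to admissible labels.

```python
# Toy sketch: knowledge-driven sequence classification where a different
# symbolic constraint applies at each timestep (illustrative only).

def neural_scores(x):
    """Stand-in for a neural classifier: returns a score per label."""
    return {"even": 1.0 - (x % 2), "odd": float(x % 2), "big": x / 10.0}

# Temporal knowledge: at timestep t, only some labels are admissible.
KNOWLEDGE = {
    0: {"even", "odd"},   # step 0: parity labels only
    1: {"big", "odd"},    # step 1: a different rule applies
    2: {"even", "big"},
}

def classify_sequence(xs):
    """At each step, pick the best-scoring label that the
    timestep-specific knowledge admits."""
    out = []
    for t, x in enumerate(xs):
        scores = neural_scores(x)
        admissible = KNOWLEDGE.get(t, set(scores))
        out.append(max(admissible, key=lambda lab: scores[lab]))
    return out
```

A real neuro-symbolic system would replace the lookup table with declarative temporal rules and couple the constraint back into training, but the sketch shows why per-timestep knowledge changes the problem relative to static constraints.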
Related papers
- LTLZinc: a Benchmarking Framework for Continual Learning and Neuro-Symbolic Temporal Reasoning [12.599235808369112]
Continual learning concerns agents that expand their knowledge over time, improving their skills while avoiding forgetting previously learned concepts. Most of the existing approaches for neuro-symbolic artificial intelligence are applied to static scenarios only. We introduce LTLZinc, a benchmarking framework that can be used to generate datasets covering a variety of different problems.
arXiv Detail & Related papers (2025-07-23T13:04:13Z) - Temporal Chunking Enhances Recognition of Implicit Sequential Patterns [11.298233331771975]
We propose a neuro-inspired approach that compresses temporal sequences into context-tagged chunks. These tags are generated during an offline sleep phase and serve as compact references to past experience. We evaluate this idea in a controlled synthetic environment designed to reveal the limitations of traditional neural-network-based sequence learners.
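One plausible reading of "context-tagged chunks" is that recurring subsequences are detected offline and replaced by compact tags. The sketch below is a hedged interpretation of the abstract, not the paper's algorithm; the function names and the greedy replacement scheme are assumptions.

```python
# Minimal temporal-chunking sketch: recurring windows get compact tags
# during an offline phase, then sequences are rewritten in terms of them.
from collections import Counter

def offline_chunking(sequence, width=3, min_count=2):
    """Offline 'sleep' phase: tag every width-length window that recurs
    at least min_count times."""
    windows = [tuple(sequence[i:i + width])
               for i in range(len(sequence) - width + 1)]
    counts = Counter(windows)
    recurring = [w for w, c in counts.items() if c >= min_count]
    return {w: f"chunk_{i}" for i, w in enumerate(recurring)}

def compress(sequence, chunks, width=3):
    """Greedy left-to-right rewrite: known windows become their tags,
    everything else passes through unchanged."""
    out, i = [], 0
    while i < len(sequence):
        w = tuple(sequence[i:i + width])
        if w in chunks:
            out.append(chunks[w])
            i += width
        else:
            out.append(sequence[i])
            i += 1
    return out
```

The compressed sequence is shorter and exposes repetition explicitly, which is the intuition behind feeding chunk tags to a downstream sequence learner.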
arXiv Detail & Related papers (2025-05-31T14:51:08Z) - A Complexity Map of Probabilistic Reasoning for Neurosymbolic Classification Techniques [6.775534755081169]
We develop a unified formalism for four probabilistic reasoning problems. Then, we compile several known and new tractability results into a single complexity map of probabilistic reasoning. We build on this complexity map to characterize the domains of scalability of several techniques.
arXiv Detail & Related papers (2024-04-12T11:31:37Z) - Improving Neural-based Classification with Logical Background Knowledge [0.0]
We propose a new formalism for supervised multi-label classification with propositional background knowledge.
We introduce a new neurosymbolic technique called semantic conditioning at inference.
We discuss its theoretical and practical advantages over two other popular neurosymbolic techniques.
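Conditioning multi-label predictions on propositional knowledge at inference time can be sketched by exhaustive search over assignments: keep only assignments the knowledge admits, and return the one the network scores highest. This is a hedged illustration inspired by the abstract, not the paper's exact "semantic conditioning" procedure, and brute-force enumeration is only viable for a handful of labels.

```python
# Constraint-aware multi-label inference (illustrative brute-force sketch).
from itertools import product

def conditioned_predict(scores, satisfies):
    """scores: dict label -> independent probability from the network.
    satisfies: predicate encoding the propositional background knowledge.
    Returns the most probable assignment that satisfies the knowledge."""
    labels = list(scores)
    best, best_p = None, -1.0
    for bits in product([False, True], repeat=len(labels)):
        assign = dict(zip(labels, bits))
        if not satisfies(assign):
            continue  # knowledge rules this assignment out
        p = 1.0
        for lab in labels:
            p *= scores[lab] if assign[lab] else 1.0 - scores[lab]
        if p > best_p:
            best, best_p = assign, p
    return best
```

For example, with mutually exclusive labels the unconstrained argmax (both labels on) is discarded, and the best admissible assignment is returned instead.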
arXiv Detail & Related papers (2024-02-20T14:01:26Z) - Long Short-term Memory with Two-Compartment Spiking Neuron [64.02161577259426]
We propose a novel biologically inspired Long Short-Term Memory Leaky Integrate-and-Fire spiking neuron model, dubbed LSTM-LIF.
Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, strong network generalizability, and high energy efficiency of the proposed LSTM-LIF model.
This work, therefore, opens up a myriad of opportunities for resolving challenging temporal processing tasks on emerging neuromorphic computing machines.
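The two-compartment idea can be gestured at with a generic leaky integrate-and-fire update: a slowly decaying dendritic compartment feeds a faster somatic compartment that spikes and resets. The constants, reset rule, and coupling below are illustrative assumptions, not the LSTM-LIF equations from the paper.

```python
# Generic two-compartment LIF sketch (not the paper's exact model).
def lif_two_compartment(inputs, tau_d=0.9, tau_s=0.5, threshold=1.0):
    u_d = u_s = 0.0
    spikes = []
    for x in inputs:
        u_d = tau_d * u_d + x      # slow dendritic compartment: long memory
        u_s = tau_s * u_s + u_d    # fast somatic compartment integrates it
        if u_s >= threshold:
            spikes.append(1)
            u_s = 0.0              # reset the soma after a spike
        else:
            spikes.append(0)
    return spikes
```

The slow compartment retains input history across many steps, which is the mechanism the abstract credits for the model's temporal classification capability.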
arXiv Detail & Related papers (2023-07-14T08:51:03Z) - Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal [26.999987105646966]
We introduce Neuro-Symbolic Continual Learning, where a model has to solve a sequence of neuro-symbolic tasks.
Our key observation is that neuro-symbolic tasks, although different, often share concepts whose semantics remains stable over time.
We show that leveraging prior knowledge by combining neuro-symbolic architectures with continual strategies does help avoid catastrophic forgetting.
arXiv Detail & Related papers (2023-02-02T17:24:43Z) - Towards Data- and Knowledge-Driven Artificial Intelligence: A Survey on Neuro-Symbolic Computing [73.0977635031713]
Neural-symbolic computing (NeSy) has been an active research area of Artificial Intelligence (AI) for many years.
NeSy shows promise of reconciling the reasoning and interpretability advantages of symbolic representations with the robust learning of neural networks.
arXiv Detail & Related papers (2022-10-28T04:38:10Z) - Survey on Applications of Neurosymbolic Artificial Intelligence [37.7665470475176]
We introduce a taxonomy of common Neurosymbolic applications and summarize the state-of-the-art for each of those domains.
We identify important current trends and provide new perspectives pertaining to the future of this burgeoning field.
arXiv Detail & Related papers (2022-09-08T18:18:41Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - CogNGen: Constructing the Kernel of a Hyperdimensional Predictive Processing Cognitive Architecture [79.07468367923619]
We present a new cognitive architecture that combines two neurobiologically plausible, computational models.
We aim to develop a cognitive architecture that has the power of modern machine learning techniques.
arXiv Detail & Related papers (2022-03-31T04:44:28Z) - Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
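Catastrophic forgetting can be demonstrated in miniature: a one-parameter model fit by gradient descent on task A, then trained sequentially on task B, drifts entirely away from the task-A solution. This toy is illustrative only and is not drawn from the paper.

```python
# Tiny catastrophic-forgetting demo with a single weight and squared error.
def fit(w, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * x * (w * x - y)  # gradient of (w*x - y)^2
    return w

task_a = [(1.0, 2.0)]   # task A wants w == 2
task_b = [(1.0, -1.0)]  # task B wants w == -1

w = fit(0.0, task_a)    # converges near 2 on task A
w = fit(w, task_b)      # sequential training on task B overwrites it
forgot = abs(w - 2.0) > 1.0  # the task-A solution is gone
```

Continual-learning methods such as generative replay counteract exactly this overwrite by interleaving (real or internally generated) task-A samples into task-B training.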
arXiv Detail & Related papers (2021-12-09T07:11:14Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.