Temporal Chunking Enhances Recognition of Implicit Sequential Patterns
- URL: http://arxiv.org/abs/2506.00588v2
- Date: Tue, 15 Jul 2025 15:22:47 GMT
- Title: Temporal Chunking Enhances Recognition of Implicit Sequential Patterns
- Authors: Jayanta Dey, Nicholas Soures, Miranda Gonzales, Itamar Lerner, Christopher Kanan, Dhireesha Kudithipudi
- Abstract summary: We propose a neuro-inspired approach that compresses temporal sequences into context-tagged chunks. These tags are generated during an offline sleep phase and serve as compact references to past experience. We evaluate this idea in a controlled synthetic environment designed to reveal the limitations of traditional neural network based sequence learners.
- Score: 11.298233331771975
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this pilot study, we propose a neuro-inspired approach that compresses temporal sequences into context-tagged chunks, where each tag represents a recurring structural unit or "community" in the sequence. These tags are generated during an offline sleep phase and serve as compact references to past experience, allowing the learner to incorporate information beyond its immediate input range. We evaluate this idea in a controlled synthetic environment designed to reveal the limitations of traditional neural network based sequence learners, such as recurrent neural networks (RNNs), when facing temporal patterns on multiple timescales. Our results, while preliminary, suggest that temporal chunking can significantly enhance learning efficiency under resource-constrained settings. A small-scale human pilot study using a Serial Reaction Time Task further motivates the idea of structural abstraction. Although limited to synthetic tasks, this work serves as an early proof-of-concept, with initial evidence that learned context tags can transfer across related tasks, offering potential for future applications in transfer learning.
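To make the mechanism concrete, the following is a minimal sketch of the core idea as we read it: an offline pass scans a symbol sequence for recurring fixed-length subsequences ("communities"), assigns each a compact context tag, and then rewrites the sequence so that a single tag stands in for several past time steps. Everything here (the names build_tag_table and chunk_sequence, the fixed chunk length, the greedy rewrite) is an illustrative assumption, not the authors' implementation, which likely detects variable-length communities and interleaves tagging with network training.

```python
from collections import Counter

def build_tag_table(sequence, chunk_len=3, min_count=2):
    """Offline 'sleep' pass (hypothetical): find recurring fixed-length
    subsequences ('communities') and assign each a compact context tag."""
    grams = Counter(tuple(sequence[i:i + chunk_len])
                    for i in range(len(sequence) - chunk_len + 1))
    recurring = sorted(g for g, c in grams.items() if c >= min_count)
    base = max(sequence) + 1  # tag ids sit past the raw alphabet
    return {g: base + k for k, g in enumerate(recurring)}

def chunk_sequence(sequence, tag_table, chunk_len=3):
    """Greedy left-to-right rewrite: replace each tagged chunk with its
    tag, so one symbol now summarizes several past time steps."""
    out, i = [], 0
    while i < len(sequence):
        g = tuple(sequence[i:i + chunk_len])
        if g in tag_table:
            out.append(tag_table[g])
            i += chunk_len
        else:
            out.append(sequence[i])
            i += 1
    return out

# Toy usage: the recurring motif (1, 2, 3) collapses into the single
# tag 5, shortening the distance between dependent events.
seq = [1, 2, 3, 0, 1, 2, 3, 4, 1, 2, 3]
tags = build_tag_table(seq)
print(chunk_sequence(seq, tags))  # [5, 0, 5, 4, 5]
```

After compression, dependencies that previously spanned many raw time steps sit only a few symbols apart, which is why a short-memory learner such as an RNN could be expected to pick them up more easily.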
Related papers
- LTLZinc: a Benchmarking Framework for Continual Learning and Neuro-Symbolic Temporal Reasoning [12.599235808369112]
Continual learning concerns agents that expand their knowledge over time, improving their skills while avoiding forgetting previously learned concepts. Most of the existing approaches for neuro-symbolic artificial intelligence are applied to static scenarios only. We introduce LTLZinc, a benchmarking framework that can be used to generate datasets covering a variety of different problems.
arXiv Detail & Related papers (2025-07-23T13:04:13Z) - Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities. We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities. We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z) - ELiSe: Efficient Learning of Sequences in Structured Recurrent Networks [1.5931140598271163]
We build a model for efficient learning of sequences using only local, always-on, and phase-free plasticity.
We showcase the capabilities of ELiSe in a mock-up of birdsong learning, and demonstrate its flexibility with respect to parametrization.
arXiv Detail & Related papers (2024-02-26T17:30:34Z) - TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling [54.97005925277638]
The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.
It remains a challenging task for state-of-the-art spiking neural networks (SNNs) to establish long-term temporal dependency between distant cues.
We propose a novel biologically inspired Two-Compartment Leaky Integrate-and-Fire spiking neuron model, dubbed TC-LIF (a generic two-compartment sketch appears after this list).
arXiv Detail & Related papers (2023-08-25T08:54:41Z) - Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the computationally efficient MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
arXiv Detail & Related papers (2023-07-04T05:19:19Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Spiking Neural Networks for event-based action recognition: A new task to understand their advantage [1.4348901037145936]
Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics.
We show how spiking neurons can enable temporal feature extraction in feed-forward neural networks.
We also show how recurrent SNNs can achieve comparable results to LSTM with a smaller number of parameters.
arXiv Detail & Related papers (2022-09-29T16:22:46Z) - Learning Sequence Representations by Non-local Recurrent Neural Memory [61.65105481899744]
We propose a Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning.
Our model is able to capture long-range dependencies and to distill latent high-level features.
Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
arXiv Detail & Related papers (2022-07-20T07:26:15Z) - Path sampling of recurrent neural networks by incorporating known physics [0.0]
We show a path sampling approach that allows us to include generic thermodynamic or kinetic constraints into recurrent neural networks.
We demonstrate the method for a widely used type of recurrent neural network known as the long short-term memory (LSTM) network.
Our method can be easily generalized to other generative artificial intelligence models and to generic time series in different areas of physical and social sciences.
arXiv Detail & Related papers (2022-03-01T16:35:50Z) - Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by a special class of constraints that also extend to the network's interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z) - Spiking Associative Memory for Spatio-Temporal Patterns [0.21094707683348418]
Spike-timing-dependent plasticity (STDP) is a form of learning that has been demonstrated in real cortical tissue.
We develop a simple learning rule called cyclic STDP that can extract patterns in the precise spiking times of neurons (a standard pairwise STDP sketch appears after this list).
We show that a population of neurons endowed with this learning rule can act as an effective short-term associative memory.
arXiv Detail & Related papers (2020-06-30T11:08:31Z)
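As referenced in the TC-LIF entry above, here is a minimal discrete-time sketch of a generic two-compartment leaky integrate-and-fire neuron. The decay rates, dendrite-to-soma coupling, threshold, and reset scheme are illustrative assumptions and do not reproduce the exact TC-LIF dynamics from the paper.

```python
import numpy as np

def tc_lif_step(v_d, v_s, x, *, beta_d=0.9, beta_s=0.9, g_c=0.5, v_th=1.0):
    """One step of a generic two-compartment LIF neuron (illustrative).

    v_d: dendritic potential, integrates the input x.
    v_s: somatic potential, driven by the dendrite; emits spikes.
    """
    v_d = beta_d * v_d + x               # dendrite leaks and integrates input
    v_s = beta_s * v_s + g_c * v_d       # soma leaks and is driven by dendrite
    spike = (v_s >= v_th).astype(float)  # fire when the soma crosses threshold
    v_s = v_s - spike * v_th             # soft reset by subtraction
    return v_d, v_s, spike

# Toy run: spiking outlasts a brief input pulse because the dendritic
# compartment holds a slowly decaying trace of it.
v_d = v_s = np.zeros(1)
for t in range(20):
    x = np.array([1.0]) if t < 3 else np.array([0.0])
    v_d, v_s, s = tc_lif_step(v_d, v_s, x)
    if s[0]:
        print(f"spike at t={t}")
```

The second compartment is what lets a brief cue influence spiking long after it ends, which is the property a model of this kind can exploit for long-term temporal dependencies.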
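The cyclic STDP rule in the spiking associative memory entry above is not spelled out in its summary; as a reference point, this is a sketch of the standard pairwise STDP update that such rules build on, with illustrative amplitudes and time constant.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, *, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pairwise STDP: potentiate when pre fires before post (t_post > t_pre),
    depress otherwise; the change decays with the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)   # causal pairing: strengthen
    else:
        w -= a_minus * np.exp(dt / tau)   # anti-causal pairing: weaken
    return float(np.clip(w, w_min, w_max))

# Repeated pre-before-post pairings strengthen the synapse; this timing
# sensitivity is what lets STDP-style rules bind precise spike patterns.
w = 0.5
for _ in range(10):
    w = stdp_update(w, t_pre=0.0, t_post=5.0)
print(w)  # roughly 0.58, up from 0.5
```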