Learning Time in Static Classifiers
- URL: http://arxiv.org/abs/2511.12321v1
- Date: Sat, 15 Nov 2025 18:42:51 GMT
- Title: Learning Time in Static Classifiers
- Authors: Xi Ding, Lei Wang, Piotr Koniusz, Yongsheng Gao
- Abstract summary: We propose a simple yet effective framework that equips standard feedforward classifiers with temporal reasoning. We use a novel Support-Exemplar-Query (SEQ) learning paradigm, which structures training data into temporally coherent trajectories. Our approach bridges static and temporal learning in a modular and data-efficient manner, requiring only a simple classifier on top of pre-extracted features.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world visual data rarely presents as isolated, static instances. Instead, it often evolves gradually over time through variations in pose, lighting, object state, or scene context. However, conventional classifiers are typically trained under the assumption of temporal independence, limiting their ability to capture such dynamics. We propose a simple yet effective framework that equips standard feedforward classifiers with temporal reasoning, all without modifying model architectures or introducing recurrent modules. At the heart of our approach is a novel Support-Exemplar-Query (SEQ) learning paradigm, which structures training data into temporally coherent trajectories. These trajectories enable the model to learn class-specific temporal prototypes and align prediction sequences via a differentiable soft-DTW loss. A multi-term objective further promotes semantic consistency and temporal smoothness. By interpreting input sequences as evolving feature trajectories, our method introduces a strong temporal inductive bias through loss design alone. This proves highly effective in both static and temporal tasks: it enhances performance on fine-grained and ultra-fine-grained image classification, and delivers precise, temporally consistent predictions in video anomaly detection. Despite its simplicity, our approach bridges static and temporal learning in a modular and data-efficient manner, requiring only a simple classifier on top of pre-extracted features.
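The abstract's core mechanism is aligning a model's prediction sequence to class-specific temporal prototypes with a differentiable soft-DTW loss. The sketch below implements the standard soft-DTW recurrence (Cuturi & Blondel) in NumPy, not the authors' code; the sequence lengths, feature dimension, and smoothing parameter `gamma` are illustrative assumptions:

```python
import numpy as np

def soft_min(values, gamma):
    # Smoothed minimum: -gamma * log(sum(exp(-v / gamma))),
    # computed with the max-shift trick for numerical stability.
    v = np.asarray(values, dtype=float) / -gamma
    m = v.max()
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(x, y, gamma=0.1):
    """Soft-DTW discrepancy between sequences x (n, d) and y (m, d).

    x could be a trajectory of per-frame predictions and y a learned
    class prototype sequence; lower values mean better alignment.
    """
    n, m = len(x), len(y)
    # Pairwise squared Euclidean costs between all time steps.
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Soft-min over the three DTW predecessors keeps the
            # recurrence differentiable in the costs.
            R[i, j] = cost[i - 1, j - 1] + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]
```

Because every operation is smooth, gradients flow from the alignment score back into the classifier's predictions, which is what lets a loss alone impose the temporal inductive bias; in practice the paper would combine this term with the semantic-consistency and smoothness terms of its multi-term objective.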
Related papers
- Towards Foundation Model on Temporal Knowledge Graph Reasoning [17.165969719351125]
Temporal Knowledge Graphs (TKGs) store temporal facts in quadruple format (s, p, o, t). The new model employs sinusoidal positional encodings to capture fine-grained temporal patterns. PostRA demonstrates strong zero-shot performance on unseen temporal knowledge graphs.
arXiv Detail & Related papers (2025-06-04T09:19:49Z) - FreRA: A Frequency-Refined Augmentation for Contrastive Learning on Time Series Classification [56.925103708982164]
We present a novel perspective from the frequency domain and identify three advantages for downstream classification: global, independent, and compact. We propose the lightweight yet effective Frequency Refined Augmentation (FreRA) tailored for time series contrastive learning on classification tasks. FreRA consistently outperforms ten leading baselines on time series classification, anomaly detection, and transfer learning tasks.
arXiv Detail & Related papers (2025-05-29T07:18:28Z) - STRAP: Spatio-Temporal Pattern Retrieval for Out-of-Distribution Generalization [29.10084723132903]
We propose an innovative Spatio-Temporal Retrieval-Augmented Pattern Learning framework, STRAP. During inference, STRAP retrieves relevant patterns from this library based on similarity to the current input and injects them into the model via a plug-and-play prompting mechanism. Experiments across multiple real-world streaming graph datasets show that STRAP consistently outperforms state-of-the-art STGNN baselines on STOOD tasks.
arXiv Detail & Related papers (2025-05-26T06:11:05Z) - StPR: Spatiotemporal Preservation and Routing for Exemplar-Free Video Class-Incremental Learning [79.44594332189018]
Class-Incremental Learning (CIL) seeks to develop models that continuously learn new action categories over time without forgetting previously acquired knowledge. Existing approaches either rely on stored exemplars, raising concerns over memory and privacy, or adapt static image-based methods that neglect temporal modeling. We propose a unified and exemplar-free VCIL framework that explicitly disentangles and preserves spatiotemporal information.
arXiv Detail & Related papers (2025-05-20T06:46:51Z) - Dynamical Diffusion: Learning Temporal Dynamics with Diffusion Models [71.63194926457119]
We introduce Dynamical Diffusion (DyDiff), a theoretically sound framework that incorporates temporally aware forward and reverse processes. Experiments across scientific spatiotemporal forecasting, video prediction, and time series forecasting demonstrate that Dynamical Diffusion consistently improves performance in temporal predictive tasks.
arXiv Detail & Related papers (2025-03-02T16:10:32Z) - Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition [12.522600594024112]
Few-shot action recognition aims at quickly adapting a pre-trained model to novel data.
Key challenges include how to identify and leverage the transferable knowledge learned by the pre-trained model.
We propose CDTD, or Causal Domain-Invariant Temporal Dynamics for knowledge transfer.
arXiv Detail & Related papers (2024-02-20T04:09:58Z) - Temporal Smoothness Regularisers for Neural Link Predictors [8.975480841443272]
We show that a simple method like TNTComplEx can produce significantly more accurate results than state-of-the-art methods.
We also evaluate the impact of a wide range of temporal smoothing regularisers on two state-of-the-art temporal link prediction models.
arXiv Detail & Related papers (2023-09-16T16:52:49Z) - CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection [53.83593870825628]
One main challenge in time series anomaly detection (TSAD) is the lack of labelled data in many real-life scenarios.
Most of the existing anomaly detection methods focus on learning the normal behaviour of unlabelled time series in an unsupervised manner.
We introduce a novel end-to-end self-supervised ContrAstive Representation Learning approach for time series anomaly detection.
arXiv Detail & Related papers (2023-08-18T04:45:56Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent-based models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - ModeRNN: Harnessing Spatiotemporal Mode Collapse in Unsupervised Predictive Learning [75.2748374360642]
We propose ModeRNN, which introduces a novel method to learn hidden structured representations between recurrent states.
Across the entire dataset, different modes result in different responses on the mixtures of slots, which enhances the ability of ModeRNN to build structured representations.
arXiv Detail & Related papers (2021-10-08T03:47:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.