TempoFormer: A Transformer for Temporally-aware Representations in Change Detection
- URL: http://arxiv.org/abs/2408.15689v2
- Date: Tue, 22 Oct 2024 10:49:57 GMT
- Title: TempoFormer: A Transformer for Temporally-aware Representations in Change Detection
- Authors: Talia Tseriotou, Adam Tsakalidis, Maria Liakata
- Abstract summary: We introduce TempoFormer, the first task-agnostic transformer-based and temporally-aware model for dynamic representation learning.
Our approach is jointly trained on inter and intra context dynamics and introduces a novel temporal variation of rotary positional embeddings.
We show new SOTA performance on three different real-time change detection tasks.
- Score: 12.063146420389371
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic representation learning plays a pivotal role in understanding the evolution of linguistic content over time. On this front both context and time dynamics as well as their interplay are of prime importance. Current approaches model context via pre-trained representations, which are typically temporally agnostic. Previous work on modelling context and temporal dynamics has used recurrent methods, which are slow and prone to overfitting. Here we introduce TempoFormer, the first task-agnostic transformer-based and temporally-aware model for dynamic representation learning. Our approach is jointly trained on inter and intra context dynamics and introduces a novel temporal variation of rotary positional embeddings. The architecture is flexible and can be used as the temporal representation foundation of other models or applied to different transformer-based architectures. We show new SOTA performance on three different real-time change detection tasks.
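The abstract's central mechanism, a temporal variation of rotary positional embeddings, is not specified in detail on this page. As an illustration only, the sketch below (plain Python; the name `temporal_rope` is hypothetical) rotates consecutive feature pairs by angles driven by real-valued timestamps rather than integer token positions — one natural way to make rotary embeddings time-aware. The paper's actual formulation may differ.

```python
import math

def temporal_rope(x, timestamps, base=10000.0):
    """Rotate consecutive feature pairs of each vector in x by angles
    driven by real-valued (possibly irregular) timestamps rather than
    integer token positions.

    x          : list of feature vectors, each of even length d
    timestamps : list of event times, one per vector
    """
    d = len(x[0])
    assert d % 2 == 0, "feature dimension must be even"
    # Standard RoPE frequency spectrum over half the dimensions.
    freqs = [base ** (-(2 * i) / d) for i in range(d // 2)]
    out = []
    for vec, t in zip(x, timestamps):
        rotated = [0.0] * d
        for i, f in enumerate(freqs):
            a, b = vec[2 * i], vec[2 * i + 1]
            c, s = math.cos(t * f), math.sin(t * f)
            # 2D rotation of the (a, b) pair by angle t * f.
            rotated[2 * i] = a * c - b * s
            rotated[2 * i + 1] = a * s + b * c
        out.append(rotated)
    return out
```

Because each pair undergoes a pure rotation, vector norms are preserved, and a timestamp of zero leaves the input unchanged — the same invariants as standard RoPE, just parameterized by wall-clock time.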
Related papers
- Temporal Graph Pattern Machine [17.352525018007473]
Temporal Graph Pattern Machine (TGPM) conceptualizes each interaction as an interaction patch synthesized via temporally-biased random walks.
TGPM consistently achieves state-of-the-art performance in both transductive and inductive link prediction.
arXiv Detail & Related papers (2026-01-30T01:46:13Z)
- Learning Time in Static Classifiers [44.358377952850994]
We propose a simple yet effective framework that equips standard feedforward classifiers with temporal reasoning.
We use a novel Support-Exemplar-Query (SEQ) learning paradigm, which structures training data into temporally coherent trajectories.
Our approach bridges static and temporal learning in a modular and data-efficient manner, requiring only a simple addition on top of pre-extracted features.
arXiv Detail & Related papers (2025-11-15T18:42:51Z)
- The Birth of Knowledge: Emergent Features across Time, Space, and Scale in Large Language Models [3.541570601342306]
This paper studies the emergence of interpretable categorical features within large language models (LLMs).
Using sparse autoencoders for mechanistic interpretability, we identify when and where specific semantic concepts emerge within neural activations.
arXiv Detail & Related papers (2025-05-26T02:59:54Z)
- UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce Unistage, a unified Transformer-based framework for spatio-temporal modeling.
Our work demonstrates that a task-specific vision-text model can build a generalizable model for spatio-temporal learning.
We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z)
- Community-Aware Temporal Walks: Parameter-Free Representation Learning on Continuous-Time Dynamic Graphs [3.833708891059351]
Community-aware Temporal Walks (CTWalks) is a novel framework for representation learning on continuous-time dynamic graphs.
CTWalks integrates a community-based parameter-free temporal walk sampling mechanism, an anonymization strategy enriched with community labels, and an encoding process.
Experiments on benchmark datasets demonstrate that CTWalks outperforms established methods in temporal link prediction tasks.
arXiv Detail & Related papers (2025-01-21T04:16:46Z)
- Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework [2.8470354623829577]
We develop a framework based on Concept Bottleneck Models to enforce interpretability of time series Transformers.
We modify the training objective to encourage a model to develop representations similar to predefined interpretable concepts.
We find that the model performance remains mostly unaffected, while the model shows much improved interpretability.
arXiv Detail & Related papers (2024-10-08T14:22:40Z)
- Semantic-Guided Multimodal Sentiment Decoding with Adversarial Temporal-Invariant Learning [22.54577327204281]
Multimodal sentiment analysis aims to learn representations from different modalities to identify human emotions.
Existing works often neglect the frame-level redundancy inherent in continuous time series, resulting in incomplete modality representations with noise.
We propose temporal-invariant learning for the first time, which constrains the distributional variations over time steps to effectively capture long-term temporal dynamics.
arXiv Detail & Related papers (2024-08-30T03:28:40Z)
- Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attentions.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z)
- Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories rather than just next-step predictions.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z)
- TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z)
- Latent State Models of Training Dynamics [51.88132043461152]
We train models with different random seeds and compute a variety of metrics throughout training.
We then fit a hidden Markov model (HMM) over the resulting sequences of metrics.
We use the HMM representation to study phase transitions and identify latent "detour" states that slow down convergence.
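The pipeline described above ends with decoding each training run's metric sequence into latent states. As a hedged sketch only — the paper's HMM would be fitted by EM (e.g. via a library such as hmmlearn), and the log-probabilities below are hypothetical stand-ins — a plain-Python Viterbi decoder recovers the most likely state path:

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely hidden-state path for a fitted HMM, in log-space.

    obs_loglik[t][s] : log-likelihood of the metrics at step t under state s
    log_trans[p][s]  : log-probability of transitioning from state p to s
    log_init[s]      : log-probability of starting in state s
    """
    n_states = len(log_init)
    dp = [log_init[s] + obs_loglik[0][s] for s in range(n_states)]
    back = []
    for t in range(1, len(obs_loglik)):
        ptrs, nxt = [], []
        for s in range(n_states):
            # Best predecessor state for arriving in s at step t.
            best = max(range(n_states), key=lambda p: dp[p] + log_trans[p][s])
            ptrs.append(best)
            nxt.append(dp[best] + log_trans[best][s] + obs_loglik[t][s])
        back.append(ptrs)
        dp = nxt
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda s: dp[s])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return path[::-1]

# Toy example: two "sticky" states; observations favor state 0, then state 1.
ln = math.log
log_init = [ln(0.5), ln(0.5)]
log_trans = [[ln(0.9), ln(0.1)], [ln(0.1), ln(0.9)]]
obs = [[0.0, -5.0], [0.0, -5.0], [-5.0, 0.0], [-5.0, 0.0]]
print(viterbi(obs, log_trans, log_init))  # [0, 0, 1, 1]
```

In the paper's setting, a decoded path that lingers in a low-progress state before reaching the converging state would be one way to surface the latent "detour" states mentioned above.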
arXiv Detail & Related papers (2023-08-18T13:20:08Z)
- TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations [3.8357850372472915]
This paper contributes a novel visual analytics framework, namely TimeTuner, to help analysts understand how model behaviors are associated with localized, stationarity, and correlations of time-series representations.
We show that TimeTuner can help characterize time-series representations and guide the feature engineering processes.
arXiv Detail & Related papers (2023-07-19T11:40:15Z)
- Cross-Modality Time-Variant Relation Learning for Generating Dynamic Scene Graphs [16.760066844287046]
We propose a Time-variant Relation-aware TRansformer (TR$^2$) to model the temporal change of relations in dynamic scene graphs.
We show that TR$^2$ significantly outperforms previous state-of-the-art methods under two different settings.
arXiv Detail & Related papers (2023-05-15T10:30:38Z)
- FormerTime: Hierarchical Multi-Scale Representations for Multivariate Time Series Classification [53.55504611255664]
FormerTime is a hierarchical representation model for improving the classification capacity for the multivariate time series classification task.
It exhibits three aspects of merit: (1) learning hierarchical multi-scale representations from time series data, (2) inheriting the strengths of both transformers and convolutional networks, and (3) tackling the efficiency challenges incurred by the self-attention mechanism.
arXiv Detail & Related papers (2023-02-20T07:46:14Z)
- Variational Predictive Routing with Nested Subjective Timescales [1.6114012813668934]
We present Variational Predictive Routing (VPR) - a neural inference system that organizes latent video features in a temporal hierarchy.
We show that VPR is able to detect event boundaries, disentangle temporal features, adapt to the dynamics hierarchy of the data, and produce accurate time-agnostic rollouts of the future.
arXiv Detail & Related papers (2021-10-21T16:12:59Z)
- TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
- Learning Temporal Dynamics from Cycles in Narrated Video [85.89096034281694]
We propose a self-supervised solution to the problem of learning to model how the world changes as time elapses.
Our model learns modality-agnostic functions to predict forward and backward in time, which must undo each other when composed.
We apply the learned dynamics model without further training to various tasks, such as predicting future action and temporally ordering sets of images.
arXiv Detail & Related papers (2021-01-07T02:41:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.