Dynamic Attention Mechanism in Spatiotemporal Memory Networks for Object Tracking
- URL: http://arxiv.org/abs/2503.16768v1
- Date: Fri, 21 Mar 2025 00:48:31 GMT
- Title: Dynamic Attention Mechanism in Spatiotemporal Memory Networks for Object Tracking
- Authors: Meng Zhou, Jiadong Xie, Mingsheng Xu
- Abstract summary: We propose a differentiable dynamic attention mechanism that adaptively adjusts channel-spatial attention weights by analyzing spatiotemporal correlations between the templates and memory features. A lightweight gating network autonomously allocates computational resources based on target motion states, prioritizing high-discriminability features in challenging scenarios.
- Score: 8.040709469401257
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mainstream visual object tracking frameworks predominantly rely on template matching paradigms. Their performance heavily depends on the quality of template features, which becomes increasingly challenging to maintain in complex scenarios involving target deformation, occlusion, and background clutter. While existing spatiotemporal memory-based trackers emphasize memory capacity expansion, they lack effective mechanisms for dynamic feature selection and adaptive fusion. To address this gap, we propose a Dynamic Attention Mechanism in Spatiotemporal Memory Network (DASTM) with two key innovations: 1) A differentiable dynamic attention mechanism that adaptively adjusts channel-spatial attention weights by analyzing spatiotemporal correlations between the templates and memory features; 2) A lightweight gating network that autonomously allocates computational resources based on target motion states, prioritizing high-discriminability features in challenging scenarios. Extensive evaluations on OTB-2015, VOT 2018, LaSOT, and GOT-10K benchmarks demonstrate our DASTM's superiority, achieving state-of-the-art performance in success rate, robustness, and real-time efficiency, thereby offering a novel solution for real-time tracking in complex environments.
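The abstract names two components but this listing carries no implementation. As a rough, hypothetical illustration of how they could fit together, the minimal PyTorch sketch below reweights memory features along channel and spatial axes from a template-memory correlation and fuses them through a scalar gate; every module name, tensor shape, the element-wise correlation proxy, and the motion-state gating heuristic are assumptions for illustration, not the authors' DASTM code.

```python
# Minimal sketch of the two components the abstract describes:
# (1) dynamic channel-spatial attention driven by template/memory correlation;
# (2) a lightweight gate conditioned on a motion-state descriptor.
# All names, shapes, and heuristics here are illustrative assumptions.
import torch
import torch.nn as nn


class DynamicChannelSpatialAttention(nn.Module):
    """Reweights memory features along channel and spatial axes, conditioned
    on their correlation with the template features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel branch: pooled correlation -> per-channel weights.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial branch: channel-pooled correlation map -> per-pixel weights.
        self.spatial_conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, template: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        corr = template * memory                              # (B, C, H, W) correlation proxy
        channel_w = torch.sigmoid(
            self.channel_fc(corr.mean(dim=(2, 3)))            # (B, C)
        ).unsqueeze(-1).unsqueeze(-1)                         # (B, C, 1, 1)
        spatial_w = torch.sigmoid(
            self.spatial_conv(corr.mean(dim=1, keepdim=True)) # (B, 1, H, W)
        )
        return memory * channel_w * spatial_w


class LightweightGate(nn.Module):
    """Maps a cheap motion descriptor (e.g. velocity, scale change) to a
    scalar in [0, 1] controlling how much attended memory is fused."""

    def __init__(self, motion_dim: int = 4, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, motion_state: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(motion_state))          # (B, 1)


if __name__ == "__main__":
    B, C, H, W = 2, 64, 16, 16
    attn = DynamicChannelSpatialAttention(C)
    gate = LightweightGate()
    template, memory = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
    motion = torch.randn(B, 4)                                # hypothetical motion descriptor
    g = gate(motion).view(B, 1, 1, 1)
    fused = template + g * attn(template, memory)             # gated adaptive fusion
    print(fused.shape)                                        # torch.Size([2, 64, 16, 16])
```

In this sketch the gate's output simply scales how strongly the attended memory is fused into the template; a faithful reproduction of DASTM's resource allocation would need the paper's actual gating inputs and training objective.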
Related papers
- Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification [3.5093938502961763]
Hyperspectral image (HSI) classification remains a challenging task due to the intricate spatial-spectral correlations.
Existing transformer models excel in capturing long-range dependencies but often suffer from information redundancy and attention inefficiencies.
MemFormer introduces a memory-enhanced multi-head attention mechanism that iteratively refines a dynamic memory module.
A dynamic memory enrichment strategy progressively captures complex spatial and spectral dependencies, leading to more expressive feature representations.
arXiv Detail & Related papers (2025-04-17T17:43:34Z)
- Spiking Meets Attention: Efficient Remote Sensing Image Super-Resolution with Attention Spiking Neural Networks [57.17129753411926]
Spiking neural networks (SNNs) are emerging as a promising alternative to traditional artificial neural networks (ANNs).
We propose SpikeSR, which achieves state-of-the-art performance across various remote sensing benchmarks such as AID, DOTA, and DIOR.
arXiv Detail & Related papers (2025-03-06T09:06:06Z)
- Mathematical Formalism for Memory Compression in Selective State Space Models [0.0]
State space models (SSMs) have emerged as a powerful framework for modelling long-range dependencies in sequence data.
We develop a rigorous mathematical framework for understanding memory compression in selective state space models.
We show that selective SSMs offer significant improvements in memory efficiency and processing speed compared to traditional RNN-based models.
arXiv Detail & Related papers (2024-10-04T05:45:48Z)
- Context-Conditioned Spatio-Temporal Predictive Learning for Reliable V2V Channel Prediction [25.688521281119037]
Vehicle-to-Vehicle (V2V) channel state information (CSI) prediction is challenging and crucial for optimizing downstream tasks.
Traditional prediction approaches focus on four-dimensional (4D) CSI, which includes predictions over time, bandwidth, and antenna (TX and RX) space.
We propose a novel context-conditioned spatio-temporal predictive learning method to capture dependencies within 4D CSI data.
arXiv Detail & Related papers (2024-09-16T04:15:36Z)
- Unsupervised Representation Learning of Complex Time Series for Maneuverability State Identification in Smart Mobility [0.0]
In smart mobility, multivariate time series (MTS) data play a crucial role in providing temporal dynamics of behaviors such as maneuver patterns.
In this work, we aim to address challenges associated with modeling MTS data collected from a vehicle using sensors.
Our goal is to investigate the effectiveness of two distinct unsupervised representation learning approaches in identifying maneuvering states in smart mobility.
arXiv Detail & Related papers (2024-08-26T15:16:18Z)
- Temporal Feature Matters: A Framework for Diffusion Model Quantization [105.3033493564844]
Diffusion models rely on the time-step for multi-round denoising.
We introduce a novel quantization framework that includes three strategies.
This framework preserves most of the temporal information and ensures high-quality end-to-end generation.
arXiv Detail & Related papers (2024-07-28T17:46:15Z)
- TF-SASM: Training-free Spatial-aware Sparse Memory for Multi-object Tracking [6.91631684487121]
Multi-object tracking (MOT) in computer vision remains a significant challenge, requiring precise localization and continuous tracking of multiple objects in video sequences.
We propose a novel memory-based approach that selectively stores critical features based on object motion and overlap awareness.
Our approach significantly improves over MOTRv2 on the DanceTrack test set, demonstrating gains of 2.0% in AssA and 2.1% in IDF1.
arXiv Detail & Related papers (2024-07-05T07:55:19Z)
- Understanding Self-attention Mechanism via Dynamical System Perspective [58.024376086269015]
Self-attention mechanism (SAM) is widely used in various fields of artificial intelligence.
We show that the intrinsic stiffness phenomenon (SP) in high-precision solutions of ordinary differential equations (ODEs) also widely exists in high-performance neural networks (NNs).
We show that the SAM is also a stiffness-aware step size adaptor that can enhance the model's representational ability to measure intrinsic SP.
arXiv Detail & Related papers (2023-08-19T08:17:41Z)
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones (a hypothetical sketch of the layer-sharing idea appears after this list).
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
- Pessimism meets VCG: Learning Dynamic Mechanism Design via Offline Reinforcement Learning [114.36124979578896]
We design a dynamic mechanism using offline reinforcement learning algorithms.
Our algorithm is based on the pessimism principle and only requires a mild assumption on the coverage of the offline data set.
arXiv Detail & Related papers (2022-05-05T05:44:26Z)
- Online Multiple Object Tracking with Cross-Task Synergy [120.70085565030628]
We propose a novel unified model with synergy between position prediction and embedding association.
The two tasks are linked by temporal-aware target attention and distractor attention, as well as an identity-aware memory aggregation model.
arXiv Detail & Related papers (2021-04-01T10:19:40Z)
- Learn to cycle: Time-consistent feature discovery for action recognition [83.43682368129072]
Generalizing over temporal variations is a prerequisite for effective action recognition in videos.
We introduce Squeeze and Recursion Temporal Gates (SRTG), an approach that favors temporal activations with potential variations.
We show consistent improvement when using SRTG blocks, with only a minimal increase in the number of GFLOPs.
arXiv Detail & Related papers (2020-06-15T09:36:28Z)
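As flagged in the DIA entry above, the following is a minimal, hypothetical sketch of its layer-sharing idea: a single squeeze-style attention module reused across backbone layers, with an LSTM cell carrying attention state between them. The module structure, names, and shapes are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the layer-sharing idea in the DIA entry above:
# one attention module reused at every backbone layer, with an LSTM cell
# recurrently refining the attention state across layers. Illustrative only.
import torch
import torch.nn as nn


class SharedChannelAttention(nn.Module):
    """A single squeeze-excite-style attention module shared by all layers,
    with an LSTMCell that evolves the squeezed descriptor across layers."""

    def __init__(self, channels: int):
        super().__init__()
        self.lstm = nn.LSTMCell(channels, channels)
        self.proj = nn.Linear(channels, channels)

    def forward(self, x, state=None):
        b, c, _, _ = x.shape
        squeezed = x.mean(dim=(2, 3))             # (B, C) global descriptor
        h, cstate = self.lstm(squeezed, state)    # recurrent refinement across layers
        weights = torch.sigmoid(self.proj(h)).view(b, c, 1, 1)
        return x * weights, (h, cstate)           # reweighted map + carried state


if __name__ == "__main__":
    attn = SharedChannelAttention(32)
    state = None
    x = torch.randn(2, 32, 8, 8)
    for _ in range(3):                            # same module reused at 3 "layers"
        x, state = attn(x, state)
    print(x.shape)                                # torch.Size([2, 32, 8, 8])
```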