DeepMix: Online Auto Data Augmentation for Robust Visual Object Tracking
- URL: http://arxiv.org/abs/2104.11585v1
- Date: Fri, 23 Apr 2021 13:37:47 GMT
- Authors: Ziyi Cheng and Xuhong Ren and Felix Juefei-Xu and Wanli Xue and Qing
Guo and Lei Ma and Jianjun Zhao
- Abstract summary: DeepMix takes historical samples' embeddings as input and generates augmented embeddings online.
MixNet is an offline-trained network that performs online data augmentation in a single step.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online updating of the object model via samples from historical frames is of
great importance for accurate visual object tracking. Recent works mainly focus
on constructing effective and efficient updating methods while neglecting the
training samples for learning discriminative object models, which are also a key
part of the learning problem. In this paper, we propose DeepMix, which takes
historical samples' embeddings as input and generates augmented embeddings
online, enhancing state-of-the-art online learning methods for visual object
tracking. More specifically, we first propose online data augmentation for
tracking, which augments historical samples online through object-aware
filtering. We then propose MixNet, an offline-trained network that performs
online data augmentation in a single step, enhancing tracking accuracy while
preserving the high speed of state-of-the-art online learning methods.
Extensive experiments on three tracking frameworks, i.e., DiMP, DSiam, and
SiamRPN++, and three large-scale, challenging datasets, i.e., OTB-2015, LaSOT,
and VOT, demonstrate the effectiveness and advantages of the proposed method.
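The abstract does not specify how augmented embeddings are generated. As a hedged illustration of the general idea (not the paper's actual MixNet), the sketch below produces augmented embeddings as random convex combinations of pairs of historical embeddings, mixup-style; the function name, the beta-distributed mixing weights, and the dimensions are all assumptions for illustration.

```python
import numpy as np

def augment_embeddings(history, n_aug=4, alpha=0.5, rng=None):
    """Mixup-style augmentation of historical sample embeddings.

    `history` is an (N, D) array of embeddings collected from past frames.
    Each augmented embedding is a convex combination of two randomly
    chosen historical embeddings. This is a hypothetical stand-in for a
    learned augmentation network, not the method from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(history)
    i = rng.integers(0, n, size=n_aug)             # first partner of each pair
    j = rng.integers(0, n, size=n_aug)             # second partner
    lam = rng.beta(alpha, alpha, size=(n_aug, 1))  # mixing weights in (0, 1)
    return lam * history[i] + (1.0 - lam) * history[j]

# The online object model would then be updated on the union of real and
# augmented embeddings rather than on the historical samples alone.
history = np.random.randn(16, 128)                   # 16 historical samples, 128-D
augmented = augment_embeddings(history, n_aug=4)     # (4, 128)
training_set = np.concatenate([history, augmented])  # (20, 128)
```

The point of operating on embeddings rather than raw image patches is that the augmentation can run in a single forward pass per frame, which is what keeps the online update fast.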
Related papers
- HIPTrack: Visual Tracking with Historical Prompts [37.85656595341516]
We show that by providing a tracker that follows the Siamese paradigm with precise and updated historical information, a significant performance improvement can be achieved.
We build a novel tracker called HIPTrack based on the historical prompt network, which achieves considerable performance improvements without the need to retrain the entire model.
arXiv Detail & Related papers (2023-11-03T17:54:59Z) - AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud
Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43× speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD evidence that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z) - Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z) - Towards Scale Consistent Monocular Visual Odometry by Learning from the
Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z) - Dynamic Template Selection Through Change Detection for Adaptive Siamese
Tracking [7.662745552551165]
Single object tracking (SOT) remains a challenging task in real-world applications due to changes and deformations in a target object's appearance.
We propose a new method for dynamic sample selection and memory replay, preventing template corruption.
Our proposed method can be integrated into any object tracking algorithm that leverages online learning for model adaptation.
arXiv Detail & Related papers (2022-03-07T07:27:02Z) - Recursive Least-Squares Estimator-Aided Online Learning for Visual
Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It allows an in-built memory retention mechanism for the model to remember the knowledge about the object seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z) - Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
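The test-then-train protocol described above can be sketched in a few lines: each incoming batch is evaluated before the model is allowed to learn from it, so accuracy reflects genuinely online performance. The loop and the toy majority-class "model" below are illustrative assumptions, not the benchmark's actual implementation.

```python
from collections import Counter

def online_continual_loop(stream, predict, update):
    """Test-then-train: each batch is evaluated before it is learned."""
    correct = total = 0
    for batch in stream:
        for x, y in batch:          # 1) test on the still-unseen batch
            correct += int(predict(x) == y)
            total += 1
        update(batch)               # 2) only then add it to the training set
    return correct / max(total, 1)  # online accuracy over the stream

# Toy usage: a majority-class "model" (hypothetical, for illustration only).
counts = Counter()
predict = lambda x: counts.most_common(1)[0][0] if counts else 0
update = lambda batch: counts.update(y for _, y in batch)

stream = [[(0, 1), (1, 1)], [(2, 1), (3, 0)]]
acc = online_continual_loop(stream, predict, update)
```

Because every example is scored before it is ever trained on, this protocol measures both adaptation speed and retention under distribution shift, which is what distinguishes it from offline continual learning evaluation.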
arXiv Detail & Related papers (2021-08-20T06:17:20Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.