Online Descriptor Enhancement via Self-Labelling Triplets for Visual
Data Association
- URL: http://arxiv.org/abs/2011.10471v2
- Date: Thu, 3 Jun 2021 21:12:07 GMT
- Title: Online Descriptor Enhancement via Self-Labelling Triplets for Visual
Data Association
- Authors: Yorai Shaoul, Katherine Liu, Kyel Ok and Nicholas Roy
- Abstract summary: We propose a self-supervised method for incrementally refining visual descriptors to improve performance in the task of object-level visual data association.
Our method optimizes deep descriptor generators online by continuously training a widely available image classification network pre-trained on domain-independent data.
We show that our approach surpasses other visual data-association methods applied to a tracking-by-detection task, and that it provides greater performance gains than other methods that attempt to adapt to observed information.
- Score: 28.03285334702022
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object-level data association is central to robotic applications such as
tracking-by-detection and object-level simultaneous localization and mapping.
While current learned visual data association methods outperform hand-crafted
algorithms, many rely on large collections of domain-specific training examples
that can be difficult to obtain without prior knowledge. Additionally, such
methods often remain fixed during inference-time and do not harness observed
information to better their performance. We propose a self-supervised method
for incrementally refining visual descriptors to improve performance in the
task of object-level visual data association. Our method optimizes deep
descriptor generators online, by continuously training a widely available image
classification network pre-trained with domain-independent data. We show that
earlier layers in the network outperform later-stage layers for the data
association task while also allowing for a 94% reduction in the number of
parameters, enabling the online optimization. We show that self-labelling
challenging triplets--choosing positive examples separated by large temporal
distances and negative examples close in the descriptor space--improves the
quality of the learned descriptors for the multi-object tracking task. Finally,
we demonstrate that our approach surpasses other visual data-association
methods applied to a tracking-by-detection task, and show that it provides
greater performance gains than other methods that attempt to adapt to
observed information.
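The self-labelling rule described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes per-observation descriptors are already available, picks a positive from a temporally distant observation of the same tracked object and a negative from the other-object descriptor closest to the anchor in descriptor space, and scores the triplet with a standard triplet margin loss. All function and variable names here are hypothetical.

```python
import numpy as np

def mine_triplet(track_descs, frame_ids, other_descs, min_temporal_gap=10):
    """Self-label one triplet for a tracked object.

    track_descs: (T, D) descriptors of the same tracked object over time.
    frame_ids:   (T,) frame index of each observation (ascending).
    other_descs: (M, D) descriptors of other objects (the negative pool).
    """
    anchor = track_descs[-1]
    # Positive: an earlier observation of the same object separated from
    # the anchor by a large temporal distance.
    gaps = frame_ids[-1] - frame_ids[:-1]
    far = np.flatnonzero(gaps >= min_temporal_gap)
    pos_idx = far[0] if len(far) else 0  # oldest qualifying observation
    positive = track_descs[pos_idx]
    # Negative: the other-object descriptor closest to the anchor in
    # descriptor space, i.e. a "hard" negative.
    dists = np.linalg.norm(other_descs - anchor, axis=1)
    negative = other_descs[np.argmin(dists)]
    return anchor, positive, negative

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on L2 distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

In the paper this loss would drive online gradient updates of the descriptor generator; the functions above only illustrate how triplets could be self-labelled without ground-truth association labels.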
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z) - Reinforcement Learning Based Multi-modal Feature Fusion Network for
Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z) - ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z) - TrueDeep: A systematic approach of crack detection with less data [0.0]
We show that by incorporating domain knowledge along with deep learning architectures, we can achieve similar performance with less data.
Our algorithms, developed with 23% of the overall data, have a similar performance on the test data and significantly better performance on multiple blind datasets.
arXiv Detail & Related papers (2023-05-30T14:51:58Z) - S$^3$Track: Self-supervised Tracking with Soft Assignment Flow [45.77333923477176]
We study self-supervised multiple object tracking without using any video-level association labels.
We propose differentiable soft object assignment for object association.
We evaluate our proposed model on the KITTI, nuScenes, and Argoverse datasets.
arXiv Detail & Related papers (2023-05-17T06:25:40Z) - Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - GAN-Supervised Dense Visual Alignment [95.37027391102684]
We propose GAN-Supervised Learning, a framework for learning discriminative models and their GAN-generated training data jointly end-to-end.
Inspired by the classic Congealing method, our GANgealing algorithm trains a Spatial Transformer to map random samples from a GAN trained on unaligned data to a common, jointly-learned target mode.
arXiv Detail & Related papers (2021-12-09T18:59:58Z) - Segment as Points for Efficient Online Multi-Object Tracking and
Segmentation [66.03023110058464]
We propose a highly effective method for learning instance embeddings based on segments by converting the compact image representation to un-ordered 2D point cloud representation.
Our method generates a new tracking-by-points paradigm where discriminative instance embeddings are learned from randomly selected points rather than images.
The resulting online MOTS framework, named PointTrack, surpasses all the state-of-the-art methods by large margins.
arXiv Detail & Related papers (2020-07-03T08:29:35Z) - Self-supervised Video Object Segmentation [76.83567326586162]
The objective of this paper is self-supervised representation learning, with the goal of solving semi-supervised video object segmentation (a.k.a. dense tracking).
We make the following contributions: (i) we propose to improve the existing self-supervised approach with a simple, yet more effective memory mechanism for long-term correspondence matching; (ii) by augmenting the self-supervised approach with an online adaptation module, our method successfully alleviates tracker drifts caused by spatial-temporal discontinuity; (iii) we demonstrate state-of-the-art results among the self-supervised approaches on DAVIS-2017 and YouTube
arXiv Detail & Related papers (2020-06-22T17:55:59Z) - Object-Adaptive LSTM Network for Real-time Visual Tracking with
Adversarial Data Augmentation [31.842910084312265]
We propose a novel real-time visual tracking method, which adopts an object-adaptive LSTM network to effectively capture the video sequential dependencies and adaptively learn the object appearance variations.
Experiments on four visual tracking benchmarks demonstrate the state-of-the-art performance of our method in terms of both tracking accuracy and speed.
arXiv Detail & Related papers (2020-02-07T03:06:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.