Multi-Domain Learning and Identity Mining for Vehicle Re-Identification
- URL: http://arxiv.org/abs/2004.10547v2
- Date: Fri, 24 Apr 2020 14:08:35 GMT
- Title: Multi-Domain Learning and Identity Mining for Vehicle Re-Identification
- Authors: Shuting He, Hao Luo, Weihua Chen, Miao Zhang, Yuqi Zhang, Fan Wang,
Hao Li and Wei Jiang
- Abstract summary: This paper introduces our solution for Track2 in the AI City Challenge 2020 (AICITY20).
Track2 is a vehicle re-identification task with both real-world and synthetic data.
With a multiple-model ensemble, our method achieves an mAP score of 0.7322, which yields third place in the competition.
- Score: 38.35753364518881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces our solution for Track2 in the AI City Challenge
2020 (AICITY20). Track2 is a vehicle re-identification (ReID) task with both
real-world and synthetic data. Our solution is based on a strong baseline with
bag of tricks (BoT-BS) proposed for person ReID. First, we propose a
multi-domain learning method to jointly exploit the real-world and synthetic
data to train the model. Then, we propose the Identity Mining method to
automatically generate pseudo labels for a part of the testing data, which
outperforms k-means clustering. A tracklet-level re-ranking strategy with
weighted features is also used to post-process the results. Finally, with a
multiple-model ensemble, our method achieves an mAP score of 0.7322, which
yields third place in the competition. The code is available at
https://github.com/heshuting555/AICITY2020_DMT_VehicleReID.
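The Identity Mining step can be sketched as a two-phase procedure over extracted features: seed mutually distant samples as distinct pseudo identities, then label only the samples that lie unambiguously close to a single seed. This is an illustrative reconstruction, not the authors' released implementation; the function name `identity_mining` and the thresholds `d_far`/`d_near` are assumptions.

```python
import numpy as np

def identity_mining(features, d_far=1.2, d_near=0.6):
    """Sketch of Identity Mining-style pseudo labeling.

    features : (N, D) feature matrix of unlabeled test images.
    d_far    : a sample farther than this from every seed starts a new identity.
    d_near   : a sample closer than this to exactly one seed inherits its label.
    Returns an (N,) array of pseudo labels; -1 means "left unlabeled".
    """
    n = features.shape[0]
    labels = np.full(n, -1, dtype=int)
    seeds = [0]                      # the first sample seeds identity 0
    labels[0] = 0
    # Phase 1: greedily grow the seed set with samples far from all seeds,
    # so each seed is confidently a distinct identity.
    for i in range(1, n):
        dists = np.linalg.norm(features[seeds] - features[i], axis=1)
        if dists.min() > d_far:
            labels[i] = len(seeds)
            seeds.append(i)
    # Phase 2: assign the remaining samples that are near exactly one seed;
    # ambiguous samples (near zero or several seeds) stay unlabeled.
    for i in range(n):
        if labels[i] != -1:
            continue
        dists = np.linalg.norm(features[seeds] - features[i], axis=1)
        near = np.where(dists < d_near)[0]
        if len(near) == 1:
            labels[i] = near[0]
    return labels
```

Unlike k-means, this kind of scheme does not force every sample into a cluster, which is one plausible reason distance-thresholded mining yields cleaner pseudo labels than clustering the whole test set.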
Related papers
- Alice Benchmarks: Connecting Real World Re-Identification with the Synthetic [92.02220105679713]
We introduce the Alice benchmarks, large-scale datasets that provide evaluation protocols to the research community.
Within the Alice benchmarks, two object re-ID tasks are offered: person and vehicle re-ID.
As an important feature of our real-world target, the clusterability of its training set is not manually guaranteed, making it closer to a real domain adaptation test scenario.
arXiv Detail & Related papers (2023-10-06T17:58:26Z)
- SDTracker: Synthetic Data Based Multi-Object Tracking [8.43201092674197]
We present SDTracker, a method that harnesses the potential of synthetic data for multi-object tracking of real-world scenes.
We use the ImageNet dataset as an auxiliary source to randomize the style of the synthetic data.
We also adopt the pseudo-labeling method to effectively utilize the unlabeled MOT17 training data.
arXiv Detail & Related papers (2023-03-26T08:21:22Z)
- 1st Place Solution of The Robust Vision Challenge (RVC) 2022 Semantic Segmentation Track [67.56316745239629]
This report describes the winning solution to the semantic segmentation task of the Robust Vision Challenge at ECCV 2022.
Our method adopts the FAN-B-Hybrid model as the encoder and uses Segformer as the segmentation framework.
The proposed method could serve as a strong baseline for the multi-domain segmentation task and benefit future works.
arXiv Detail & Related papers (2022-10-23T20:52:22Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique for tackling imbalanced learning by generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
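The basic interpolation step behind such over-sampling can be sketched as follows. This is a minimal SMOTE-style illustration only: AutoSMOTE itself learns the sampling decisions with hierarchical reinforcement learning, and the function name `smote_oversample` and its parameters are assumptions for this sketch.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating each
    new point on the segment between a random minority sample and one
    of its k nearest minority-class neighbors.

    X_min : (N, D) minority-class samples.
    n_new : number of synthetic samples to generate.
    """
    rng = np.random.default_rng(seed)
    n = X_min.shape[0]
    # Pairwise distances within the minority class (small N assumed).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbors per sample
    synth = np.empty((n_new, X_min.shape[1]))
    for j in range(n_new):
        i = rng.integers(n)                # pick a minority sample
        nb = nn[i, rng.integers(k)]        # and one of its neighbors
        gap = rng.random()                 # interpolation coefficient in [0, 1)
        synth[j] = X_min[i] + gap * (X_min[nb] - X_min[i])
    return synth
```

Since each synthetic point is a convex combination of two real minority samples, the generated data stays inside the minority class's convex hull rather than duplicating existing points.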
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
- Bag of Tricks for Domain Adaptive Multi-Object Tracking [4.084199842578325]
The proposed method is built from a pre-existing detector and tracker under the tracking-by-detection paradigm.
The tracker we used is an online tracker that merely links newly received detections with existing tracks.
Our method, SIA_Track, takes first place on the MOTSynth2MOT17 track at the BMTT 2022 challenge.
arXiv Detail & Related papers (2022-05-31T08:49:20Z)
- A Free Lunch to Person Re-identification: Learning from Automatically Generated Noisy Tracklets [52.30547023041587]
Unsupervised video-based re-identification (re-ID) methods have been proposed to avoid the high labor cost of annotating re-ID datasets,
but their performance is still far below that of their supervised counterparts.
In this paper, we propose to tackle this problem by learning re-ID models from automatically generated person tracklets.
arXiv Detail & Related papers (2022-04-02T16:18:13Z)
- An Empirical Study of Vehicle Re-Identification on the AI City Challenge [19.13038665501964]
Track2 is a vehicle re-identification (ReID) task with both real-world and synthetic data.
In this challenge, we mainly focus on four points: training data, unsupervised domain-adaptive (UDA) training, post-processing, and model ensembling.
With the aforementioned techniques, our method achieves an mAP score of 0.7445, yielding first place in the competition.
arXiv Detail & Related papers (2021-05-20T12:20:52Z)
- Phonemer at WNUT-2020 Task 2: Sequence Classification Using COVID Twitter BERT and Bagging Ensemble Technique based on Plurality Voting [0.0]
We develop a system that automatically identifies whether an English Tweet related to the novel coronavirus (COVID-19) is informative or not.
Our final approach achieved an F1-score of 0.9037, and we were ranked sixth overall with F1-score as the evaluation criterion.
arXiv Detail & Related papers (2020-10-01T10:54:54Z)
- Chained-Tracker: Chaining Paired Attentive Regression Results for End-to-End Joint Multiple-Object Detection and Tracking [102.31092931373232]
We propose a simple online model named Chained-Tracker (CTracker), which naturally integrates all the three subtasks into an end-to-end solution.
Its two major novelties, the chained structure and paired attentive regression, make CTracker simple, fast, and effective.
arXiv Detail & Related papers (2020-07-29T02:38:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.