Global Distance-distributions Separation for Unsupervised Person
Re-identification
- URL: http://arxiv.org/abs/2006.00752v3
- Date: Fri, 10 Jul 2020 09:27:59 GMT
- Title: Global Distance-distributions Separation for Unsupervised Person
Re-identification
- Authors: Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen
- Abstract summary: Existing unsupervised ReID approaches often fail to correctly identify positive and negative samples through distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvements over the baselines and achieves state-of-the-art performance.
- Score: 93.39253443415392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised person re-identification (ReID) often has poor scalability and
usability in real-world deployments due to domain gaps and the lack of
annotations for the target domain data. Unsupervised person ReID through domain
adaptation is attractive yet challenging. Existing unsupervised ReID approaches
often fail to correctly identify positive and negative samples through
distance-based matching/ranking. The two distributions of distances
for positive sample pairs (Pos-distr) and negative sample pairs (Neg-distr) are
often not well separated, having large overlap. To address this problem, we
introduce a global distance-distributions separation (GDS) constraint over the
two distributions to encourage the clear separation of positive and negative
samples from a global view. We model the two global distance distributions as
Gaussian distributions and push apart the two distributions while encouraging
their sharpness in the unsupervised training process. Particularly, to model
the distributions from a global view and facilitate the timely updating of the
distributions and the GDS related losses, we leverage a momentum update
mechanism for building and maintaining the distribution parameters (mean and
variance) and calculate the loss on the fly during the training.
Distribution-based hard mining is proposed to further promote the separation of
the two distributions. We validate the effectiveness of the GDS constraint in
unsupervised ReID networks. Extensive experiments on multiple ReID benchmark
datasets show that our method leads to significant improvements over the
baselines and achieves state-of-the-art performance.
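The momentum-based maintenance of the two global distance distributions described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, the margin value, and the exact form of the separation term are assumptions, and the hard-mining step is omitted.

```python
import math

class GDSConstraint:
    """Sketch of a global distance-distributions separation (GDS) loss.

    Keeps running Gaussian parameters (mean, variance) of the positive-pair
    (Pos-distr) and negative-pair (Neg-distr) distance distributions via
    momentum updates, then penalizes overlap between the two Gaussians.
    """

    def __init__(self, momentum=0.9, margin=0.3):
        self.m = momentum
        self.margin = margin
        # Running (mean, variance) for Pos-distr and Neg-distr.
        self.pos = [0.0, 1.0]
        self.neg = [1.0, 1.0]

    def _update(self, params, distances):
        # Batch statistics of the current pairwise distances.
        mean = sum(distances) / len(distances)
        var = sum((d - mean) ** 2 for d in distances) / len(distances)
        # The momentum update keeps the parameters global (accumulated
        # across batches) while the loss is still computed on the fly.
        params[0] = self.m * params[0] + (1 - self.m) * mean
        params[1] = self.m * params[1] + (1 - self.m) * var

    def __call__(self, pos_dist, neg_dist):
        self._update(self.pos, pos_dist)
        self._update(self.neg, neg_dist)
        mu_p, var_p = self.pos
        mu_n, var_n = self.neg
        # Push the negative-distance mean above the positive-distance mean
        # by more than one standard deviation of each distribution plus a
        # margin; shrinking the variances (sharpness) also lowers the loss.
        gap = (mu_n - mu_p) - (math.sqrt(var_p) + math.sqrt(var_n))
        return max(0.0, self.margin - gap)
```

With batches of well-separated distances the loss decays toward zero as the running means pull apart and the variances shrink, while overlapping distributions keep it positive.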
Related papers
- Improving Distribution Alignment with Diversity-based Sampling [0.0]
Domain shifts are ubiquitous in machine learning, and can substantially degrade a model's performance when deployed to real-world data.
This paper proposes to improve these estimates by inducing diversity in each sampled minibatch.
It simultaneously balances the data and reduces the variance of the gradients, thereby enhancing the model's generalisation ability.
arXiv Detail & Related papers (2024-10-05T17:26:03Z) - Feature-Distribution Perturbation and Calibration for Generalized Person
ReID [47.84576229286398]
Person re-identification (ReID) has advanced remarkably over the last 10 years alongside the rapid development of deep learning for visual recognition.
We propose a feature-distribution PErturbation and CAlibration (PECA) method to derive generic feature representations for person ReID.
arXiv Detail & Related papers (2022-05-23T11:06:12Z) - Investigating Shifts in GAN Output-Distributions [5.076419064097734]
We introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data.
Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms.
arXiv Detail & Related papers (2021-12-28T09:16:55Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Long-tailed Distribution Adaptation [47.21518849423836]
We formulate long-tailed recognition as domain adaptation (LDA), modeling the long-tailed distribution as an unbalanced domain and the general distribution as a balanced domain.
We propose to jointly optimize empirical risks of the unbalanced and balanced domains and approximate their domain divergence by intra-class and inter-class distances.
Experiments on benchmark datasets for image recognition, object detection, and instance segmentation validate that our LDA approach achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-10-06T12:15:22Z) - Decentralized Local Stochastic Extra-Gradient for Variational
Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that achieves better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.