Adaptive Sparse Pairwise Loss for Object Re-Identification
- URL: http://arxiv.org/abs/2303.18247v1
- Date: Fri, 31 Mar 2023 17:59:44 GMT
- Title: Adaptive Sparse Pairwise Loss for Object Re-Identification
- Authors: Xiao Zhou, Yujie Zhong, Zhen Cheng, Fan Liang, Lin Ma
- Abstract summary: Pairwise losses play an important role in training a strong ReID network.
We propose a novel loss paradigm termed Sparse Pairwise (SP) loss.
We show that SP loss and its adaptive variant AdaSP loss outperform other pairwise losses.
- Score: 25.515107212575636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object re-identification (ReID) aims to find instances with the same identity
as the given probe from a large gallery. Pairwise losses play an important role
in training a strong ReID network. Existing pairwise losses densely exploit
each instance as an anchor and sample its triplets in a mini-batch. This dense
sampling mechanism inevitably introduces positive pairs that share few visual
similarities, which can be harmful to the training. To address this problem, we
propose a novel loss paradigm termed Sparse Pairwise (SP) loss that only
leverages a few appropriate pairs for each class in a mini-batch, and empirically
demonstrate that this is sufficient for ReID tasks. Based on the proposed
loss framework, we propose an adaptive positive mining strategy that can
dynamically adapt to diverse intra-class variations. Extensive experiments show
that SP loss and its adaptive variant AdaSP loss outperform other pairwise
losses, and achieve state-of-the-art performance across several ReID
benchmarks. Code is available at https://github.com/Astaxanthin/AdaSP.
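As a rough illustration only (not the authors' released implementation), a sparse pairwise loss in this spirit might select one hardest positive pair and one hardest negative per identity in the mini-batch. In the sketch below, the function name, the hardest-pair mining rule, and the soft-margin formulation are all assumptions made for illustration:

```python
import torch
import torch.nn.functional as F

def sparse_pairwise_loss(features, labels):
    """Illustrative sparse pairwise loss (not the official AdaSP code):
    one hardest positive pair and one hardest negative per identity."""
    features = F.normalize(features, dim=1)    # unit-length embeddings
    dist = torch.cdist(features, features)     # pairwise Euclidean distances
    losses = []
    for c in labels.unique():
        pos = (labels == c).nonzero(as_tuple=True)[0]
        neg = (labels != c).nonzero(as_tuple=True)[0]
        if pos.numel() < 2 or neg.numel() == 0:
            continue                           # need a positive pair and a negative
        hardest_pos = dist[pos][:, pos].max()  # most distant intra-class pair
        hardest_neg = dist[pos][:, neg].min()  # closest inter-class pair
        # Soft-margin hinge, borrowed from soft triplet losses (an assumed choice).
        losses.append(F.softplus(hardest_pos - hardest_neg))
    if not losses:                             # degenerate mini-batch
        return features.new_zeros(())
    return torch.stack(losses).mean()

# Toy usage: 32 embeddings drawn from 8 identities.
feats = torch.randn(32, 128)
ids = torch.randint(0, 8, (32,))
print(sparse_pairwise_loss(feats, ids))
```

The adaptive variant described above would replace the fixed hardest-pair rule with a mining rule that adjusts to each class's intra-class variation; see the paper and repository for the exact formulation.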
Related papers
- Learning Compact Features via In-Training Representation Alignment [19.273120635948363]
At each iteration, the true gradient of the loss function is estimated using a mini-batch sampled from the training set.
We propose In-Training Representation Alignment (ITRA) that explicitly aligns feature distributions of two different mini-batches with a matching loss.
We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning.
arXiv Detail & Related papers (2022-11-23T22:23:22Z)
- Unified Loss of Pair Similarity Optimization for Vision-Language Retrieval [35.141916376979836]
There are two popular loss functions used for vision-language retrieval, i.e., triplet loss and contrastive learning loss.
This paper proposes a unified loss of pair similarity optimization for vision-language retrieval.
arXiv Detail & Related papers (2022-09-28T07:01:22Z)
- Pairwise Learning via Stagewise Training in Proximal Setting [0.0]
We combine adaptive sample size and importance sampling techniques for pairwise learning, with convergence guarantees for nonsmooth convex pairwise loss functions.
We demonstrate that sampling opposite instances at each iteration reduces the variance of the gradient and hence accelerates convergence.
arXiv Detail & Related papers (2022-08-08T11:51:01Z)
- Benchmarking Deep Models for Salient Object Detection [67.07247772280212]
We construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods.
In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on others.
We propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals.
arXiv Detail & Related papers (2022-02-07T03:43:16Z)
- A Unified Framework of Surrogate Loss by Refactoring and Interpolation [65.60014616444623]
We introduce UniLoss, a unified framework to generate surrogate losses for training deep networks with gradient descent.
We validate the effectiveness of UniLoss on three tasks and four datasets.
arXiv Detail & Related papers (2020-07-27T21:16:51Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
- An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay [72.23433407017558]
We show that any loss function evaluated with non-uniformly sampled data can be transformed into another uniformly sampled loss function.
Surprisingly, we find that in some environments PER can be replaced entirely by this new loss function with no impact on empirical performance.
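The equivalence reduces to importance reweighting: the expectation of a loss under a sampling distribution p over N samples equals the uniform expectation of the same loss scaled by N * p_i. A toy numerical check of that identity (plain NumPy, made-up loss values, expectations rather than gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
losses = rng.random(N)        # toy per-sample losses
p = rng.random(N)
p /= p.sum()                  # a non-uniform sampling distribution

# Expected loss when sampling index i with probability p_i ...
nonuniform = np.sum(p * losses)
# ... equals the uniform-sampling expectation of the reweighted loss N * p_i * L_i.
reweighted = np.mean(N * p * losses)

assert np.isclose(nonuniform, reweighted)
```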
arXiv Detail & Related papers (2020-07-12T17:45:24Z)
- Beyond Triplet Loss: Meta Prototypical N-tuple Loss for Person Re-identification [118.72423376789062]
We introduce a multi-class classification loss, i.e., N-tuple loss, to jointly consider multiple (N) instances for per-query optimization.
With the multi-class classification incorporated, our model achieves the state-of-the-art performance on the benchmark person ReID datasets.
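As background, a plain N-tuple loss can be read as softmax classification over the N candidate instances. A minimal sketch of that base form (the function name and cosine-similarity logits are illustrative assumptions; the meta prototypical extension is omitted):

```python
import torch
import torch.nn.functional as F

def n_tuple_loss(query, candidates, target):
    """Sketch of a base N-tuple loss: identify which of the N candidate
    instances shares the query's identity via softmax cross-entropy."""
    logits = F.normalize(query, dim=0) @ F.normalize(candidates, dim=1).T
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

q = torch.randn(128)                    # query embedding
cands = torch.randn(8, 128)             # N = 8 instances, one per identity
print(n_tuple_loss(q, cands, torch.tensor(0)))  # candidate 0 is the true match
```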
arXiv Detail & Related papers (2020-06-08T23:34:08Z)
- Adaptive Adversarial Logits Pairing [65.51670200266913]
Models trained with Adversarial Logits Pairing (ALP), an adversarial training solution, tend to rely on fewer high-contribution features than vulnerable models do.
Motivated by these observations, we design an Adaptive Adversarial Logits Pairing (AALP) solution by modifying the training process and training target of ALP.
AALP consists of an adaptive feature optimization module with Guided Dropout to systematically pursue fewer high-contribution features.
arXiv Detail & Related papers (2020-05-25T03:12:20Z)