Benchmarks for Corruption Invariant Person Re-identification
- URL: http://arxiv.org/abs/2111.00880v1
- Date: Mon, 1 Nov 2021 12:14:28 GMT
- Title: Benchmarks for Corruption Invariant Person Re-identification
- Authors: Minghui Chen, Zhiqiang Wang, Feng Zheng
- Abstract summary: We study corruption invariant learning on single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.
Transformer-based models are more robust to corrupted images than CNN-based models.
Cross-dataset generalization improves as corruption robustness increases.
- Score: 31.919264399996475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When deploying a person re-identification (ReID) model in safety-critical
applications, it is pivotal to understand the robustness of the model
against a diverse array of image corruptions. However, current evaluations of
person ReID only consider performance on clean datasets and ignore images
in various corrupted scenarios. In this work, we comprehensively establish six
ReID benchmarks for learning corruption-invariant representations. In the field
of ReID, we are the first to conduct an exhaustive study of corruption
invariant learning on single- and cross-modality datasets, including
Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01. After reproducing and examining
the robustness performance of 21 recent ReID methods, we make the following
observations: 1) transformer-based models are more robust to corrupted
images than CNN-based models; 2) increasing the probability of random
erasing (a commonly used augmentation method) hurts model corruption
robustness; 3) cross-dataset generalization improves as corruption robustness
increases. Building on these observations, we propose a strong baseline for
both single- and cross-modality ReID datasets that achieves improved
robustness against diverse corruptions. Our code is available at
https://github.com/MinghuiChen43/CIL-ReID.
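The abstract's second observation concerns random erasing, which replaces a random rectangular patch of the input with a fill value and is applied with some probability `p`. A minimal generic sketch of the augmentation (not the authors' exact implementation; parameter names are illustrative) makes the role of `p` concrete:

```python
import random

def random_erasing(img, p=0.5, area_frac=0.2, fill=0, rng=None):
    """Randomly erase a rectangular patch of a 2D image (list of rows).

    `p` is the application probability; the paper's observation is that
    raising `p` hurts corruption robustness. Generic sketch only.
    """
    rng = rng or random.Random()
    if rng.random() >= p:
        return img  # augmentation skipped this time
    h, w = len(img), len(img[0])
    # Patch side lengths chosen so the erased area is ~area_frac of the image.
    eh = max(1, int(h * area_frac ** 0.5))
    ew = max(1, int(w * area_frac ** 0.5))
    top = rng.randrange(0, h - eh + 1)
    left = rng.randrange(0, w - ew + 1)
    out = [row[:] for row in img]  # copy so the input is untouched
    for r in range(top, top + eh):
        for c in range(left, left + ew):
            out[r][c] = fill
    return out
```

With `p=0.0` the image passes through unchanged; with `p=1.0` a patch is always erased, which is the regime the paper flags as harmful to corruption robustness.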
Related papers
- Dynamic Batch Norm Statistics Update for Natural Robustness [5.366500153474747]
We propose a unified framework consisting of a corruption-detection model and BN statistics update.
Our results demonstrate about 8% and 4% accuracy improvement on CIFAR10-C and ImageNet-C.
arXiv Detail & Related papers (2023-10-31T17:20:30Z)
- Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
arXiv Detail & Related papers (2023-06-12T15:19:13Z)
- Investigating the Corruption Robustness of Image Classifiers with Random Lp-norm Corruptions [3.1337872355726084]
This study investigates the use of random p-norm corruptions to augment the training and test data of image classifiers.
We find that training data augmentation with a combination of p-norm corruptions significantly improves corruption robustness, even on top of state-of-the-art data augmentation schemes.
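A generic reading of a "random Lp-norm corruption" is additive noise rescaled to a fixed Lp norm before being applied to the input; the sketch below illustrates that construction (parameter names and the noise distribution are assumptions, not the paper's exact procedure):

```python
import random

def lp_norm_corruption(pixels, p_norm=2.0, eps=5.0, rng=None):
    """Additively corrupt a flat pixel list with random noise rescaled
    to have Lp norm exactly `eps` (illustrative sketch)."""
    rng = rng or random.Random()
    noise = [rng.uniform(-1.0, 1.0) for _ in pixels]
    # Rescale so the perturbation has the requested Lp norm.
    norm = sum(abs(n) ** p_norm for n in noise) ** (1.0 / p_norm)
    scale = eps / norm
    return [x + n * scale for x, n in zip(pixels, noise)]
```

Varying `p_norm` changes the shape of the perturbation (e.g. L2 spreads energy evenly, large p concentrates it in a few pixels), which is what makes a combination of p-norms a plausible augmentation family.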
arXiv Detail & Related papers (2023-05-09T12:45:43Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
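The quantity being minimized here, the total variation of a feature map, is the sum of absolute differences between neighbouring entries; penalizing it suppresses high-frequency content. A sketch of that core quantity (the anisotropic variant, not the paper's full training objective):

```python
def total_variation(fmap):
    """Anisotropic total variation of a 2D feature map (list of rows):
    sum of absolute differences between horizontal and vertical
    neighbours. Illustrative sketch of the regularizer's core term."""
    h, w = len(fmap), len(fmap[0])
    tv = 0.0
    for r in range(h):
        for c in range(w):
            if c + 1 < w:  # horizontal neighbour
                tv += abs(fmap[r][c + 1] - fmap[r][c])
            if r + 1 < h:  # vertical neighbour
                tv += abs(fmap[r + 1][c] - fmap[r][c])
    return tv
```

A constant feature map has zero TV, while a checkerboard (pure high-frequency) pattern maximizes it, which is why adding this term to the loss biases the network toward low-frequency features.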
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness [78.6626755563546]
Several new data augmentations have been proposed that significantly improve performance on ImageNet-C.
We develop a new measure of the distance between augmentations and corruptions, the Minimal Sample Distance, and demonstrate a strong correlation between similarity and performance.
We observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C.
Our results suggest that test error can be improved by training on perceptually similar augmentations, and data augmentations may not generalize well beyond the existing benchmark.
arXiv Detail & Related papers (2021-02-22T18:58:39Z)
- Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present "LUPerson", a large-scale unlabeled person re-identification (Re-ID) dataset.
We make the first attempt of performing unsupervised pre-training for improving the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)
- Revisiting Batch Normalization for Improving Corruption Robustness [85.20742045853738]
We interpret corruption robustness as a domain shift and propose to rectify batch normalization statistics for improving model robustness.
We find that simply estimating and adapting the BN statistics on a few representation samples, without retraining the model, improves the corruption robustness by a large margin.
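The adaptation step amounts to re-estimating per-channel mean and variance on a handful of (corrupted) test samples and moving the stored BatchNorm statistics toward them, with no weight updates. A minimal sketch of that idea, assuming per-channel activation vectors (the blending scheme and parameter names are illustrative, not the paper's exact procedure):

```python
def adapt_bn_stats(train_mean, train_var, samples, momentum=0.1):
    """Blend stored BatchNorm statistics with statistics estimated from
    a few test-time samples, without retraining any weights.

    `samples` is a list of per-channel activation vectors; `momentum`
    controls how far the stats move toward the test-time estimate.
    """
    n = len(samples)
    dims = len(train_mean)
    # Per-channel mean and (biased) variance over the adaptation samples.
    test_mean = [sum(s[d] for s in samples) / n for d in range(dims)]
    test_var = [sum((s[d] - test_mean[d]) ** 2 for s in samples) / n
                for d in range(dims)]
    new_mean = [(1 - momentum) * m + momentum * tm
                for m, tm in zip(train_mean, test_mean)]
    new_var = [(1 - momentum) * v + momentum * tv
               for v, tv in zip(train_var, test_var)]
    return new_mean, new_var
```

With `momentum=0` the train statistics are kept; with `momentum=1` they are fully replaced by the test-time estimate, matching the "domain shift" interpretation of corruption robustness.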
arXiv Detail & Related papers (2020-10-07T19:56:47Z)
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
arXiv Detail & Related papers (2020-04-08T18:48:29Z)
- A simple way to make neural networks robust against diverse image corruptions [29.225922892332342]
We show that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions.
An adversarial training of the recognition model against uncorrelated worst-case noise leads to an additional increase in performance.
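The augmentation described here combines additive Gaussian noise with multiplicative speckle noise. A generic sketch of the transform x → (x + g)·(1 + s), with g and s drawn from zero-mean Gaussians (noise levels are illustrative; the paper stresses that they must be properly tuned):

```python
import random

def gaussian_speckle_noise(pixels, sigma_g=0.1, sigma_s=0.1, rng=None):
    """Apply additive Gaussian noise followed by multiplicative speckle
    noise to a flat pixel list (illustrative sketch of the augmentation)."""
    rng = rng or random.Random()
    out = []
    for x in pixels:
        g = rng.gauss(0.0, sigma_g)  # additive Gaussian component
        s = rng.gauss(0.0, sigma_s)  # multiplicative speckle component
        out.append((x + g) * (1.0 + s))
    return out
```

Setting either sigma to zero disables that noise component, so the two corruption types can be tuned independently during training.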
arXiv Detail & Related papers (2020-01-16T20:10:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.