A Similarity Inference Metric for RGB-Infrared Cross-Modality Person
Re-identification
- URL: http://arxiv.org/abs/2007.01504v1
- Date: Fri, 3 Jul 2020 05:28:13 GMT
- Title: A Similarity Inference Metric for RGB-Infrared Cross-Modality Person
Re-identification
- Authors: Mengxi Jia, Yunpeng Zhai, Shijian Lu, Siwei Ma, Jian Zhang
- Abstract summary: Cross-modality person re-identification (re-ID) is a challenging task due to the large discrepancy between IR and RGB modalities.
Existing methods typically address this challenge by aligning feature distributions or image styles across modalities.
This paper presents a novel similarity inference metric (SIM) that exploits the intra-modality sample similarities to circumvent the cross-modality discrepancy.
- Score: 66.49212581685127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: RGB-Infrared (IR) cross-modality person re-identification (re-ID), which aims
to search an IR image in RGB gallery or vice versa, is a challenging task due
to the large discrepancy between IR and RGB modalities. Existing methods
typically address this challenge by aligning feature distributions or image
styles across modalities, whereas the very useful similarities among gallery
samples of the same modality (i.e. intra-modality sample similarities) are
largely neglected. This paper presents a novel similarity inference metric
(SIM) that exploits the intra-modality sample similarities to circumvent the
cross-modality discrepancy, targeting optimal cross-modality image matching. SIM
works by successive similarity graph reasoning and mutual nearest-neighbor
reasoning that mine cross-modality sample similarities by leveraging
intra-modality sample similarities from two different perspectives. Extensive
experiments over two cross-modality re-ID datasets (SYSU-MM01 and RegDB) show
that SIM achieves significant accuracy improvements with little extra training
compared with the state of the art.
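The abstract gives only the high-level recipe, so the following is a minimal numpy sketch of the general idea rather than the paper's exact SIM formulation: cross-modality query-to-gallery scores are refined by propagating them over an intra-modality gallery similarity graph, and mutually nearest pairs receive a bonus. The function name, the k and alpha hyper-parameters, and the final combination rule are all illustrative assumptions.

```python
import numpy as np

def sim_rerank(q_feats, g_feats, k=10, alpha=0.5):
    """Sketch of similarity-inference-style re-ranking. Illustrative only;
    see the paper for the exact SIM formulation."""
    # L2-normalize so inner products are cosine similarities.
    q = q_feats / np.linalg.norm(q_feats, axis=1, keepdims=True)
    g = g_feats / np.linalg.norm(g_feats, axis=1, keepdims=True)
    s_cross = q @ g.T                    # (n_query, n_gallery), cross-modality
    s_intra = np.maximum(g @ g.T, 0.0)   # (n_gallery, n_gallery), intra-modality

    # Similarity graph reasoning: keep each gallery sample's k strongest
    # intra-modality neighbors and propagate cross-modality scores through
    # the row-normalized graph, so similar gallery samples reinforce each other.
    nn = np.argsort(-s_intra, axis=1)[:, :k]
    graph = np.zeros_like(s_intra)
    rows = np.arange(len(g))[:, None]
    graph[rows, nn] = s_intra[rows, nn]
    graph /= graph.sum(axis=1, keepdims=True)  # row sums >= 1 (self-similarity)
    s_graph = s_cross @ graph.T

    # Mutual nearest-neighbor reasoning: boost (query, gallery) pairs that
    # rank each other inside their respective top-k lists.
    top_g = np.argsort(-s_cross, axis=1)[:, :k]   # top gallery per query
    top_q = np.argsort(-s_cross, axis=0)[:k, :]   # top query per gallery
    bonus = np.zeros_like(s_cross)
    for qi in range(s_cross.shape[0]):
        for gj in top_g[qi]:
            if qi in top_q[:, gj]:
                bonus[qi, gj] = 1.0

    # The 0.1 bonus weight is an arbitrary illustrative constant.
    return alpha * s_graph + (1.0 - alpha) * s_cross + 0.1 * bonus
```

With q_feats holding IR query features and g_feats holding RGB gallery features (shapes (n_query, d) and (n_gallery, d)), ranking each query by the refined scores gives the final retrieval order. Note that, consistent with the abstract's claim, this is a post-hoc inference step and needs no extra training.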
Related papers
- Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
The main challenge in cross-modality ReID lies in effectively dealing with the visual differences between modalities.
Existing attack methods have primarily focused on the characteristics of the visible image modality.
This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z)
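The entry above names a universal (image-agnostic) perturbation attack but gives no mechanism, so here is a hedged sketch of the generic universal-perturbation loop that such attacks commonly build on; grad_fn, the step size, and the budget eps are all assumptions, and the paper's actual cross-modality objective is not reproduced.

```python
import numpy as np

def universal_perturbation(images, grad_fn, eps=8/255, steps=100, lr=1/255):
    """Generic universal adversarial perturbation: accumulate signed
    gradients of some attack loss over many images, keeping the shared
    perturbation inside an L-infinity ball of radius eps. grad_fn(x) is
    an assumed callable returning the loss gradient w.r.t. the input."""
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        for x in images:
            # Gradient of the attack objective at the perturbed input.
            g = grad_fn(np.clip(x + delta, 0.0, 1.0))
            # Signed-gradient ascent step, projected back into the eps-ball.
            delta = np.clip(delta + lr * np.sign(g), -eps, eps)
    return delta
```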
- Exploring Invariant Representation for Visible-Infrared Person Re-Identification [77.06940947765406]
Cross-spectral person re-identification, which aims to associate identities to pedestrians across different spectra, faces the key challenge of modality discrepancy.
In this paper, we address the problem at both the image level and the feature level in an end-to-end hybrid learning framework named robust feature mining network (RFM).
Experimental results on two standard cross-spectral person re-identification datasets, RegDB and SYSU-MM01, demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2023-02-02T05:24:50Z)
- Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [59.02821429555375]
We present a robust cross-modality generation-registration paradigm for unsupervised misaligned infrared and visible image fusion.
To better fuse the registered infrared and visible images, we present a Feature Interaction Fusion Module (IFM).
arXiv Detail & Related papers (2022-05-24T07:51:57Z)
- Modality-Adaptive Mixup and Invariant Decomposition for RGB-Infrared Person Re-Identification [84.32086702849338]
We propose a novel modality-adaptive mixup and invariant decomposition (MID) approach for RGB-infrared person re-identification.
MID designs a modality-adaptive mixup scheme to generate suitable mixed modality images between RGB and infrared images.
Experiments on two challenging benchmarks demonstrate superior performance of MID over state-of-the-art methods.
arXiv Detail & Related papers (2022-03-03T14:26:49Z)
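For readers unfamiliar with mixup, the MID entry above builds on the basic operation sketched here; this is plain image-level mixup, not MID's learned modality-adaptive scheme, and alpha is an assumed Beta-distribution parameter.

```python
import numpy as np

def modality_mixup(x_rgb, x_ir, alpha=2.0):
    """Plain image-level mixup between a paired RGB and IR image, given
    as float arrays of identical shape (e.g. the IR channel replicated
    to three channels). Illustrative only."""
    lam = np.random.beta(alpha, alpha)      # mixing coefficient in (0, 1)
    x_mix = lam * x_rgb + (1.0 - lam) * x_ir
    return x_mix, lam                        # lam also weights labels/losses
```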
- A Novel Self-Supervised Cross-Modal Image Retrieval Method In Remote Sensing [0.0]
Cross-modal remote sensing (RS) image retrieval methods search for semantically similar images across different modalities.
Existing CM-RSIR methods require annotated training images and do not concurrently address intra- and inter-modal similarity preservation and inter-modal discrepancy elimination.
We introduce a novel self-supervised cross-modal image retrieval method that aims to model the mutual information between the different modalities.
arXiv Detail & Related papers (2022-02-23T11:20:24Z)
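The entry above models mutual information between modalities in a self-supervised manner; a standard way to do this (an assumption here, not confirmed by the abstract) is a symmetric InfoNCE objective over paired embeddings, sketched below.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=1, keepdims=True)   # numerical stability
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def info_nce(z_a, z_b, tau=0.07):
    """Symmetric InfoNCE between batches of paired embeddings from two
    modalities; a common self-supervised lower bound on their mutual
    information. Illustrative, not the paper's exact objective."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / tau             # (N, N); positives on the diagonal
    diag = np.arange(len(z_a))
    loss_ab = -log_softmax(logits)[diag, diag].mean()
    loss_ba = -log_softmax(logits.T)[diag, diag].mean()
    return 0.5 * (loss_ab + loss_ba)
```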
- Multi-Scale Cascading Network with Compact Feature Learning for RGB-Infrared Person Re-Identification [35.55895776505113]
A Multi-Scale Part-Aware Cascading framework (MSPAC) is formulated by aggregating multi-scale fine-grained features from the part level to the global level.
Cross-modality correlations can thus be efficiently explored on salient features for distinctive modality-invariant feature learning.
arXiv Detail & Related papers (2020-12-12T15:39:11Z)
- Cross-Spectral Periocular Recognition with Conditional Adversarial Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks, trained to convert periocular images between the visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER = 1% and GAR > 99% @ FAR = 1%, which is comparable to the state of the art on the PolyU database.
arXiv Detail & Related papers (2020-08-26T15:02:04Z)
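The periocular entry above reports EER and GAR @ FAR figures; for reference, these standard verification metrics can be computed from genuine and impostor similarity scores as follows (generic definitions, not code from the paper).

```python
import numpy as np

def eer_and_gar(genuine, impostor, far_target=0.01):
    """Equal error rate (EER) and genuine accept rate (GAR) at a fixed
    false accept rate (FAR), from verification similarity scores."""
    thr = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thr])  # non-increasing
    frr = np.array([(genuine < t).mean() for t in thr])    # non-decreasing
    i = np.argmin(np.abs(far - frr))
    eer = 0.5 * (far[i] + frr[i])          # operating point where FAR ~= FRR
    # GAR @ FAR: 1 - FRR at the first threshold meeting the FAR target
    # (assumes the target FAR is reachable with the given scores).
    j = np.argmax(far <= far_target)
    gar = 1.0 - frr[j]
    return eer, gar
```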
- Cross-Spectrum Dual-Subspace Pairing for RGB-infrared Cross-Modality Person Re-Identification [15.475897856494583]
Conventional person re-identification handles only RGB color images and fails in dark conditions.
To address this, RGB-infrared ReID (also known as infrared-visible ReID or visible-thermal ReID) has been proposed.
In this paper, a novel multi-spectrum image generation method is proposed, and the generated samples are utilized to help the network find discriminative information.
arXiv Detail & Related papers (2020-02-29T09:01:39Z)