Domain Adversarial Training for Infrared-colour Person Re-Identification
- URL: http://arxiv.org/abs/2003.04191v1
- Date: Mon, 9 Mar 2020 15:17:15 GMT
- Title: Domain Adversarial Training for Infrared-colour Person Re-Identification
- Authors: Nima Mohammadi Meshky, Sara Iodice, Krystian Mikolajczyk
- Abstract summary: Person re-identification (re-ID) is a very active area of research in computer vision.
Most methods only address the task of matching between colour images.
In poorly-lit environments CCTV cameras switch to infrared imaging.
We propose a part-feature extraction network to better focus on subtle, unique signatures on the person.
- Score: 19.852463786440122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Person re-identification (re-ID) is a very active area of research in
computer vision, due to the role it plays in video surveillance. Currently,
most methods only address the task of matching between colour images. However,
in poorly-lit environments CCTV cameras switch to infrared imaging, hence
developing a system which can correctly perform matching between infrared and
colour images is a necessity. In this paper, we propose a part-feature
extraction network to better focus on subtle, unique signatures on the person
which are visible across both infrared and colour modalities. To train the
model we propose a novel variant of the domain adversarial feature-learning
framework. Through extensive experimentation, we show that our approach
outperforms state-of-the-art methods.
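The abstract's "domain adversarial feature-learning framework" commonly builds on a gradient reversal layer (as in Ganin & Lempitsky's DANN): a domain discriminator tries to tell the modalities apart, while its gradient is flipped before reaching the feature extractor, pushing features toward modality invariance. The paper proposes its own variant, whose details are not given here; the following is only a minimal illustrative sketch of the generic gradient-reversal idea, with all names, shapes, and the `lam` coefficient chosen for demonstration.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lambda in backward.

    Toy sketch of the gradient reversal layer used in domain adversarial
    feature learning -- not the authors' exact variant.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between ID loss and domain confusion

    def forward(self, x):
        # Features pass through unchanged on the way to the discriminator.
        return x

    def backward(self, grad_output):
        # The domain discriminator's gradient is negated before it reaches
        # the feature extractor, so minimizing the discriminator's loss
        # simultaneously makes features harder to classify by domain.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
out = grl.forward(features)                       # identical to the input
grad = grl.backward(np.array([0.2, 0.4, -0.6]))   # reversed, scaled gradient
```

In a full pipeline this layer would sit between the shared feature extractor and the modality (infrared vs. colour) discriminator, while the re-ID head receives the unmodified gradient.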
Related papers
- CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition [73.51329037954866]
We propose a robust global representation method with cross-image correlation awareness for visual place recognition.
Our method uses the attention mechanism to correlate multiple images within a batch.
Our method outperforms state-of-the-art methods by a large margin with significantly less training time.
arXiv Detail & Related papers (2024-02-29T15:05:11Z)
- Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
The main challenge in cross-modality ReID lies in effectively dealing with visual differences between different modalities.
Existing attack methods have primarily focused on the characteristics of the visible image modality.
This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z)
- Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration [66.33746403815283]
We propose a scene-adaptive infrared and visible image registration.
We employ homography to simulate the deformation between different planes.
We propose the first misaligned infrared and visible image dataset with available ground truth.
arXiv Detail & Related papers (2023-04-12T06:49:56Z)
- Interactive Feature Embedding for Infrared and Visible Image Fusion [94.77188069479155]
General deep learning-based methods for infrared and visible image fusion rely on the unsupervised mechanism for vital information retention.
We propose a novel interactive feature embedding in self-supervised learning framework for infrared and visible image fusion.
arXiv Detail & Related papers (2022-11-09T13:34:42Z)
- SA-DNet: A on-demand semantic object registration network adapting to non-rigid deformation [3.3843451892622576]
We propose a Semantic-Aware on-Demand registration network (SA-DNet) to confine the feature matching process to the semantic region of interest.
Our method adapts better to the presence of non-rigid distortions in the images and provides semantically well-registered images.
arXiv Detail & Related papers (2022-10-18T14:41:28Z)
- Visible-Infrared Person Re-Identification Using Privileged Intermediate Information [10.816003787786766]
Cross-modal person re-identification (ReID) is challenging due to the large domain shift in data distributions between RGB and IR modalities.
This paper introduces a novel approach for creating an intermediate virtual domain that acts as a bridge between the two main domains.
We devised a new method to generate images between visible and infrared domains that provide additional information to train a deep ReID model.
arXiv Detail & Related papers (2022-09-19T21:08:14Z)
- Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration [59.02821429555375]
We present a robust cross-modality generation-registration paradigm for unsupervised misaligned infrared and visible image fusion.
To better fuse the registered infrared and visible images, we present an Interaction Fusion Module (IFM).
arXiv Detail & Related papers (2022-05-24T07:51:57Z)
- Towards Homogeneous Modality Learning and Multi-Granularity Information Exploration for Visible-Infrared Person Re-Identification [16.22986967958162]
Visible-infrared person re-identification (VI-ReID) is a challenging and essential task, which aims to retrieve a set of person images over visible and infrared camera views.
Previous methods attempt to apply generative adversarial networks (GANs) to generate modality-consistent data.
In this work, we address the cross-modality matching problem with Aligned Grayscale Modality (AGM), a unified dark-line spectrum that reformulates visible-infrared dual-mode learning as a gray-gray single-mode learning problem.
arXiv Detail & Related papers (2022-04-11T03:03:19Z)
- Infrared Small-Dim Target Detection with Transformer under Complex Backgrounds [155.388487263872]
We propose a new infrared small-dim target detection method with the transformer.
We adopt the self-attention mechanism of the transformer to learn the interaction information of image features in a larger range.
We also design a feature enhancement module to learn more features of small-dim targets.
arXiv Detail & Related papers (2021-09-29T12:23:41Z)
- Learning by Aligning: Visible-Infrared Person Re-identification using Cross-Modal Correspondences [42.16002082436691]
Two main challenges in VI-reID are intra-class variations across person images, and cross-modal discrepancies between visible and infrared images.
We introduce a novel feature learning framework that addresses these problems in a unified way.
arXiv Detail & Related papers (2021-08-17T03:38:51Z)
- Cross-Spectrum Dual-Subspace Pairing for RGB-infrared Cross-Modality Person Re-Identification [15.475897856494583]
Conventional person re-identification can only handle RGB colour images, which fail in dark conditions.
RGB-infrared ReID (also known as Infrared-Visible ReID or Visible-Thermal ReID) has been proposed to address this.
In this paper, a novel multi-spectrum image generation method is proposed and the generated samples are utilized to help the network to find discriminative information.
arXiv Detail & Related papers (2020-02-29T09:01:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.