X-ReID: Cross-Instance Transformer for Identity-Level Person
Re-Identification
- URL: http://arxiv.org/abs/2302.02075v1
- Date: Sat, 4 Feb 2023 03:16:18 GMT
- Title: X-ReID: Cross-Instance Transformer for Identity-Level Person
Re-Identification
- Authors: Leqi Shen, Tao He, Yuchen Guo, Guiguang Ding
- Abstract summary: Cross Intra-Identity Instances module (IntraX) fuses different intra-identity instances to transfer Identity-Level knowledge.
Cross Inter-Identity Instances module (InterX) involves hard positive and hard negative instances to improve the attention response to the same identity.
- Score: 53.047542904329866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, most existing person re-identification methods use Instance-Level
features, which are extracted only from a single image. However, these
Instance-Level features can easily overlook discriminative information, because the
appearance of each identity varies greatly across images. It is therefore
necessary to exploit Identity-Level features, which can be shared across
different images of each identity. In this paper, we propose to promote
Instance-Level features to Identity-Level features by employing cross-attention
to incorporate information from one image into another of the same identity, so
that more unified and discriminative pedestrian information can be obtained. We
propose a novel training framework named X-ReID. Specifically, a Cross
Intra-Identity Instances module (IntraX) fuses different intra-identity
instances to transfer Identity-Level knowledge and make Instance-Level features
more compact. A Cross Inter-Identity Instances module (InterX) involves hard
positive and hard negative instances to strengthen the attention response to the
same identity rather than to different identities, which minimizes intra-identity
variation and maximizes inter-identity variation. Extensive experiments on
benchmark datasets show the superiority of our method over existing works. In
particular, on the challenging MSMT17 benchmark, our proposed method gains a
1.1% mAP improvement over the second-best method.
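The core idea described in the abstract, letting one instance of an identity attend to another instance of the same identity via cross-attention, and selecting hard positive and hard negative instances to shape that attention, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module name, feature dimensions, and the batch-hard selection helper below are illustrative assumptions based only on the abstract.

```python
import torch
import torch.nn as nn


class CrossInstanceAttention(nn.Module):
    """Cross-attention in which tokens of one instance (the anchor image)
    attend to tokens of another instance of the same identity."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, anchor_tokens: torch.Tensor, peer_tokens: torch.Tensor) -> torch.Tensor:
        # anchor_tokens: (B, N, dim) features of the anchor image
        # peer_tokens:   (B, M, dim) features of another image of the same identity
        q = self.norm_q(anchor_tokens)
        kv = self.norm_kv(peer_tokens)
        fused, _ = self.attn(q, kv, kv)   # anchor queries attend to the peer instance
        return anchor_tokens + fused      # residual: anchor enriched with identity-level cues


def batch_hard_indices(features: torch.Tensor, labels: torch.Tensor):
    """Pick, for each sample, the hardest positive (farthest same-identity sample)
    and the hardest negative (closest other-identity sample) in the batch.
    Assumes PK sampling, i.e. every identity has at least two images per batch."""
    dist = torch.cdist(features, features)             # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # (B, B) same-identity mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    hardest_pos = dist.masked_fill(~(same & ~eye), float('-inf')).argmax(dim=1)
    hardest_neg = dist.masked_fill(same, float('inf')).argmin(dim=1)
    return hardest_pos, hardest_neg
```

In an IntraX-style step, `anchor_tokens` and `peer_tokens` would come from two images of the same identity; an InterX-style step would instead draw the peer from the hardest positive and contrast it against the hardest negative returned by `batch_hard_indices`. Both function names and the residual fusion are hypothetical placeholders, not the paper's actual modules.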
Related papers
- Disentangled Representations for Short-Term and Long-Term Person Re-Identification [33.76874948187976]
We propose a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN).
It disentangles identity-related and unrelated features from person images through an identity-shuffling technique.
Experimental results validate the effectiveness of IS-GAN, showing state-of-the-art performance on standard reID benchmarks.
arXiv Detail & Related papers (2024-09-09T02:09:49Z)
- Infinite-ID: Identity-preserved Personalization via ID-semantics Decoupling Paradigm [31.06269858216316]
We propose Infinite-ID, an ID-semantics decoupling paradigm for identity-preserved personalization.
We introduce an identity-enhanced training, incorporating an additional image cross-attention module to capture sufficient ID information.
We also introduce a feature interaction mechanism that combines a mixed attention module with an AdaIN-mean operation to seamlessly merge the two streams.
arXiv Detail & Related papers (2024-03-18T13:39:53Z)
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z)
- Protecting Celebrities with Identity Consistency Transformer [119.67996461810304]
Identity Consistency Transformer focuses on high-level semantics, specifically identity information, and detects a suspect face by finding identity inconsistency between inner and outer face regions.
We show that Identity Consistency Transformer exhibits superior generalization ability not only across different datasets but also across various forms of image degradation found in real-world applications, including deepfake videos.
arXiv Detail & Related papers (2022-03-02T18:59:58Z)
- Camera-aware Proxies for Unsupervised Person Re-Identification [60.26031011794513]
This paper tackles the purely unsupervised person re-identification (Re-ID) problem that requires no annotations.
We propose to split each single cluster into multiple proxies and each proxy represents the instances coming from the same camera.
Based on the camera-aware proxies, we design both intra- and inter-camera contrastive learning components for our Re-ID model.
arXiv Detail & Related papers (2020-12-19T12:37:04Z)
- Taking Modality-free Human Identification as Zero-shot Learning [46.51413603352702]
We develop a novel Modality-Free Human Identification (named MFHI) task as a generic zero-shot learning model in a scalable way.
It is capable of bridging the visual and semantic modalities by learning a discriminative prototype of each identity.
In addition, the semantics-guided spatial attention is enforced on visual modality to obtain representations with both high global category-level and local attribute-level discrimination.
arXiv Detail & Related papers (2020-10-02T13:08:27Z)
- Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z)
- Towards Precise Intra-camera Supervised Person Re-identification [54.86892428155225]
Intra-camera supervision (ICS) for person re-identification (Re-ID) assumes that identity labels are independently annotated within each camera view.
Lack of inter-camera labels makes the ICS Re-ID problem much more challenging than the fully supervised counterpart.
Our approach even performs comparably to state-of-the-art fully supervised methods on two of the datasets.
arXiv Detail & Related papers (2020-02-12T11:56:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.