Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination
- URL: http://arxiv.org/abs/2405.01101v4
- Date: Fri, 06 Dec 2024 19:42:48 GMT
- Title: Enhancing Person Re-Identification via Uncertainty Feature Fusion and Auto-weighted Measure Combination
- Authors: Quang-Huy Che, Le-Chuong Nguyen, Duc-Tuan Luu, Vinh-Tiep Nguyen
- Abstract summary: Person re-identification (Re-ID) is a challenging task that involves identifying the same person across different camera views in surveillance systems.
In this paper, a new approach is introduced that enhances the capability of ReID models through the Uncertain Feature Fusion Method (UFFM) and Auto-weighted Measure Combination (AMC).
Our method significantly improves Rank@1 accuracy and Mean Average Precision (mAP) when evaluated on person re-identification datasets.
- Score: 1.183049138259841
- License:
- Abstract: Person re-identification (Re-ID) is a challenging task that involves identifying the same person across different camera views in surveillance systems. Current methods usually rely on features from single-camera views, which can be limiting when dealing with multiple cameras and challenges such as changing viewpoints and occlusions. In this paper, a new approach is introduced that enhances the capability of ReID models through the Uncertain Feature Fusion Method (UFFM) and Auto-weighted Measure Combination (AMC). UFFM generates multi-view features using features extracted independently from multiple images to mitigate view bias. However, relying only on similarity based on multi-view features is limited because these features ignore the details represented in single-view features. Therefore, we propose the AMC method to generate a more robust similarity measure by combining various measures. Our method significantly improves Rank@1 accuracy and Mean Average Precision (mAP) when evaluated on person re-identification datasets. Combined with the BoT Baseline on challenging datasets, we achieve impressive results, with a 7.9% improvement in Rank@1 and a 12.1% improvement in mAP on the MSMT17 dataset. On the Occluded-DukeMTMC dataset, our method increases Rank@1 by 22.0% and mAP by 18.4%. Code is available at https://github.com/chequanghuy/Enhancing-Person-Re-Identification-via-UFFM-and-AMC
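The abstract describes two components: UFFM aggregates features extracted independently from multiple images of an identity into a multi-view descriptor, and AMC combines the resulting multi-view similarity with the single-view similarity into one robust measure. The sketch below is a minimal, hypothetical illustration of that idea in NumPy; the weighted-mean fusion, the cosine measure, and the fixed blend weight `alpha` are assumptions made for illustration and do not reproduce the paper's actual UFFM or the automatic weighting in AMC.

```python
import numpy as np

def fuse_multi_view(features, weights=None):
    """Hypothetical multi-view fusion: aggregate single-view feature vectors
    (one per image of the same identity) into one multi-view descriptor.
    A weighted mean is an assumption, not the paper's exact UFFM."""
    features = np.asarray(features, dtype=np.float32)      # (n_views, d)
    if weights is None:
        weights = np.ones(len(features), dtype=np.float32)
    weights = weights / weights.sum()
    fused = (weights[:, None] * features).sum(axis=0)
    return fused / (np.linalg.norm(fused) + 1e-12)         # L2-normalize

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return float(a @ b)

def combined_similarity(query, gallery_single, gallery_multi, alpha=0.5):
    """Hypothetical measure combination: blend single-view and multi-view
    similarities with a weight alpha. How the weight is chosen automatically
    is specific to AMC and not reproduced here."""
    s_single = cosine_sim(query, gallery_single)
    s_multi = cosine_sim(query, gallery_multi)
    return alpha * s_single + (1.0 - alpha) * s_multi

# Toy usage with random 2048-d embeddings (e.g., from a BoT-style backbone).
rng = np.random.default_rng(0)
query = rng.normal(size=2048)
gallery_views = rng.normal(size=(4, 2048))                 # 4 images, same ID
gallery_multi = fuse_multi_view(gallery_views)
score = combined_similarity(query, gallery_views[0], gallery_multi, alpha=0.5)
print(f"combined similarity: {score:.4f}")
```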
Related papers
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm Diffusion-ReID to efficiently augment and generate diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z) - ViewFormer: View Set Attention for Multi-view 3D Shape Understanding [7.39435265842079]
We present ViewFormer, a model for multi-view 3D shape recognition and retrieval.
With only 2 attention blocks and 4.8M learnable parameters, ViewFormer reaches 98.8% recognition accuracy on ModelNet40 for the first time.
On the challenging RGBD dataset, our method achieves 98.4% recognition accuracy, which is a 4.1% absolute improvement over the strongest baseline.
arXiv Detail & Related papers (2023-04-29T03:58:20Z) - End-to-End Context-Aided Unicity Matching for Person Re-identification [100.02321122258638]
We propose an end-to-end person unicity matching architecture for learning and refining the person matching relations.
We use the samples' global context relationship to refine the soft matching results and reach the matching unicity through bipartite graph matching.
Given full consideration to real-world person re-identification applications, we achieve the unicity matching in both one-shot and multi-shot settings.
arXiv Detail & Related papers (2022-10-20T07:33:57Z) - Dynamic Prototype Mask for Occluded Person Re-Identification [88.7782299372656]
Existing methods mainly address this issue by employing body clues provided by an extra network to distinguish the visible part.
We propose a novel Dynamic Prototype Mask (DPM) based on two self-evident prior knowledge.
Under this condition, the occluded representation could be well aligned in a selected subspace spontaneously.
arXiv Detail & Related papers (2022-07-19T03:31:13Z) - A High-Accuracy Unsupervised Person Re-identification Method Using Auxiliary Information Mined from Datasets [53.047542904329866]
We make use of auxiliary information mined from datasets for multi-modal feature learning.
This paper proposes three effective training tricks, including Restricted Label Smoothing Cross Entropy Loss (RLSCE), Weight Adaptive Triplet Loss (WATL), and Dynamic Training Iterations (DTI).
arXiv Detail & Related papers (2022-05-06T10:16:18Z) - Neighbourhood-guided Feature Reconstruction for Occluded Person Re-Identification [45.704612531562404]
We propose to reconstruct the feature representation of occluded parts by fully exploiting the information of its neighborhood in a gallery image set.
In the large-scale Occluded-DukeMTMC benchmark, our approach achieves 64.2% mAP and 67.6% rank-1 accuracy.
arXiv Detail & Related papers (2021-05-16T03:53:55Z) - Adversarial Multi-scale Feature Learning for Person Re-identification [0.0]
Person ReID aims to accurately measure visual similarities between person images for determining whether two images correspond to the same person.
We propose to improve Person ReID system performance from two perspectives:
1) Multi-scale feature learning (MSFL), which consists of Cross-scale information propagation (CSIP) and Multi-scale feature fusion (MSFF), to dynamically fuse features across different scales.
2) Multi-scale gradient regularizer (MSGR), to emphasize ID-related factors and ignore irrelevant factors in an adversarial manner.
arXiv Detail & Related papers (2020-12-28T02:18:00Z) - Camera-aware Proxies for Unsupervised Person Re-Identification [60.26031011794513]
This paper tackles the purely unsupervised person re-identification (Re-ID) problem that requires no annotations.
We propose to split each single cluster into multiple proxies, where each proxy represents the instances coming from the same camera.
Based on the camera-aware proxies, we design both intra- and inter-camera contrastive learning components for our Re-ID model.
arXiv Detail & Related papers (2020-12-19T12:37:04Z) - Learning to Recognize Patch-Wise Consistency for Deepfake Detection [39.186451993950044]
We propose a representation learning approach for this task, called patch-wise consistency learning (PCL).
PCL learns by measuring the consistency of image source features, resulting in representations with good interpretability and robustness to multiple forgery methods.
We evaluate our approach on seven popular Deepfake detection datasets.
arXiv Detail & Related papers (2020-12-16T23:06:56Z) - Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on an idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.