Domain Adaptive Egocentric Person Re-identification
- URL: http://arxiv.org/abs/2103.04870v1
- Date: Mon, 8 Mar 2021 16:19:32 GMT
- Title: Domain Adaptive Egocentric Person Re-identification
- Authors: Ankit Choudhary and Deepak Mishra and Arnab Karmakar
- Abstract summary: Person re-identification (re-ID) in first-person (egocentric) vision is a fairly new and unexplored problem.
With the proliferation of wearable video recording devices, egocentric data is becoming readily available.
There is a significant lack of large-scale, structured egocentric datasets for person re-identification.
- Score: 10.199631830749839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Person re-identification (re-ID) in first-person (egocentric) vision is a
fairly new and unexplored problem. With the proliferation of wearable video
recording devices, egocentric data is becoming readily available, and person
re-identification stands to benefit greatly from it. However, there is a
significant lack of large-scale, structured egocentric datasets for person
re-identification, owing to the poor video quality and the scarcity of
individuals in most of the recorded content. Although extensive research has
been done on person re-identification with fixed surveillance cameras, those
methods do not directly benefit egocentric re-ID. Machine learning models
trained on the publicly available large-scale re-ID datasets cannot be applied
to egocentric re-ID because of the dataset bias problem. The proposed algorithm
makes use of neural style transfer (NST), built on a convolutional neural
network (CNN), to combine the benefits of fixed-camera vision and first-person
vision. NST generates images that carry characteristics of both the egocentric
and fixed-camera datasets; these images are fed through a VGG-16 network
trained on a fixed-camera dataset for feature extraction. The extracted
features are then used to re-identify individuals. The fixed-camera dataset
Market-1501 and the first-person dataset EGO Re-ID are used in this work, and
the results are on par with existing re-identification models in the
egocentric domain.
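As a rough illustration of the retrieval stage described above, the sketch below extracts penultimate-layer VGG-16 descriptors and ranks a fixed-camera gallery against a style-transferred egocentric query by cosine similarity. This is a minimal sketch, not the authors' implementation: it assumes the NST step (EGO Re-ID content rendered in Market-1501 style) has already been run offline, it uses an ImageNet-pretrained torchvision VGG-16 as a stand-in for the fixed-camera-trained network, and the file paths and helper names (`embed`, `rank_gallery`) are hypothetical.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for VGG-16 input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet-pretrained VGG-16; a stand-in for a network trained on Market-1501.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    """Return a unit-normalized 4096-d descriptor from the penultimate VGG-16 layer."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x = vgg.features(x)
    x = vgg.avgpool(x)
    x = torch.flatten(x, 1)
    x = vgg.classifier[:-1](x)    # drop the final 1000-way classification layer
    return F.normalize(x, dim=1)

def rank_gallery(query_path, gallery_paths):
    """Rank fixed-camera gallery images by cosine similarity to a
    style-transferred egocentric query (hypothetical file paths)."""
    q = embed(query_path)
    scores = [(p, float(q @ embed(p).T)) for p in gallery_paths]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

Cosine similarity over L2-normalized descriptors is a common choice for comparing such extracted features; the exact matching metric used in the paper may differ.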
Related papers
- Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training [51.87027943520492]
We present a novel paradigm, Diffusion-ReID, that efficiently augments and generates diverse images based on known identities.
Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset, Diff-Person, which consists of over 777K images from 5,183 identities.
arXiv Detail & Related papers (2024-06-10T06:26:03Z) - AG-ReID.v2: Bridging Aerial and Ground Views for Person Re-identification [39.58286453178339]
Aerial-ground person re-identification (Re-ID) presents unique challenges in computer vision.
We introduce AG-ReID.v2, a dataset specifically designed for person Re-ID in mixed aerial and ground scenarios.
This dataset comprises 100,502 images of 1,615 unique individuals, each annotated with matching IDs and 15 soft attribute labels.
arXiv Detail & Related papers (2024-01-05T04:53:33Z) - Learning Invariance from Generated Variance for Unsupervised Person Re-identification [15.096776375794356]
We propose to replace traditional data augmentation with a generative adversarial network (GAN).
A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features.
By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.
arXiv Detail & Related papers (2023-01-02T15:40:14Z) - Keypoint Message Passing for Video-based Person Re-Identification [106.41022426556776]
Video-based person re-identification (re-ID) is an important technique in visual surveillance systems which aims to match video snippets of people captured by different cameras.
Existing methods are mostly based on convolutional neural networks (CNNs), whose building blocks either process one local pixel neighborhood at a time or, when 3D convolutions are used to model temporal information, suffer from the misalignment problem caused by person movement.
In this paper, we propose to overcome the limitations of normal convolutions with a human-oriented graph method. Specifically, features located at person joint keypoints are extracted and connected as a spatial-temporal graph.
arXiv Detail & Related papers (2021-11-16T08:01:16Z) - Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present a large-scale unlabeled person re-identification (Re-ID) dataset, "LUPerson".
We make the first attempt at unsupervised pre-training to improve the generalization ability of the learned person Re-ID feature representation.
arXiv Detail & Related papers (2020-12-07T14:48:26Z) - PoseTrackReID: Dataset Description [97.7241689753353]
Pose information is helpful to disentangle useful feature information from background or occlusion noise.
With PoseTrackReID, we want to bridge the gap between person re-ID and multi-person pose tracking.
This dataset provides a good benchmark for current state-of-the-art methods on multi-frame person re-ID.
arXiv Detail & Related papers (2020-11-12T07:44:25Z) - Temporal Continuity Based Unsupervised Learning for Person Re-Identification [15.195514083289801]
We propose an unsupervised center-based clustering approach capable of progressively learning and exploiting the underlying re-id discriminative information.
We call our framework Temporal Continuity based Unsupervised Learning (TCUL).
Specifically, TCUL simultaneously performs center-based clustering of the unlabeled (target) dataset and fine-tunes a convolutional neural network (CNN) pre-trained on an unrelated labeled (source) dataset.
It exploits the temporally continuous nature of images within a camera, jointly with the spatial similarity of feature maps across cameras, to generate reliable pseudo-labels for training a re-identification model.
arXiv Detail & Related papers (2020-09-01T05:29:30Z) - Intra-Camera Supervised Person Re-Identification [87.88852321309433]
We propose a novel person re-identification paradigm based on the idea of independent per-camera identity annotation.
This eliminates the most time-consuming and tedious inter-camera identity labelling process.
We formulate a Multi-tAsk mulTi-labEl (MATE) deep learning method for Intra-Camera Supervised (ICS) person re-id.
arXiv Detail & Related papers (2020-02-12T15:26:33Z) - Towards Precise Intra-camera Supervised Person Re-identification [54.86892428155225]
Intra-camera supervision (ICS) for person re-identification (Re-ID) assumes that identity labels are independently annotated within each camera view.
Lack of inter-camera labels makes the ICS Re-ID problem much more challenging than the fully supervised counterpart.
Our approach even performs comparably to state-of-the-art fully supervised methods on two of the datasets.
arXiv Detail & Related papers (2020-02-12T11:56:30Z)