Graph Consistency based Mean-Teaching for Unsupervised Domain Adaptive
Person Re-Identification
- URL: http://arxiv.org/abs/2105.04776v2
- Date: Thu, 13 May 2021 05:57:52 GMT
- Title: Graph Consistency based Mean-Teaching for Unsupervised Domain Adaptive
Person Re-Identification
- Authors: Xiaobin Liu, Shiliang Zhang
- Abstract summary: This paper proposes a Graph Consistency based Mean-Teaching (GCMT) method that constructs a Graph Consistency Constraint (GCC) between teacher and student networks.
Experiments on three datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that the proposed GCMT outperforms state-of-the-art methods by a clear margin.
- Score: 54.58165777717885
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent works show that mean-teaching is an effective framework for
unsupervised domain adaptive person re-identification. However, existing
methods perform contrastive learning on selected samples between teacher and
student networks, which is sensitive to noise in pseudo labels and neglects
the relationships among most samples. Moreover, these methods do not
effectively exploit the cooperation of different teacher networks. To handle
these issues, this paper proposes a Graph Consistency based Mean-Teaching
(GCMT) method that constructs a Graph Consistency Constraint (GCC) between
teacher and student networks. Specifically, given unlabeled training images,
we apply teacher networks to extract the corresponding features and further
construct a teacher graph for each teacher network to describe the similarity
relationships among training images. To boost representation learning,
different teacher graphs are fused to provide the supervision signal for
optimizing student networks. GCMT fuses the similarity relationships predicted
by different teacher networks as supervision and effectively optimizes student
networks with more sample relationships involved. Experiments on three
datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that the proposed
GCMT outperforms state-of-the-art methods by a clear margin. Notably, GCMT
even outperforms the previous method that uses a deeper backbone. Experimental
results also show that GCMT can effectively boost performance with multiple
teacher and student networks. Our code is available at
https://github.com/liu-xb/GCMT .
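The core idea above (build a similarity graph per teacher, fuse the teacher graphs, and use the fused graph to supervise the student) can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's exact formulation: the function names, the softmax temperature, the averaging-based fusion, and the cross-entropy-style loss are assumptions for exposition.

```python
import numpy as np

def similarity_graph(features, temperature=0.1):
    """Row-normalized similarity graph over L2-normalized features.

    Row i is a softmax over the cosine similarities between sample i
    and every other sample; self-similarity is masked out.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)                 # ignore self-similarity
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    exp = np.exp(sim)
    return exp / exp.sum(axis=1, keepdims=True)

def graph_consistency_loss(teacher_feats_list, student_feats):
    """Fuse teacher graphs by averaging, then penalize the student
    graph's deviation from the fused graph (cross-entropy style)."""
    fused = np.mean([similarity_graph(f) for f in teacher_feats_list], axis=0)
    student = similarity_graph(student_feats)
    return -np.sum(fused * np.log(student + 1e-12)) / len(student)
```

In a training loop, `teacher_feats_list` would hold the mean-teacher networks' features for the current batch (no gradient), and the loss would backpropagate through the student features only.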
Related papers
- MSVQ: Self-Supervised Learning with Multiple Sample Views and Queues [10.327408694770709]
We propose a simple new framework, Multiple Sample Views and Queues (MSVQ).
We jointly construct three soft labels on the fly using two complementary and symmetric approaches.
The student network mimics the similarity relationships between samples, which gives it a more flexible ability to identify false negative samples in the dataset.
arXiv Detail & Related papers (2023-05-09T12:05:14Z) - Active Teacher for Semi-Supervised Object Detection [80.10937030195228]
We propose a novel algorithm called Active Teacher for semi-supervised object detection (SSOD)
Active Teacher extends the teacher-student framework to an iterative version, where the label set is partially and gradually augmented by evaluating three key factors of unlabeled examples.
With this design, Active Teacher can maximize the effect of limited label information while improving the quality of pseudo-labels.
arXiv Detail & Related papers (2023-03-15T03:59:27Z) - Semi-Supervised Semantic Segmentation via Gentle Teaching Assistant [72.4512562104361]
We argue that the unlabeled data with pseudo labels can facilitate the learning of representative features in the feature extractor.
Motivated by this consideration, we propose a novel framework, Gentle Teaching Assistant (GTA-Seg), to disentangle the effects of pseudo labels on the feature extractor and the mask predictor.
arXiv Detail & Related papers (2023-01-18T07:11:24Z) - Compressing Deep Graph Neural Networks via Adversarial Knowledge
Distillation [41.00398052556643]
We propose a novel Adversarial Knowledge Distillation framework for graph models named GraphAKD.
The discriminator distinguishes between teacher knowledge and what the student inherits, while the student GNN works as a generator and aims to fool the discriminator.
The results imply that GraphAKD can precisely transfer knowledge from a complicated teacher GNN to a compact student GNN.
arXiv Detail & Related papers (2022-05-24T00:04:43Z) - Representation Consolidation for Training Expert Students [54.90754502493968]
We show that a multi-head, multi-task distillation method is sufficient to consolidate representations from task-specific teacher(s) and improve downstream performance.
Our method can also combine the representational knowledge of multiple teachers trained on one or multiple domains into a single model.
arXiv Detail & Related papers (2021-07-16T17:58:18Z) - Knowledge Distillation By Sparse Representation Matching [107.87219371697063]
We propose Sparse Representation Matching (SRM) to transfer intermediate knowledge from one Convolutional Neural Network (CNN) to another by utilizing sparse representations.
We formulate SRM as a neural processing block, which can be efficiently optimized using gradient descent and integrated into any CNN in a plug-and-play manner.
Our experiments demonstrate that SRM is robust to architectural differences between the teacher and student networks, and outperforms other KD techniques across several datasets.
arXiv Detail & Related papers (2021-03-31T11:47:47Z) - Teacher-Student Asynchronous Learning with Multi-Source Consistency for
Facial Landmark Detection [15.796415030063802]
We propose a teacher-student asynchronous learning (TSAL) framework based on the multi-source supervision signal consistency criterion.
Experiments on the 300W, AFLW, and 300VW benchmarks show that the TSAL framework achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-12T03:23:30Z) - Semi-supervised Learning with a Teacher-student Network for Generalized
Attribute Prediction [7.462336024223667]
This paper presents a study on semi-supervised learning to solve the visual attribute prediction problem.
Our method achieves competitive performance on various benchmarks for fashion attribute prediction.
arXiv Detail & Related papers (2020-07-14T02:06:24Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.