CNTN: Cyclic Noise-tolerant Network for Gait Recognition
- URL: http://arxiv.org/abs/2210.06910v1
- Date: Thu, 13 Oct 2022 11:23:58 GMT
- Title: CNTN: Cyclic Noise-tolerant Network for Gait Recognition
- Authors: Weichen Yu, Hongyuan Yu, Yan Huang, Chunshui Cao, Liang Wang
- Abstract summary: Gait recognition aims to identify individuals by recognizing their walking patterns.
Most previous gait recognition methods degrade significantly due to two memorization effects, namely appearance memorization and label noise memorization.
Noisy gait recognition is studied for the first time, and a cyclic noise-tolerant network (CNTN) with a cyclic training algorithm is proposed.
- Score: 12.571029673961315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gait recognition aims to identify individuals by recognizing their walking patterns. However, we observe that most previous gait recognition methods degrade significantly due to two memorization effects, namely appearance memorization and label noise memorization. To address this problem, noisy gait recognition is studied for the first time, and a cyclic noise-tolerant network (CNTN) is proposed with a cyclic training algorithm that equips two parallel networks with explicitly different abilities: one forgetting network and one memorizing network. The overall model does not memorize a pattern unless both networks memorize it. Further, a more refined co-teaching constraint is imposed to help the model learn intrinsic patterns that are less influenced by memorization. Also, to address label noise memorization, an adaptive noise detection module is proposed to exclude samples that are likely to be noisy from updating the model. Experiments are conducted on the three most popular benchmarks, and CNTN achieves state-of-the-art performance. We also reconstruct two noisy gait recognition datasets, on which CNTN gains significant improvements (notably a 6% improvement in the clothing-change (CL) setting). CNTN is also compatible with any off-the-shelf backbone and improves it consistently.
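The abstract describes the two-network design only at a high level. As a rough illustration of the general co-teaching idea it builds on, the sketch below shows a PyTorch training step in which each of two parallel networks selects the small-loss (likely clean) samples in a batch and its peer is updated only on those, so a pattern influences the overall model only when both networks accept it. All names, the small-loss criterion, and the keep ratio are illustrative assumptions; CNTN's actual cyclic algorithm, forgetting/memorizing asymmetry, and adaptive noise detection module are not reproduced here.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.8):
    """One illustrative noise-tolerant update with two parallel networks.

    Each network ranks the batch by its own per-sample loss and keeps the
    keep_ratio fraction with the smallest losses (treated as likely clean);
    each network is then updated only on the samples its peer selected.
    This is a generic co-teaching sketch, not CNTN's published algorithm.
    """
    loss_a = F.cross_entropy(net_a(x), y, reduction="none")
    loss_b = F.cross_entropy(net_b(x), y, reduction="none")

    k = max(1, int(keep_ratio * y.numel()))
    trusted_by_a = torch.argsort(loss_a.detach())[:k]  # samples net_a trusts
    trusted_by_b = torch.argsort(loss_b.detach())[:k]  # samples net_b trusts

    # Cross-update: net_a learns from samples net_b trusts, and vice versa,
    # so a suspicious sample is memorized only if both networks accept it.
    opt_a.zero_grad()
    loss_a[trusted_by_b].mean().backward()
    opt_a.step()

    opt_b.zero_grad()
    loss_b[trusted_by_a].mean().backward()
    opt_b.step()
```

On top of a backbone like this, the paper's cyclic training additionally assigns the two networks asymmetric forgetting and memorizing roles and adds an adaptive noise detection module that removes high-noise-probability samples from the update; those parts are specific to CNTN and are only summarized in the abstract above.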
Related papers
- Establishment of Neural Networks Robust to Label Noise [0.0]
In this paper, we examine the fundamental concepts underlying related label-noise approaches.
A transition matrix estimator is created, and its effectiveness relative to the actual transition matrix is demonstrated.
However, because we were unable to tune the complex convolutional neural network model adequately, we could not conclusively demonstrate the effect of transition-matrix noise correction on robustness (a sketch of transition-matrix loss correction appears after this list).
arXiv Detail & Related papers (2022-11-28T13:07:23Z)
- Dual Clustering Co-teaching with Consistent Sample Mining for Unsupervised Person Re-Identification [13.65131691012468]
In unsupervised person Re-ID, the peer-teaching strategy, which leverages two networks to facilitate training, has proven effective in dealing with pseudo-label noise.
This paper proposes a novel Dual Clustering Co-teaching (DCCT) approach to handle this issue.
DCCT mainly exploits the features extracted by two networks to generate two sets of pseudo labels separately, by clustering with different parameters (see the dual-clustering sketch after this list).
arXiv Detail & Related papers (2022-10-07T06:04:04Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
- Noisy Concurrent Training for Efficient Learning under Label Noise [13.041607703862724]
Deep neural networks (DNNs) fail to learn effectively under label noise and have been shown to memorize random labels, which harms their performance.
We identify three major shortcomings of the standard training procedure: learning in isolation, the use of one-hot encoded labels as the sole source of supervision, and a lack of regularization to discourage memorization.
We propose Noisy Concurrent Training (NCT) which leverages collaborative learning to use the consensus between two models as an additional source of supervision.
arXiv Detail & Related papers (2020-09-17T14:22:17Z)
- Unpaired Learning of Deep Image Denoising [80.34135728841382]
This paper presents a two-stage scheme by incorporating self-supervised learning and knowledge distillation.
For self-supervised learning, we suggest a dilated blind-spot network (D-BSN) to learn denoising solely from real noisy images.
Experiments show that our unpaired learning method performs favorably on both synthetic noisy images and real-world noisy photographs.
arXiv Detail & Related papers (2020-08-31T16:22:40Z)
- Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
- Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond [97.25179345878443]
This paper proposes a novel lightweight module, the Attentive WaveBlock (AWB).
AWB can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels.
Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks.
arXiv Detail & Related papers (2020-06-11T15:40:40Z)
- Many-to-Many Voice Transformer Network [55.17770019619078]
This paper proposes a voice conversion (VC) method based on a sequence-to-sequence (S2S) learning framework.
It enables simultaneous conversion of the voice characteristics, pitch contour, and duration of input speech.
arXiv Detail & Related papers (2020-05-18T04:02:08Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
- Robust Speaker Recognition Using Speech Enhancement And Attention Model [37.33388614967888]
Instead of individually processing speech enhancement and speaker recognition, the two modules are integrated into one framework by a joint optimisation using deep neural networks.
To increase robustness against noise, a multi-stage attention mechanism is employed to highlight the speaker related features learned from context information in time and frequency domain.
The results show that, under most acoustic conditions in our experiments, the proposed approach using speech enhancement and multi-stage attention models outperforms two strong baselines that do not use them.
arXiv Detail & Related papers (2020-01-14T20:03:07Z)
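For the transition-matrix entry at the top of this list, a minimal sketch of forward loss correction may help: the model's predicted clean-class probabilities are mapped through a noise transition matrix T (row i, column j: probability that true class i is observed as label j) before the loss against the noisy labels is computed. This is a standard use of such estimators, not code from the cited paper, and the symmetric-noise matrix below is an assumed example.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy against noisy labels after pushing clean-class
    probabilities through the transition matrix T. Illustrative sketch."""
    clean_probs = torch.softmax(logits, dim=1)       # p(true class | x)
    noisy_probs = clean_probs @ T                    # p(observed label | x)
    log_probs = torch.log(noisy_probs.clamp_min(1e-12))
    return F.nll_loss(log_probs, noisy_labels)       # expects log-probabilities

# Assumed example: 10% symmetric label noise over 3 classes.
num_classes, eps = 3, 0.1
T = torch.full((num_classes, num_classes), eps / (num_classes - 1))
T.fill_diagonal_(1.0 - eps)
loss = forward_corrected_loss(torch.randn(8, num_classes),
                              torch.randint(num_classes, (8,)), T)
```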
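Similarly, for the Dual Clustering Co-teaching entry, the sketch below illustrates the dual-clustering ingredient: generating two sets of pseudo labels by clustering with different parameters. DCCT clusters features from two networks; here, as an assumed simplification, a single feature set is clustered twice with different DBSCAN radii, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dual_pseudo_labels(features, eps_a=0.5, eps_b=0.7, min_samples=4):
    """Cluster the same features with two DBSCAN radii, giving two sets of
    pseudo labels; label -1 marks unclustered outliers. Illustrative only."""
    labels_a = DBSCAN(eps=eps_a, min_samples=min_samples).fit_predict(features)
    labels_b = DBSCAN(eps=eps_b, min_samples=min_samples).fit_predict(features)
    return labels_a, labels_b

# Samples that both clusterings assign to some cluster can be treated as
# more reliable than samples either clustering rejects as an outlier.
features = np.random.rand(100, 128).astype(np.float32)
labels_a, labels_b = dual_pseudo_labels(features)
reliable = (labels_a != -1) & (labels_b != -1)
```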