IntraLoss: Further Margin via Gradient-Enhancing Term for Deep Face
Recognition
- URL: http://arxiv.org/abs/2107.03352v1
- Date: Wed, 7 Jul 2021 16:53:45 GMT
- Title: IntraLoss: Further Margin via Gradient-Enhancing Term for Deep Face
Recognition
- Authors: Chengzhi Jiang, Yanzhou Su, Wen Wang, Haiwei Bai, Haijun Liu, Jian
Cheng
- Abstract summary: Existing classification-based face recognition methods have achieved remarkable progress.
A poor feature distribution will wipe out the performance improvement brought about by the margin scheme.
In this paper, we propose the 'gradient-enhancing term' that concentrates on the distribution characteristics within the class.
- Score: 14.562043494026849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing classification-based face recognition methods have achieved
remarkable progress, introducing large margins into the hypersphere manifold to
learn discriminative facial representations. However, the feature distribution
is ignored. A poor feature distribution will wipe out the performance
improvement brought about by the margin scheme. Recent studies focus on the
unbalanced inter-class distribution and form equidistributed feature
representations by penalizing the angle between an identity and its nearest
neighbor. But the problem goes beyond that: we also observe anisotropy in the
intra-class distribution. In this paper, we propose the 'gradient-enhancing
term', which concentrates on the distribution characteristics within the class.
This method, named IntraLoss, explicitly performs gradient enhancement in the
anisotropic region so that the intra-class distribution continues to shrink,
resulting in an isotropic and more compact intra-class distribution and a
further margin between identities. Experimental results on LFW, YTF and CFP-FP
show that our method outperforms state-of-the-art methods through gradient
enhancement, demonstrating its superiority. In addition, our method has an
intuitive geometric interpretation and can easily be combined with existing
methods to solve previously ignored problems.
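The abstract does not give the exact form of the gradient-enhancing term, so the sketch below is only a minimal illustration of the general idea: a standard ArcFace-style angular-margin head augmented with a hypothetical intra-class penalty whose gradient grows with a sample's angle to its class prototype, so that samples in the sparse (anisotropic) region of a class are pulled inward more strongly. The class name, the hyperparameter `lam`, and the angle-proportional weighting are assumptions made for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceWithIntraTerm(nn.Module):
    """ArcFace-style angular-margin loss plus an illustrative intra-class
    'gradient-enhancing' penalty. The penalty below is a hypothetical
    stand-in for the paper's term (its exact formula is not given in this
    abstract): it up-weights samples whose angle to their class prototype
    is large, enhancing gradients in the sparse (anisotropic) region."""

    def __init__(self, feat_dim, num_classes, s=64.0, m=0.5, lam=0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m, self.lam = s, m, lam

    def forward(self, feats, labels):
        # Cosine similarity between L2-normalized features and prototypes.
        feats = F.normalize(feats, dim=1)
        w = F.normalize(self.weight, dim=1)
        cos = feats @ w.t()                               # (B, C)
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))

        # Standard additive angular margin on the target logit (ArcFace).
        onehot = F.one_hot(labels, w.size(0)).float()
        logits = self.s * torch.cos(theta + self.m * onehot)
        margin_loss = F.cross_entropy(logits, labels)

        # Hypothetical gradient-enhancing term: penalize the target angle
        # with a weight proportional to the angle itself, so far-away
        # (anisotropic-region) samples receive larger gradients and the
        # class shrinks toward an isotropic, compact distribution.
        theta_y = (theta * onehot).sum(dim=1)
        intra_term = (theta_y.detach() * theta_y).mean()
        return margin_loss + self.lam * intra_term
```

Because the extra term only reweights per-sample gradients on the target angle, a term of this shape can be bolted onto any margin-based softmax loss, which is consistent with the paper's claim that the method combines easily with existing schemes.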
Related papers
- A Learning Paradigm for Interpretable Gradients [9.074325843851726]
We present a novel training approach to improve the quality of gradients for interpretability.
We find that the resulting gradient is qualitatively less noisy and quantitatively improves the interpretability properties of different networks.
arXiv Detail & Related papers (2024-04-23T13:32:29Z)
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z)
- Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation [51.66997548477913]
We propose a novel feature-level consistency learning framework named Density-Descending Feature Perturbation (DDFP).
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore.
The proposed DDFP outperforms other feature-level perturbation designs and shows state-of-the-art performance on both the Pascal VOC and Cityscapes datasets.
arXiv Detail & Related papers (2024-03-11T06:59:05Z)
- Clip21: Error Feedback for Gradient Clipping [8.979288425347702]
We design Clip21 -- the first provably effective and practically useful error feedback mechanism for distributed methods with gradient clipping.
Our method converges faster in practice than competing methods.
arXiv Detail & Related papers (2023-05-30T10:41:42Z)
- Generalized Inter-class Loss for Gait Recognition [11.15855312510806]
Gait recognition is a unique biometric technique that can be performed non-cooperatively at a long distance.
Previous gait works focus more on minimizing intra-class variance while ignoring the significance of constraining inter-class variance.
We propose a generalized inter-class loss that resolves inter-class variance from both the sample-level and the class-level feature distribution.
arXiv Detail & Related papers (2022-10-13T06:44:53Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Real-centric Consistency Learning for Deepfake Detection [8.313889744011933]
We tackle the deepfake detection problem through learning the invariant representations of both classes.
We propose a novel forgery-semantics-based pairing strategy to mine latent generation-related features.
At the feature level, based on the centers of natural faces in the representation space, we design a hard positive mining and synthesizing method to simulate potential marginal features.
arXiv Detail & Related papers (2022-05-15T07:01:28Z)
- Towards the Semantic Weak Generalization Problem in Generative Zero-Shot Learning: Ante-hoc and Post-hoc [89.68803484284408]
We present a simple and effective strategy to lower the previously unexplored factors that limit the performance ceiling of generative Zero-Shot Learning (ZSL).
We begin by formally defining semantic generalization, then look into approaches for reducing the semantic weak generalization problem.
In the ante-hoc phase, we augment the generator's semantic input, as well as relax the fitting target of the generator.
arXiv Detail & Related papers (2022-04-24T13:54:42Z)
- KappaFace: Adaptive Additive Angular Margin Loss for Deep Face Recognition [22.553018305072925]
We introduce a novel adaptive strategy, called KappaFace, to modulate the relative importance of each class based on class difficulty and imbalance (a hedged sketch of such per-class margin modulation appears after this list).
Experiments conducted on popular facial benchmarks demonstrate that our proposed method achieves superior performance to the state-of-the-art.
arXiv Detail & Related papers (2022-01-19T03:05:24Z)
- Manifold Learning Benefits GANs [59.30818650649828]
We improve Generative Adversarial Networks by incorporating a manifold learning step into the discriminator.
In our design, the manifold learning and coding steps are intertwined with layers of the discriminator.
We show substantial improvements over different recent state-of-the-art baselines.
arXiv Detail & Related papers (2021-12-23T14:59:05Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin-preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
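As noted in the KappaFace entry above, here is a minimal, hedged sketch of per-class adaptive additive margins. KappaFace's actual modulation (its name points to a von Mises-Fisher concentration parameter) is not reproduced from this summary; the rule below simply encodes the stated intuition that harder and rarer classes should receive larger margins. The function name, blending weights, and constants are illustrative assumptions.

```python
import torch

def adaptive_margins(class_counts, class_difficulty, m_base=0.5, t=0.3):
    """Hypothetical per-class additive angular margins in the spirit of
    KappaFace: harder and rarer classes get a larger margin.

    class_counts: (C,) long tensor of training samples per class.
    class_difficulty: (C,) float tensor in [0, 1], e.g. a running
        estimate of 1 - mean target cosine for each class.
    """
    counts = class_counts.float()
    # Imbalance score: rare classes -> close to 1, frequent -> close to 0.
    imbalance = 1.0 - counts / counts.max()
    # Blend difficulty and imbalance, then center so the average margin
    # stays near m_base while hard/rare classes get a boost.
    w = 0.5 * (class_difficulty + imbalance)
    return m_base * (1.0 + t * (w - w.mean()))

# The resulting (C,) margin vector would replace the fixed scalar margin m
# in an ArcFace-style head, e.g. m = margins[labels] in the forward pass.
```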