Deep Boosting Multi-Modal Ensemble Face Recognition with Sample-Level
Weighting
- URL: http://arxiv.org/abs/2308.09234v1
- Date: Fri, 18 Aug 2023 01:44:54 GMT
- Title: Deep Boosting Multi-Modal Ensemble Face Recognition with Sample-Level
Weighting
- Authors: Sahar Rahimi Malakshan, Mohammad Saeed Ebrahimi Saadabadi, Nima
Najafzadeh, Nasser M. Nasrabadi
- Abstract summary: Deep convolutional neural networks have achieved remarkable success in face recognition (FR).
The current training benchmarks exhibit an imbalanced quality distribution.
This poses issues for generalization on hard samples since they are underrepresented during training.
Inspired by the well-known AdaBoost, we propose a sample-level weighting approach to incorporate the importance of different samples into the FR loss.
- Score: 11.39204323420108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have achieved remarkable success in face
recognition (FR), partly due to the abundant data availability. However, the
current training benchmarks exhibit an imbalanced quality distribution; most
images are of high quality. This poses issues for generalization on hard
samples since they are underrepresented during training. In this work, we
employ the multi-model boosting technique to deal with this issue. Inspired by
the well-known AdaBoost, we propose a sample-level weighting approach to
incorporate the importance of different samples into the FR loss. Individual
models of the proposed framework are experts at distinct levels of sample
hardness. Therefore, the combination of models leads to a robust feature
extractor without losing discriminability on the easy samples. Also, to
incorporate sample hardness into the training criterion, we analytically show
the effect of sample mining on the important aspects of current angular margin
loss functions, i.e., margin and scale. The proposed method shows
superior performance in comparison with the state-of-the-art algorithms in
extensive experiments on the CFP-FP, LFW, CPLFW, CALFW, AgeDB, TinyFace, IJB-B,
and IJB-C evaluation datasets.
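The abstract describes the method only at a high level; the following is a minimal sketch of the idea, assuming an ArcFace-style angular margin loss and an AdaBoost-flavored reweighting rule. The function names, the exponential update, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: sample-level weights injected into an ArcFace-style
# angular margin loss, with an AdaBoost-like weight update between rounds.
import torch
import torch.nn.functional as F

def weighted_arcface_loss(embeddings, class_centers, labels, sample_weights,
                          margin=0.5, scale=64.0):
    """Angular margin loss where each sample's contribution is scaled by an
    importance weight (hard samples get larger weights)."""
    # Cosine similarity between L2-normalized embeddings and class centers.
    cos = F.linear(F.normalize(embeddings), F.normalize(class_centers))
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    # Add the angular margin only to the target-class logit.
    target = F.one_hot(labels, num_classes=cos.size(1)).bool()
    logits = torch.where(target, torch.cos(theta + margin), cos) * scale
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (sample_weights * per_sample).sum() / sample_weights.sum(), per_sample

def update_weights(sample_weights, per_sample_loss, lr=1.0):
    """AdaBoost-flavored reweighting: raise the weight of high-loss samples."""
    w = sample_weights * torch.exp(lr * (per_sample_loss - per_sample_loss.mean()))
    return w / w.sum() * len(w)  # renormalize so the mean weight stays 1
```

In a boosting loop, each successive expert model would be trained with weights updated from the previous round's detached per-sample losses, so later experts concentrate on harder samples while earlier experts remain strong on the easy ones.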
Related papers
- Stochastic Sampling for Contrastive Views and Hard Negative Samples in Graph-based Collaborative Filtering [28.886714896469737]
Graph-based collaborative filtering (CF) has emerged as a promising approach in recommendation systems.
Despite its achievements, graph-based CF models face challenges due to data sparsity and negative sampling.
We propose a novel sampling for i) COntrastive views and ii) hard NEgative samples (SCONE) to overcome these issues (a hard-negative sampling sketch appears after this list).
arXiv Detail & Related papers (2024-05-01T02:27:59Z)
- Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization [165.98557106089777]
A key challenge is to enhance the capabilities of large language models (LLMs) amid a looming shortage of high-quality training data.
Our study starts from an empirical strategy for the light continual training of LLMs using their original pre-training data sets.
We then formalize this strategy into a principled framework of Instance-Reweighted Distributionally Robust Optimization (see the reweighting sketch after this list).
arXiv Detail & Related papers (2024-02-22T04:10:57Z)
- A Lightweight Parallel Framework for Blind Image Quality Assessment [7.9562077122537875]
We propose a lightweight parallel framework (LPF) for blind image quality assessment (BIQA).
First, we extract visual features using a pre-trained feature extraction network. We then construct a simple yet effective feature embedding network (FEN) to transform the visual features.
We present two novel self-supervised subtasks: a sample-level category prediction task and a batch-level quality comparison task (the latter is sketched after this list).
arXiv Detail & Related papers (2024-02-19T10:56:58Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes (a mixing sketch appears after this list).
We demonstrate the effectiveness of the proposed framework through extensive experiments on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- A Quality Aware Sample-to-Sample Comparison for Face Recognition [13.96448286983864]
This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace).
Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets (a quality-weighting sketch appears after this list).
arXiv Detail & Related papers (2023-06-06T20:28:04Z)
- OTFace: Hard Samples Guided Optimal Transport Loss for Deep Face Representation [31.220594076407444]
Face representation in the wild is extremely hard due to large-scale face variations.
This paper proposes the hard samples guided optimal transport (OT) loss for deep face representation, OTFace (a minimal Sinkhorn OT sketch appears after this list).
arXiv Detail & Related papers (2022-03-28T02:57:04Z)
- Jo-SRC: A Contrastive Approach for Combating Noisy Labels [58.867237220886885]
We propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency).
Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution (a consistency-scoring sketch appears after this list).
arXiv Detail & Related papers (2021-03-24T07:26:07Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales (an object-pyramid sketch appears after this list).
arXiv Detail & Related papers (2020-07-18T09:48:29Z)
- Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models with little computational overhead in training (a quantization sketch appears after this list).
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability (a triplet-loss sketch appears after this list).
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
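The sketches referenced in the list above follow. Each is a minimal illustration reconstructed from the one-line summaries alone, so names, signatures, and hyperparameters are assumptions rather than the papers' actual code. For SCONE, the sketch is only a generic score-weighted hard-negative sampler for a collaborative filtering model, not the paper's sampling scheme:

```python
# Hedged sketch: hard negatives are drawn from random candidates with
# probability proportional to the model's (tempered) predicted score.
import torch

def sample_hard_negatives(user_emb, item_emb, pos_items,
                          num_candidates=64, temperature=1.0):
    """user_emb: (B, D); item_emb: (N, D); pos_items: (B,) positive item ids.
    Returns one sampled hard-negative item id per user."""
    cand = torch.randint(0, item_emb.size(0), (user_emb.size(0), num_candidates))
    scores = (user_emb.unsqueeze(1) * item_emb[cand]).sum(-1)         # (B, C)
    # Never sample the user's known positive as a negative.
    scores = scores.masked_fill(cand == pos_items.unsqueeze(1), float("-inf"))
    probs = torch.softmax(scores / temperature, dim=-1)
    idx = torch.multinomial(probs, 1)                                 # (B, 1)
    return cand.gather(1, idx).squeeze(1)                             # (B,)
```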
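For the Instance-Reweighted Distributionally Robust Optimization of "Take the Bull by the Horns", the KL-regularized DRO objective has a well-known closed form in which per-example weights are a tempered softmax of the losses; the sketch shows that form, with the temperature as an assumed knob:

```python
# Hedged sketch: batch losses reweighted toward the hardest instances.
import torch

def instance_reweighted_dro_loss(per_example_loss, temperature=1.0):
    """Softmax weights concentrate on high-loss examples; as the temperature
    grows, this recovers the plain mean loss."""
    weights = torch.softmax(per_example_loss.detach() / temperature, dim=0)
    return (weights * per_example_loss).sum()
```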
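For "Tackling Diverse Minorities", the summary says synthetic samples come from mixing minority and majority samples; a convex mix biased toward the minority side, as below, captures the flavor (the paper's actual iterative mixing schedule is not reproduced):

```python
# Hedged sketch: synthetic minority samples as biased convex mixes.
import torch

def mix_minority_majority(x_min, x_maj, alpha=0.75):
    """x_min: (N, D) minority samples; x_maj: (M, D) majority samples.
    Coefficients are drawn in [alpha, 1] so synthetics stay near the minority."""
    lam = alpha + (1 - alpha) * torch.rand(x_min.size(0), 1)
    idx = torch.randint(0, x_maj.size(0), (x_min.size(0),))
    return lam * x_min + (1 - lam) * x_maj[idx]
```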
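For QAFace, the summary says attention goes to recognizable low-quality samples. The sketch uses the embedding norm as a quality proxy (a common proxy in FR, assumed here) and upweights low-norm samples while gating out hopeless ones; the thresholds are made up:

```python
# Hedged sketch: quality-aware sample weights from feature norms.
import torch

def quality_aware_weights(embeddings, low=10.0, high=30.0):
    """Upweight low-quality-but-recognizable samples (mid/low norms); samples
    below `low` are treated as unrecognizable and get no extra weight."""
    norms = embeddings.norm(dim=1)
    recognizable = (norms > low).float()
    hardness = torch.clamp((high - norms) / (high - low), 0.0, 1.0)
    return 1.0 + recognizable * hardness  # weights in [1, 2]
```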
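For OTFace, only the optimal transport ingredient is sketched: an entropy-regularized OT (Sinkhorn) cost between two sets of features with uniform marginals. How OTFace builds its hard-sample-guided loss around this cost is not reproduced:

```python
# Hedged sketch: entropic OT cost between two feature sets via Sinkhorn.
import torch

def sinkhorn_cost(x, y, eps=0.1, iters=50):
    """x: (n, d), y: (m, d); returns the entropy-regularized transport cost."""
    cost = torch.cdist(x, y) ** 2                    # squared-L2 ground cost
    k = torch.exp(-cost / eps)
    u = torch.full((x.size(0),), 1.0 / x.size(0))    # uniform marginals
    v = torch.full((y.size(0),), 1.0 / y.size(0))
    a, b = u.clone(), v.clone()
    for _ in range(iters):                           # Sinkhorn fixed point
        a = u / (k @ b)
        b = v / (k.t() @ a)
    plan = a.unsqueeze(1) * k * b.unsqueeze(0)       # transport plan
    return (plan * cost).sum()
```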
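For Jo-SRC, the summary states that predictions from two views estimate a sample's likelihood of being clean; a Jensen-Shannon agreement score, as below, matches that description (selection thresholds and the regularization terms are omitted):

```python
# Hedged sketch: per-sample clean-likelihood from two-view agreement.
import math
import torch
import torch.nn.functional as F

def js_divergence(p, q):
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.clamp_min(1e-8).log() - b.clamp_min(1e-8).log())).sum(-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def clean_likelihood(logits_view1, logits_view2):
    """Returns a score in [0, 1]; high agreement suggests a clean label."""
    p = F.softmax(logits_view1, dim=-1)
    q = F.softmax(logits_view2, dim=-1)
    return 1.0 - js_divergence(p, q) / math.log(2.0)  # JS is bounded by ln 2
```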
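For MPSR, the object pyramid is the concrete part of the summary: a cropped positive instance is resized to several scales so rare object sizes appear during training. The scale set below is an assumption:

```python
# Hedged sketch: build an object pyramid from one positive crop.
import torch
import torch.nn.functional as F

def object_pyramid(crop, scales=(32, 64, 128, 256)):
    """crop: (C, H, W) tensor of one positive object; returns one square
    resize per scale."""
    batch = crop.unsqueeze(0)  # interpolate expects (N, C, H, W)
    return [F.interpolate(batch, size=(s, s), mode="bilinear",
                          align_corners=False).squeeze(0) for s in scales]
```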
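For Feature Quantization, the summary says true and fake samples are embedded into a shared discrete space; the standard vector-quantization forward pass with a straight-through gradient, as below, illustrates the mechanism (the paper's codebook maintenance is simplified away):

```python
# Hedged sketch: snap discriminator features to the nearest codebook entry.
import torch

def quantize(features, codebook):
    """features: (B, D); codebook: (K, D). Discrete in the forward pass,
    differentiable w.r.t. `features` in the backward pass."""
    nearest = codebook[torch.cdist(features, codebook).argmin(dim=1)]  # (B, D)
    # Straight-through estimator: gradients bypass the argmin.
    return features + (nearest - features).detach()
```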
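Finally, for the relation GAN, the triplet loss itself is standard; the sketch shows the generic formulation (how anchors, positives, and negatives are paired by the relation discriminator is not reproduced):

```python
# Hedged sketch: generic margin-based triplet loss.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward the positive and push it from the negative."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```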
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.