MixFaceNets: Extremely Efficient Face Recognition Networks
- URL: http://arxiv.org/abs/2107.13046v1
- Date: Tue, 27 Jul 2021 19:10:27 GMT
- Title: MixFaceNets: Extremely Efficient Face Recognition Networks
- Authors: Fadi Boutros, Naser Damer, Meiling Fang, Florian Kirchbuchner and
Arjan Kuijper
- Abstract summary: We present a set of extremely efficient and high-throughput models for accurate face verification, MixFaceNets.
Experimental evaluations on Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and the IARPA Janus Benchmarks IJB-B and IJB-C have shown the effectiveness of our MixFaceNets.
With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve results comparable to the top-ranked models.
- Score: 6.704751710867745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a set of extremely efficient and high-throughput
models for accurate face verification, MixFaceNets, which are inspired by Mixed
Depthwise Convolutional Kernels. Extensive experimental evaluations on Labeled
Faces in the Wild (LFW), AgeDB, MegaFace, and the IARPA Janus Benchmarks IJB-B
and IJB-C datasets have shown the effectiveness of our MixFaceNets for
applications requiring extremely low computational complexity. Under the same
level of computational complexity (< 500M FLOPs), our MixFaceNets outperform
MobileFaceNets on all the evaluated datasets, achieving 99.60% accuracy on LFW,
97.05% accuracy on AgeDB-30, 93.60% TAR (at FAR=1e-6) on MegaFace, 90.94% TAR
(at FAR=1e-4) on IJB-B, and 93.08% TAR (at FAR=1e-4) on IJB-C. With
computational complexity between 500M and 1G FLOPs, our MixFaceNets achieve
results comparable to the top-ranked models while using significantly fewer
FLOPs and less computational overhead, which proves the practical value of our
proposed MixFaceNets. All training code, pre-trained models, and training logs
are available at https://github.com/fdbtrs/mixfacenets.
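Since the architecture builds on Mixed Depthwise Convolutional Kernels (MixConv), a minimal sketch of such a block may help. This is an illustrative PyTorch re-implementation of the general MixConv idea, not the authors' exact code; the channel split and kernel sizes used here are assumptions.

```python
import torch
import torch.nn as nn

class MixConv(nn.Module):
    """Mixed depthwise convolution: input channels are split into groups,
    and each group is processed with a different depthwise kernel size."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Split channels as evenly as possible across kernel sizes
        # (the exact split used by MixFaceNets is an assumption here).
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c, bias=False)
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

# Example: a 64-channel feature map through a mixed 3x3/5x5/7x7 block.
if __name__ == "__main__":
    block = MixConv(64)
    out = block(torch.randn(1, 64, 56, 56))
    print(out.shape)  # torch.Size([1, 64, 56, 56])
```

Mixing several kernel sizes in one depthwise layer captures multi-scale features at essentially the same FLOPs as a single-kernel depthwise convolution, which is why it suits these low-complexity face models.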
Related papers
- Can we learn better with hard samples? [0.0]
A variant of the traditional mini-batch algorithm is proposed that trains the network with a focus on mini-batches incurring high loss.
We show that the proposed method generalizes in 26.47% fewer epochs than the traditional mini-batch method with EfficientNet-B4 on STL-10.
arXiv Detail & Related papers (2023-04-07T05:45:26Z)
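The hard-sample idea above is essentially loss-based mini-batch prioritization. Below is a minimal PyTorch sketch under assumed details; the summary does not specify the paper's exact selection rule, so the "keep the top-k highest-loss samples" heuristic here is an assumption.

```python
import torch
import torch.nn.functional as F

def train_on_hard_samples(model, loader, optimizer, keep_ratio=0.5, epochs=1):
    """Train with a focus on hard samples: within each mini-batch, keep only
    the fraction of samples with the largest loss and back-propagate on those.
    (The selection rule is an assumption; the paper's variant may differ.)"""
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            logits = model(x)
            per_sample = F.cross_entropy(logits, y, reduction="none")
            k = max(1, int(keep_ratio * per_sample.numel()))
            hard_loss = per_sample.topk(k).values.mean()  # hardest samples only
            optimizer.zero_grad()
            hard_loss.backward()
            optimizer.step()
```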
- Training Deep Boltzmann Networks with Sparse Ising Machines [5.048818298702389]
We show a new application domain for probabilistic bit (p-bit) based Ising machines by training deep generative AI models with them.
Using sparse, asynchronous, and massively parallel Ising machines, we train deep Boltzmann networks in a hybrid probabilistic-classical computing setup.
arXiv Detail & Related papers (2023-03-19T18:10:15Z)
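For context, a p-bit is a stochastic unit that takes the value ±1 with a probability set by its local field; a commonly used update rule is m_i = sgn(tanh(β I_i) − r) with r drawn uniformly from (−1, 1). Below is a toy NumPy sketch of asynchronous p-bit sampling on a sparse coupling matrix; the sizes, couplings, and parameters are made up for illustration and are not from the paper.

```python
import numpy as np

def pbit_sample(J, h, beta=1.0, sweeps=100, seed=None):
    """Asynchronous p-bit sampling of an Ising model: each p-bit flips
    stochastically according to its local field, the basic operation that
    sparse Ising-machine hardware parallelizes."""
    rng = np.random.default_rng(seed)
    n = len(h)
    m = rng.choice([-1.0, 1.0], size=n)
    for _ in range(sweeps):
        for i in rng.permutation(n):      # asynchronous, random order
            I = J[i] @ m + h[i]           # local field of p-bit i
            m[i] = np.sign(np.tanh(beta * I) - rng.uniform(-1, 1))
    return m

# Toy example: 8 spins with random sparse symmetric couplings.
rng = np.random.default_rng(0)
J = rng.normal(size=(8, 8)) * (rng.random((8, 8)) < 0.3)
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
print(pbit_sample(J, np.zeros(8), beta=0.5, seed=1))
```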
- Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings: blur, noise, low resolution, JPEG compression artifacts, and their combination (full degradation).
arXiv Detail & Related papers (2022-06-08T06:34:24Z)
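The five degradation settings can be illustrated with a simple synthesis pipeline. This OpenCV sketch shows how an HQ face might be turned into an LQ input; the parameter choices (blur sigma, noise level, downscale factor, JPEG quality) are assumptions, not the benchmark's actual recipe.

```python
import cv2
import numpy as np

def degrade(hq, mode="full"):
    """Apply a benchmark-style degradation to an HQ face image (uint8 BGR).
    mode is one of: "blur", "noise", "low_res", "jpeg", "full".
    Parameter values below are illustrative assumptions."""
    img = hq.copy()
    if mode in ("blur", "full"):
        img = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    if mode in ("noise", "full"):
        noise = np.random.normal(0, 10, img.shape)
        img = np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    if mode in ("low_res", "full"):
        h, w = img.shape[:2]
        img = cv2.resize(img, (w // 4, h // 4))
        img = cv2.resize(img, (w, h))  # upsample back to the original size
    if mode in ("jpeg", "full"):
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 20])
        img = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    return img
```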
- WebFace260M: A Benchmark for Million-Scale Deep Face Recognition [89.39080252029386]
We contribute a new million-scale recognition benchmark containing uncurated 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M).
A distributed framework is developed to train face recognition models efficiently without compromising performance.
The proposed benchmark shows enormous potential on standard, masked, and unbiased face recognition scenarios.
arXiv Detail & Related papers (2022-04-21T14:56:53Z)
- Greedy Network Enlarging [53.319011626986004]
We propose a greedy network enlarging method based on the reallocation of computations.
By modifying the computations at different stages step by step, the enlarged network is equipped with an optimal allocation and utilization of MACs.
Applying our method to GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies.
arXiv Detail & Related papers (2021-07-31T08:36:30Z)
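The greedy enlarging procedure can be sketched as a budgeted search: at each step, try enlarging one stage and keep the change that most improves accuracy while staying under the MACs budget. A schematic Python sketch follows; count_macs and train_and_eval are hypothetical callbacks, and the loop structure is an assumption about the method rather than the authors' exact algorithm.

```python
def greedy_enlarge(config, mac_budget, candidate_steps,
                   count_macs, train_and_eval):
    """Greedily reallocate computation: repeatedly apply the single stage-wise
    enlargement (e.g. a wider or deeper stage) that best improves accuracy
    while total MACs stay within budget. `count_macs` and `train_and_eval`
    are user-supplied callbacks (hypothetical placeholders here)."""
    best_acc = train_and_eval(config)
    improved = True
    while improved:
        improved = False
        for step in candidate_steps:          # e.g. "widen stage 3 by 25%"
            trial = step(config)              # returns an enlarged copy
            if count_macs(trial) > mac_budget:
                continue                      # over budget, skip this move
            acc = train_and_eval(trial)
            if acc > best_acc:
                config, best_acc, improved = trial, acc, True
    return config, best_acc
```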
- FedFace: Collaborative Learning of Face Recognition Model [66.84737075622421]
FedFace is a framework for collaborative learning of face recognition models.
It learns an accurate and generalizable face recognition model while the face images stored at each client are shared neither with other clients nor with the central host.
Our code and pre-trained models will be publicly available.
arXiv Detail & Related papers (2021-04-07T09:25:32Z)
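The collaborative setup above resembles federated averaging: each client updates a local copy of the model on its private faces, and only weights travel to the server. Here is a minimal FedAvg-style sketch in PyTorch; this is the generic federated scheme, not FedFace's exact algorithm, which adds face-recognition-specific components.

```python
import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, local_steps=1, lr=0.01):
    """One round of federated averaging: clients train locally on private
    data, and the server averages the returned weights. Face images never
    leave the clients; only model parameters are exchanged."""
    client_states = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)   # client's private copy
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_steps):
            for x, y in loader:
                loss = F.cross_entropy(local(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        client_states.append(local.state_dict())
    # Server-side aggregation: element-wise mean of client weights.
    avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```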
- WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face Recognition [79.65728162193584]
We contribute a new million-scale face benchmark containing noisy 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M).
We reduce the relative failure rate by 40% on the challenging IJB-C set and rank 3rd among 430 entries on NIST-FRVT.
Even 10% of the data (WebFace4M) shows superior performance compared with public training sets.
arXiv Detail & Related papers (2021-03-06T11:12:43Z)
- EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning [82.54669314604097]
EagleEye is a simple yet efficient evaluation component based on adaptive batch normalization.
It unveils a strong correlation between different pruned structures and their final settled accuracy.
The module is also general enough to be plugged into and improve some existing pruning algorithms.
arXiv Detail & Related papers (2020-07-06T01:32:31Z)
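EagleEye's key trick is adaptive batch normalization: before ranking a pruned candidate, re-estimate its BN running statistics on a small amount of training data instead of trusting the stale ones inherited from the full model. A minimal PyTorch sketch of that recalibration step follows; the surrounding pruning loop is omitted, and the batch count and eval_fn interface are assumptions.

```python
import torch

@torch.no_grad()
def adaptive_bn_eval(candidate, calib_loader, eval_fn, num_batches=50):
    """Recalibrate BN running stats on a few training batches, then score
    the pruned candidate. Fresh statistics make this quick score correlate
    much better with the sub-net's final accuracy."""
    for m in candidate.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()          # forget the stale statistics
    candidate.train()                        # train mode updates BN stats
    for i, (x, _) in enumerate(calib_loader):
        if i >= num_batches:
            break
        candidate(x)                         # forward passes refresh BN stats
    candidate.eval()
    return eval_fn(candidate)                # e.g. top-1 accuracy on a val split
```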
- TResNet: High Performance GPU-Dedicated Architecture [6.654949459658242]
Many deep learning models developed in recent years reach higher ImageNet accuracy than ResNet50 with fewer or comparable FLOPs.
In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy, while retaining their GPU training and inference efficiency.
We introduce a new family of GPU-dedicated models, called TResNet, which achieve better accuracy and efficiency than previous ConvNets.
arXiv Detail & Related papers (2020-03-30T17:04:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.