EfficientFace: An Efficient Deep Network with Feature Enhancement for
Accurate Face Detection
- URL: http://arxiv.org/abs/2302.11816v1
- Date: Thu, 23 Feb 2023 06:59:45 GMT
- Title: EfficientFace: An Efficient Deep Network with Feature Enhancement for
Accurate Face Detection
- Authors: Guangtao Wang, Jun Li, Zhijian Wu, Jianhua Xu, Jifeng Shen and Wankou
Yang
- Abstract summary: Current lightweight CNN-based face detectors, which trade accuracy for efficiency, struggle with insufficient feature representation.
We design an efficient deep face detector termed EfficientFace in this study, which contains three modules for feature enhancement.
We have evaluated EfficientFace on four public benchmarks and experimental results demonstrate the appealing performance of our method.
- Score: 20.779512288834315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep convolutional neural networks (CNNs) have
significantly advanced face detection. In particular, lightweight CNN-based
architectures have achieved great success because their low-complexity
structure facilitates real-time detection. However, current lightweight
CNN-based face detectors, which trade accuracy for efficiency, struggle with
insufficient feature representation, faces with unbalanced aspect ratios, and
occlusion. Consequently, their performance lags far behind that of deep,
heavyweight detectors. To achieve efficient face detection without sacrificing
accuracy, we design an efficient deep face detector termed EfficientFace,
which contains three modules for feature enhancement. First, we design a novel
cross-scale feature fusion strategy that facilitates bottom-up information
propagation and strengthens the fusion of low-level and high-level features;
this also helps localize faces and enhances the descriptive power of face
features. Second, we introduce a Receptive Field Enhancement module to handle
faces with various aspect ratios. Third, we add an Attention Mechanism module
to improve the representation of occluded faces. We have evaluated
EfficientFace on four public benchmarks, and the experimental results
demonstrate the appealing performance of our method. In particular, our model
achieves 95.1% (Easy), 94.0% (Medium) and 90.1% (Hard) on the WIDER Face
validation set, which is competitive with heavyweight models while requiring
only 1/15 of the computational cost of the state-of-the-art MogFace detector.
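To make the feature-enhancement ideas above concrete, the snippet below is a
minimal PyTorch sketch of a bottom-up cross-scale fusion step followed by a
lightweight channel-attention gate. The layer names, channel widths, and
fusion rule are illustrative assumptions only, not the EfficientFace paper's
exact module designs, which are specified in the full text.

```python
# Illustrative sketch only: bottom-up cross-scale fusion plus a simple
# channel-attention gate. Names and hyperparameters are assumptions and
# are not taken from the EfficientFace paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottomUpFusion(nn.Module):
    """Propagate fine, low-level detail upward into coarser pyramid levels."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Stride-2 convolution downsamples the finer map to the coarser scale.
        self.down = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1)
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, pyramid):
        # pyramid: list of feature maps ordered fine -> coarse, equal channels.
        fused = [pyramid[0]]
        for coarse in pyramid[1:]:
            prev = self.down(fused[-1])
            # Guard against odd spatial sizes before element-wise addition.
            prev = F.interpolate(prev, size=coarse.shape[-2:], mode="nearest")
            fused.append(self.smooth(coarse + prev))
        return fused


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate; one simple way to re-weight channels."""

    def __init__(self, channels: int = 64, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool -> weights
        return x * w.unsqueeze(-1).unsqueeze(-1)  # broadcast over H and W


if __name__ == "__main__":
    feats = [torch.randn(1, 64, s, s) for s in (80, 40, 20)]  # fine -> coarse
    fused = BottomUpFusion(64)(feats)
    gated = [ChannelAttention(64)(f) for f in fused]
    print([tuple(f.shape) for f in gated])
```

Here, stride-2 convolutions carry fine spatial detail upward through the
pyramid and the squeeze-and-excitation style gate re-weights channels; both
are simple stand-ins for the cross-scale fusion and attention modules the
abstract refers to.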
Related papers
- Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model [15.61920157541529]
We propose a novel Deepfake detection approach that adapts foundation models using the rich information encoded inside them.
Inspired by recent advances in parameter-efficient fine-tuning, we propose a novel side-network-based decoder.
Our approach is highly effective at identifying unseen Deepfake samples, achieving notable performance improvements.
arXiv Detail & Related papers (2024-04-08T14:58:52Z) - DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake
Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z) - Neural Point-based Volumetric Avatar: Surface-guided Neural Points for
Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts a neural point representation and a neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures, while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z) - EfficientSRFace: An Efficient Network with Super-Resolution Enhancement
for Accurate Face Detection [18.977044046941813]
In face detection, low-resolution faces, such as the many small faces in a crowded group scene, are common in dense face prediction tasks.
We develop an efficient detector termed EfficientSRFace by introducing a feature-level super-resolution reconstruction network.
This module plays an auxiliary role during training and can be removed at inference without increasing the inference time.
arXiv Detail & Related papers (2023-06-04T06:49:44Z) - Pushing the Limits of Asynchronous Graph-based Object Detection with
Event Cameras [62.70541164894224]
We introduce several architecture choices which allow us to scale the depth and complexity of such models while maintaining low computation.
Our method runs 3.7 times faster than a dense graph neural network, taking only 8.4 ms per forward pass.
arXiv Detail & Related papers (2022-11-22T15:14:20Z) - EResFD: Rediscovery of the Effectiveness of Standard Convolution for
Lightweight Face Detection [13.357235715178584]
We re-examine the effectiveness of the standard convolutional block as a lightweight backbone architecture for face detection.
We show that heavily channel-pruned standard convolution layers can achieve better accuracy and inference speed.
Our proposed detector EResFD obtained 80.4% mAP on the WIDER FACE Hard subset while taking only 37.7 ms per VGA image for inference on CPU.
arXiv Detail & Related papers (2022-04-04T02:30:43Z) - FasterPose: A Faster Simple Baseline for Human Pose Estimation [65.8413964785972]
We propose FasterPose, a design paradigm for a cost-effective network with low-resolution (LR) representation for efficient pose estimation.
We study the training behavior of FasterPose and formulate a novel regressive cross-entropy (RCE) loss function to accelerate convergence.
Compared with the previously dominant pose estimation network, our method reduces FLOPs by 58% while improving accuracy by 1.3%.
arXiv Detail & Related papers (2021-07-07T13:39:08Z) - Sample and Computation Redistribution for Efficient Face Detection [137.19388513633484]
Training data sampling and computation distribution strategies are the keys to efficient and accurate face detection.
SCRFD-34GF outperforms the best competitor, TinaFace, by 3.86% (AP on the hard set) while being more than 3x faster on GPUs with VGA-resolution images.
arXiv Detail & Related papers (2021-05-10T23:51:14Z) - An Efficient Multitask Neural Network for Face Alignment, Head Pose
Estimation and Face Tracking [9.39854778804018]
We propose ATPN, an efficient multitask network for face alignment, face tracking and head pose estimation.
ATPN achieves improved performance compared to previous state-of-the-art methods while having fewer parameters and FLOPs.
arXiv Detail & Related papers (2021-03-13T04:41:15Z) - Improving DeepFake Detection Using Dynamic Face Augmentation [0.8793721044482612]
Most publicly available DeepFake detection datasets have limited variations.
Deep neural networks tend to overfit to facial features instead of learning to detect the manipulation features of DeepFake content.
We introduce Face-Cutout, a data augmentation method for training Convolutional Neural Networks (CNN) to improve DeepFake detection.
arXiv Detail & Related papers (2021-02-18T20:25:45Z) - The FaceChannel: A Fast & Furious Deep Neural Network for Facial
Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate that our model achieves performance comparable to, if not better than, the current state of the art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)