Wide Aspect Ratio Matching for Robust Face Detection
- URL: http://arxiv.org/abs/2103.05993v1
- Date: Wed, 10 Mar 2021 11:05:38 GMT
- Title: Wide Aspect Ratio Matching for Robust Face Detection
- Authors: Shi Luo, Xiongfei Li, Xiaoli Zhang
- Abstract summary: The max IoUs between anchors and extreme-aspect-ratio faces remain lower than the fixed sampling threshold.
We propose a Wide Aspect Ratio Matching (WARM) strategy to collect more representative positive anchors from ground-truth faces.
We also present a novel feature enhancement module, named the Receptive Field Diversity (RFD) module, to provide diverse receptive fields corresponding to different aspect ratios.
- Score: 11.593495085674345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, anchor-based methods have achieved great progress in face
detection. Once the anchor design and the anchor matching strategy are
determined, plenty of positive anchors will be sampled. However, faces with
extreme aspect ratios always fail to be sampled under the standard anchor
matching strategy; in fact, the max IoUs between anchors and
extreme-aspect-ratio faces remain lower than the fixed sampling threshold. In
this paper, we first explore, in theory, the factors that affect the max IoU of
each face. Then, an anchor matching simulation is performed to evaluate the
sampling range of face aspect ratios. Besides, we propose a Wide Aspect Ratio
Matching (WARM) strategy to collect more representative positive anchors from
ground-truth faces across a wide range of aspect ratios. Finally, we present a
novel feature enhancement module, named the Receptive Field Diversity (RFD)
module, to provide diverse receptive fields corresponding to different aspect
ratios. Extensive experiments show that our method helps detectors better
capture extreme-aspect-ratio faces and achieve promising detection performance
on challenging face detection benchmarks, including the WIDER FACE and FDDB
datasets.
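The central observation above is that, with the usual square-ish anchors and a fixed IoU sampling threshold, a ground-truth face with an extreme aspect ratio can have a max IoU below the threshold and therefore receives no positive anchors at all. The minimal sketch below reproduces that effect numerically; the anchor scales, the 1:1 anchor aspect ratio, and the 0.35 threshold are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the paper's code): shows why a face with an extreme
# aspect ratio can have a max IoU below a fixed sampling threshold even when
# an anchor is perfectly centered on it. Anchor scales, the 1:1 anchor shape,
# and the 0.35 threshold are assumptions chosen for illustration.
import numpy as np

def iou(box, anchors):
    """IoU between one [x1, y1, x2, y2] box and an (N, 4) array of anchors."""
    x1 = np.maximum(box[0], anchors[:, 0])
    y1 = np.maximum(box[1], anchors[:, 1])
    x2 = np.minimum(box[2], anchors[:, 2])
    y2 = np.minimum(box[3], anchors[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_anchor = (anchors[:, 2] - anchors[:, 0]) * (anchors[:, 3] - anchors[:, 1])
    return inter / (area_box + area_anchor - inter)

# Square anchors of several scales, all centered on the face center.
cx, cy = 64.0, 64.0
anchors = np.array([[cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2]
                    for s in (16, 32, 64, 128)])

# A near-square face (aspect ratio ~1) vs. an extreme-aspect-ratio face (1:8).
normal_face = np.array([cx - 30, cy - 32, cx + 30, cy + 32])   # 60 x 64
extreme_face = np.array([cx - 8, cy - 64, cx + 8, cy + 64])    # 16 x 128

threshold = 0.35  # assumed fixed sampling threshold
print("normal  max IoU:", iou(normal_face, anchors).max())   # ~0.94, above threshold
print("extreme max IoU:", iou(extreme_face, anchors).max())  # ~0.20, below threshold
```

Under these assumptions, the near-square face reaches a max IoU well above 0.35, while the 1:8-aspect-ratio face never exceeds roughly 0.2, so the standard matching rule would assign it no positive anchors; this is the situation the WARM strategy is designed to address.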
Related papers
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces in them are mostly generated with GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- MB-RACS: Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network [65.1004435124796]
We propose a Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network (MB-RACS) framework.
Our experiments demonstrate that the proposed MB-RACS method surpasses current leading methods.
arXiv Detail & Related papers (2024-01-19T04:40:20Z) - UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face
Recognition [35.66000285310775]
We propose a unified threshold integrated sample-to-sample based loss (USS loss).
USS loss features an explicit unified threshold for distinguishing positive from negative pairs.
We also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship.
arXiv Detail & Related papers (2023-11-04T23:00:40Z)
- COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection.
arXiv Detail & Related papers (2023-08-03T03:37:13Z)
- Adaptive Sparse Convolutional Networks with Global Context Enhancement for Faster Object Detection on Drone Images [26.51970603200391]
This paper investigates optimizing the detection head based on the sparse convolution.
However, sparse convolution suffers from inadequate integration of the contextual information of tiny objects.
We propose a novel global context-enhanced adaptive sparse convolutional network.
arXiv Detail & Related papers (2023-03-25T14:42:50Z)
- Adaptive Transformers for Robust Few-shot Cross-domain Face Anti-spoofing [71.06718651013965]
We present adaptive vision transformers (ViT) for robust cross-domain face anti-spoofing.
We adopt ViT as a backbone to exploit its strength to account for long-range dependencies among pixels.
Experiments on several benchmark datasets show that the proposed models achieve both robust and competitive performance.
arXiv Detail & Related papers (2022-03-23T03:37:44Z)
- Consistency Regularization for Deep Face Anti-Spoofing [69.70647782777051]
Face anti-spoofing (FAS) plays a crucial role in securing face recognition systems.
Motivated by this exciting observation, we conjecture that encouraging feature consistency of different views may be a promising way to boost FAS models.
We enhance both Embedding-level and Prediction-level Consistency Regularization (EPCR) in FAS.
arXiv Detail & Related papers (2021-11-24T08:03:48Z)
- Two-stream Convolutional Networks for Multi-frame Face Anti-spoofing [1.9890930069402575]
We propose an efficient two-stream model to capture the key differences between live and spoof faces.
We evaluate the proposed method on the SiW, Oulu-NPU, CASIA-MFSD, and Replay-Attack datasets.
arXiv Detail & Related papers (2021-08-09T13:35:30Z)
- ACFD: Asymmetric Cartoon Face Detector [72.60983975604145]
Our ACFD achieves 1st place on the detection track of the 2020 iCartoon Face Challenge under the constraints of a 200MB model size, 50ms per-image inference time, and no pretrained models.
arXiv Detail & Related papers (2020-07-02T05:57:34Z)