CRAFT: Contextual Re-Activation of Filters for Face Recognition Training
- URL: http://arxiv.org/abs/2312.00072v2
- Date: Tue, 5 Dec 2023 01:58:50 GMT
- Title: CRAFT: Contextual Re-Activation of Filters for Face Recognition Training
- Authors: Aman Bhatta, Domingo Mery, Haiyu Wu, Kevin W. Bowyer
- Abstract summary: We propose "CRAFT: Contextual Re-Activation of Filters for Face Recognition Training"
We show that CRAFT reduces the fraction of inactive filters from 44% to 32% on average and discovers filter patterns not found by standard training.
Compared to standard training without reactivation, CRAFT demonstrates enhanced model accuracy on standard face-recognition benchmark datasets.
- Score: 11.490358127866102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The first layer of a deep CNN backbone applies filters to an image to extract
the basic features available to later layers. During training, some filters may
go inactive, meaning all weights in the filter approach zero. An inactive
filter in the final model represents a missed opportunity to extract a useful
feature. This phenomenon is especially prevalent in specialized CNNs such as
those for face recognition (as opposed to, e.g., ImageNet). For example, in one of
the most widely used face recognition models (ArcFace), about half of the convolution
filters in the first layer are inactive. We propose a novel approach designed
and tested specifically for face recognition networks, known as "CRAFT:
Contextual Re-Activation of Filters for Face Recognition Training". CRAFT
identifies inactive filters during training and reinitializes them based on the
context of strong filters at that stage in training. We show that CRAFT reduces
the fraction of inactive filters from 44% to 32% on average and discovers filter
patterns not found by standard training. Compared to standard training without
reactivation, CRAFT demonstrates enhanced model accuracy on standard
face-recognition benchmark datasets including AgeDB-30, CPLFW, LFW, CALFW, and
CFP-FP, as well as on more challenging datasets like IJB-B and IJB-C.
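The abstract's core loop can be sketched roughly as follows. The exact detection threshold and reinitialization scheme CRAFT uses are not given here, so this is a minimal illustrative sketch, assuming inactivity is flagged by a small L2 norm and a reactivated filter is seeded as a noisy copy of a randomly chosen strong filter; the function name and parameters are hypothetical.

```python
import numpy as np

def reactivate_inactive_filters(weights, threshold=1e-3, noise_scale=0.1, rng=None):
    """Sketch of filter reactivation for a first conv layer.

    weights: array of shape (num_filters, channels, k, k).
    A filter whose L2 norm falls below `threshold` is treated as inactive and
    reinitialized as a perturbed copy of a randomly chosen strong filter
    (one stand-in for "context of strong filters"; CRAFT's actual rule may differ).
    Returns the updated weights and the indices of reactivated filters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.linalg.norm(weights.reshape(len(weights), -1), axis=1)
    inactive = np.where(norms < threshold)[0]
    strong = np.where(norms >= threshold)[0]
    out = weights.copy()
    for i in inactive:
        donor = rng.choice(strong)  # pick a strong filter to seed the dead one
        out[i] = weights[donor] + noise_scale * rng.standard_normal(weights[donor].shape)
    return out, inactive
```

In practice such a check would run periodically during training (e.g. between epochs), with the reinitialized filters then continuing to train normally.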
Related papers
- FaceFilterSense: A Filter-Resistant Face Recognition and Facial Attribute Analysis Framework [1.673834743879962]
Fun selfie filters have come into widespread mainstream use, affecting the functioning of facial biometric systems.
AR-based filters and filters that distort facial key points are currently in vogue and can make faces highly unrecognizable even to the naked eye.
To mitigate these limitations, we aim to perform a holistic impact analysis of the latest filters and propose a user recognition model for the filtered images.
arXiv Detail & Related papers (2024-04-12T07:04:56Z) - Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z) - IterativePFN: True Iterative Point Cloud Filtering [18.51768749680731]
A fundamental 3D vision task is the removal of noise, known as point cloud filtering or denoising.
We propose IterativePFN (iterative point cloud filtering network), which consists of multiple iterations that model the true iterative filtering process internally.
Our method is able to obtain better performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-04-04T04:47:44Z) - Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
arXiv Detail & Related papers (2021-06-02T19:15:34Z) - Data Agnostic Filter Gating for Efficient Deep Networks [72.4615632234314]
Current filter pruning methods mainly leverage feature maps to generate important scores for filters and prune those with smaller scores.
In this paper, we propose a data-agnostic filter pruning method that uses an auxiliary network, named the Dagger module, to induce pruning.
In addition, to help prune filters with certain FLOPs constraints, we leverage an explicit FLOPs-aware regularization to directly promote pruning filters toward target FLOPs.
arXiv Detail & Related papers (2020-10-28T15:26:40Z) - Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters [64.46270549587004]
Convolutional neural networks (CNNs) have been successfully used in a range of tasks.
CNNs are often viewed as "black boxes" and lack interpretability.
We propose a novel strategy to train interpretable CNNs by encouraging class-specific filters.
arXiv Detail & Related papers (2020-07-16T09:12:26Z) - Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation [86.91324735966766]
Filters are the key components in modern convolutional neural networks (CNNs).
In this paper, we introduce filter grafting to improve the representation capability of DNNs.
We develop a novel criterion to measure the information of filters and an adaptive weighting strategy to balance the grafted information among networks.
arXiv Detail & Related papers (2020-04-26T08:36:26Z) - Filter Grafting for Deep Neural Networks [71.39169475500324]
Filter grafting aims to improve the representation capability of Deep Neural Networks (DNNs).
We develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks.
For example, a grafted MobileNetV2 outperforms the non-grafted MobileNetV2 by about 7 percent on the CIFAR-100 dataset.
arXiv Detail & Related papers (2020-01-15T03:18:57Z)
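The filter-grafting entries above both rely on an entropy-based criterion for how much information a filter carries. The papers' exact criterion is not reproduced here; the following is a minimal sketch of one common histogram-based formulation, where a near-constant (low-information) filter scores near zero and a varied filter scores higher. The function name and bin count are illustrative choices.

```python
import numpy as np

def filter_entropy(filt, bins=10):
    """Histogram-based entropy of a filter's weight values.

    A filter whose weights are all nearly identical (e.g. collapsed toward
    zero) lands in a single histogram bin and scores ~0; a filter with varied
    weights spreads across bins and scores higher.
    """
    hist, _ = np.histogram(filt.ravel(), bins=bins)
    p = hist / hist.sum()          # empirical probability per bin
    p = p[p > 0]                   # drop empty bins (0*log 0 := 0)
    return -np.sum(p * np.log(p))  # Shannon entropy in nats
```

Under a grafting scheme, low-entropy filters in one network would be candidates to receive ("graft") weights from higher-entropy filters in a peer network, with an adaptive weighting balancing the two sources.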
This list is automatically generated from the titles and abstracts of the papers in this site.