FairGRAPE: Fairness-aware GRAdient Pruning mEthod for Face Attribute
Classification
- URL: http://arxiv.org/abs/2207.10888v1
- Date: Fri, 22 Jul 2022 05:44:03 GMT
- Title: FairGRAPE: Fairness-aware GRAdient Pruning mEthod for Face Attribute
Classification
- Authors: Xiaofeng Lin, Seungbae Kim, Jungseock Joo
- Abstract summary: We propose a novel pruning method, Fairness-aware GRAdient Pruning mEthod (FairGRAPE), that minimizes the disproportionate impacts of pruning on different sub-groups.
Our method calculates the per-group importance of each model weight and selects a subset of weights that maintains the relative between-group total importance during pruning.
Our method is substantially more effective in a setting with a high pruning rate (99%).
- Score: 4.909402570564468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing pruning techniques preserve deep neural networks' overall ability to
make correct predictions but may also amplify hidden biases during the
compression process. We propose a novel pruning method, Fairness-aware GRAdient
Pruning mEthod (FairGRAPE), that minimizes the disproportionate impacts of
pruning on different sub-groups. Our method calculates the per-group importance
of each model weight and selects a subset of weights that maintains the relative
between-group total importance during pruning. The proposed method then prunes
network edges with small importance values and repeats the procedure by
updating importance values. We demonstrate the effectiveness of our method on
four different datasets, FairFace, UTKFace, CelebA, and ImageNet, for the task
of face attribute classification, where our method reduces the disparity in
performance degradation by up to 90% compared to the state-of-the-art pruning
algorithms. Our method is substantially more effective in a setting with a high
pruning rate (99%). The code and dataset used in the experiments are available
at https://github.com/Bernardo1998/FairGRAPE
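The abstract outlines an iterative loop: score every weight's importance separately for each demographic group, keep the weights whose combined scores preserve each group's share of the total importance, prune the rest, and repeat with refreshed scores. The sketch below is only one reading of that description, not the released implementation (see the repository linked above): per-group importance is approximated here by squared gradients, the selection is a toy greedy loop over a flattened parameter vector, and names such as `group_loaders`, `gradient_importance`, and `select_weights_to_keep` are illustrative.

```python
import torch

def gradient_importance(model, loader, loss_fn, device="cpu"):
    """Squared-gradient importance of every parameter for one group's data
    (an illustrative proxy for the per-group importance in the abstract)."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach() ** 2
    return scores

def select_weights_to_keep(group_scores, keep_ratio):
    """Greedy selection that keeps each group's share of total importance
    roughly constant: repeatedly serve the group whose kept share lags its
    target share the most, taking that group's best remaining weight."""
    groups = list(group_scores)
    # (num_weights, num_groups) matrix of importance scores
    flat = torch.stack(
        [torch.cat([s.flatten() for s in group_scores[g].values()]) for g in groups],
        dim=1,
    )
    total = flat.sum(dim=0)                # total importance per group
    target_share = total / total.sum()     # the shares we want to preserve
    n_keep = int(keep_ratio * flat.shape[0])
    kept = torch.zeros(flat.shape[0], dtype=torch.bool)
    kept_importance = torch.zeros(len(groups))
    order = flat.argsort(dim=0, descending=True)   # per-group weight rankings
    cursor = [0] * len(groups)
    for _ in range(n_keep):   # toy-scale loop, not meant for millions of weights
        share = kept_importance / kept_importance.sum().clamp(min=1e-12)
        g = int((target_share - share).argmax())   # most under-served group
        while kept[order[cursor[g], g]]:           # skip weights already kept
            cursor[g] += 1
        idx = order[cursor[g], g]
        kept[idx] = True
        kept_importance += flat[idx]
    return kept   # boolean keep-mask over the flattened parameter vector

# Hypothetical usage: `group_loaders` maps each sensitive group to a DataLoader.
# scores = {g: gradient_importance(model, dl, loss_fn) for g, dl in group_loaders.items()}
# keep_mask = select_weights_to_keep(scores, keep_ratio=0.01)   # 99% pruning
```

In practice the boolean vector would be mapped back to per-layer masks and, as the abstract notes, the scoring and selection would be repeated with updated importance values while the network is fine-tuned between rounds.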
Related papers
- Distilling the Knowledge in Data Pruning [4.720247265804016]
We explore the application of data pruning while incorporating knowledge distillation (KD) when training on a pruned subset.
We demonstrate significant improvements across datasets, pruning methods, and all pruning fractions.
We make an intriguing observation: when using lower pruning fractions, larger teachers lead to accuracy degradation, while surprisingly, employing teachers with a smaller capacity than the student's may improve results.
arXiv Detail & Related papers (2024-03-12T17:44:45Z) - Towards Higher Ranks via Adversarial Weight Pruning [34.602137305496335]
We propose a Rank-based PruninG (RPG) method to maintain the ranks of sparse weights in an adversarial manner.
RPG outperforms the state of the art by 1.13% top-1 accuracy on ImageNet with ResNet-50 at 98% sparsity.
arXiv Detail & Related papers (2023-11-29T10:04:39Z) - FairWASP: Fast and Optimal Fair Wasserstein Pre-processing [9.627848184502783]
We present FairWASP, a novel pre-processing approach to reduce disparities in classification datasets without modifying the original data.
We show theoretically that integer weights are optimal, which means our method can be equivalently understood as duplicating or eliminating samples.
Our work is based on reformulating the pre-processing task as a large-scale mixed-integer program (MIP), for which we propose a highly efficient algorithm based on the cutting plane method.
arXiv Detail & Related papers (2023-10-31T19:36:00Z) - Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases [62.54519787811138]
We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues.
We rank images within their classes based on spuriosity, proxied via deep neural features of an interpretable network.
Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained.
arXiv Detail & Related papers (2022-12-05T23:15:43Z) - Interpretations Steered Network Pruning via Amortized Inferred Saliency
Maps [85.49020931411825]
Compressing Convolutional Neural Networks (CNNs) is crucial for deploying these models on edge devices with limited resources.
We propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process.
We tackle this challenge by introducing a selector model that predicts real-time smooth saliency masks for pruned models.
arXiv Detail & Related papers (2022-09-07T01:12:11Z) - AdaPruner: Adaptive Channel Pruning and Effective Weights Inheritance [9.3421559369389]
We propose a pruning framework that adaptively determines the number of channels in each layer as well as the weight inheritance criteria for the sub-network.
AdaPruner obtains the pruned network quickly, accurately, and efficiently.
On ImageNet, we reduce the FLOPs of MobileNetV2 by 32.8% with only a 0.62% decrease in top-1 accuracy, exceeding all previous state-of-the-art channel pruning methods.
arXiv Detail & Related papers (2021-09-14T01:52:05Z) - Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology that does not require pre-training a dense model.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z) - Effective Sparsification of Neural Networks with Global Sparsity
Constraint [45.640862235500165]
Weight pruning is an effective technique to reduce the model size and inference time for deep neural networks in real-world deployments.
Existing methods rely on either manual tuning or handcrafted rules to find appropriate pruning rates individually for each layer.
We propose an effective network sparsification method called probabilistic masking (ProbMask), which solves a natural sparsification formulation under a global sparsity constraint.
arXiv Detail & Related papers (2021-05-03T14:13:42Z) - Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
arXiv Detail & Related papers (2020-12-16T20:16:28Z) - Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot [55.37967301483917]
Conventional wisdom holds that pruning methods exploit information from the training data to find good subnetworks.
In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods.
We propose a series of simple data-independent prune ratios for each layer and randomly prune each layer accordingly to obtain a subnetwork (a minimal sketch of this random layer-wise baseline appears after this list).
arXiv Detail & Related papers (2020-09-22T17:36:17Z) - Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
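As a counterpoint to the importance-based criteria in the entries above, the "Sanity-Checking Pruning Methods: Random Tickets" paper argues that purely data-independent, random layer-wise pruning is a surprisingly strong baseline. Below is a minimal sketch of that baseline in PyTorch, assuming a uniform per-layer prune ratio; the uniform schedule and the helper name `random_layerwise_prune` are placeholders, not the ratio rule proposed in the paper.

```python
import torch
import torch.nn as nn

def random_layerwise_prune(model: nn.Module, prune_ratio: float = 0.9, seed: int = 0):
    """Zero out a random `prune_ratio` fraction of the weights in every
    Linear/Conv2d layer, without looking at any training data."""
    gen = torch.Generator().manual_seed(seed)
    masks = {}
    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.data
            n_prune = int(prune_ratio * w.numel())
            drop = torch.randperm(w.numel(), generator=gen)[:n_prune]
            mask = torch.ones(w.numel())
            mask[drop] = 0.0
            mask = mask.view_as(w)
            w.mul_(mask)         # apply the mask once ...
            masks[name] = mask   # ... and keep it to re-apply after each update
    return masks

# Usage: prune once, then re-apply the masks after every optimizer step so the
# pruned connections stay at zero while the subnetwork is trained.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
masks = random_layerwise_prune(model, prune_ratio=0.9)
```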