A Fair Loss Function for Network Pruning
- URL: http://arxiv.org/abs/2211.10285v1
- Date: Fri, 18 Nov 2022 15:17:28 GMT
- Title: A Fair Loss Function for Network Pruning
- Authors: Robbie Meyer and Alexander Wong
- Abstract summary: We introduce the performance weighted loss function, a simple modified cross-entropy loss function that can be used to limit the introduction of biases during pruning.
Experiments using biased classifiers for facial classification and skin-lesion classification tasks demonstrate that the proposed method is a simple and effective tool.
- Score: 93.0013343535411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model pruning can enable the deployment of neural networks in environments
with resource constraints. While pruning may have a small effect on the overall
performance of the model, it can exacerbate existing biases in the model such
that subsets of samples see significantly degraded performance. In this paper,
we introduce the performance weighted loss function, a simple modified
cross-entropy loss function that can be used to limit the introduction of
biases during pruning. Experiments using biased classifiers for facial
classification and skin-lesion classification tasks demonstrate that the
proposed method is a simple and effective tool that can enable existing pruning
methods to be used in fairness-sensitive contexts.
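The abstract does not spell out the loss itself, so the sketch below is only one plausible reading: the standard cross-entropy is reweighted per sample, with each weight derived from the original (unpruned) reference model's performance on that sample, so that fine-tuning after pruning does not concentrate degradation on particular subsets. The function name performance_weighted_ce, the choice of the reference model's true-class probability as the per-sample performance signal, and the mean normalization are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal PyTorch sketch of a performance-weighted cross-entropy loss.
# ASSUMPTION: each sample's cross-entropy is scaled by a weight computed
# from the unpruned reference model's probability for the true class;
# the exact weighting scheme used in the paper may differ.
import torch
import torch.nn.functional as F

def performance_weighted_ce(logits, targets, reference_logits, eps=1e-8):
    """Cross-entropy with per-sample weights derived from a reference model.

    logits:            (B, C) outputs of the model being pruned/fine-tuned
    targets:           (B,)   integer class labels
    reference_logits:  (B, C) outputs of the original, unpruned model
    """
    per_sample_ce = F.cross_entropy(logits, targets, reduction="none")
    with torch.no_grad():
        # Reference model's probability assigned to the correct class,
        # used here as a stand-in for "per-sample performance".
        ref_prob = F.softmax(reference_logits, dim=1)
        weights = ref_prob.gather(1, targets.unsqueeze(1)).squeeze(1)
        weights = weights / (weights.mean() + eps)  # keep mean weight at 1
    return (weights * per_sample_ce).mean()
```

Normalizing the weights to mean 1 keeps the overall loss scale (and hence the effective learning rate) comparable to plain cross-entropy, so only the relative emphasis across samples changes.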
Related papers
- Enhancing Fine-Grained Visual Recognition in the Low-Data Regime Through Feature Magnitude Regularization [23.78498670529746]
We introduce a regularization technique to ensure that the magnitudes of the extracted features are evenly distributed.
Despite its apparent simplicity, our approach has demonstrated significant performance improvements across various fine-grained visual recognition datasets.
arXiv Detail & Related papers (2024-09-03T07:32:46Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel smooth regularization for fine-tuning that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression [12.44857030152608]
Deep Neural Networks are prone to learning and relying on spurious correlations in the training data, which, for high-risk applications, can have fatal consequences.
Various approaches to suppress model reliance on harmful features have been proposed that can be applied post-hoc without additional training.
We propose a reactive approach conditioned on model-derived knowledge and eXplainable Artificial Intelligence (XAI) insights.
arXiv Detail & Related papers (2024-04-15T09:16:49Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- A Vulnerability of Attribution Methods Using Pre-Softmax Scores [2.3020018305241337]
We discuss a vulnerability involving a category of attribution methods used to provide explanations for the outputs of convolutional neural networks working as classifiers.
It is known that this type of network is vulnerable to adversarial attacks, in which imperceptible perturbations of the input may alter the outputs of the model.
arXiv Detail & Related papers (2023-07-06T21:38:13Z)
- Theoretical Characterization of How Neural Network Pruning Affects its Generalization [131.1347309639727]
This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization.
It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero.
More surprisingly, the generalization bound gets better as the pruning fraction gets larger.
arXiv Detail & Related papers (2023-01-01T03:10:45Z)
- Imbalanced Nodes Classification for Graph Neural Networks Based on Valuable Sample Mining [9.156427521259195]
A new loss function, FD-Loss, is constructed following the traditional algorithm-level approach to the imbalance problem.
Our loss function can effectively solve the sample node imbalance problem and improve the classification accuracy by 4% compared to existing methods in the node classification task.
arXiv Detail & Related papers (2022-09-18T09:22:32Z)
- Fine-grained Retrieval Prompt Tuning [149.9071858259279]
Fine-grained Retrieval Prompt Tuning steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompt and feature adaptation.
Our FRPT with fewer learnable parameters achieves the state-of-the-art performance on three widely-used fine-grained datasets.
arXiv Detail & Related papers (2022-07-29T04:10:04Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- Role of Orthogonality Constraints in Improving Properties of Deep Networks for Image Classification [8.756814963313804]
We propose an Orthogonal Sphere (OS) regularizer that emerges from physics-based latent-representations under simplifying assumptions.
Under further simplifying assumptions, the OS constraint can be written in closed-form as a simple orthonormality term and be used along with the cross-entropy loss function.
We demonstrate the effectiveness of the proposed OS regularization by providing quantitative and qualitative results on four benchmark datasets.
arXiv Detail & Related papers (2020-09-22T18:46:05Z)
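The entry above notes that, under simplifying assumptions, the OS constraint reduces to a simple orthonormality term used alongside cross-entropy. The sketch below illustrates such a combination in PyTorch; whether the penalty is applied to a weight matrix or to feature representations, and the coefficient lam, are assumptions here rather than the paper's exact formulation.

```python
# Hypothetical sketch: cross-entropy plus a generic orthonormality penalty
# of the form ||M M^T - I||_F^2, as suggested by the entry above.
import torch
import torch.nn.functional as F

def orthonormality_penalty(m):
    """Squared Frobenius distance of m's row Gram matrix from the identity.

    m: (k, d) matrix whose rows we would like to be orthonormal; this could
    be a layer's weight matrix or a batch of latent representations,
    depending on how the OS constraint is instantiated.
    """
    gram = m @ m.t()
    eye = torch.eye(m.size(0), device=m.device, dtype=m.dtype)
    return ((gram - eye) ** 2).sum()

def os_regularized_loss(logits, targets, m, lam=1e-3):
    # Standard cross-entropy plus the orthonormality term, weighted by a
    # hypothetical coefficient `lam`.
    return F.cross_entropy(logits, targets) + lam * orthonormality_penalty(m)
```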