Disparate Impact on Group Accuracy of Linearization for Private Inference
- URL: http://arxiv.org/abs/2402.03629v3
- Date: Tue, 20 Aug 2024 17:08:53 GMT
- Title: Disparate Impact on Group Accuracy of Linearization for Private Inference
- Authors: Saswat Das, Marco Romanelli, Ferdinando Fioretto
- Abstract summary: We show that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups.
We also show how a simple procedure altering the fine-tuning step for linearized models can serve as an effective mitigation strategy.
- Score: 48.27026603581436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the fine-tuning step for linearized models can serve as an effective mitigation strategy.
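The linearization the abstract refers to can be illustrated with a toy forward pass (an illustrative numpy sketch, not the authors' exact procedure): layers whose ReLU is removed compute an affine map, and if every activation is linearized the whole network collapses to a single linear transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, linearize_mask):
    """Forward pass where layers flagged in linearize_mask use the identity
    in place of ReLU (the linearization studied for private inference)."""
    h = x
    for W, lin in zip(weights, linearize_mask):
        z = h @ W
        h = z if lin else np.maximum(z, 0.0)  # linearized layers skip ReLU
    return h

# Hypothetical 3-layer network with random weights.
weights = [rng.normal(size=(8, 8)) for _ in range(3)]
x = rng.normal(size=(4, 8))

full = forward(x, weights, [False, False, False])    # all ReLUs kept
partial = forward(x, weights, [False, True, False])  # middle layer linearized

# With every activation linearized, the network collapses to one linear map.
all_linear = forward(x, weights, [True, True, True])
composed = x @ weights[0] @ weights[1] @ weights[2]
assert np.allclose(all_linear, composed)
```

The paper's finding is that the accuracy lost in moving from `full` toward `all_linear` is not spread evenly: it falls disproportionately on minority groups.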
Related papers
- A constrained optimization approach to improve robustness of neural networks [1.2338729811609355]
We present a novel nonlinear programming-based approach to fine-tune pre-trained neural networks to improve robustness against adversarial attacks while maintaining accuracy on clean data.
arXiv Detail & Related papers (2024-09-18T18:37:14Z) - A Universal Class of Sharpness-Aware Minimization Algorithms [57.29207151446387]
We introduce a new class of sharpness measures, leading to new sharpness-aware objective functions.
We prove that these measures are universally expressive, allowing any function of the training loss Hessian matrix to be represented by appropriate hyperparameters and determinants.
arXiv Detail & Related papers (2024-06-06T01:52:09Z) - Batch-oriented Element-wise Approximate Activation for Privacy-Preserving Neural Networks [5.039738753594332]
Fully Homomorphic Encryption (FHE) faces the challenge that homomorphic operations cannot be easily adapted to non-linear activation calculations.
Batch-oriented element-wise data packing and approximate activation are proposed, which train low-degree polynomials to approximate the non-linear activation function, ReLU.
Experiment results show that when ciphertext inference is performed on 4096 input images, compared with the current most efficient channel-wise method, the inference accuracy is improved by 1.65%, and the amortized inference time is reduced by 99.5%.
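The polynomial substitute for ReLU can be sketched in a few lines (illustrative only; the paper's training-time approximation is more involved): a least-squares fit of a low-degree polynomial, which an FHE scheme can evaluate using only additions and multiplications.

```python
import numpy as np

# Sample ReLU on an assumed input range [-4, 4].
xs = np.linspace(-4.0, 4.0, 401)
relu = np.maximum(xs, 0.0)

# Least-squares fit of a degree-2 polynomial to ReLU; homomorphic schemes
# can evaluate such polynomials because they need only add/multiply gates.
coeffs = np.polyfit(xs, relu, deg=2)
approx = np.polyval(coeffs, xs)

# Worst-case approximation error on the fitted range.
max_err = np.max(np.abs(approx - relu))
```

Raising the polynomial degree tightens `max_err` at the cost of more homomorphic multiplications, which is the accuracy/latency trade-off these methods tune.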
arXiv Detail & Related papers (2024-03-16T13:26:33Z) - Towards Practical Robustness Auditing for Linear Regression [8.9598796481325]
We investigate practical algorithms to find or disprove the existence of small subsets of a dataset which, when removed, reverse the sign of a coefficient in an ordinary least squares regression.
We show that these methods largely outperform the state of the art and provide a useful robustness check for regression problems in a few dimensions.
We make some headway on this challenge via a spectral algorithm using ideas drawn from recent innovations in algorithmic robust statistics.
arXiv Detail & Related papers (2023-07-30T20:47:36Z) - dugMatting: Decomposed-Uncertainty-Guided Matting [83.71273621169404]
We propose a decomposed-uncertainty-guided matting algorithm, which explores the explicitly decomposed uncertainties to efficiently and effectively improve the results.
The proposed matting framework relieves the requirement for users to determine the interaction areas by using simple and efficient labeling.
arXiv Detail & Related papers (2023-06-02T11:19:50Z) - Neural Networks Reduction via Lumping [0.0]
A large number of solutions have been published to reduce both the number of operations and the number of parameters involved in these models.
Most of these reduction techniques are heuristic methods and usually require at least one re-training step to recover accuracy.
We propose a pruning approach that reduces the number of neurons in a network without using any data or fine-tuning, while completely preserving the exact behaviour.
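The exact-preservation claim can be illustrated with a toy case of lumping duplicate neurons (a hypothetical sketch; the paper's lumping is more general than exact duplicates): a hidden neuron whose incoming weights duplicate another's can be removed by folding its outgoing weights into the original, with no data and no fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny network x -> ReLU(x @ W1) @ W2 where hidden neuron 4 is an exact
# duplicate of neuron 0 (identical incoming weights).
W1 = rng.normal(size=(5, 4))
W1 = np.hstack([W1, W1[:, [0]]])  # append duplicate of column 0
W2 = rng.normal(size=(5, 3))

def net(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2

# Lumping: drop the duplicate neuron and add its outgoing weights to the
# original's row -- behaviour is preserved exactly, no re-training needed.
W1_small = W1[:, :4]
W2_small = W2[:4].copy()
W2_small[0] += W2[4]

x = rng.normal(size=(8, 5))
assert np.allclose(net(x, W1, W2), net(x, W1_small, W2_small))
```

The merge is exact because the duplicated neuron's ReLU output is identical to the original's, so their outgoing contributions simply add.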
arXiv Detail & Related papers (2022-09-15T17:13:07Z) - Calibrating Segmentation Networks with Margin-based Label Smoothing [19.669173092632]
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
These losses could be viewed as approximations of a linear penalty imposing equality constraints on logit distances.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
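The inequality-constraint idea can be sketched as a hinge penalty on logit distances (an illustrative numpy sketch; the margin value is an assumed hyperparameter, not one from the paper).

```python
import numpy as np

def margin_penalty(logits, margin):
    """Hinge penalty on logit distances: only distances exceeding `margin`
    are penalized (an inequality constraint), unlike label smoothing,
    which pushes all distances toward zero (an equality constraint)."""
    d = logits.max(axis=-1, keepdims=True) - logits  # non-negative distances
    return np.maximum(d - margin, 0.0).sum(axis=-1)

logits = np.array([[5.0, 4.5, 1.0]])
# Distances are [0, 0.5, 4]; with margin 2 only the 4 is penalized: 4 - 2 = 2.
pen = margin_penalty(logits, margin=2.0)
```

Distances already inside the margin incur zero gradient, which is what makes the constraint controllable rather than forcing all logits together.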
arXiv Detail & Related papers (2022-09-09T20:21:03Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm [62.997667081978825]
The Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation-of-error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.