Regularization Can Help Mitigate Poisoning Attacks... with the Right
Hyperparameters
- URL: http://arxiv.org/abs/2105.10948v1
- Date: Sun, 23 May 2021 14:34:47 GMT
- Title: Regularization Can Help Mitigate Poisoning Attacks... with the Right
Hyperparameters
- Authors: Javier Carnerero-Cano, Luis Muñoz-González, Phillippa Spencer,
Emil C. Lupu
- Abstract summary: Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.
We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness.
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem.
- Score: 1.8570591025615453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning algorithms are vulnerable to poisoning attacks, where a
fraction of the training data is manipulated to degrade the algorithms'
performance. We show that current approaches, which typically assume that
regularization hyperparameters remain constant, lead to an overly pessimistic
view of the algorithms' robustness and of the impact of regularization. We
propose a novel optimal attack formulation that considers the effect of the
attack on the hyperparameters, modelling the attack as a \emph{minimax bilevel
optimization problem}. This allows us to formulate optimal attacks, select
hyperparameters and evaluate robustness under worst-case conditions. We apply
this formulation to logistic regression using $L_2$ regularization, empirically
show the limitations of previous strategies and evidence the benefits of using
$L_2$ regularization to dampen the effect of poisoning attacks.
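The abstract's claim can be illustrated with a toy sketch. The following is not the paper's optimal bilevel attack: the poisoning points are placed by a simple heuristic label-flip, and all names and values are hypothetical. It trains $L_2$-regularized and unregularized logistic regression on the same poisoned data and compares test accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
               rng.normal(+1.0, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Heuristic poisoning: a small fraction of wrongly labelled points placed
# far on class 0's side (the paper instead derives optimal points via
# minimax bilevel optimization).
X_poi = np.vstack([X, rng.normal(-3.0, 0.3, size=(20, 2))])
y_poi = np.concatenate([y, np.ones(20)])

def train_logreg(X, y, lam, iters=2000, lr=0.1):
    """Gradient descent on the L2-regularized logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * (X.T @ (p - y) / len(y) + lam * w)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == (y > 0.5))

# Held-out clean test set.
Xt = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
                rng.normal(+1.0, 1.0, size=(n, 2))])
yt = np.concatenate([np.zeros(n), np.ones(n)])

accs = {}
for lam in [0.0, 1.0]:
    w, b = train_logreg(X_poi, y_poi, lam)
    accs[lam] = accuracy(w, b, Xt, yt)
    print(f"lambda={lam}: test accuracy {accs[lam]:.3f}")
```

The regularization term `lam * w` bounds how far the attacker can pull the learned weights; the paper's point is that `lam` itself must be chosen under attack (via the bilevel formulation), since a value tuned on clean data understates the benefit.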
Related papers
- HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks [14.626176607206748]
We propose a parametric variation of the well-known fast minimum-norm attack algorithm.
We re-evaluate 12 robust models, showing that our attack finds smaller adversarial perturbations without requiring any additional tuning.
arXiv Detail & Related papers (2024-07-11T18:30:01Z) - Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially optimize a single-level objective directly w.r.t. the surrogate model.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z) - Hyperparameter Learning under Data Poisoning: Analysis of the Influence
of Regularization via Multiobjective Bilevel Optimization [3.3181276611945263]
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance.
Optimal attacks can be formulated as bilevel optimization problems and help to assess the robustness of learning algorithms in worst-case scenarios.
arXiv Detail & Related papers (2023-06-02T15:21:05Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this sparsity constraint with an additional $l_\infty$ bound on perturbation magnitudes.
We propose a homotopy algorithm to jointly handle the sparsity and perturbation-magnitude constraints in one framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - PDPGD: Primal-Dual Proximal Gradient Descent Adversarial Attack [92.94132883915876]
State-of-the-art deep neural networks are sensitive to small input perturbations.
Many defence methods have been proposed that attempt to improve robustness to adversarial noise.
However, evaluating adversarial robustness has proven to be extremely challenging.
arXiv Detail & Related papers (2021-06-03T01:45:48Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight
Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - Optimal Feature Manipulation Attacks Against Linear Regression [64.54500628124511]
In this paper, we investigate how to manipulate the coefficients obtained via linear regression by adding carefully designed poisoning data points to the dataset or modifying the original data points.
Given an energy budget, we first provide the closed-form solution for the optimal poisoning data point when the target is modifying one designated regression coefficient.
We then extend the analysis to the more challenging scenario where the attacker aims to change one particular regression coefficient while keeping the changes to the others as small as possible.
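The coefficient-manipulation effect can be sketched numerically. This is not the paper's closed-form optimal solution under an energy budget; it just shows, with hypothetical values, how a single high-leverage poisoning point shifts an OLS slope:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean 1-D regression data: y ≈ 2x + 0.5 with small noise.
n = 50
x = rng.normal(size=n)
y = 2.0 * x + 0.5 + 0.1 * rng.normal(size=n)

def ols(x, y):
    """Least-squares fit; returns [slope, intercept]."""
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

w_clean = ols(x, y)

# One poisoning point far from the bulk of the data with an adversarial
# response value; leverage grows with distance from the mean of x, so a
# single point can move the slope substantially.
xp, yp = 10.0, -20.0
w_poisoned = ols(np.append(x, xp), np.append(y, yp))

print("clean    [slope, intercept]:", w_clean)
print("poisoned [slope, intercept]:", w_poisoned)
```

The paper's contribution is the optimal version of this: for a fixed energy budget, where to place the point so that one designated coefficient moves as much as possible while the others move as little as possible.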
arXiv Detail & Related papers (2020-02-29T04:26:59Z) - Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on
Multiobjective Bilevel Optimisation [3.3181276611945263]
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance.
Optimal poisoning attacks, which can be formulated as bilevel problems, help to assess the robustness of learning algorithms in worst-case scenarios.
We show that this approach leads to an overly pessimistic view of the robustness of the algorithms.
arXiv Detail & Related papers (2020-02-28T19:46:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.