Faster Repeated Evasion Attacks in Tree Ensembles
- URL: http://arxiv.org/abs/2402.08586v1
- Date: Tue, 13 Feb 2024 16:44:02 GMT
- Title: Faster Repeated Evasion Attacks in Tree Ensembles
- Authors: Lorenzo Cascioli, Laurens Devos, Ondřej Kuželka, Jesse Davis
- Abstract summary: We exploit the fact that adversarial examples for tree ensembles tend to perturb a consistent but relatively small set of features.
We show that we can quickly identify this set of features and use this knowledge to speed up constructing adversarial examples.
- Score: 12.852916723600597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tree ensembles are one of the most widely used model classes. However, these
models are susceptible to adversarial examples, i.e., slightly perturbed
examples that elicit a misprediction. There has been significant research on
designing approaches to construct such examples for tree ensembles. But this is
a computationally challenging problem that often must be solved a large number
of times (e.g., for all examples in a training set). This is compounded by the
fact that current approaches attempt to find such examples from scratch. In
contrast, we exploit the fact that multiple similar problems are being solved.
Specifically, our approach exploits the insight that adversarial examples for
tree ensembles tend to perturb a consistent but relatively small set of
features. We show that we can quickly identify this set of features and use
this knowledge to speed up constructing adversarial examples.
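To make the idea concrete, the following Python sketch illustrates the reuse strategy described in the abstract. It is a minimal illustration under assumptions, not the authors' implementation: `attack`, `warmup`, and `freq_threshold` are hypothetical names, and the per-example attack routine (e.g., a MILP solver or heuristic search over the ensemble) is assumed to accept a restriction on which features it may perturb.

```python
import numpy as np

def repeated_attack(model, X, attack, warmup=50, freq_threshold=0.05):
    """Attack every row of X, reusing the feature set found during warm-up.

    Hypothetical interface: attack(model, x, allowed_features) returns an
    adversarial example (array like x) or None if the attack fails.
    """
    n, d = X.shape
    counts = np.zeros(d)
    adversarials = []

    # Warm-up phase: unrestricted attacks; count which features get perturbed.
    for i in range(min(warmup, n)):
        adv = attack(model, X[i], allowed_features=range(d))
        adversarials.append(adv)
        if adv is not None:
            counts += (adv != X[i])

    # Keep features perturbed in at least freq_threshold of successful attacks.
    n_success = max(1, sum(a is not None for a in adversarials))
    frequent = np.where(counts / n_success >= freq_threshold)[0]

    # Main phase: search only over the frequent features; fall back to the
    # full feature set when the restricted attack fails, so no example is lost.
    for i in range(min(warmup, n), n):
        adv = attack(model, X[i], allowed_features=frequent)
        if adv is None:
            adv = attack(model, X[i], allowed_features=range(d))
        adversarials.append(adv)
    return adversarials
```

The expected speedup comes from the restricted searches exploring a much smaller perturbation space, while the fallback to the full feature set preserves the attack's ability to handle examples that the frequent features alone cannot flip.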
Related papers
- Structured Prompting: Scaling In-Context Learning to 1,000 Examples [78.41281805608081]
We introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples.
Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism.
arXiv Detail & Related papers (2022-12-13T16:31:21Z) - Adversarial Example Detection in Deployed Tree Ensembles [25.204157642042627]
We present a novel approach to detect adversarial examples in tree ensembles.
Our approach works with any additive tree ensemble and does not require training a separate model.
We empirically show that our method is currently the best adversarial detection method for tree ensembles.
arXiv Detail & Related papers (2022-06-27T06:59:00Z) - Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations [70.05004034081377]
We first propose a novel method for generating composite adversarial examples.
Our method can find the optimal attack composition by utilizing component-wise projected gradient descent.
We then propose generalized adversarial training (GAT) to extend model robustness from the $\ell_p$-ball to composite semantic perturbations.
arXiv Detail & Related papers (2022-02-09T02:41:56Z) - A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are neither in high-frequency nor in low-frequency components, but are simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z) - Adversarial Robustness: What fools you makes you stronger [1.14219428942199]
We prove an exponential separation for the sample complexity between the PAC-learning model and a version of the Equivalence-Query-learning model.
We show that this separation has interesting implications for adversarial robustness.
We explore a vision of designing an adaptive defense that, in the presence of an attacker, computes a model that is provably robust.
arXiv Detail & Related papers (2021-02-10T15:00:24Z) - An Efficient Adversarial Attack for Tree Ensembles [91.05779257472675]
We study adversarial attacks on tree-based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs).
We show that our method can be thousands of times faster than the previous mixed-integer linear programming (MILP) based approach.
Our code is available at https://github.com/chong-z/tree-ensemble-attack.
arXiv Detail & Related papers (2020-10-22T10:59:49Z) - Robust Estimation of Tree Structured Ising Models [20.224160348675422]
We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities.
We first prove that this problem is unidentifiable; however, the unidentifiability is limited to a small equivalence class of trees formed by leaf nodes exchanging positions with their neighbors.
arXiv Detail & Related papers (2020-06-10T01:32:45Z) - More Bang for Your Buck: Natural Perturbation for Robust Question Answering [49.83269677507831]
We propose an alternative to the standard approach of constructing training sets of completely new examples.
Our approach involves first collecting a set of seed examples and then applying human-driven natural perturbations.
We find that when natural perturbations are moderately cheaper to create, it is more effective to train models using them.
arXiv Detail & Related papers (2020-04-09T23:12:39Z) - Verifying Tree Ensembles by Reasoning about Potential Instances [25.204157642042627]
We present a strategy that can prune part of the input space given the question asked to simplify the problem.
We then follow a divide and conquer approach that is incremental and can always return some answers.
The usefulness of our approach is shown on a diverse set of use cases.
arXiv Detail & Related papers (2020-01-31T15:31:23Z) - Defensive Few-shot Learning [77.82113573388133]
This paper investigates a new challenging problem called defensive few-shot learning.
It aims to learn a robust few-shot model against adversarial attacks.
The proposed framework can effectively make the existing few-shot models robust against adversarial attacks.
arXiv Detail & Related papers (2019-11-16T05:57:16Z)