Scalable Whitebox Attacks on Tree-based Models
- URL: http://arxiv.org/abs/2204.00103v1
- Date: Thu, 31 Mar 2022 21:36:20 GMT
- Title: Scalable Whitebox Attacks on Tree-based Models
- Authors: Giuseppe Castiglione, Gavin Ding, Masoud Hashemi, Christopher
Srinivasa, Ga Wu
- Abstract summary: This paper proposes a novel whitebox adversarial robustness testing approach for tree ensemble models.
By leveraging sampling and the log-derivative trick, the proposed approach can scale up to testing tasks that were previously unmanageable.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Adversarial robustness is one of the essential safety criteria for
guaranteeing the reliability of machine learning models. While various
adversarial robustness testing approaches have been introduced over the last
decade, most of them are incompatible with non-differentiable models such as
tree ensembles. Since tree ensembles are widely used in industry, this
reveals a crucial gap between adversarial robustness research and practical
applications. This paper proposes a novel whitebox adversarial robustness
testing approach for tree ensemble models. Concretely, the proposed approach
smooths the tree ensembles through temperature-controlled sigmoid functions,
which enables gradient descent-based adversarial attacks. By leveraging
sampling and the log-derivative trick, the proposed approach can scale up to
testing tasks that were previously unmanageable. We compare the approach
against both random perturbations and blackbox approaches on multiple public
datasets (and corresponding models). Our results show that the proposed method
can 1) successfully reveal the adversarial vulnerability of tree ensemble
models without imposing excessive computational cost on testing, and 2)
flexibly trade off search performance against time complexity to meet various
testing criteria.
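As a concrete illustration of the smoothing step described above, the sketch below relaxes each hard split "go left iff x[f] <= t" into a soft routing probability sigmoid((t - x[f]) / temperature) and runs plain gradient descent on the resulting differentiable score. This is a minimal sketch under assumed names and encodings (the dict-based trees, function names, and step sizes are illustrative), not the authors' implementation; it also enumerates every leaf, whereas the paper scales up by sampling leaves and estimating gradients with the log-derivative trick, grad E_p[f] = E_p[f * grad log p].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_tree_value_grad(x, node, temperature=0.1):
    """Return (value, gradient) of a sigmoid-smoothed decision tree at x.

    The hard split "go left iff x[f] <= t" is relaxed into the routing
    probability p_left = sigmoid((t - x[f]) / temperature); as the
    temperature approaches zero, the hard tree is recovered.
    """
    if node.get("leaf") is not None:           # leaf: constant output
        return node["leaf"], np.zeros_like(x)
    f, t = node["feature"], node["threshold"]
    p = sigmoid((t - x[f]) / temperature)
    v_l, g_l = smooth_tree_value_grad(x, node["left"], temperature)
    v_r, g_r = smooth_tree_value_grad(x, node["right"], temperature)
    value = p * v_l + (1.0 - p) * v_r
    grad = p * g_l + (1.0 - p) * g_r
    # The routing probability also depends on x[f]:
    # d p / d x[f] = -p * (1 - p) / temperature.
    grad[f] -= (p * (1.0 - p) / temperature) * (v_l - v_r)
    return value, grad

def gradient_attack(x0, trees, steps=200, lr=0.05, temperature=0.1):
    """Gradient descent on the smoothed ensemble score (binary task;
    a positive score is assumed to mean the originally predicted class)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        grads = [smooth_tree_value_grad(x, tr, temperature)[1] for tr in trees]
        x -= lr * np.mean(grads, axis=0)       # push the score toward a flip
    return x

# Toy stump predicting +1 if x[0] <= 0.5 and -1 otherwise: the attack
# drives x[0] across the 0.5 threshold to flip the prediction.
stump = {"feature": 0, "threshold": 0.5,
         "left": {"leaf": 1.0}, "right": {"leaf": -1.0}}
x_adv = gradient_attack(np.array([0.3, 0.0]), [stump])
```

The temperature mediates a trade-off: lower values track the hard ensemble more faithfully but yield near-zero gradients away from the split thresholds, while higher values give smoother, more informative gradients at the cost of fidelity.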
Related papers
- Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
- Verifiable Learning for Robust Tree Ensembles [8.207928136395184]
A class of decision tree ensembles called large-spread ensembles admits a security verification algorithm running in restricted time.
We show the benefits of this idea by designing a new training algorithm that automatically learns a large-spread decision tree ensemble from labelled data.
Experimental results on public datasets confirm that large-spread ensembles trained using our algorithm can be verified in a matter of seconds.
arXiv Detail & Related papers (2023-05-05T15:37:23Z)
- Subgroup Robustness Grows On Trees: An Empirical Baseline Investigation [13.458414200958797]
We conduct an empirical comparison of several previously-proposed methods for fair and robust learning alongside state-of-the-art tree-based methods.
We show that tree-based methods have strong subgroup robustness, even when compared to robustness- and fairness-enhancing methods.
arXiv Detail & Related papers (2022-11-23T04:49:18Z)
- On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach [42.81632484264218]
In machine learning models, perturbing even a fraction of the training set (poisoning) can seriously undermine model accuracy.
This paper presents a novel hash-based ensemble approach that protects random forests against untargeted, random poisoning attacks.
arXiv Detail & Related papers (2022-09-28T11:41:38Z)
- (De-)Randomized Smoothing for Decision Stump Ensembles [5.161531917413708]
Tree-based models are used in many high-stakes application domains such as finance and medicine.
We propose deterministic smoothing for decision stump ensembles.
We obtain deterministic robustness certificates, even jointly over numerical and categorical features (see the sketch after this entry).
arXiv Detail & Related papers (2022-05-27T11:23:50Z)
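A note on why stump ensembles in particular admit deterministic smoothing: each stump reads one feature through one threshold, so its expectation under additive Gaussian input noise has a closed form and the usual Monte Carlo sampling can be skipped entirely. The sketch below shows only that closed form under an assumed Gaussian noise model (sigma and the helper names are hypothetical); the paper's joint certificates over numerical and categorical features go beyond this.

```python
from math import erf, sqrt

def gauss_cdf(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def smoothed_stump_score(x, stumps, sigma=0.25):
    """Exact (sampling-free) expectation of a stump ensemble when
    N(0, sigma^2) noise is added to each feature independently.

    For a stump on feature f with threshold t and leaf values
    v_left, v_right:
      E[stump(x + eps)] = v_left * Phi((t - x[f]) / sigma)
                        + v_right * (1 - Phi((t - x[f]) / sigma)),
    and linearity of expectation sums these terms over the ensemble.
    """
    score = 0.0
    for s in stumps:
        p_left = gauss_cdf((s["threshold"] - x[s["feature"]]) / sigma)
        score += s["v_left"] * p_left + s["v_right"] * (1.0 - p_left)
    return score
```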
- Robust Binary Models by Pruning Randomly-initialized Networks [57.03100916030444]
We propose ways to obtain robust models against adversarial attacks from randomly-initialized binary networks.
We learn the structure of the robust model by pruning a randomly-initialized binary network.
Our method confirms the strong lottery ticket hypothesis in the presence of adversarial attacks.
arXiv Detail & Related papers (2022-02-03T00:05:08Z)
- Clustering Effect of (Linearized) Adversarial Robust Models [60.25668525218051]
We propose a novel understanding of adversarial robustness and apply it to more tasks, including domain adaptation and robustness boosting.
Experimental evaluations demonstrate the rationality and superiority of our proposed clustering strategy.
arXiv Detail & Related papers (2021-11-25T05:51:03Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video enables rapid intervention and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Voting based ensemble improves robustness of defensive models [82.70303474487105]
We study whether it is possible to create an ensemble to further improve robustness.
By ensembling several state-of-the-art pre-trained defense models, our method can achieve a 59.8% robust accuracy.
arXiv Detail & Related papers (2020-11-28T00:08:45Z)
- A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on the postulate that the robustness of a classifier should be considered a property independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
- Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios [8.300942601020266]
We focus on evasion attacks, where a model is trained in a safe environment and exposed to attacks at test time.
We propose a model-agnostic strategy that builds a robust ensemble by training its base models on feature-based partitions of the given dataset (see the sketch after this entry).
Our algorithm guarantees that the majority of the models in the ensemble cannot be affected by the attacker.
arXiv Detail & Related papers (2020-04-07T12:00:40Z)
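To make the partitioning argument above concrete, here is a hypothetical sketch (the helper names and the scikit-learn base learner are illustrative choices, not the authors' certified construction): each base model sees a disjoint block of features, so an evasion attack confined to the features of k blocks can tamper with the inputs of at most k base models, and the majority vote stands whenever k < n_parts / 2.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_partitioned_ensemble(X, y, n_parts=5, seed=0):
    """Train one base model per disjoint block of feature indices."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(X.shape[1]), n_parts)
    models = [DecisionTreeClassifier(random_state=seed).fit(X[:, b], y)
              for b in blocks]
    return models, blocks

def majority_predict(models, blocks, X):
    """Majority vote across base models (binary 0/1 labels assumed)."""
    votes = np.stack([m.predict(X[:, b]) for m, b in zip(models, blocks)])
    return (votes.mean(axis=0) > 0.5).astype(int)
```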