(De-)Randomized Smoothing for Decision Stump Ensembles
- URL: http://arxiv.org/abs/2205.13909v1
- Date: Fri, 27 May 2022 11:23:50 GMT
- Title: (De-)Randomized Smoothing for Decision Stump Ensembles
- Authors: Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev
- Abstract summary: Tree-based models are used in many high-stakes application domains such as finance and medicine.
We propose deterministic smoothing for decision stump ensembles.
We obtain deterministic robustness certificates, even jointly over numerical and categorical features.
- Score: 5.161531917413708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tree-based models are used in many high-stakes application domains such as
finance and medicine, where robustness and interpretability are of utmost
importance. Yet, methods for improving and certifying their robustness are
severely under-explored, in contrast to those focusing on neural networks.
Targeting this important challenge, we propose deterministic smoothing for
decision stump ensembles. Whereas most prior work on randomized smoothing
focuses on evaluating arbitrary base models approximately under input
randomization, the key insight of our work is that decision stump ensembles
enable exact yet efficient evaluation via dynamic programming. Importantly, we
obtain deterministic robustness certificates, even jointly over numerical and
categorical features, a setting ubiquitous in the real world. Further, we
derive an MLE-optimal training method for smoothed decision stumps under
randomization and propose two boosting approaches to improve their provable
robustness. An extensive experimental evaluation shows that our approach yields
significantly higher certified accuracies than the state-of-the-art for
tree-based models. We release all code and trained models at ANONYMIZED.
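To make the dynamic-programming insight concrete, below is a minimal Python sketch of exact smoothed evaluation for a stump ensemble. It is our illustration rather than the released implementation: it assumes binary classification, additive isotropic Gaussian input noise, one vote per stump, and stumps splitting on distinct features (so their votes are independent under the per-feature noise); all names are illustrative.

```python
# Minimal sketch of exact smoothed evaluation for a decision stump ensemble.
# Illustrative only: assumes binary classification, additive isotropic
# Gaussian input noise, one class vote per stump, and one stump per feature
# (so stump outputs are independent under the per-feature noise).
import math
from dataclasses import dataclass

@dataclass
class Stump:
    feature: int      # index of the numerical feature this stump splits on
    threshold: float  # split threshold t
    left: int         # class vote (0 or 1) if x[feature] + noise <= t
    right: int        # class vote (0 or 1) otherwise

def gaussian_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def smoothed_vote_distribution(stumps, x, sigma):
    """Exact distribution of the number of class-1 votes under the noise.

    Each stump reads a single feature, so P(stump votes class 1) is a 1-D
    Gaussian CDF; with distinct features the votes are independent
    Bernoullis, and the vote-count distribution follows by dynamic
    programming (iterated convolution) in O(len(stumps)**2) time."""
    dist = [1.0]  # dist[k] = P(exactly k class-1 votes among stumps seen so far)
    for s in stumps:
        p_left = gaussian_cdf((s.threshold - x[s.feature]) / sigma)
        p_one = p_left * s.left + (1.0 - p_left) * s.right
        new = [0.0] * (len(dist) + 1)
        for k, p_k in enumerate(dist):
            new[k] += p_k * (1.0 - p_one)  # this stump votes class 0
            new[k + 1] += p_k * p_one      # this stump votes class 1
        dist = new
    return dist

def certified_class1(stumps, x, sigma):
    """Deterministic check: does class 1 win the majority vote with
    probability > 1/2 under the smoothing noise? (Ties count as abstain.)"""
    dist = smoothed_vote_distribution(stumps, x, sigma)
    n = len(stumps)
    return sum(p for k, p in enumerate(dist) if 2 * k > n) > 0.5
```

The exactness of this convolution is what removes the Monte Carlo sampling, and with it the sampling failure probability, used in standard randomized smoothing, yielding a deterministic rather than probabilistic certificate.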
Related papers
- Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
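The mechanism can be sketched in a few lines of PyTorch (our illustration, not the paper's code): a frozen projection shared by all members is combined with trainable member-specific low-rank factors, so only the factors differ between ensemble members.

```python
# Illustrative sketch of the LoRA-Ensemble idea: a frozen shared linear
# projection plus per-member low-rank updates A_m @ B_m. Names and
# hyperparameters are ours, not the paper's.
import torch
import torch.nn as nn

class LoRAEnsembleLinear(nn.Module):
    def __init__(self, d_in, d_out, n_members, rank):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)       # stands in for the shared pre-trained weight
        self.base.weight.requires_grad_(False)   # frozen, shared across all members
        self.base.bias.requires_grad_(False)
        # Member-specific low-rank factors: only these are trained.
        self.A = nn.Parameter(torch.zeros(n_members, d_out, rank))  # zero-init => delta starts at 0
        self.B = nn.Parameter(torch.randn(n_members, rank, d_in) * 0.01)

    def forward(self, x, member):
        # x: (batch, d_in); member: index of the ensemble member to evaluate.
        delta = self.A[member] @ self.B[member]   # (d_out, d_in) low-rank update
        return self.base(x) + x @ delta.T

# Ensemble prediction: average the members' outputs (e.g. logits).
layer = LoRAEnsembleLinear(d_in=64, d_out=64, n_members=4, rank=8)
x = torch.randn(2, 64)
mean_out = torch.stack([layer(x, m) for m in range(4)]).mean(0)
```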
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- Implicit Generative Prior for Bayesian Neural Networks [8.013264410621357]
We propose a novel neural adaptive empirical Bayes (NA-EB) framework for complex data structures.
The proposed NA-EB framework combines variational inference with a gradient ascent algorithm.
We demonstrate the practical applications of our framework through extensive evaluations on a variety of tasks.
arXiv Detail & Related papers (2024-04-27T21:00:38Z)
- Dynamic ensemble selection based on Deep Neural Network Uncertainty Estimation for Adversarial Robustness [7.158144011836533]
This work explores dynamic model-level attributes through dynamic ensemble selection.
In the training phase, a Dirichlet distribution is applied as the prior of each sub-model's predictive distribution, and a diversity constraint in parameter space is introduced.
In the test phase, sub-models are dynamically selected for the final prediction based on the rank of their uncertainty values.
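As an illustration (our sketch, not the paper's code), the test-time selection step might look as follows, taking each sub-model's Dirichlet concentration parameters as given and using the common evidential uncertainty measure K / sum(alpha):

```python
# Illustrative sketch of uncertainty-ranked dynamic ensemble selection with
# Dirichlet outputs: each sub-model emits Dirichlet concentration parameters
# alpha for one input, and only the k most certain sub-models vote.
import numpy as np

def select_and_predict(alphas, k):
    """alphas: (n_models, n_classes) Dirichlet parameters for one test input."""
    n_models, n_classes = alphas.shape
    uncertainty = n_classes / alphas.sum(axis=1)  # in (0, 1]; lower = more certain
    chosen = np.argsort(uncertainty)[:k]          # k most certain sub-models
    probs = alphas[chosen] / alphas[chosen].sum(axis=1, keepdims=True)
    return probs.mean(axis=0)                     # averaged class probabilities

alphas = np.array([[9.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 8.0, 2.0]])
print(select_and_predict(alphas, k=2))
```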
arXiv Detail & Related papers (2023-08-01T07:41:41Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
Further elaborating the robustness metric, a model is judged robust only if its performance is consistently accurate over entire cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network whose partial layers are iteratively exploited to refine its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
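A hedged sketch of the general pattern (ours, not the paper's architecture): a shared refinement block is applied repeatedly, and a small learned gate decides when to stop refining.

```python
# Illustrative sketch of weight-sharing iterative refinement with a learned
# exit gate. The real model refines pose estimates per sample; for brevity
# this toy version gates on the batch mean.
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    def __init__(self, dim, max_iters=4):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, 1)  # learned exit criterion
        self.max_iters = max_iters

    def forward(self, est):
        for _ in range(self.max_iters):
            est = est + self.block(est)  # the same (shared-weight) refinement step each iteration
            if torch.sigmoid(self.gate(est)).mean() > 0.5:
                break                    # early exit once the gate is confident
        return est
```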
arXiv Detail & Related papers (2021-11-11T23:31:34Z)
- Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has sought to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility together is necessary to ensure the method's efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Data-Driven Robust Optimization using Unsupervised Deep Learning [0.0]
We show that a trained neural network can be integrated into a robust optimization model by formulating the adversarial problem as a convex mixed-integer program.
We find that this approach outperforms a similar approach using kernel-based support vector sets.
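For intuition, here is a hedged sketch (ours, not the paper's formulation) of the standard big-M encoding that turns the adversarial problem for a one-hidden-layer ReLU network into a mixed-integer program; it assumes the PuLP package is available and that M bounds all pre-activation magnitudes.

```python
# Illustrative big-M MIP encoding of the adversarial problem for a
# one-hidden-layer ReLU network: maximize the scalar output over an
# l_inf box of radius eps around x0.
import pulp

def adversarial_mip(W1, b1, w2, b2, x0, eps, M=100.0):
    n_in, n_hid = len(x0), len(b1)
    prob = pulp.LpProblem("adversarial", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", x0[i] - eps, x0[i] + eps) for i in range(n_in)]
    h = [pulp.LpVariable(f"h{j}", 0, M) for j in range(n_hid)]        # ReLU outputs
    z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(n_hid)]  # ReLU on/off
    for j in range(n_hid):
        pre = pulp.lpSum(W1[j][i] * x[i] for i in range(n_in)) + b1[j]
        prob += h[j] >= pre                   # h matches pre-activation when active
        prob += h[j] <= pre + M * (1 - z[j])  # tight when z = 1 (active)
        prob += h[j] <= M * z[j]              # forced to 0 when z = 0 (inactive)
    prob += pulp.lpSum(w2[j] * h[j] for j in range(n_hid)) + b2  # objective: network output
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in x], pulp.value(prob.objective)
```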
arXiv Detail & Related papers (2020-11-19T11:06:54Z)
- A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.