A Robust Optimisation Perspective on Counterexample-Guided Repair of
Neural Networks
- URL: http://arxiv.org/abs/2301.11342v2
- Date: Mon, 5 Jun 2023 14:13:40 GMT
- Title: A Robust Optimisation Perspective on Counterexample-Guided Repair of
Neural Networks
- Authors: David Boetius, Stefan Leue, Tobias Sutter
- Abstract summary: We show that counterexample-guided repair can be viewed as a robust optimisation algorithm.
We prove termination for more restrained machine learning models and disprove termination in a general setting.
- Score: 2.82532357999662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterexample-guided repair aims at creating neural networks with
mathematical safety guarantees, facilitating the application of neural networks
in safety-critical domains. However, whether counterexample-guided repair is
guaranteed to terminate remains an open question. We approach this question by
showing that counterexample-guided repair can be viewed as a robust
optimisation algorithm. While termination guarantees for neural network repair
itself remain beyond our reach, we prove termination for more restrained
machine learning models and disprove termination in a general setting. We
empirically study the practical implications of our theoretical results,
demonstrating the suitability of common verifiers and falsifiers for repair
despite a disadvantageous theoretical result. Additionally, we use our
theoretical insights to devise a novel algorithm for repairing linear
regression models based on quadratic programming, surpassing existing
approaches.
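To make the abstract's two ingredients concrete, the counterexample-guided repair loop and the quadratic-programming repair of linear regression models can be sketched together. The following is a minimal Python illustration, not the authors' implementation: it assumes a one-sided specification f(x) <= y_max over a box input domain, where exact verification of a linear model is closed-form and each repair step minimises the squared training loss subject to one linear constraint per counterexample (a quadratic program, approximated here with SciPy's general-purpose SLSQP solver). All names (verify_box, repair_qp, y_max) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def verify_box(theta, lo, hi, y_max):
    """Exact verifier for a linear model f(x) = w @ x + b over the box [lo, hi].

    The maximum of a linear function over a box is attained at a vertex,
    so verification is closed-form: return a worst-case counterexample if
    f(x) <= y_max is violated anywhere in the box, else None.
    """
    w, b = theta[:-1], theta[-1]
    x_worst = np.where(w > 0, hi, lo)  # vertex maximising w @ x
    return x_worst if w @ x_worst + b > y_max + 1e-6 else None


def repair_qp(X, y, counterexamples, y_max, theta0):
    """One repair step: minimise the squared training loss subject to one
    linear constraint per counterexample -- a quadratic program in theta.
    SLSQP stands in here for a dedicated QP solver."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    loss = lambda th: np.sum((Xb @ th - y) ** 2)
    cons = [{"type": "ineq",  # SLSQP requires fun(th) >= 0, i.e. f(xc) <= y_max
             "fun": lambda th, xc=xc: y_max - (th[:-1] @ xc + th[-1])}
            for xc in counterexamples]
    return minimize(loss, theta0, constraints=cons, method="SLSQP").x


def counterexample_guided_repair(X, y, lo, hi, y_max, max_iters=50):
    """Alternate verification and repair until the specification holds."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    theta = np.linalg.lstsq(Xb, y, rcond=None)[0]  # unconstrained initial fit
    counterexamples = []
    for _ in range(max_iters):
        cex = verify_box(theta, lo, hi, y_max)
        if cex is None:
            return theta  # verified: no input in the box violates the spec
        counterexamples.append(cex)
        theta = repair_qp(X, y, counterexamples, y_max, theta)
    raise RuntimeError("repair did not terminate within max_iters")
```

The loop instantiates the robust optimisation view from the abstract: the universally quantified safety constraint is relaxed to a growing finite set of counterexamples, and each repair step solves the relaxed problem. For neural networks, the closed-form verifier would be replaced by a dedicated verifier or falsifier.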
Related papers
- Invertible ResNets for Inverse Imaging Problems: Competitive Performance with Provable Regularization Properties [0.0]
Recent works by Arndt et al. addressed this gap by analyzing a data-driven reconstruction method based on invertible residual networks (iResNets).
We extend some of the theoretical results of Arndt et al. to encompass nonlinear inverse problems and offer insights for the design of large-scale, performant iResNet architectures.
arXiv Detail & Related papers (2024-09-20T13:15:26Z)
- Patch Synthesis for Property Repair of Deep Neural Networks [15.580097790702508]
We introduce PatchPro, a novel patch-based approach for property-level repair of deep neural networks (DNNs)
PatchPro provides specialized repairs for all samples within the robustness neighborhood while maintaining the network's original performance.
Our method incorporates formal verification and a mechanism for allocating patch modules, enabling it to defend against adversarial attacks.
arXiv Detail & Related papers (2024-04-02T05:16:59Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Provably tuning the ElasticNet across instances [53.0518090093538]
We consider the problem of tuning the regularization parameters of Ridge regression, LASSO, and the ElasticNet across multiple problem instances.
Our results are the first general learning-theoretic guarantees for this important class of problems.
arXiv Detail & Related papers (2022-07-20T21:22:40Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a given safety specification.
Our method is able to search for a new, safe NN representation, by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the given safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Bayesian Inference with Certifiable Adversarial Robustness [25.40092314648194]
We consider adversarial training networks through the lens of Bayesian learning.
We present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
Our method is the first to directly train certifiable BNNs, thus facilitating their use in safety-critical applications.
arXiv Detail & Related papers (2021-02-10T07:17:49Z)
- Towards a Theoretical Understanding of the Robustness of Variational Autoencoders [82.68133908421792]
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
We develop a novel criterion for robustness in probabilistic models: $r$-robustness.
We show that VAEs trained using disentangling methods score well under our robustness metrics.
arXiv Detail & Related papers (2020-07-14T21:22:29Z)
- Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.