PRoA: A Probabilistic Robustness Assessment against Functional
Perturbations
- URL: http://arxiv.org/abs/2207.02036v1
- Date: Tue, 5 Jul 2022 13:27:38 GMT
- Title: PRoA: A Probabilistic Robustness Assessment against Functional
Perturbations
- Authors: Tianle Zhang, Wenjie Ruan, Jonathan E. Fieldsend
- Abstract summary: In safety-critical deep learning applications, robustness measurement is a vital pre-deployment phase.
We present a novel and general probabilistic robustness assessment method (PRoA) based on adaptive concentration.
PRoA can measure the robustness of deep learning models against functional perturbations.
- Score: 4.705291741591329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In safety-critical deep learning applications, robustness measurement is a
vital pre-deployment phase. However, existing robustness verification methods
are not sufficiently practical for deploying machine learning systems in the
real world. On the one hand, these methods attempt to claim that no
perturbation can "fool" deep neural networks (DNNs), which may be too
stringent in practice. On the other hand, existing works rigorously consider
$L_p$-bounded additive perturbations on the pixel space, although
perturbations such as colour shifting and geometric transformations occur
more frequently in the real world. Thus, from the practical standpoint, we
present a novel and general probabilistic robustness assessment method (PRoA)
based on adaptive concentration, which can measure the robustness of deep
learning models against functional perturbations. PRoA provides statistical
guarantees on the probabilistic robustness of a model, i.e., the probability
of failure encountered by the trained model after deployment. Our experiments
demonstrate the effectiveness and flexibility of PRoA in evaluating
probabilistic robustness against a broad range of functional perturbations,
and PRoA scales well to various large-scale deep neural networks compared to
existing state-of-the-art baselines. For reproducibility, we release our tool
on GitHub: https://github.com/TrustAI/PRoA
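To make the assessed quantity concrete, below is a minimal sketch of probabilistic robustness estimation under a functional perturbation. It substitutes a fixed-sample Hoeffding bound for the paper's adaptive concentration machinery, and the rotation family, sample count, and function names are illustrative assumptions rather than the authors' implementation:

```python
# Hedged sketch: estimate the probability that random functional
# perturbations (here, rotations) flip a classifier's prediction, with a
# Hoeffding confidence radius standing in for PRoA's adaptive bound.
import math
import torch
import torchvision.transforms.functional as TF

def failure_probability(model, x, label, n=2000, delta=0.01):
    """Return (p_hat, eps): with probability >= 1 - delta over the sampling,
    the true failure rate lies in [p_hat - eps, p_hat + eps]."""
    model.eval()
    failures = 0
    with torch.no_grad():
        for _ in range(n):
            angle = float(torch.empty(1).uniform_(-15.0, 15.0))  # assumed family
            x_pert = TF.rotate(x, angle)          # functional, not additive
            pred = model(x_pert.unsqueeze(0)).argmax(dim=1).item()
            failures += int(pred != label)
    p_hat = failures / n
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))  # Hoeffding radius
    return p_hat, eps
```

The paper's adaptive concentration bound plays the same role as `eps` here but tightens as samples arrive, so the procedure can stop early once the interval is conclusive.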
Related papers
- Towards Precise Observations of Neural Model Robustness in Classification [2.127049691404299]
In deep learning applications, robustness measures the ability of neural models to handle slight changes in input data.
Our approach contributes to a deeper understanding of model robustness in safety-critical applications.
arXiv Detail & Related papers (2024-04-25T09:37:44Z)
- Toward Robust Uncertainty Estimation with Random Activation Functions
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
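The summary leaves the mechanism implicit; a plausible minimal sketch, assuming each ensemble member simply draws its activation function from a fixed pool (the pool, architecture, and helper names below are our assumptions, not the paper's code):

```python
# Hedged sketch: an ensemble whose members differ in a randomly drawn
# activation function; disagreement across members serves as uncertainty.
import random
import torch
import torch.nn as nn

ACTIVATIONS = [nn.ReLU, nn.Tanh, nn.ELU, nn.SiLU, nn.LeakyReLU]  # assumed pool

def make_member(in_dim: int, hidden: int, out_dim: int) -> nn.Module:
    act = random.choice(ACTIVATIONS)  # the "random activation" ingredient
    return nn.Sequential(
        nn.Linear(in_dim, hidden), act(),
        nn.Linear(hidden, hidden), act(),
        nn.Linear(hidden, out_dim),
    )

ensemble = [make_member(8, 64, 1) for _ in range(5)]

def predict_with_uncertainty(x: torch.Tensor):
    preds = torch.stack([m(x) for m in ensemble])  # (K, N, out_dim)
    return preds.mean(dim=0), preds.std(dim=0)     # prediction and spread
```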
arXiv Detail & Related papers (2023-02-28T13:17:56Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our method is simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We argue that the feature statistics can instead be properly manipulated to improve the generalization ability of deep learning models.
We improve the network's generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training, as sketched below.
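A minimal sketch of the described idea, treating per-channel feature statistics as Gaussian random variables whose variances are estimated across the batch; the tensor layout, probability `p`, and epsilon are our assumptions, not the paper's exact formulation:

```python
# Hedged sketch: replace deterministic per-channel feature statistics with
# samples from Gaussians, synthesizing plausible domain shifts in training.
import torch

def perturb_feature_stats(x: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """x: (B, C, H, W) feature map; applied with probability p."""
    if torch.rand(1).item() >= p:
        return x
    mu = x.mean(dim=(2, 3), keepdim=True)            # (B, C, 1, 1)
    sig = x.std(dim=(2, 3), keepdim=True) + 1e-6
    sigma_mu = mu.std(dim=0, keepdim=True)           # uncertainty of the mean
    sigma_sig = sig.std(dim=0, keepdim=True)         # uncertainty of the std
    new_mu = mu + torch.randn_like(mu) * sigma_mu    # resampled statistics
    new_sig = sig + torch.randn_like(sig) * sigma_sig
    return new_sig * (x - mu) / sig + new_mu         # re-normalize features
```

Because the module only resamples statistics, it adds no trainable parameters and can be dropped between layers during training.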
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
- RoMA: a Method for Neural Network Robustness Measurement and Assessment [0.0]
We present a new statistical method, called Robustness Measurement and Assessment (RoMA), which determines the probability that a random input perturbation might cause misclassification.
One interesting insight obtained through this work is that, in a classification network, different output labels can exhibit very different robustness levels.
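In the notation of these summaries, the quantity RoMA (and PRoA above) estimates can be written as (our notation, not the papers'):

$$p_{\text{fail}}(x, y) \;=\; \Pr_{\delta \sim \mathcal{D}}\big[\, f(t_\delta(x)) \neq y \,\big],$$

where $f$ is the trained classifier, $y$ the ground-truth label, $\mathcal{D}$ the distribution over perturbations, and $t_\delta$ applies the sampled perturbation (additive in RoMA's setting, functional in PRoA's). Per-label estimates of this quantity are what reveal the differing robustness levels noted above.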
arXiv Detail & Related papers (2021-10-21T12:01:54Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramér bounds.
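For reference, the generic bound that this family of certificates builds on (the paper's actual certificate is more elaborate): for a real-valued random variable $X$ with moment-generating function $\mathbb{E}[e^{tX}]$ and any threshold $a$,

$$\Pr[X \ge a] \;\le\; \inf_{t > 0} e^{-ta}\, \mathbb{E}\big[e^{tX}\big].$$

Instantiating $X$ as a sum of i.i.d. indicators of transformation-induced failures and optimizing over $t$ yields exponentially decaying tail bounds on the failure probability.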
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
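As a toy illustration of the first sentence (learning unknown dynamics with a GP), here is a sketch using scikit-learn; the one-dimensional dynamics, kernel choice, and data are assumptions for illustration only:

```python
# Hedged sketch: fit a GP to one-step dynamics from (state, input) data,
# then query predictions with uncertainty for use in robust control design.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))      # columns: state x_t, input u_t
y = 0.9 * X[:, 0] + 0.2 * np.sin(X[:, 1])      # unknown true dynamics
y = y + 0.01 * rng.standard_normal(200)        # measurement noise

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
mean, std = gp.predict(np.array([[0.5, -0.3]]), return_std=True)
# `std` quantifies model uncertainty; a probabilistic-robust synthesis can
# require stability margins that hold with high probability under it.
```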
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
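A minimal sketch of the inner maximization such an approach needs, searching a semantic attribute space rather than raw pixels; the three-parameter colour shift, step size, and bound are our illustrative assumptions, not the paper's attribute model:

```python
# Hedged sketch: find a worst-case per-channel colour shift by gradient
# ascent on the loss, then train the classifier on the perturbed sample.
import torch
import torch.nn.functional as F

def worst_case_color_shift(model, x, y, steps=10, lr=0.05, bound=0.2):
    """x: (3, H, W) image tensor; y: (1,) label tensor."""
    theta = torch.zeros(3, 1, 1, requires_grad=True)   # attribute parameters
    for _ in range(steps):
        loss = F.cross_entropy(model((x + theta).unsqueeze(0)), y)
        loss.backward()
        with torch.no_grad():
            theta += lr * theta.grad.sign()            # ascend the loss
            theta.clamp_(-bound, bound)                # stay in attribute range
            theta.grad.zero_()
            model.zero_grad()    # discard grads accumulated on the model
    return (x + theta).detach()
```

Training then proceeds on `worst_case_color_shift(model, x, y)` in place of (or alongside) the clean sample.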
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)