Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems
- URL: http://arxiv.org/abs/2110.02125v1
- Date: Tue, 5 Oct 2021 15:52:47 GMT
- Title: Adversarial Robustness Verification and Attack Synthesis in Stochastic Systems
- Authors: Lisa Oakley, Alina Oprea, Stavros Tripakis
- Abstract summary: We develop a formal framework for adversarial robustness in systems defined as discrete time Markov chains (DTMCs).
We outline a class of threat models under which adversaries can perturb system transitions, constrained by an $\varepsilon$ ball around the original transition probabilities.
- Score: 8.833548357664606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic model checking is a useful technique for specifying and
verifying properties of stochastic systems including randomized protocols and
the theoretical underpinnings of reinforcement learning models. However, these
methods rely on the assumed structure and probabilities of certain system
transitions. These assumptions may be incorrect, and may even be violated in
the event that an adversary gains control of some or all components in the
system.
In this paper, motivated by research in adversarial machine learning on
adversarial examples, we develop a formal framework for adversarial robustness
in systems defined as discrete time Markov chains (DTMCs), and extend to
include deterministic, memoryless policies acting in Markov decision processes
(MDPs). Our framework includes a flexible approach for specifying several
adversarial models with different capabilities to manipulate the system. We
outline a class of threat models under which adversaries can perturb system
transitions, constrained by an $\varepsilon$ ball around the original
transition probabilities, and define four specific instances of this threat
model.
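
To make the threat-model class concrete, here is a minimal, hypothetical sketch (not one of the paper's four instances) of an adversary that perturbs each row of a DTMC transition matrix within an element-wise $\varepsilon$ ball while preserving the stochasticity and support of the original chain; the function name and the zero-sum rejection-sampling strategy are our own assumptions:

```python
import numpy as np

def perturb_dtmc(P, eps, rng=None):
    """Illustrative epsilon-ball adversary for a DTMC transition matrix P.

    Every entry of the returned matrix Q differs from the matching entry
    of P by at most eps, structural zeros are preserved (no new
    transitions), and each row still sums to 1. This is one hypothetical
    instance of the threat-model class, not one of the paper's four.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q = P.astype(float).copy()
    for i, p in enumerate(Q):
        supp = p > 0                   # only reweight existing transitions
        if supp.sum() < 2:
            continue                   # a single-outcome row cannot be perturbed
        for _ in range(1000):          # rejection-sample a valid perturbation
            d = np.zeros_like(p)
            d[supp] = rng.uniform(-eps, eps, supp.sum())
            d[supp] -= d[supp].mean()  # zero-sum, so the row sum is unchanged
            if np.all(np.abs(d) <= eps) and np.all(p + d >= 0):
                Q[i] = p + d
                break
    return Q

# Example: perturb a 3-state chain (state 2 absorbing) within eps = 0.05.
P = np.array([[0.90, 0.10, 0.00],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
Q = perturb_dtmc(P, eps=0.05)
```

Keeping the perturbation zero-sum on each row's support avoids renormalization, which could otherwise push entries back outside the $\varepsilon$ ball.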
We define three main DTMC adversarial robustness problems and present two
optimization-based solutions, leveraging traditional and parametric
probabilistic model checking techniques. We then evaluate our solutions on two
stochastic protocols and a collection of GridWorld case studies, which model an
agent acting in an environment described as an MDP. We find that the parametric
solution results in fast computation for small parameter spaces. In the case of
less restrictive (stronger) adversaries, the number of parameters increases,
and directly computing property satisfaction probabilities is more scalable. We
demonstrate the usefulness of our definitions and solutions by comparing system
outcomes over various properties, threat models, and case studies.
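
The paper's solutions build on probabilistic model checkers; purely as an illustration of the quantity being attacked, the sketch below computes a step-bounded reachability probability by value iteration and compares a nominal chain against a hand-picked perturbation inside an $\varepsilon = 0.05$ ball (the matrices here are invented examples):

```python
import numpy as np

def bounded_reach(P, target, k):
    """Pr[reach `target` within k steps] from each state of a DTMC.

    Standard value iteration for a step-bounded reachability property;
    unbounded reachability would instead solve a linear system over the
    non-target states.
    """
    x = np.zeros(P.shape[0])
    x[target] = 1.0
    for _ in range(k):
        x = P @ x          # one-step expectation under the chain
        x[target] = 1.0    # target states remain satisfied
    return x

# Nominal chain vs. a hand-picked perturbation inside an eps = 0.05 ball.
P = np.array([[0.90, 0.10, 0.00],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])
Q = np.array([[0.93, 0.07, 0.00],
              [0.00, 0.84, 0.16],
              [0.00, 0.00, 1.00]])

print(bounded_reach(P, [2], k=10))  # nominal satisfaction probabilities
print(bounded_reach(Q, [2], k=10))  # degraded under the adversary
```

A parametric treatment would instead leave the perturbation entries symbolic, making the satisfaction probability a rational function of the adversary's parameters; the scalability observation above reflects how quickly that function grows as the threat model frees more parameters.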
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- Correct-by-Construction Control for Stochastic and Uncertain Dynamical Models via Formal Abstractions [44.99833362998488]
We develop an abstraction framework that can be used to solve this problem under various modeling assumptions.
We use state-of-the-art verification techniques to compute an optimal policy on the iMDP with guarantees for satisfying the given specification.
We then show that, by construction, we can refine this policy into a feedback controller for which these guarantees carry over to the dynamical model.
arXiv Detail & Related papers (2023-11-16T11:03:54Z)
- Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions [59.605246463200736]
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
arXiv Detail & Related papers (2023-01-04T10:40:30Z)
- Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z)
- Online Probabilistic Model Identification using Adaptive Recursive MCMC [8.465242072268019]
We suggest the Adaptive Recursive Markov Chain Monte Carlo (ARMCMC) method.
It eliminates the shortcomings of conventional online techniques while computing the entire probability density function of model parameters.
We demonstrate our approach using parameter estimation in a soft bending actuator and the Hunt-Crossley dynamic model.
arXiv Detail & Related papers (2022-10-23T02:06:48Z)
- Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty [68.00748155945047]
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller method for continuous-state models with noise, uncertain parameters, and external disturbances.
arXiv Detail & Related papers (2022-10-12T07:57:03Z)
- Sample Complexity of Robust Reinforcement Learning with a Generative Model [0.0]
We propose a model-based reinforcement learning (RL) algorithm for learning an $\epsilon$-optimal robust policy.
We consider three different forms of uncertainty sets, characterized by the total variation distance, chi-square divergence, and KL divergence (a sketch of these three measures appears after this list).
In addition to the sample complexity results, we also present a formal analytical argument on the benefit of using robust policies.
arXiv Detail & Related papers (2021-12-02T18:55:51Z)
- Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, non-convex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
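
As a point of contrast with the $\varepsilon$-ball constraint of the main paper, the robust-RL entry above characterizes its uncertainty sets by statistical divergences. Below is a sketch of the three measures it names, using textbook definitions between a nominal distribution p and a candidate q (function names and the example values are ours):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance: half the L1 distance between p and q."""
    return 0.5 * np.abs(p - q).sum()

def chi_square(p, q):
    """Chi-square divergence of q from nominal p (infinite, here ignored,
    where q has mass outside p's support)."""
    mask = p > 0
    return ((q[mask] - p[mask]) ** 2 / p[mask]).sum()

def kl_divergence(p, q):
    """KL divergence D(q || p); the ordering convention varies by paper."""
    mask = q > 0
    return (q[mask] * np.log(q[mask] / p[mask])).sum()

# An uncertainty set around p is then {q : D(q, p) <= radius}.
p = np.array([0.90, 0.10, 0.00])
q = np.array([0.85, 0.15, 0.00])
print(total_variation(p, q), chi_square(p, q), kl_divergence(p, q))
```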