Stability Via Adversarial Training of Neural Network Stochastic Control of Mean-Field Type
- URL: http://arxiv.org/abs/2210.00874v1
- Date: Tue, 27 Sep 2022 11:37:06 GMT
- Title: Stability Via Adversarial Training of Neural Network Stochastic Control of Mean-Field Type
- Authors: Julian Barreiro-Gomez and Salah Eddine Choutri and Boualem Djehiche
- Abstract summary: This is a class of data-driven mean-field-type control in which the distributions of variables such as the system states and control inputs are incorporated into the problem.
We present a methodology to validate the feasibility of the approximations of the solutions via neural networks and evaluate their stability.
We enhance the stability by enlarging the training set with adversarial inputs to obtain a more robust neural network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present an approach to neural network mean-field-type
control and its stochastic stability analysis by means of adversarial inputs
(aka adversarial attacks). This is a class of data-driven mean-field-type
control in which the distributions of variables such as the system states and
control inputs are incorporated into the problem. In addition, we present a
methodology to validate the feasibility of the approximations of the solutions
via neural networks and evaluate their stability. Moreover, we enhance the
stability by enlarging the training set with adversarial inputs to obtain a
more robust neural network. Finally, a worked-out example based on the
linear-quadratic mean-field type control problem (LQ-MTC) is presented to
illustrate our methodology.
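The core idea of enlarging the training set with adversarial inputs can be sketched as follows. This is an illustrative FGSM-style example on a scalar linear-quadratic problem, not the paper's implementation; all parameter names and values (`a`, `b`, `q`, `r`, `eps`, the linear policy) are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical sketch: enlarge the training set of a neural-network-style
# controller with adversarial states, for a scalar LQ system
# x' = a*x + b*u with stage cost q*x^2 + r*u^2 (parameters assumed).
a, b, q, r = 1.1, 0.5, 1.0, 0.1
eps = 0.05  # adversarial perturbation budget

def stage_cost(x, u):
    return q * x**2 + r * u**2

def policy(theta, x):
    # One-parameter stand-in for a trained network: u = -theta * x.
    return -theta * x

def adversarial_inputs(theta, xs, eps):
    # Perturb each training state in the direction that increases the
    # closed-loop one-step cost (sign of its gradient w.r.t. x),
    # in the spirit of the fast gradient sign method.
    u = policy(theta, xs)
    grad = 2 * q * xs + 2 * r * u * (-theta)  # d/dx stage_cost(x, policy(x))
    return xs + eps * np.sign(grad)

rng = np.random.default_rng(0)
theta = 0.8
xs = rng.normal(size=32)                  # nominal training states
xs_adv = adversarial_inputs(theta, xs, eps)
xs_train = np.concatenate([xs, xs_adv])   # enlarged, more robust training set
```

Retraining the controller on `xs_train` exposes it to the states on which its current cost is worst locally, which is the mechanism by which adversarial augmentation improves stability margins.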
Related papers
- Mapping back and forth between model predictive control and neural networks [0.0]
Model predictive control (MPC) for linear systems with quadratic costs and linear constraints is shown to admit an exact representation as an implicit neural network.
A method to "unravel" the implicit neural network of MPC into an explicit one is also introduced.
arXiv Detail & Related papers (2024-04-18T09:29:08Z)
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
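The kind of analytic result that enables sample-free moment propagation can be illustrated with the well-known closed form for the first moment of a ReLU applied to a Gaussian input, E[relu(z)] = mu * Phi(mu/sigma) + sigma * phi(mu/sigma) for z ~ N(mu, sigma^2). This is a generic textbook identity sketched here for illustration, not code from the cited paper.

```python
import math
import random

def norm_pdf(t):
    # Standard normal density phi(t).
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def norm_cdf(t):
    # Standard normal CDF Phi(t) via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def relu_mean(mu, sigma):
    # Closed-form E[max(0, z)] for z ~ N(mu, sigma^2).
    t = mu / sigma
    return mu * norm_cdf(t) + sigma * norm_pdf(t)

# Sanity check against Monte Carlo sampling.
random.seed(0)
mu, sigma = 0.3, 1.2
n = 200_000
mc = sum(max(0.0, random.gauss(mu, sigma)) for _ in range(n)) / n
```

Having such moments in closed form is what lets input-output distributions be propagated through a network layer by layer without sampling.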
arXiv Detail & Related papers (2024-03-24T14:08:24Z)
- Distributionally Robust Statistical Verification with Imprecise Neural Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z)
- To be or not to be stable, that is the question: understanding neural networks for inverse problems [0.0]
In this paper, we theoretically analyze the trade-off between stability and accuracy of neural networks.
We propose different supervised and unsupervised solutions that increase network stability while maintaining good accuracy.
arXiv Detail & Related papers (2022-11-24T16:16:40Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
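The described mechanism, sub-layers that map a layer's input to the parameters of its output distribution, can be sketched as follows. This is a hypothetical minimal illustration in the spirit of the summary, not the cited paper's architecture; shapes, initialization, and the Gaussian choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class VariationalLayer:
    """Sketch of a variational layer: two learnable sub-layers produce the
    mean and log-variance of a Gaussian over the layer's output, and the
    forward pass returns a reparameterized sample from that distribution."""

    def __init__(self, d_in, d_out):
        self.w_mu = rng.normal(scale=0.1, size=(d_in, d_out))
        self.w_logvar = rng.normal(scale=0.1, size=(d_in, d_out))

    def __call__(self, x):
        mu = x @ self.w_mu
        std = np.exp(0.5 * (x @ self.w_logvar))
        return mu + std * rng.normal(size=mu.shape)  # sample, not a point estimate

layer = VariationalLayer(4, 3)
x = rng.normal(size=(8, 4))
y = layer(x)  # stochastic output; repeated calls on x give different samples
```

Because each forward pass is a sample, repeated evaluation of the same input yields an empirical output distribution that can be used for uncertainty estimation.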
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- Stability Verification in Stochastic Control Systems via Neural Network Supermartingales [17.558766911646263]
We present an approach for general nonlinear control problems with two novel aspects.
We use ranking supermartingales (RSMs) to certify almost-sure asymptotic stability, and we present a method for learning such certificates as neural networks.
arXiv Detail & Related papers (2021-12-17T13:05:14Z)
- Stability Analysis of Unfolded WMMSE for Power Allocation [80.71751088398209]
Power allocation is one of the fundamental problems in wireless networks.
It is essential that the output power allocation of these algorithms is stable with respect to input perturbations.
In this paper, we focus on UWMMSE, a modern algorithm leveraging graph neural networks.
arXiv Detail & Related papers (2021-10-14T15:44:19Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.