Rational Neural Network Controllers
- URL: http://arxiv.org/abs/2307.06287v1
- Date: Wed, 12 Jul 2023 16:35:41 GMT
- Title: Rational Neural Network Controllers
- Authors: Matthew Newton and Antonis Papachristodoulou
- Abstract summary: Recent work has demonstrated the effectiveness of neural networks in control systems (known as neural feedback loops).
One of the big challenges of this approach is that neural networks have been shown to be sensitive to adversarial attacks.
This paper considers rational neural networks and presents novel rational activation functions, which can be used effectively in robustness problems for neural feedback loops.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have shown great success in many machine learning related
tasks, due to their ability to act as general function approximators. Recent
work has demonstrated the effectiveness of neural networks in control systems
(known as neural feedback loops), most notably by using a neural network as a
controller. However, one of the big challenges of this approach is that neural
networks have been shown to be sensitive to adversarial attacks. This means
that, unless they are designed properly, they are not an ideal candidate for
controllers due to issues with robustness and uncertainty, which are pivotal
aspects of control systems. There has been initial work that uses robustness analysis to both analyse and design dynamical systems with neural network controllers. However,
one prominent issue with these methods is that they use existing neural network
architectures tailored for traditional machine learning tasks. These structures
may not be appropriate for neural network controllers and it is important to
consider alternative architectures. This paper considers rational neural
networks and presents novel rational activation functions, which can be used
effectively in robustness problems for neural feedback loops. Rational
activation functions are replaced by a general rational neural network
structure, which is convex in the neural network's parameters. A method is
proposed to recover a stabilising controller from a Sum of Squares feasibility
test. This approach is then applied to a refined rational neural network which
is more compatible with Sum of Squares programming. Numerical examples show
that this method can successfully recover stabilising rational neural network
controllers for neural feedback loops with non-linear plants with noise and
parametric uncertainty.
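For intuition, a rational activation function is a trainable ratio of polynomials, r(x) = P(x)/Q(x). Below is a minimal numerical sketch, assuming a degree-(3, 2) parameterisation with a pole-free denominator; the coefficient values and names are illustrative, not taken from the paper.

```python
import numpy as np

def rational_activation(x, p, q):
    """Rational activation r(x) = P(x) / Q(x), a trainable ratio of polynomials.

    p : numerator coefficients (highest degree first)
    q : denominator coefficients (highest degree first)
    """
    return np.polyval(p, x) / np.polyval(q, x)

# Illustrative degree-(3, 2) initialisation; values are placeholders.
p = np.array([1.0, 1.0, 0.5, 0.0])   # P(x) = x^3 + x^2 + 0.5x
q = np.array([1.0, 0.0, 1.0])        # Q(x) = x^2 + 1  (strictly positive, no real poles)
x = np.linspace(-3.0, 3.0, 7)
print(rational_activation(x, p, q))
```

The controller-recovery step relies on a Sum of Squares (SOS) feasibility test, which reduces to semidefinite programming: a polynomial is SOS if and only if it can be written as z^T Q z for a positive semidefinite Gram matrix Q over a monomial basis z. The following toy feasibility check in cvxpy illustrates that mechanism only; it is not the paper's actual program, which involves the closed-loop dynamics.

```python
import cvxpy as cp

# Toy SOS test: is p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1 a sum of squares?
# With monomial basis z = [1, x, x^2], p is SOS iff there exists a PSD
# Gram matrix Q with z^T Q z = p (match coefficients degree by degree).
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                      # Q positive semidefinite
    Q[0, 0] == 1,                # constant term
    2 * Q[0, 1] == 4,            # x
    2 * Q[0, 2] + Q[1, 1] == 6,  # x^2
    2 * Q[1, 2] == 4,            # x^3
    Q[2, 2] == 1,                # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal" -> feasible -> p is SOS (here p = (x^2 + 2x + 1)^2)
```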
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
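As a rough sketch of the variational autoregressive idea (without the message-passing component): spins are sampled one conditional at a time, the sample's log-probability is accumulated, and the variational free energy F = E_q[E(s) + T log q(s)] is minimised with a score-function (REINFORCE) gradient. The tiny architecture and all names below are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy autoregressive model over N spins; a stand-in for the paper's
# message-passing variational autoregressive network.
class TinyVAN(nn.Module):
    def __init__(self, n_spins):
        super().__init__()
        self.n = n_spins
        # one conditional p(s_i = +1 | s_<i) per site
        self.conds = nn.ModuleList(
            nn.Sequential(nn.Linear(max(i, 1), 8), nn.ReLU(), nn.Linear(8, 1))
            for i in range(n_spins)
        )

    def sample(self, batch):
        cols, log_q = [], torch.zeros(batch)
        for i in range(self.n):
            x = torch.cat(cols, dim=1) if i > 0 else torch.zeros(batch, 1)
            p = torch.sigmoid(self.conds[i](x)).squeeze(-1)
            b = torch.bernoulli(p)                 # sample spin i
            cols.append((2 * b - 1).unsqueeze(1))  # {0,1} -> {-1,+1}
            log_q = log_q + b * torch.log(p) + (1 - b) * torch.log(1 - p)
        return torch.cat(cols, dim=1), log_q

def ising_energy(s):  # 1D nearest-neighbour chain with J = 1
    return -(s[:, :-1] * s[:, 1:]).sum(dim=1)

# One REINFORCE step on the variational free energy at temperature T = 1
model = TinyVAN(10)
s, log_q = model.sample(256)
with torch.no_grad():
    f = ising_energy(s) + 1.0 * log_q      # per-sample free energy
loss = ((f - f.mean()) * log_q).mean()     # baseline-subtracted estimator
loss.backward()
```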
arXiv Detail & Related papers (2024-04-09T11:27:07Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
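The temporal sparsity mentioned above comes from neurons that emit binary spikes only when an internal membrane potential crosses a threshold. A toy leaky integrate-and-fire neuron, with illustrative constants not taken from the paper:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks,
# integrates input current, and emits a binary spike when it crosses a
# threshold, after which it resets.
def lif(currents, leak=0.9, v_th=1.0):
    v, spikes = 0.0, []
    for i in currents:
        v = leak * v + i                  # leak + integrate
        s = 1.0 if v >= v_th else 0.0     # spike on threshold crossing
        spikes.append(s)
        v *= 1.0 - s                      # reset after a spike
    return np.array(spikes)

rng = np.random.default_rng(0)
print(lif(rng.uniform(0.0, 0.5, size=20)))   # a sparse 0/1 spike train
```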
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Self-Healing Robust Neural Networks via Closed-Loop Control [23.360913637445964]
A typical self-healing mechanism is the immune system of the human body.
This paper considers the post-training self-healing of a neural network.
We propose a closed-loop control formulation to automatically detect and fix the errors caused by various attacks or perturbations.
arXiv Detail & Related papers (2022-06-26T20:25:35Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework for neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Provable Regret Bounds for Deep Online Learning and Control [77.77295247296041]
We show that, over any sequence of loss functions, the parameters of a neural network can be optimized online such that it competes with the best net in hindsight.
As an application of these results in the online setting, we obtain provable regret bounds for online control.
arXiv Detail & Related papers (2021-10-15T02:13:48Z)
- The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any training procedure for classification problems with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate).
The key is that stable and accurate neural networks must have dimensions that vary with the input; in particular, variable dimensionality is a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
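The compactness argument is easy to see in code: an n x n Toeplitz matrix is constant along each diagonal, so it is determined by its first column and first row, giving 2n - 1 free parameters instead of n^2 (and O(n log n) matrix-vector products via the FFT). A minimal sketch, not the thesis's construction:

```python
import numpy as np
from scipy.linalg import toeplitz

# A Toeplitz weight matrix is fixed by its first column c and first row r:
# 2n - 1 parameters instead of n^2.
n = 4
rng = np.random.default_rng(0)
c = rng.standard_normal(n)                                # first column
r = np.concatenate(([c[0]], rng.standard_normal(n - 1)))  # first row
W = toeplitz(c, r)                                        # structured weight matrix
x = rng.standard_normal(n)
print(W @ x)   # acts like a dense layer, with far fewer free parameters
```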
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions [4.932130498861987]
We propose a new type of neural network, Kronecker neural networks (KNNs), which form a general framework for neural networks with adaptive activation functions.
Under suitable conditions, KNNs induce a faster decay of the loss than standard feed-forward networks.
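One simple instance of an adaptive activation function in this spirit is a fixed base nonlinearity augmented with trainable terms whose amplitudes and frequencies are learned alongside the network weights; the specific form and values below are illustrative, not the KNN construction itself.

```python
import numpy as np

# Adaptive activation sketch: tanh plus trainable sinusoidal terms, where the
# amplitudes a_k and frequencies w_k would be trained with the network.
def adaptive_activation(x, a, w):
    return np.tanh(x) + sum(a_k * np.sin(w_k * x) for a_k, w_k in zip(a, w))

x = np.linspace(-2.0, 2.0, 5)
print(adaptive_activation(x, a=[0.1, 0.05], w=[1.0, 2.0]))
```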
arXiv Detail & Related papers (2021-05-20T04:54:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.