Analysis and Optimisation of Bellman Residual Errors with Neural
Function Approximation
- URL: http://arxiv.org/abs/2106.08774v2
- Date: Thu, 17 Jun 2021 13:11:16 GMT
- Title: Analysis and Optimisation of Bellman Residual Errors with Neural
Function Approximation
- Authors: Martin Gottwald (1), Sven Gronauer (1), Hao Shen (2), Klaus Diepold
(1) ((1) Technical University of Munich, (2) fortiss)
- Abstract summary: Recent development of Deep Reinforcement Learning has demonstrated superior performance of neural networks in solving challenging problems with large or even continuous state spaces.
One specific approach is to deploy neural networks to approximate value functions by minimising the Mean Squared Bellman Error function.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent development of Deep Reinforcement Learning has demonstrated superior
performance of neural networks in solving challenging problems with large or
even continuous state spaces. One specific approach is to deploy neural
networks to approximate value functions by minimising the Mean Squared Bellman
Error function. Despite great successes of Deep Reinforcement Learning,
development of reliable and efficient numerical algorithms to minimise the
Bellman Error is still of great scientific interest and practical demand. Such
a challenge is partially due to the underlying optimisation problem being
highly non-convex or using incorrect gradient information as done in
Semi-Gradient algorithms. In this work, we analyse the Mean Squared Bellman
Error from a smooth optimisation perspective combined with a Residual Gradient
formulation. Our contribution is two-fold.
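For reference, a common single-sample form of the Mean Squared Bellman Error for a parametrised value function $V_\theta$ is shown below; the notation is generic rather than taken from the paper, and the usual double-sampling caveat for stochastic transitions applies.

```latex
\mathrm{MSBE}(\theta)
  = \mathbb{E}_{(s, r, s') \sim \mathcal{D}}
    \Big[ \big( r + \gamma\, V_\theta(s') - V_\theta(s) \big)^{2} \Big]
```

The Residual Gradient formulation differentiates this objective with respect to $\theta$ through both $V_\theta(s)$ and the bootstrapped term $V_\theta(s')$, whereas Semi-Gradient methods drop the latter derivative.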
First, we analyse critical points of the error function and provide technical
insights on the optimisation procedure and design choices for neural networks.
When the existence of global minima is assumed and the objective fulfils
certain conditions, we can eliminate suboptimal local minima when using
over-parametrised neural networks. Based on our analysis, we construct an
efficient Approximate Newton's algorithm and numerically confirm its
theoretical properties, such as local quadratic convergence to a global
minimum.
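The abstract does not spell out the update, but one generic way to realise an approximate Newton step on a vector of Bellman residuals is a damped Gauss-Newton iteration. The sketch below is only an illustration under that assumption; the function names, damping term, and toy residuals are invented for the example and do not come from the paper.

```python
# Generic damped Gauss-Newton step on a residual vector r(theta):
#   theta <- theta - (J^T J + damping * I)^{-1} J^T r
# This illustrates the shape of an "approximate Newton" update; it is not
# the algorithm from the paper.
import jax
import jax.numpy as jnp

def gauss_newton_step(residual_fn, theta, damping=1e-3):
    r = residual_fn(theta)                        # residuals, shape (m,)
    J = jax.jacobian(residual_fn)(theta)          # Jacobian, shape (m, n)
    H = J.T @ J + damping * jnp.eye(theta.size)   # Gauss-Newton Hessian approximation
    return theta - jnp.linalg.solve(H, J.T @ r)

# Toy zero-residual problem: all residuals vanish at theta = (1.0, -0.5).
# For such problems the undamped iteration is locally quadratically
# convergent, which mirrors the convergence claim above.
toy_residuals = lambda th: jnp.array([th[0] - 1.0, th[1] + 0.5, (th[0] - 1.0) * th[1]])
theta = jnp.zeros(2)
for _ in range(5):
    theta = gauss_newton_step(toy_residuals, theta)
```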
Second, we demonstrate the feasibility and generalisation capabilities of the
proposed algorithm empirically on continuous control problems and provide a
numerical verification of our critical point analysis. We also outline the
shortcomings of Semi-Gradients: to benefit from an approximate Newton's
algorithm, the complete derivatives of the Mean Squared Bellman Error must be
considered during training.
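To make that last point concrete, the sketch below contrasts the semi-gradient of the Mean Squared Bellman Error, which stops the gradient through the bootstrapped target, with the residual-gradient (complete-derivative) version. The small value network, the random batch, and all names are illustrative assumptions rather than the paper's implementation; only the treatment of the target term is the point.

```python
# Semi-gradient vs. residual-gradient (complete derivative) of the MSBE.
import jax
import jax.numpy as jnp

def v(params, s):
    # Small MLP value function V_theta(s); the architecture is an arbitrary choice.
    h = jnp.tanh(s @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze(-1)

def msbe(params, s, r, s_next, gamma, semi_gradient):
    # Mean Squared Bellman Error over a batch of (s, r, s') transitions.
    target = r + gamma * v(params, s_next)
    if semi_gradient:
        # Semi-gradient: treat the bootstrapped target as a constant, i.e.
        # drop the derivative of V_theta(s') -- the "incorrect gradient
        # information" criticised above.
        target = jax.lax.stop_gradient(target)
    return jnp.mean((target - v(params, s)) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
params = {
    "W1": 0.1 * jax.random.normal(k1, (3, 16)), "b1": jnp.zeros(16),
    "W2": 0.1 * jax.random.normal(k2, (16, 1)), "b2": jnp.zeros(1),
}
s = jax.random.normal(k3, (32, 3))        # states
s_next = jax.random.normal(k4, (32, 3))   # successor states
r = jnp.ones(32)                          # rewards

g_semi = jax.grad(msbe)(params, s, r, s_next, 0.99, True)
g_full = jax.grad(msbe)(params, s, r, s_next, 0.99, False)
# g_full carries the additional gamma * dV_theta(s')/dtheta contribution
# that the semi-gradient omits.
```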
Related papers
- SGD method for entropy error function with smoothing l0 regularization for neural networks [3.108634881604788]
The entropy error function has been widely used in training neural networks.
We propose a novel entropy error function with smoothing l0 regularization for feed-forward neural networks.
Our work is novel in that it enables neural networks to learn effectively, producing more accurate predictions.
arXiv Detail & Related papers (2024-05-28T19:54:26Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - The limitation of neural nets for approximation and optimization [0.0]
We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems.
Our study begins by determining the best activation function for approximating the objective functions of popular nonlinear optimization test problems.
arXiv Detail & Related papers (2023-11-21T00:21:15Z) - A new approach to generalisation error of machine learning algorithms:
Estimates and convergence [0.0]
We introduce a new approach to the estimation of the (generalisation) error and to convergence.
Our results include estimates of the error without any structural assumption on the neural networks.
arXiv Detail & Related papers (2023-06-23T20:57:31Z) - Can Decentralized Stochastic Minimax Optimization Algorithms Converge
Linearly for Finite-Sum Nonconvex-Nonconcave Problems? [56.62372517641597]
Decentralized minimax optimization has been actively studied in the past few years due to its application in a wide range of machine learning tasks.
This paper develops two novel decentralized minimax optimization algorithms for finite-sum nonconvex-nonconcave problems.
arXiv Detail & Related papers (2023-04-24T02:19:39Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - Proxy Convexity: A Unified Framework for the Analysis of Neural Networks
Trained by Gradient Descent [95.94432031144716]
We propose a unified non-convex optimization framework for the analysis of neural network training.
We show that existing guarantees for networks trained by gradient descent can be unified through the notion of proxy convexity.
arXiv Detail & Related papers (2021-06-25T17:45:00Z) - A Dynamical View on Optimization Algorithms of Overparameterized Neural
Networks [23.038631072178735]
We consider a broad class of optimization algorithms that are commonly used in practice.
As a consequence, we can analyze the convergence behavior of neural networks trained with these algorithms.
We believe our approach can also be extended to other optimization algorithms and network theory.
arXiv Detail & Related papers (2020-10-25T17:10:22Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z) - Robust Deep Learning as Optimal Control: Insights and Convergence
Guarantees [19.28405674700399]
Training with adversarial examples is a popular defense mechanism against adversarial attacks.
By interpreting the min-max problem as an optimal control problem, it has been shown that one can exploit the compositional structure of neural networks.
We provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact methods in optimization.
arXiv Detail & Related papers (2020-05-01T21:26:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.