Solving the scalarization issues of Advantage-based Reinforcement
Learning Algorithms
- URL: http://arxiv.org/abs/2004.04120v4
- Date: Fri, 1 Oct 2021 14:53:15 GMT
- Title: Solving the scalarization issues of Advantage-based Reinforcement
Learning Algorithms
- Authors: Federico A. Galatolo, Mario G.C.A. Cimino, Gigliola Vaglini
- Abstract summary: The paper investigates issues that arise from scalarizing the multi-objective optimization problem in the Advantage Actor Critic (A2C) reinforcement learning algorithm.
It shows how a naive scalarization can lead to overlapping gradients.
It also discusses how the entropy regularization term can become a source of uncontrolled noise.
- Score: 2.400834442447969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this research, some of the issues that arise from the scalarization of the
multi-objective optimization problem in the Advantage Actor Critic (A2C)
reinforcement learning algorithm are investigated. The paper shows how a naive
scalarization can lead to gradients overlapping. Furthermore, the possibility
that the entropy regularization term can be a source of uncontrolled noise is
discussed. With respect to the above issues, a technique to avoid gradient
overlapping is proposed, while keeping the same loss formulation. Moreover, a
method to avoid the uncontrolled noise, by sampling the actions from
distributions with a desired minimum entropy, is investigated. Pilot
experiments have been carried out to show how the proposed method speeds up the
training. The proposed approach can be applied to any Advantage-based
Reinforcement Learning algorithm.
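To make the two issues concrete, the sketch below shows a naive scalarized A2C loss in which a single scalar mixes the policy, value, and entropy objectives, together with a hypothetical minimum-entropy sampler. This is our own illustration under assumed network sizes and coefficients, not the authors' exact technique; the helper name sample_with_min_entropy is invented for the example.

```python
# Illustrative sketch only (not the paper's implementation): a naive
# scalarized A2C loss and a hypothetical minimum-entropy action sampler.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.pi_head = nn.Linear(hidden, n_actions)  # actor head
        self.v_head = nn.Linear(hidden, 1)           # critic head

    def forward(self, obs):
        h = self.body(obs)
        return Categorical(logits=self.pi_head(h)), self.v_head(h).squeeze(-1)

def scalarized_a2c_loss(model, obs, actions, returns,
                        value_coef=0.5, entropy_coef=0.01):
    dist, value = model(obs)
    advantage = returns - value.detach()  # detach: critic trained only via value_loss
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - value).pow(2).mean()
    entropy = dist.entropy().mean()
    # Naive scalarization: one scalar combines all three objectives, so a
    # single backward pass pushes their (possibly conflicting) gradients
    # through the shared body -- the "gradient overlapping" described above.
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

def sample_with_min_entropy(logits, min_entropy, grow=1.05, max_iter=100):
    # Hypothetical helper: flatten the distribution by raising the softmax
    # temperature until it reaches the desired minimum entropy, then sample.
    temperature = 1.0
    dist = Categorical(logits=logits / temperature)
    for _ in range(max_iter):
        if dist.entropy().min() >= min_entropy:
            break
        temperature *= grow
        dist = Categorical(logits=logits / temperature)
    return dist.sample()
```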
Related papers
- Dealing with unbounded gradients in stochastic saddle-point optimization [9.983014605039658]
We study the performance of first-order methods for finding saddle points of convex-concave functions.
A notorious challenge is that the gradients can grow arbitrarily large during optimization.
We propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees.
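For reference, a plain gradient descent-ascent step for a convex-concave saddle-point problem min_x max_y f(x, y) is sketched below; the paper's regularization that keeps the iterates bounded is not reproduced here.

```python
import numpy as np

def gda_step(x, y, grad_x, grad_y, eta=0.1):
    # One gradient descent-ascent step: descend in x, ascend in y.
    # Without further safeguards, large gradients can make these iterates
    # blow up, which is the failure mode the regularization above targets.
    return x - eta * np.asarray(grad_x), y + eta * np.asarray(grad_y)
```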
arXiv Detail & Related papers (2024-02-21T16:13:49Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefit of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
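The clipping mechanism these stepsize rules build on can be sketched as follows; this is a generic illustration, and the specific stepsize schedules analysed in the paper are not shown.

```python
import numpy as np

def clipped_sgd_step(w, grad, stepsize, clip_level):
    # Rescale the stochastic gradient to at most clip_level in norm before
    # the update; clipping is what tames heavy-tailed gradient noise.
    norm = np.linalg.norm(grad)
    if norm > clip_level:
        grad = grad * (clip_level / norm)
    return w - stepsize * grad
```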
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement-learning-based ZO algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization, instead of using random sampling.
Our results show that ZO-RL can effectively reduce the variance of the ZO gradient estimate by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
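For context, the random-sampling baseline that ZO-RL replaces with a learned sampling policy is the standard two-point zeroth-order gradient estimator, sketched below under assumed smoothing and sample-count parameters.

```python
import numpy as np

def zo_gradient_estimate(f, x, num_samples=10, mu=1e-3, rng=None):
    # Two-point zeroth-order estimator with random Gaussian perturbations.
    # ZO-RL learns where to place these perturbations instead of drawing
    # them at random, which lowers the variance of the estimate.
    rng = rng if rng is not None else np.random.default_rng()
    grad = np.zeros_like(x, dtype=float)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return grad / num_samples
```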
arXiv Detail & Related papers (2021-04-09T14:50:59Z)
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: A non-asymptotic viewpoint [8.087699764574788]
We propose two policy gradient algorithms for solving the problem of control in an off-policy reinforcement learning context.
Both algorithms incorporate a smoothed functional (SF) based gradient estimation scheme.
arXiv Detail & Related papers (2021-01-06T17:06:42Z)
- Asymptotic study of stochastic adaptive algorithm in non-convex landscape [2.1320960069210484]
This paper studies, under some assumptions, the properties of adaptive algorithms widely used in optimization and machine learning.
Among them are Adagrad and RMSprop, which are involved in most of the black-box deep learning algorithms.
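For reference, the RMSprop update analysed in this line of work keeps a running average of squared gradients and scales each coordinate's step accordingly; a minimal sketch follows.

```python
import numpy as np

def rmsprop_step(w, grad, avg_sq, stepsize=1e-3, beta=0.9, eps=1e-8):
    # Exponential moving average of squared gradients, then a per-coordinate
    # rescaled gradient step (Adagrad accumulates the squares instead of
    # averaging them).
    avg_sq = beta * avg_sq + (1.0 - beta) * grad ** 2
    w = w - stepsize * grad / (np.sqrt(avg_sq) + eps)
    return w, avg_sq
```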
arXiv Detail & Related papers (2020-12-10T12:54:45Z)
- Learning to solve TV regularized problems with unrolled algorithms [18.241062505073234]
Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals.
We develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures.
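The TV-regularized denoising objective that such unrolled networks are trained to minimize can be written, for a 1-D signal, as in the sketch below; this illustrates only the objective, not the unrolling itself.

```python
import numpy as np

def tv_denoising_objective(x, y, lam):
    # Quadratic data-fit term plus a total-variation penalty on neighbouring
    # differences; the penalty promotes piece-wise constant estimates x of
    # the noisy observation y.
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))
```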
arXiv Detail & Related papers (2020-10-19T14:19:02Z)
- Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z)
- Differentially Private Accelerated Optimization Algorithms [0.7874708385247353]
We present two classes of differentially private optimization algorithms.
The first algorithm is inspired by Polyak's heavy ball method.
The second class of algorithms is based on Nesterov's accelerated gradient method.
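A minimal sketch of the first family, assuming the privacy mechanism is Gaussian noise added to the gradient (the calibration of the noise to a privacy budget, which is the paper's contribution, is not shown):

```python
import numpy as np

def noisy_heavy_ball_step(w, w_prev, grad, stepsize, momentum,
                          noise_scale, rng):
    # Polyak heavy-ball update with a Gaussian perturbation of the gradient.
    noisy_grad = grad + noise_scale * rng.standard_normal(grad.shape)
    w_next = w - stepsize * noisy_grad + momentum * (w - w_prev)
    return w_next, w
```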
arXiv Detail & Related papers (2020-08-05T08:23:01Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.