Probabilistic Actor-Critic: Learning to Explore with PAC-Bayes Uncertainty
- URL: http://arxiv.org/abs/2402.03055v1
- Date: Mon, 5 Feb 2024 14:42:45 GMT
- Title: Probabilistic Actor-Critic: Learning to Explore with PAC-Bayes Uncertainty
- Authors: Bahareh Tasdighi, Nicklas Werge, Yi-Shan Wu, Melih Kandemir
- Abstract summary: We introduce Probabilistic Actor-Critic (PAC), a novel reinforcement learning algorithm with improved continuous control performance.
PAC achieves this by integrating stochastic policies and critics, creating a dynamic synergy between the estimation of critic uncertainty and actor training.
- Score: 14.348879224354125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Probabilistic Actor-Critic (PAC), a novel reinforcement learning
algorithm with improved continuous control performance thanks to its ability to
mitigate the exploration-exploitation trade-off. PAC achieves this by
seamlessly integrating stochastic policies and critics, creating a dynamic
synergy between the estimation of critic uncertainty and actor training. The
key contribution of our PAC algorithm is that it explicitly models and infers
epistemic uncertainty in the critic through Probably Approximately
Correct-Bayesian (PAC-Bayes) analysis. This incorporation of critic uncertainty
enables PAC to adapt its exploration strategy as it learns, guiding the actor's
decision-making process. PAC compares favorably against fixed or pre-scheduled
exploration schemes of the prior art. The synergy between stochastic policies
and critics, guided by PAC-Bayes analysis, represents a fundamental step
towards a more adaptive and effective exploration strategy in deep
reinforcement learning. We report empirical evaluations demonstrating PAC's
enhanced stability and improved performance over the state of the art in
diverse continuous control problems.
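The abstract gives no pseudocode, so the following is only a minimal sketch of the core idea: a stochastic (here, ensemble) critic whose disagreement acts as an epistemic-uncertainty signal that biases the actor toward uncertain regions. All names (`EnsembleCritic`, `beta`) are illustrative, and a faithful implementation would derive the bonus from the paper's PAC-Bayes bound rather than from raw ensemble spread.

```python
import torch
import torch.nn as nn

class EnsembleCritic(nn.Module):
    """K independent Q-networks; their spread approximates epistemic uncertainty."""
    def __init__(self, state_dim, action_dim, k=5, hidden=256):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(k)
        ])

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        qs = torch.stack([m(x) for m in self.members], dim=0)  # (k, batch, 1)
        return qs.mean(dim=0), qs.std(dim=0)

def actor_objective(critic, state, action, beta=0.5):
    """Optimistic actor target: mean Q plus an uncertainty bonus.

    In the paper the bonus would come from a PAC-Bayes bound on the
    critic's Bellman error; here ensemble std is a crude stand-in.
    """
    q_mean, q_std = critic(state, action)
    return (q_mean + beta * q_std).mean()
```

Larger `beta` means more optimism in the face of uncertainty; annealing it as the critic's uncertainty estimate tightens is the kind of adaptive exploration schedule the abstract alludes to.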
Related papers
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
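For context, a minimal split conformal prediction routine looks roughly as follows; this is the standard recipe, not this paper's adversarial variant, and the function name is illustrative.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p(true class) score.

    cal_probs: (n, C) softmax outputs on a held-out calibration set
    cal_labels: (n,) integer labels
    test_probs: (m, C) softmax outputs on test points
    Returns a boolean (m, C) matrix: True marks classes kept in the set.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]   # nonconformity scores
    q_level = np.ceil((n + 1) * (1 - alpha)) / n         # finite-sample correction
    qhat = np.quantile(scores, min(q_level, 1.0))
    return (1.0 - test_probs) <= qhat                    # prediction sets
```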
arXiv Detail & Related papers (2024-05-14T18:05:19Z) - Actor-Critic Reinforcement Learning with Phased Actor [10.577516871906816]
We propose a novel phased actor in actor-critic (PAAC) method to improve policy gradient estimation.
PAAC accounts for both $Q$ value and TD error in its actor update.
Results show that PAAC leads to significant performance improvement measured by total cost, learning variance, robustness, learning speed and success rate.
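The summary does not spell out the update rule; a purely illustrative reading of "accounts for both Q value and TD error" is a two-phase actor loss such as the sketch below. All names and the gating scheme are assumptions, not PAAC's actual rule.

```python
import torch

def paac_style_actor_loss(q_net, target_q_net, policy, batch, gamma=0.99, phase="q"):
    """Illustrative two-phase actor loss: one phase follows the Q estimate,
    the other downweights actions whose TD error suggests an unreliable Q.
    The real PAAC update is specified in the paper, not here."""
    s, a, r, s2, done = batch
    q = q_net(s, policy(s))
    with torch.no_grad():
        td_error = r + gamma * (1 - done) * target_q_net(s2, policy(s2)) - q_net(s, a)
    if phase == "q":
        return -q.mean()                      # plain deterministic policy gradient
    weight = torch.exp(-td_error.abs())       # trust Q less where the residual is large
    return -(weight * q).mean()
```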
arXiv Detail & Related papers (2024-04-18T01:27:31Z) - Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error [9.473089575932375]
Recent studies explore state-adversarial robustness and suggest the potential lack of an optimal robust policy (ORP).
We prove the existence of a deterministic and stationary ORP that aligns with the Bellman optimal policy.
This finding motivates us to train a Consistent Adversarial Robust Deep Q-Network (CAR-DQN) by minimizing a surrogate of Bellman Infinity-error.
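A common way to make an infinity-norm objective trainable is a log-sum-exp surrogate; the sketch below shows that generic idea applied to the Bellman residual, not CAR-DQN's actual surrogate.

```python
import torch

def bellman_linf_surrogate(q_net, target_q_net, batch, gamma=0.99, tau=0.05):
    """Smooth surrogate for a Bellman infinity-error: log-sum-exp with
    temperature tau over absolute TD residuals approaches the batch
    maximum as tau -> 0. A sketch of the generic idea only."""
    s, a, r, s2, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_q_net(s2).max(dim=1).values
    residual = (q - target).abs()
    return tau * torch.logsumexp(residual / tau, dim=0)   # ~ residual.max()
```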
arXiv Detail & Related papers (2024-02-03T14:25:33Z) - PAC-Bayesian Soft Actor-Critic Learning [9.752336113724928]
Actor-critic algorithms address the dual goals of reinforcement learning (RL), policy evaluation and improvement, via two separate function approximators.
We tackle the training instability that arises from this separation by employing an existing Probably Approximately Correct (PAC) Bayesian bound for the first time as the critic training objective of the Soft Actor-Critic (SAC) algorithm.
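A hedged sketch of what a PAC-Bayes critic objective can look like, in the McAllester style: an empirical Bellman-error term plus a KL complexity penalty over a Gaussian weight posterior. The exact bound used in that paper may differ.

```python
import math
import torch
import torch.distributions as D

def pac_bayes_critic_loss(emp_bellman_error, mu, log_sigma, m,
                          prior_sigma=1.0, delta=0.05):
    """emp_bellman_error: mean squared Bellman residual under sampled weights
    mu, log_sigma: parameters of the Gaussian posterior over critic weights
    m: number of transitions backing the empirical term"""
    posterior = D.Normal(mu, log_sigma.exp())
    prior = D.Normal(torch.zeros_like(mu), prior_sigma * torch.ones_like(mu))
    kl = D.kl_divergence(posterior, prior).sum()
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(m) / delta)) / (2 * m))
    return emp_bellman_error + complexity
```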
arXiv Detail & Related papers (2023-01-30T10:44:15Z) - Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and empirical case study under which conditions and to what extent these guarantees for meta-learning improve upon PAC-Bayesian per-task learning bounds.
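For reference, the classical per-task PAC-Bayesian bound (McAllester/Maurer form) that such meta-learning guarantees build on reads: with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $m$, simultaneously for all posteriors $\rho$,

```latex
L(\rho) \;\le\; \widehat{L}_S(\rho) + \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

Meta-learning the prior $\pi$ across tasks is what tightens the KL term for each individual task.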
arXiv Detail & Related papers (2022-11-14T08:51:04Z) - CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery [88.97076030698433]
We introduce Contrastive Intrinsic Control (CIC), an algorithm for unsupervised skill discovery.
CIC explicitly incentivizes diverse behaviors by maximizing state entropy.
We find that CIC substantially improves over prior unsupervised skill discovery methods.
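State-entropy bonuses of this kind are often implemented with a particle-based k-nearest-neighbour estimator; here is a minimal sketch of that estimator (CIC applies it in a learned contrastive embedding space, which is omitted here).

```python
import torch

def knn_entropy_reward(states, k=12):
    """Particle-based state-entropy bonus: the distance to a state's k-th
    nearest neighbour in the batch is (up to constants) a proxy for local
    state entropy. Raw states stand in for CIC's learned embedding."""
    dists = torch.cdist(states, states)                        # (n, n) pairwise distances
    knn_dist = dists.topk(k + 1, largest=False).values[:, -1]  # skip the zero self-distance
    return torch.log(knn_dist + 1.0)                           # one intrinsic reward per state
```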
arXiv Detail & Related papers (2022-02-01T00:36:29Z) - Off-policy Reinforcement Learning with Optimistic Exploration and Distribution Correction [73.77593805292194]
We train a separate exploration policy to maximize an approximate upper confidence bound of the critics in an off-policy actor-critic framework.
To mitigate the off-policy-ness, we adapt the recently introduced DICE framework to learn a distribution correction ratio for off-policy actor-critic training.
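A minimal sketch of acting optimistically against an ensemble of critics follows; the candidate-sampling API `policy.sample` is hypothetical, and the paper trains a separate exploration policy rather than searching over candidates.

```python
import torch

def optimistic_action(policy, critics, state, beta=1.0):
    """Pick the candidate action with the highest approximate upper
    confidence bound over an ensemble of critics (illustrative only)."""
    actions = policy.sample(state, n=10)                 # (10, action_dim) candidates, hypothetical API
    qs = torch.stack([c(state.expand(10, -1), actions)   # state assumed (1, state_dim)
                      for c in critics])                 # (K, 10, 1)
    ucb = qs.mean(dim=0) + beta * qs.std(dim=0)          # mean + beta * spread
    return actions[ucb.squeeze(-1).argmax()]
```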
arXiv Detail & Related papers (2021-10-22T22:07:51Z) - A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation [61.740187363451746]
Marginalized importance sampling (MIS) measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution.
We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy.
We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.
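In the tabular case the observation is direct, as the discounted occupancy of the target policy falls out of its successor representation; a sketch under that simplification:

```python
import numpy as np

def occupancy_from_sr(P_pi, mu0, gamma=0.99):
    """Tabular sketch: the discounted occupancy of a policy follows from its
    successor representation Psi = (I - gamma * P_pi)^{-1}. P_pi is the
    state-action transition matrix under the target policy, mu0 the initial
    state-action distribution; both flattened over (s, a) pairs."""
    n = P_pi.shape[0]
    psi = np.linalg.inv(np.eye(n) - gamma * P_pi)   # successor representation
    return (1 - gamma) * mu0 @ psi                  # discounted occupancy d_pi

# Density ratio against a sampling occupancy d_mu (e.g. estimated from replay data):
# w = occupancy_from_sr(P_pi, mu0) / d_mu
```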
arXiv Detail & Related papers (2021-06-12T20:21:38Z) - A PAC-Bayes Analysis of Adversarial Robustness [0.0]
We propose the first general PAC-Bayesian generalization bounds for adversarial robustness.
We leverage the PAC-Bayesian framework to bound the averaged risk under perturbations for majority votes.
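A standard first step behind such results is the factor-two relation between the majority vote and the Gibbs (averaged) risk, after which any PAC-Bayes bound on the Gibbs risk transfers to the vote; how the adversarial perturbations enter the risks is specific to the paper.

```latex
R(\mathrm{MV}_\rho) \;\le\; 2\,\mathbb{E}_{h\sim\rho}\!\left[R(h)\right]
```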
arXiv Detail & Related papers (2021-02-19T10:23:48Z) - PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403]
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
arXiv Detail & Related papers (2020-02-13T15:01:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.