Beyond Bayes-optimality: meta-learning what you know you don't know
- URL: http://arxiv.org/abs/2209.15618v1
- Date: Fri, 30 Sep 2022 17:44:01 GMT
- Title: Beyond Bayes-optimality: meta-learning what you know you don't know
- Authors: Jordi Grau-Moya, Grégoire Delétang, Markus Kunesch, Tim Genewein,
Elliot Catt, Kevin Li, Anian Ruoss, Chris Cundy, Joel Veness, Jane Wang,
Marcus Hutter, Christopher Summerfield, Shane Legg, Pedro Ortega
- Abstract summary: We show that risk- and ambiguity-sensitivity also emerge as the result of an optimization problem using modified meta-training algorithms.
We empirically test our proposed meta-training algorithms on agents exposed to foundational classes of decision-making experiments.
- Score: 27.941629748440224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-training agents with memory has been shown to culminate in Bayes-optimal
agents, which casts Bayes-optimality as the implicit solution to a numerical
optimization problem rather than an explicit modeling assumption. Bayes-optimal
agents are risk-neutral, since they solely attune to the expected return, and
ambiguity-neutral, since they act in new situations as if the uncertainty were
known. This is in contrast to risk-sensitive agents, which additionally exploit
the higher-order moments of the return, and ambiguity-sensitive agents, which
act differently when recognizing situations in which they lack knowledge.
Humans are also known to be averse to ambiguity and sensitive to risk in ways
that aren't Bayes-optimal, indicating that such sensitivity can confer
advantages, especially in safety-critical situations. How can we extend the
meta-learning protocol to generate risk- and ambiguity-sensitive agents? The
goal of this work is to fill this gap in the literature by showing that risk-
and ambiguity-sensitivity also emerge as the result of an optimization problem
using modified meta-training algorithms, which manipulate the
experience-generation process of the learner. We empirically test our proposed
meta-training algorithms on agents exposed to foundational classes of
decision-making experiments and demonstrate that they become sensitive to risk
and ambiguity.
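The contrast the abstract draws between risk-neutral and risk-sensitive evaluation can be made concrete with a small numerical sketch. The exponential-utility objective below is a standard risk-sensitive criterion used purely for illustration; it is not the paper's meta-training algorithm:

```python
import math

def risk_neutral_value(returns):
    """Bayes-optimal agents score a policy by its expected return alone."""
    return sum(returns) / len(returns)

def risk_sensitive_value(returns, beta):
    """Exponential-utility objective: beta < 0 is risk-averse, beta > 0
    risk-seeking.  Its Taylor expansion is mean + (beta/2) * variance + ...,
    so it attunes to higher-order moments of the return, not just the mean."""
    n = len(returns)
    return (1.0 / beta) * math.log(sum(math.exp(beta * r) for r in returns) / n)

# Two return distributions with identical means but different spread.
safe = [1.0, 1.0, 1.0, 1.0]
risky = [4.0, 0.0, 0.0, 0.0]

print(risk_neutral_value(safe), risk_neutral_value(risky))      # equal
print(risk_sensitive_value(safe, -1.0), risk_sensitive_value(risky, -1.0))
```

With beta < 0 the log-sum-exp objective discounts high-variance outcomes, so the two distributions are no longer scored equally even though a risk-neutral agent cannot tell them apart.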
Related papers
- Uncertainty-Aware Decoding with Minimum Bayes Risk [70.6645260214115]
We show how Minimum Bayes Risk decoding, which selects model generations according to an expected risk, can be generalized into a principled uncertainty-aware decoding method.
We show that this modified expected risk is useful for both choosing outputs and deciding when to abstain from generation and can provide improvements without incurring overhead.
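For orientation, plain Minimum Bayes Risk decoding (the baseline this paper generalizes) can be sketched in a few lines. The pairwise loss and candidate strings below are toy stand-ins; the paper's uncertainty-aware extension is not reproduced here:

```python
def mbr_decode(candidates, loss):
    """Return the candidate with the lowest expected risk, estimated by
    averaging a pairwise loss against all sampled candidates
    (a Monte Carlo approximation of risk under the model's output
    distribution)."""
    def expected_risk(c):
        return sum(loss(c, other) for other in candidates) / len(candidates)
    return min(candidates, key=expected_risk)

def token_loss(a, b):
    """Toy pairwise loss: Jaccard disagreement over tokens, a stand-in
    for a real metric such as 1 - BLEU."""
    ta, tb = set(a.split()), set(b.split())
    return 1.0 - len(ta & tb) / len(ta | tb)

samples = ["the cat sat", "the cat sat down", "a dog ran"]
print(mbr_decode(samples, token_loss))
```

The outlier sample disagrees with everything else, so it accrues high expected risk and is never selected; consensus candidates win.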
arXiv Detail & Related papers (2025-03-07T10:55:12Z)
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
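The quoted definition of true criticality suggests a direct Monte Carlo estimator. The sketch below uses a made-up chain environment and policy; it illustrates the definition only, not the paper's implementation:

```python
import random

def true_criticality(env_step, policy, state, n, actions,
                     rollouts=1000, horizon=10):
    """Expected drop in return when the agent takes n consecutive random
    actions from `state` before resuming its policy, versus following
    the policy throughout (estimated by averaging over rollouts)."""
    def rollout(deviate):
        s, total = state, 0.0
        for t in range(horizon):
            a = random.choice(actions) if (deviate and t < n) else policy(s)
            s, r = env_step(s, a)
            total += r
        return total
    on_policy = sum(rollout(False) for _ in range(rollouts)) / rollouts
    deviated = sum(rollout(True) for _ in range(rollouts)) / rollouts
    return on_policy - deviated

# Toy chain: moving right (+1) earns reward 1; the policy always moves right,
# so each of the n random steps forfeits reward 0.5 in expectation.
random.seed(0)
env_step = lambda s, a: (s + a, 1.0 if a == 1 else 0.0)
print(true_criticality(env_step, lambda s: 1, state=0, n=4, actions=[1, -1]))
```

On this toy chain the estimate should land near n * 0.5 = 2.0, the expected reward forfeited during the random deviation.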
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery [53.08822154199948]
Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks.
This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics.
We develop a method that directly trains on scenarios with high learnability.
arXiv Detail & Related papers (2024-08-27T14:31:54Z)
- Information-Theoretic Safe Bayesian Optimization [59.758009422067005]
We consider a sequential decision making task, where the goal is to optimize an unknown function without evaluating parameters that violate an unknown (safety) constraint.
Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case.
We propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.
arXiv Detail & Related papers (2024-02-23T14:31:10Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
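Generically, an uncertainty Bellman equation propagates local epistemic uncertainty through the dynamics the way the ordinary Bellman equation propagates reward. Below is a minimal tabular fixed-point illustration of that generic form, assuming known transitions; it is not the paper's QU-SAC estimator:

```python
def solve_ube(local_uncertainty, transitions, gamma=0.9, iters=200):
    """Fixed-point iteration for U(s) = u(s) + gamma^2 * sum_s' P(s'|s) U(s'),
    the generic uncertainty-Bellman recursion.  gamma^2 appears because
    variances, not values, are being accumulated along trajectories."""
    states = list(local_uncertainty)
    U = {s: 0.0 for s in states}
    for _ in range(iters):
        U = {s: local_uncertainty[s]
                + gamma ** 2 * sum(p * U[s2] for s2, p in transitions[s])
             for s in states}
    return U

# Two-state chain: "a" carries epistemic uncertainty, absorbing "b" none;
# "a" loops to itself with probability 0.5, else falls into "b".
u = {"a": 1.0, "b": 0.0}
P = {"a": [("a", 0.5), ("b", 0.5)], "b": [("b", 1.0)]}
print(solve_ube(u, P))
```

The closed form for the loop state is U(a) = 1 / (1 - 0.5 * gamma^2), which the iteration converges to; downstream, such a U can serve as the variance term a risk-seeking or risk-averse policy optimizer trades off against expected return.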
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Dynamic Memory for Interpretable Sequential Optimisation [0.0]
We present a solution to handling non-stationarity that is suitable for deployment at scale.
We develop an adaptive Bayesian learning agent that employs a novel form of dynamic memory.
We describe the architecture of a large-scale deployment of automatic optimisation-as-a-service.
arXiv Detail & Related papers (2022-06-28T12:29:13Z)
- Adaptive Risk Tendency: Nano Drone Navigation in Cluttered Environments with Distributional Reinforcement Learning [17.940958199767234]
We present a distributional reinforcement learning framework to learn adaptive risk tendency policies.
We show our algorithm can adjust its risk-sensitivity on the fly both in simulation and real-world experiments.
arXiv Detail & Related papers (2022-03-28T13:39:58Z)
- Risk Sensitive Model-Based Reinforcement Learning using Uncertainty Guided Planning [0.0]
In this paper, risk sensitivity is promoted in a model-based reinforcement learning algorithm.
We propose uncertainty guided cross-entropy method planning, which penalises action sequences that result in high variance state predictions.
Experiments display the ability for the agent to identify uncertain regions of the state space during planning and to take actions that maintain the agent within high confidence areas.
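The penalty described above can be sketched as a scoring rule for candidate action sequences: mean predicted return under an ensemble, minus a penalty on the spread of the ensemble's state predictions. The ensemble models, toy dynamics, and weight `lam` below are illustrative assumptions, not the paper's planner:

```python
def plan_score(action_seq, ensemble, state, lam=1.0):
    """Score an action sequence by mean predicted return across an ensemble
    of dynamics models, minus lam times the variance of their final-state
    predictions, so the planner prefers sequences that stay in regions
    where the models agree (high confidence)."""
    returns, final_states = [], []
    for model in ensemble:
        s, total = state, 0.0
        for a in action_seq:
            s, r = model(s, a)
            total += r
        returns.append(total)
        final_states.append(s)
    mean_ret = sum(returns) / len(returns)
    mean_s = sum(final_states) / len(final_states)
    var_s = sum((s - mean_s) ** 2 for s in final_states) / len(final_states)
    return mean_ret - lam * var_s

# Toy 1-D ensemble: the models agree near the origin and drift apart
# in proportion to |state|, so far-ranging plans are penalised.
ensemble = [lambda s, a, k=k: (s + a + k * 0.1 * abs(s), -abs(s + a))
            for k in (-1, 0, 1)]
print(plan_score([1, 1], ensemble, state=0.0))
print(plan_score([0, 0], ensemble, state=0.0))
```

Inside a cross-entropy method loop, this score would rank the sampled action sequences before refitting the sampling distribution to the top-scoring ones.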
arXiv Detail & Related papers (2021-11-09T07:28:00Z)
- Policy Gradient Bayesian Robust Optimization for Imitation Learning [49.881386773269746]
We derive a novel policy gradient-style robust optimization approach, PG-BROIL, to balance expected performance and risk.
Results suggest PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse.
arXiv Detail & Related papers (2021-06-11T16:49:15Z)
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
- Bayesian Robust Optimization for Imitation Learning [34.40385583372232]
Inverse reinforcement learning can enable generalization to new states by learning a parameterized reward function.
Existing safe imitation learning approaches based on IRL deal with this uncertainty using a maxmin framework.
BROIL provides a natural way to interpolate between return-maximizing and risk-minimizing behaviors.
arXiv Detail & Related papers (2020-07-24T01:52:11Z)
- Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts [22.87432549580184]
We formulate this as Bayesian Reinforcement Learning over latent Markov Decision Processes (MDPs).
We first obtain an ensemble of experts, one for each latent MDP, and fuse their advice to compute a baseline policy.
Next, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty.
BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.
arXiv Detail & Related papers (2020-02-07T23:10:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.