Bridging adaptive management and reinforcement learning for more robust
decisions
- URL: http://arxiv.org/abs/2303.08731v1
- Date: Wed, 15 Mar 2023 16:14:12 GMT
- Title: Bridging adaptive management and reinforcement learning for more robust
decisions
- Authors: Melissa Chapman, Lily Xu, Marcus Lapeyrolerie, Carl Boettiger
- Abstract summary: We show how reinforcement learning can help us devise robust strategies for managing environmental systems under great uncertainty.
Our synthesis suggests that environmental management and computer science can learn from one another about the practices, promises, and perils of experience-based decision-making.
- Score: 6.152873761869356
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: From out-competing grandmasters in chess to informing high-stakes healthcare
decisions, emerging methods from artificial intelligence are increasingly
capable of making complex and strategic decisions in diverse, high-dimensional,
and uncertain situations. But can these methods help us devise robust
strategies for managing environmental systems under great uncertainty? Here we
explore how reinforcement learning, a subfield of artificial intelligence,
approaches decision problems through a lens similar to adaptive environmental
management: learning through experience to gradually improve decisions with
updated knowledge. We review where reinforcement learning (RL) holds promise
for improving evidence-informed adaptive management decisions even when
classical optimization methods are intractable. For example, model-free deep RL
might help identify quantitative decision strategies even when models are
nonidentifiable. Finally, we discuss technical and social issues that arise
when applying reinforcement learning to adaptive management problems in the
environmental domain. Our synthesis suggests that environmental management and
computer science can learn from one another about the practices, promises, and
perils of experience-based decision-making.
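The experience-based learning loop the abstract describes (gradually improving decisions with updated knowledge) can be illustrated with a minimal tabular Q-learning sketch. The environment, states, and reward values below are hypothetical placeholders for illustration, not from the paper:

```python
import random

def q_learning(transition, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: improve a decision policy from experience alone."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise act on current knowledge.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = transition(state, action)
            # Update the value estimate with the newly observed outcome.
            best_next = max(Q[next_state])
            Q[state][action] += alpha * (reward + gamma * best_next
                                         - Q[state][action])
            state = next_state
    return Q

# Toy two-state "management" problem: action 1 in state 0 moves the
# system to a desirable state 1 and ends the episode with reward 1.
def toy_env(state, action):
    if state == 0 and action == 1:
        return 1, 1.0, True
    return 0, 0.0, False
```

The agent never sees a model of `toy_env`; it learns which action is better purely from observed transitions, mirroring the adaptive-management cycle of acting, monitoring, and updating.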
Related papers
- Mimicking Human Intuition: Cognitive Belief-Driven Q-Learning [5.960184723807347]
We propose Cognitive Belief-Driven Q-Learning (CBDQ), which integrates subjective belief modeling into the Q-learning framework.
CBDQ enhances decision-making accuracy by endowing agents with human-like learning and reasoning capabilities.
We evaluate the proposed method on discrete control benchmark tasks in various complex environments.
arXiv Detail & Related papers (2024-10-02T16:50:29Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Leveraging automatic strategy discovery to teach people how to select better projects [0.9821874476902969]
The decisions of individuals and organizations are often suboptimal because normative decision strategies are too demanding in the real world.
Recent work suggests that some errors can be prevented by leveraging artificial intelligence to discover and teach prescriptive decision strategies.
This article is the first to extend this approach to a real-world decision problem, namely project selection.
arXiv Detail & Related papers (2024-06-06T13:51:44Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the final decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Risk-reducing design and operations toolkit: 90 strategies for managing risk and uncertainty in decision problems [65.268245109828]
This paper develops a catalog of such strategies and develops a framework for them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and proposes a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Learning-Driven Decision Mechanisms in Physical Layer: Facts, Challenges, and Remedies [23.446736654473753]
This paper introduces the common assumptions in the physical layer to highlight their discrepancies with practical systems.
As a solution, learning algorithms are examined by considering implementation steps and challenges.
arXiv Detail & Related papers (2021-02-14T22:26:44Z)
- Automatic Discovery of Interpretable Planning Strategies [9.410583483182657]
We introduce AI-Interpret, a method for transforming idiosyncratic policies into simple and interpretable descriptions.
We show that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions.
arXiv Detail & Related papers (2020-05-24T12:24:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.