Improving Confidence in Evolutionary Mine Scheduling via Uncertainty
Discounting
- URL: http://arxiv.org/abs/2305.17957v1
- Date: Mon, 29 May 2023 08:43:09 GMT
- Title: Improving Confidence in Evolutionary Mine Scheduling via Uncertainty
Discounting
- Authors: Michael Stimson, William Reid, Aneta Neumann, Simon Ratcliffe, Frank
Neumann
- Abstract summary: We introduce a new approach for determining an "optimal schedule under uncertainty".
This treatment of uncertainty within an economic framework reduces previously difficult-to-use models of variability into actionable insights.
We provide experimental studies using Maptek's mine planning software Evolution.
- Score: 10.609857097723266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mine planning is a complex task that involves many uncertainties. During
early stage feasibility, available mineral resources can only be estimated
based on limited sampling of ore grades from sparse drilling, leading to large
uncertainty in under-sampled parts of the deposit. Planning the extraction
schedule of ore over the life of a mine is crucial for its economic viability.
We introduce a new approach for determining an "optimal schedule under
uncertainty" that provides probabilistic bounds on the profits obtained in each
period. This treatment of uncertainty within an economic framework reduces
previously difficult-to-use models of variability into actionable insights. The
new method discounts profits based on uncertainty within an evolutionary
algorithm, sacrificing economic optimality of a single geological model for
improving the downside risk over an ensemble of equally likely models. We
provide experimental studies using Maptek's mine planning software Evolution.
Our results show that our new approach successfully makes effective use of
uncertainty information in the mine planning process.
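The core idea of the abstract (discounting per-period profits by their spread across an ensemble of equally likely geological models, inside an evolutionary loop) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the mean-minus-k-sigma discount rule, the list-of-block-values model representation, and the simple swap-mutation loop are all illustrative choices, not Maptek Evolution's actual implementation.

```python
import random
import statistics

def schedule_profit(schedule, model, gamma=0.9):
    """Net present value of one extraction schedule under one geological model.

    Illustrative stand-in: `model` is a list of per-block values, and the
    schedule is an ordering of block indices; later periods are discounted.
    """
    return sum(gamma ** t * model[block] for t, block in enumerate(schedule))

def discounted_fitness(schedule, models, k=1.0):
    """Mean profit minus k standard deviations across the model ensemble.

    Penalising spread sacrifices a little expected profit on any single
    model to improve the downside outcome over the whole ensemble.
    """
    profits = [schedule_profit(schedule, m) for m in models]
    return statistics.mean(profits) - k * statistics.pstdev(profits)

def evolve(models, n_blocks, generations=200, pop_size=20, k=1.0, seed=0):
    """Minimal steady-state evolutionary loop over block orderings."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_blocks), n_blocks) for _ in range(pop_size)]
    for _ in range(generations):
        # Select the best schedule under the uncertainty-discounted fitness.
        parent = max(pop, key=lambda s: discounted_fitness(s, models, k))
        child = parent[:]
        i, j = rng.randrange(n_blocks), rng.randrange(n_blocks)
        child[i], child[j] = child[j], child[i]  # swap mutation
        # Replace the worst member of the population with the child.
        worst = min(range(pop_size),
                    key=lambda idx: discounted_fitness(pop[idx], models, k))
        pop[worst] = child
    return max(pop, key=lambda s: discounted_fitness(s, models, k))
```

Because `pstdev` is non-negative, the discounted fitness of any schedule is never above its mean profit; raising `k` trades more expected value for a tighter downside.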
Related papers
- Uncertainty-Aware Strategies: A Model-Agnostic Framework for Robust Financial Optimization through Subsampling [0.7916373508978822]
This paper addresses the challenge of model uncertainty in quantitative finance.
Decisions in portfolio allocation, derivative pricing, and risk management rely on estimating models from limited data.
We superimpose an outer "uncertainty measure", motivated by traditional monetary risk measures, on the space of models.
arXiv Detail & Related papers (2025-06-08T21:55:00Z)
- Look Before Leap: Look-Ahead Planning with Uncertainty in Reinforcement Learning [4.902161835372679]
We propose a novel framework for uncertainty-aware policy optimization with model-based exploratory planning.
In the policy optimization phase, we leverage an uncertainty-driven exploratory policy to actively collect diverse training samples.
Our approach offers flexibility and applicability to tasks with varying state/action spaces and reward structures.
arXiv Detail & Related papers (2025-03-26T01:07:35Z)
- Predicting Bad Goods Risk Scores with ARIMA Time Series: A Novel Risk Assessment Approach [0.0]
This research presents a novel framework that integrates Time Series ARIMA models with a proprietary formula designed to calculate bad goods after time series forecasting.
Experimental results, validated on a dataset spanning 2022-2024 for Organic Beer-G 1 Liter, demonstrate that the proposed method outperforms traditional statistical models.
arXiv Detail & Related papers (2025-02-23T09:52:11Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion [9.35556128467037]
We present a novel distributional reinforcement learning algorithm that selects actions by randomizing risk criterion to avoid one-sided tendency on risk.
Our theoretical results support that the proposed method does not fall into biased exploration and is guaranteed to converge to an optimal return.
arXiv Detail & Related papers (2023-10-25T10:53:04Z)
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- RAP: Risk-Aware Prediction for Robust Planning [21.83865866611308]
We introduce a new prediction objective to learn a risk-biased distribution over trajectories.
This reduces the sample complexity of the risk estimation during online planning.
arXiv Detail & Related papers (2022-10-04T04:19:15Z)
- Distributionally Robust Model-Based Offline Reinforcement Learning with Near-Optimal Sample Complexity [39.886149789339335]
Offline reinforcement learning aims to learn decision making from historical data without active exploration.
Due to uncertainties and variabilities of the environment, it is critical to learn a robust policy that performs well even when the deployed environment deviates from the nominal one used to collect the history dataset.
We consider a distributionally robust formulation of offline RL, focusing on robust Markov decision processes with an uncertainty set specified by the Kullback-Leibler divergence in both finite-horizon and infinite-horizon settings.
arXiv Detail & Related papers (2022-08-11T11:55:31Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- A Regret Minimization Approach to Iterative Learning Control [61.37088759497583]
We propose a new performance metric, planning regret, which replaces the standard uncertainty assumptions with worst case regret.
We provide theoretical and empirical evidence that the proposed algorithm outperforms existing methods on several benchmarks.
arXiv Detail & Related papers (2021-02-26T13:48:49Z)
- Outside the Echo Chamber: Optimizing the Performative Risk [21.62040119228266]
We identify a natural set of properties of the loss function and model-induced distribution shift under which the performative risk is convex.
We develop algorithms that leverage our structural assumptions to optimize the performative risk with better sample efficiency than generic methods for derivative-free convex optimization.
arXiv Detail & Related papers (2021-02-17T04:36:39Z)
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
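As a generic illustration of the idea in the last entry above (deriving value-function uncertainty from a distribution over temporal-difference errors), one common ensemble-style approximation is sketched below. The tabular value representation and the use of a plain ensemble's TD-error spread are assumptions for illustration, not the paper's exact construction.

```python
import statistics

def td_errors(values, transitions, gamma=0.99):
    """One-step TD errors r + gamma * V(s') - V(s) for one value table.

    `values` maps states to value estimates; `transitions` is a list of
    (state, reward, next_state) tuples.
    """
    return [r + gamma * values[s2] - values[s1] for (s1, r, s2) in transitions]

def td_uncertainty(ensemble, transitions, gamma=0.99):
    """Spread of mean TD error across an ensemble of value tables.

    The disagreement between ensemble members serves as an (assumed)
    stand-in for uncertainty over the value function.
    """
    means = [statistics.mean(td_errors(v, transitions, gamma)) for v in ensemble]
    return statistics.pstdev(means)
```

An ensemble of identical value tables yields zero uncertainty; disagreement between members produces a positive signal that could drive exploration.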
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.