Managing Geological Uncertainty in Critical Mineral Supply Chains: A POMDP Approach with Application to U.S. Lithium Resources
- URL: http://arxiv.org/abs/2502.05690v1
- Date: Sat, 08 Feb 2025 20:44:44 GMT
- Title: Managing Geological Uncertainty in Critical Mineral Supply Chains: A POMDP Approach with Application to U.S. Lithium Resources
- Authors: Mansur Arief, Yasmine Alonso, CJ Oshiro, William Xu, Anthony Corso, David Zhen Yin, Jef K. Caers, Mykel J. Kochenderfer
- Abstract summary: The world is entering an unprecedented period of critical mineral demand, driven by renewable energy technologies and electric vehicles. This transition presents unique challenges in mineral resource development, particularly due to geological uncertainty. We propose a novel application of Partially Observable Markov Decision Processes (POMDPs) that optimizes critical mineral sourcing decisions.
- Score: 30.240279885272827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The world is entering an unprecedented period of critical mineral demand, driven by the global transition to renewable energy technologies and electric vehicles. This transition presents unique challenges in mineral resource development, particularly due to geological uncertainty, a key characteristic that traditional supply chain optimization approaches do not adequately address. To tackle this challenge, we propose a novel application of Partially Observable Markov Decision Processes (POMDPs) that optimizes critical mineral sourcing decisions while explicitly accounting for the dynamic nature of geological uncertainty. Through a case study of the U.S. lithium supply chain, we demonstrate that POMDP-based policies achieve superior outcomes compared to traditional approaches, especially when initial reserve estimates are imperfect. Our framework provides quantitative insights for balancing domestic resource development with international supply diversification, offering policymakers a systematic approach to strategic decision-making in critical mineral supply chains.
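The core idea in the abstract, maintaining a belief over hidden reserve quality, updating it from noisy observations, and acting on the belief rather than a point estimate, can be illustrated with a minimal two-state sketch. All numbers, state names, and thresholds below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical two-state POMDP sketch: the true quality of a domestic lithium
# reserve is hidden ("high" or "low"); geological surveys yield noisy
# positive/negative signals. The probabilities and thresholds are assumed
# for illustration only.

P_POSITIVE = {"high": 0.8, "low": 0.3}  # P(positive survey | true state)

def update_belief(belief_high, obs_positive):
    """Bayes update of P(state = high) after one survey observation."""
    if obs_positive:
        num = P_POSITIVE["high"] * belief_high
        den = num + P_POSITIVE["low"] * (1.0 - belief_high)
    else:
        num = (1.0 - P_POSITIVE["high"]) * belief_high
        den = num + (1.0 - P_POSITIVE["low"]) * (1.0 - belief_high)
    return num / den

def policy(belief_high, develop_threshold=0.7, import_threshold=0.3):
    """Threshold policy on the belief state: develop domestically when
    confident the reserve is high, import when confident it is low,
    and keep surveying (gathering information) in between."""
    if belief_high >= develop_threshold:
        return "develop_domestic"
    if belief_high <= import_threshold:
        return "import"
    return "survey"

belief = 0.5  # imperfect initial reserve estimate
for obs in [True, True, False, True]:  # a sequence of survey outcomes
    belief = update_belief(belief, obs)
print(round(belief, 3), policy(belief))  # → 0.844 develop_domestic
```

Note that the policy conditions on the full belief, so an imperfect initial estimate is corrected as surveys accumulate; this is the mechanism the abstract credits for outperforming approaches that commit to a fixed reserve estimate up front.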
Related papers
- Multi-Agent Reinforcement Learning Simulation for Environmental Policy Synthesis [5.738989367102034]
Climate policy development faces significant challenges due to deep uncertainty, complex system dynamics, and competing stakeholder interests.
We propose a framework for augmenting climate simulations with Multi-Agent Reinforcement Learning (MARL) to address these limitations.
arXiv Detail & Related papers (2025-04-17T09:18:04Z) - Identifying Trustworthiness Challenges in Deep Learning Models for Continental-Scale Water Quality Prediction [64.4881275941927]
We present the first comprehensive evaluation of trustworthiness in a continental-scale multi-task LSTM model.
Our investigation uncovers systematic patterns of model performance disparities linked to basin characteristics.
This work serves as a timely call to action for advancing trustworthy data-driven methods for water resources management.
arXiv Detail & Related papers (2025-03-13T01:50:50Z) - Control Policy Correction Framework for Reinforcement Learning-based Energy Arbitrage Strategies [4.950434218152639]
We propose a new RL-based control framework for batteries to obtain a safe energy arbitrage strategy in the imbalance settlement mechanism.
We use the Belgian imbalance price of 2023 to evaluate the performance of our proposed framework.
arXiv Detail & Related papers (2024-04-29T16:03:21Z) - End-to-End Mineral Exploration with Artificial Intelligence and Ambient Noise Tomography [0.0]
We focus on copper as a critical element, required in significant quantities for renewable energy solutions.
We show the benefits of utilising ANT, characterised by its speed, scalability, depth penetration, resolution, and low environmental impact.
We show how AI can augment geophysical data interpretation, providing a novel approach to mineral exploration.
arXiv Detail & Related papers (2024-03-22T10:23:48Z) - Learning non-Markovian Decision-Making from State-only Sequences [57.20193609153983]
We develop a model-based imitation of state-only sequences with a non-Markov Decision Process (nMDP).
We demonstrate the efficacy of the proposed method in a path planning task with non-Markovian constraints.
arXiv Detail & Related papers (2023-06-27T02:26:01Z) - Learning on Graphs for Mineral Asset Valuation Under Supply and Demand Uncertainty [2.1485350418225244]
This work jointly addresses mineral asset valuation and mine plan scheduling and optimization under supply and demand uncertainty.
Three graph-based solutions are proposed: (i) a neural branching policy that learns a block-sampling ore body representation, (ii) a guiding policy that learns to explore a selection tree.
Results on two large-scale industrial mining complexes show a reduction of up to three orders of magnitude in primal suboptimality, execution time, and number of iterations, and an increase of up to 40% in the mineral asset value.
arXiv Detail & Related papers (2022-12-07T00:30:18Z) - Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes [93.61202366677526]
We study the offline reinforcement learning (RL) in the face of unmeasured confounders.
We propose various policy learning methods with the finite-sample suboptimality guarantee of finding the optimal in-class policy.
arXiv Detail & Related papers (2022-09-18T22:03:55Z) - Beyond modeling: NLP Pipeline for efficient environmental policy analysis [0.6597195879147557]
Policy analysis is necessary for policymakers to understand the actors and rules involved in forest restoration.
We propose a Knowledge Management Framework based on Natural Language Processing (NLP) techniques.
We describe the design of the NLP pipeline, review the state-of-the-art methods for each of its components, and discuss the challenges that arise when building a framework oriented towards policy analysis.
arXiv Detail & Related papers (2022-01-08T05:33:04Z) - Coordinated Online Learning for Multi-Agent Systems with Coupled Constraints and Perturbed Utility Observations [91.02019381927236]
We introduce a novel method to steer the agents toward a stable population state, fulfilling the given resource constraints.
The proposed method is a decentralized resource pricing method based on the resource loads resulting from the augmentation of the game's Lagrangian.
arXiv Detail & Related papers (2020-10-21T10:11:17Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.