Contextual Scenario Generation for Two-Stage Stochastic Programming
- URL: http://arxiv.org/abs/2502.05349v1
- Date: Fri, 07 Feb 2025 21:42:50 GMT
- Title: Contextual Scenario Generation for Two-Stage Stochastic Programming
- Authors: David Islip, Roy H. Kwon, Sanghyeon Bae, Woo Chang Kim
- Abstract summary: Two-stage programs (2SPs) are important tools for making decisions under uncertainty.
Current scenario generation approaches either do not leverage contextual information or do not address computational concerns.
In response, we propose contextual scenario generation (CSG) to learn a mapping between the context and a set of surrogate scenarios of user-specified size.
First, we propose a distributional approach that learns the mapping by minimizing a distributional distance between the predicted surrogate scenarios and the true contextual distribution.
Second, we propose a task-based approach that aims to produce surrogate scenarios that yield high-quality decisions.
- Score: 0.2812395851874055
- Abstract: Two-stage stochastic programs (2SPs) are important tools for making decisions under uncertainty. Decision-makers use contextual information to generate a set of scenarios to represent the true conditional distribution. However, the number of scenarios required is a barrier to implementing 2SPs, motivating the problem of generating a small set of surrogate scenarios that yield high-quality decisions when they represent uncertainty. Current scenario generation approaches do not leverage contextual information or do not address computational concerns. In response, we propose contextual scenario generation (CSG) to learn a mapping between the context and a set of surrogate scenarios of user-specified size. First, we propose a distributional approach that learns the mapping by minimizing a distributional distance between the predicted surrogate scenarios and the true contextual distribution. Second, we propose a task-based approach that aims to produce surrogate scenarios that yield high-quality decisions. The task-based approach uses neural architectures to approximate the downstream objective and leverages the approximation to search for the mapping. The proposed approaches apply to various problem structures and, loosely speaking, only require efficient solving of the associated subproblems and of the 2SPs defined on the reduced scenario sets. Numerical experiments demonstrating the effectiveness of the proposed methods are presented.
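To make the distributional approach concrete, below is a minimal sketch rather than the authors' implementation: a small network maps a context vector to K surrogate scenarios and is trained to minimize the energy distance against samples from an assumed linear-Gaussian conditional distribution. The choice of energy distance, the MLP architecture, the dimensions, and the synthetic data are all illustrative assumptions not taken from the paper.

```python
# Toy sketch of the distributional CSG idea (assumptions: energy distance as the
# distributional distance, an MLP mapping, and a synthetic linear-Gaussian
# conditional distribution; none of these specifics come from the paper).
import torch
import torch.nn as nn

torch.manual_seed(0)
D_CTX, D_SCEN, K, N = 4, 2, 8, 64        # context dim, scenario dim, surrogate set size, samples

A = torch.randn(D_SCEN, D_CTX)           # hypothetical "true" model: xi | c ~ N(A c, sigma^2 I)
SIGMA = 0.5

def sample_true(context, n):
    """Draw n samples from the assumed conditional distribution given one context."""
    return context @ A.T + SIGMA * torch.randn(n, D_SCEN)

# Mapping from context to K surrogate scenarios (output reshaped to a (K, D_SCEN) set).
net = nn.Sequential(nn.Linear(D_CTX, 64), nn.ReLU(), nn.Linear(64, K * D_SCEN))

def pairwise_dist(a, b, eps=1e-8):
    """Euclidean distances between all rows of a and b (eps keeps sqrt differentiable at 0)."""
    return torch.sqrt(((a.unsqueeze(1) - b.unsqueeze(0)) ** 2).sum(-1) + eps)

def energy_distance(pred, target):
    """Empirical energy distance between two scenario sets (rows are scenarios)."""
    return (2 * pairwise_dist(pred, target).mean()
            - pairwise_dist(pred, pred).mean()
            - pairwise_dist(target, target).mean())

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    context = torch.randn(D_CTX)
    scenarios = net(context).reshape(K, D_SCEN)   # predicted surrogate scenario set
    samples = sample_true(context, N)             # samples from the "true" conditional law
    loss = energy_distance(scenarios, samples)
    opt.zero_grad(); loss.backward(); opt.step()

# At inference, net(context) yields a small scenario set intended to stand in for the
# conditional distribution inside the downstream two-stage stochastic program.
```

A task-based variant, per the abstract, would replace the distributional loss with a learned approximation of the downstream 2SP objective evaluated on the predicted scenario set; the details of that approximation are not specified here.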
Related papers
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Predictive Inference in Multi-environment Scenarios [18.324321417099394]
We address the challenge of constructing valid confidence intervals and sets in problems of prediction across multiple environments.
We extend the jackknife and split-conformal methods to show how to obtain distribution-free coverage in non-traditional, potentially hierarchical data-generating scenarios.
Our contributions also include extensions for settings with non-real-valued responses, a theory of consistency for predictive inference in these general problems, and insights on the limits of conditional coverage.
arXiv Detail & Related papers (2024-03-25T00:21:34Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - Fast Empirical Scenarios [0.0]
We seek to extract representative scenarios from large panel data consistent with sample moments.
We propose two novel algorithms: the first identifies scenarios that have not been observed before, while the second selects important data points from states of the world that have already been realized.
arXiv Detail & Related papers (2023-07-08T07:58:53Z) - Data-driven Prediction of Relevant Scenarios for Robust Optimization [0.0]
We study robust one- and two-stage problems with discrete uncertainty sets.
We propose a data-driven computation to seed the iterative solution method with a set of starting scenarios.
Our experiments show that predicting even a small number of good start scenarios by our method can considerably reduce the time of the iterative methods.
arXiv Detail & Related papers (2022-03-30T19:52:29Z) - Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Application-Driven Learning: A Closed-Loop Prediction and Optimization Approach Applied to Dynamic Reserves and Demand Forecasting [41.94295877935867]
We present application-driven learning, a new closed-loop framework in which the processes of forecasting and decision-making are merged and co-optimized.
We show that the proposed methodology is scalable and yields consistently better performance than the standard open-loop approach.
arXiv Detail & Related papers (2021-02-26T02:43:28Z) - A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z) - Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method [20.280573307366627]
We propose a generative framework to create safety-critical scenarios for evaluating task algorithms.
We demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods.
arXiv Detail & Related papers (2020-03-02T21:26:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.