Data-Driven Approximations of Chance Constrained Programs in
Nonstationary Environments
- URL: http://arxiv.org/abs/2205.03748v1
- Date: Sun, 8 May 2022 01:01:57 GMT
- Title: Data-Driven Approximations of Chance Constrained Programs in
Nonstationary Environments
- Authors: Shuhao Yan, Francesca Parise, Eilyan Bitar
- Abstract summary: We study sample average approximations (SAA) of chance constrained programs.
We consider a nonstationary variant of this problem, where the random samples are assumed to be independently drawn in a sequential fashion.
We propose a novel robust SAA method exploiting information about the Wasserstein distance between the sequence of data-generating distributions and the actual chance constraint distribution.
- Score: 3.126118485851773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study sample average approximations (SAA) of chance constrained programs.
SAA methods typically approximate the actual distribution in the chance
constraint using an empirical distribution constructed from random samples
assumed to be independent and identically distributed according to the actual
distribution. In this paper, we consider a nonstationary variant of this
problem, where the random samples are assumed to be independently drawn in a
sequential fashion from an unknown and possibly time-varying distribution. This
nonstationarity may be driven by changing environmental conditions present in
many real-world applications. To account for the potential nonstationarity in
the data generation process, we propose a novel robust SAA method exploiting
information about the Wasserstein distance between the sequence of
data-generating distributions and the actual chance constraint distribution. As
a key result, we obtain distribution-free estimates of the sample size required
to ensure that the robust SAA method will yield solutions that are feasible for
the chance constraint under the actual distribution with high confidence.
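To make the idea concrete, here is a minimal, hypothetical sketch of a robust SAA step on a toy one-dimensional chance constraint P(ξ ≤ x) ≥ 1 − α. The plain SAA solution is the empirical (1 − α)-quantile of the samples; the "robust" version inflates it by a margin `rho` that stands in for the Wasserstein-distance information the paper exploits. The function name, the toy constraint, and the scalar margin are all illustrative simplifications, not the paper's actual algorithm.

```python
import numpy as np

def robust_saa_quantile(samples, alpha, rho):
    """Toy robust SAA for the chance constraint P(xi <= x) >= 1 - alpha.

    The SAA solution is the empirical (1 - alpha)-quantile of the samples;
    the margin `rho` (a stand-in for the Wasserstein-distance bound between
    the data-generating distributions and the true distribution) inflates
    the solution to hedge against nonstationarity in the samples.
    """
    q = float(np.quantile(samples, 1.0 - alpha))
    return q + rho

# Stand-in data: i.i.d. samples play the role of the sequentially drawn,
# possibly nonstationary samples considered in the paper.
rng = np.random.default_rng(0)
xi = rng.normal(0.0, 1.0, size=500)
x_hat = robust_saa_quantile(xi, alpha=0.05, rho=0.1)
```

The inflated solution `x_hat` satisfies the empirical chance constraint with room to spare; the paper's contribution is, roughly, quantifying how large the sample size (and margin) must be so that the solution remains feasible under the *actual* distribution with high confidence.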
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z) - Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z) - User-defined Event Sampling and Uncertainty Quantification in Diffusion
Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and the method matches data statistics even when sampling from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z) - Conformal Inference for Invariant Risk Minimization [12.049545417799125]
The application of machine learning models can be significantly impeded by the occurrence of distributional shifts.
One way to tackle this problem is to use invariant learning, such as invariant risk minimization (IRM), to acquire an invariant representation.
This paper develops methods for obtaining distribution-free prediction regions to describe uncertainty estimates for invariant representations.
arXiv Detail & Related papers (2023-05-22T03:48:38Z) - Time-uniform confidence bands for the CDF under nonstationarity [9.289846887298854]
Estimation of the complete distribution of a random variable is a useful primitive for both manual and automated decision making.
We present time-uniform and value-uniform bounds on the CDF of the running averaged conditional distribution of a real-valued random variable.
The importance-weighted extension is appropriate for estimating complete counterfactual distributions of rewards given controlled experimentation data exhaust.
arXiv Detail & Related papers (2023-02-28T02:14:54Z) - Wasserstein Distributionally Robust Optimization via Wasserstein
Barycenters [10.103413548140848]
We seek data-driven decisions that perform well under the most adverse distribution within a certain distance of a nominal distribution constructed from data samples.
We propose constructing the nominal distribution in Wasserstein distributionally robust optimization problems through the notion of Wasserstein barycenter as an aggregation of data samples from multiple sources.
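In one dimension the W2 barycenter of equally weighted empirical distributions has a simple closed form: average their quantile functions. The sketch below illustrates that special case only; it is an assumed simplification of the barycenter construction described in the entry, not the paper's general method.

```python
import numpy as np

def w2_barycenter_1d(sample_sets, grid_size=100):
    """Quantiles of the W2 barycenter of equally weighted 1-D empirical
    distributions, obtained by averaging their quantile functions on a
    common grid of levels (valid in 1-D only)."""
    levels = (np.arange(grid_size) + 0.5) / grid_size
    quantiles = np.stack([np.quantile(s, levels) for s in sample_sets])
    return quantiles.mean(axis=0)

# Aggregating samples from two hypothetical data sources.
rng = np.random.default_rng(1)
bary = w2_barycenter_1d([rng.normal(-1, 1, 2000), rng.normal(1, 1, 2000)])
```

The returned array is the (discretized) quantile function of the nominal distribution; the barycenter of the two shifted Gaussians above is centered near zero, blending the sources rather than mixing them.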
arXiv Detail & Related papers (2022-03-23T02:03:47Z) - Sampling-Based Robust Control of Autonomous Systems with Non-Gaussian
Noise [59.47042225257565]
We present a novel planning method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous system into a discrete-state model that captures noise by probabilistic transitions between states.
We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP).
arXiv Detail & Related papers (2021-10-25T06:18:55Z) - Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
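The "implicitly matching all orders of moments" idea can be illustrated with a maximum mean discrepancy (MMD) between two sample sets under a Gaussian kernel: driving the MMD to zero matches the distributions, and hence all moments. This is a minimal NumPy illustration of the discrepancy itself, not the paper's neural-network training procedure.

```python
import numpy as np

def gaussian_mmd2(x, y, bandwidth=1.0):
    """Squared MMD between two 1-D sample sets under a Gaussian kernel.

    A Gaussian kernel is characteristic, so MMD = 0 iff the distributions
    coincide; minimizing it implicitly matches all orders of moments
    between the two sample sets.
    """
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-d ** 2 / (2.0 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

In a distributional RL setting, `x` would be the learned return statistics and `y` samples of the Bellman target; here it simply measures how far apart two empirical distributions are.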
arXiv Detail & Related papers (2020-07-24T05:18:17Z) - Distributionally Robust Chance Constrained Programming with Generative
Adversarial Networks (GANs) [0.0]
A novel generative adversarial network (GAN) based data-driven distributionally robust chance constrained programming framework is proposed.
GAN is applied to fully extract distributional information from historical data in a nonparametric and unsupervised way.
The proposed framework is then applied to supply chain optimization under demand uncertainty.
arXiv Detail & Related papers (2020-02-28T00:05:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.