Distributionally Robust Chance Constrained Programming with Generative
Adversarial Networks (GANs)
- URL: http://arxiv.org/abs/2002.12486v1
- Date: Fri, 28 Feb 2020 00:05:22 GMT
- Title: Distributionally Robust Chance Constrained Programming with Generative
Adversarial Networks (GANs)
- Authors: Shipu Zhao, Fengqi You
- Abstract summary: A novel generative adversarial network (GAN)-based, data-driven, distributionally robust chance constrained programming framework is proposed.
A GAN is applied to fully extract distributional information from historical data in a nonparametric and unsupervised way.
The proposed framework is then applied to supply chain optimization under demand uncertainty.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel deep-learning-based, data-driven optimization
method: a generative adversarial network (GAN)-based, data-driven,
distributionally robust chance constrained programming framework.
A GAN is applied to fully extract distributional information from historical data
in a nonparametric and unsupervised way, without a priori approximations or
assumptions. Because GANs are built on deep neural networks, they can learn
complicated data distributions and modes and thus model uncertainty
efficiently and accurately. Distributionally robust chance constrained
programming accounts for ambiguity in the probability distributions of the
uncertain parameters. To tackle the resulting computational challenges, the
sample average approximation method is adopted, with the required data samples
generated by the GAN end-to-end through its differentiable networks. The proposed
framework is then applied to supply chain optimization under demand
uncertainty, and its applicability is illustrated through a county-level case
study of a spatially explicit biofuel supply chain in Illinois.
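To make the mechanics concrete, here is a minimal sketch of the central computation: a sample average approximation (SAA) of a chance constraint evaluated on scenarios that, in the paper's framework, would come from a trained GAN generator. The toy capacity model, the lognormal stand-in for the generator, and all numbers below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def chance_constraint_saa(x, demand_samples, epsilon=0.05):
    """SAA check of the chance constraint P[demand <= capacity(x)] >= 1 - epsilon.

    In a GAN-based pipeline, demand_samples would be drawn from the trained
    generator, e.g. (hypothetically) G(torch.randn(n, latent_dim)), instead
    of from an assumed parametric distribution.
    """
    capacity = float(np.sum(x))                  # toy capacity model (assumption)
    violation_rate = np.mean(demand_samples > capacity)
    return violation_rate <= epsilon

# Stand-in for GAN output: any array of sampled demand scenarios works here.
rng = np.random.default_rng(0)
demand_samples = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)

x = np.array([10.0, 12.0, 8.0])                  # candidate supply chain decision
print(chance_constraint_saa(x, demand_samples))  # feasible under SAA or not
```

Because the generator is differentiable, the same sampling path can be embedded end-to-end in an optimization loop, which is what makes the GAN-based SAA described in the abstract tractable.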
Related papers
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
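As a toy illustration of the local-linearization rule in the entry above, the sketch below propagates a univariate Gaussian through a ReLU by linearizing at the input mean; the univariate setting and the plain first-order rule are simplifying assumptions, not the paper's full method.

```python
import numpy as np

def relu_propagate_gaussian(mu, sigma):
    """Propagate N(mu, sigma^2) through ReLU via local linearization:
    relu(x) ~= relu(mu) + relu'(mu) * (x - mu), so the output is treated
    as Gaussian with mean relu(mu) and std |relu'(mu)| * sigma."""
    slope = 1.0 if mu > 0 else 0.0   # ReLU derivative at the mean
    return max(mu, 0.0), slope * sigma

# Monte Carlo sanity check of the linearized moments (illustrative):
mu, sigma = 0.8, 0.3
m_lin, s_lin = relu_propagate_gaussian(mu, sigma)
samples = np.maximum(np.random.default_rng(1).normal(mu, sigma, 100_000), 0.0)
print((m_lin, s_lin), (samples.mean(), samples.std()))
```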
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Distributionally Robust Skeleton Learning of Discrete Bayesian Networks [9.46389554092506]
We consider the problem of learning the exact skeleton of general discrete Bayesian networks from potentially corrupted data.
We propose to optimize the most adverse risk over a family of distributions within bounded Wasserstein distance or KL divergence to the empirical distribution.
We present efficient algorithms and show the proposed methods are closely related to the standard regularized regression approach.
arXiv Detail & Related papers (2023-11-10T15:33:19Z)
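The adversarial-risk objective in the entry above can be made concrete for the KL case: by a standard duality, the worst-case expected loss over all distributions within KL divergence rho of the empirical one reduces to a one-dimensional minimization. This generic sketch is not the paper's algorithm; the loss values and rho are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_mean_kl(losses, rho):
    """Worst-case expected loss over all Q with KL(Q || empirical) <= rho,
    via the dual  inf_{lam > 0} lam * (log E[exp(loss / lam)] + rho)."""
    def dual(lam):
        m = losses.max()  # log-sum-exp shift for numerical stability
        return lam * (m / lam + np.log(np.mean(np.exp((losses - m) / lam))) + rho)
    res = minimize_scalar(dual, bounds=(1e-3, 1e3), method="bounded")
    return res.fun

losses = np.random.default_rng(3).normal(1.0, 0.5, size=2_000)
print(worst_case_mean_kl(losses, rho=0.1), losses.mean())  # robust value >= mean
```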
- Estimating Regression Predictive Distributions with Sample Networks [17.935136717050543]
A common approach to model uncertainty is to choose a parametric distribution and fit the data to it using maximum likelihood estimation.
The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates.
We propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution.
arXiv Detail & Related papers (2022-11-24T17:23:29Z)
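A rough sketch of the sample-based idea in the SampleNet entry above: a head that maps an input plus random noise draws to output samples, so predictive statistics are computed empirically rather than read off a fixed parametric form. The architecture, sizes, and names are illustrative assumptions, not the actual SampleNet design or training objective.

```python
import torch
import torch.nn as nn

class SampleHead(nn.Module):
    """Maps an input x plus a noise draw z to one sample of the predictive
    distribution; many draws give an empirical distribution with no
    parametric form imposed on the output."""
    def __init__(self, in_dim, noise_dim=8, hidden=64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, n_samples=100):
        # One noise vector per requested sample, decoded alongside the input.
        z = torch.randn(x.shape[0], n_samples, self.noise_dim)
        xz = torch.cat([x.unsqueeze(1).expand(-1, n_samples, -1), z], dim=-1)
        return self.net(xz).squeeze(-1)  # (batch, n_samples)

head = SampleHead(in_dim=4)
samples = head(torch.randn(3, 4))
mean, std = samples.mean(dim=1), samples.std(dim=1)  # empirical predictive stats
```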
- Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set using only a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z)
- Data-Driven Approximations of Chance Constrained Programs in Nonstationary Environments [3.126118485851773]
We study sample average approximations (SAA) of chance constrained programs.
We consider a nonstationary variant of this problem, where the random samples are assumed to be independently drawn in a sequential fashion.
We propose a novel robust SAA method exploiting information about the Wasserstein distance between the sequence of data-generating distributions and the actual chance constraint distribution.
arXiv Detail & Related papers (2022-05-08T01:01:57Z)
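To connect the entry above to code, here is one standard conservative construction (not necessarily the paper's exact rule): if the constraint g(x, xi) is L-Lipschitz in xi and the sampling distribution lies within infinity-Wasserstein distance rho of the true one, inflating every sampled constraint value by L * rho before counting violations certifies the true chance constraint. The function g, the constants L and rho, and the numbers are illustrative assumptions.

```python
import numpy as np

def robust_saa_feasible(g_values, lipschitz_L, rho, epsilon=0.05):
    """Conservative SAA check of P[g(x, xi) <= 0] >= 1 - epsilon under
    sampling-distribution mismatch: each sampled constraint value is
    inflated by the worst-case shift L * rho before counting violations."""
    buffered = g_values + lipschitz_L * rho
    return np.mean(buffered > 0.0) <= epsilon

# g_values stands in for g(x, xi_i) evaluated at sampled scenarios xi_i.
rng = np.random.default_rng(2)
g_values = rng.normal(-1.0, 0.5, size=5_000)
print(robust_saa_feasible(g_values, lipschitz_L=1.0, rho=0.1))
```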
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
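As a minimal sketch of what sampling-free inference with multiplicative activation noise can look like (an illustrative variant, not the paper's exact parameterization): propagate means and variances in closed form instead of drawing noise, using E[a*eps] = m and Var[a*eps] = v*(1 + alpha) + alpha*m^2 for eps ~ N(1, alpha).

```python
import numpy as np

def linear_layer_moments(mean_in, var_in, W, alpha=0.1):
    """Closed-form moment propagation through a linear layer whose inputs
    carry multiplicative Gaussian noise eps ~ N(1, alpha): no sampling.
    Means propagate through W; variances through W**2 (independence)."""
    var_noisy = var_in * (1.0 + alpha) + alpha * mean_in**2
    return W @ mean_in, (W**2) @ var_noisy

rng = np.random.default_rng(4)
W = rng.normal(size=(5, 3))
m_out, v_out = linear_layer_moments(np.ones(3), 0.01 * np.ones(3), W)
print(m_out, v_out)
```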
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty-Aware Deep Classifiers using Generative Models [7.486679152591502]
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions.
Some recent approaches quantify uncertainty directly by training the model to output high uncertainty for data samples close to class boundaries or from outside the training distribution.
We develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision boundary and out-of-distribution regions.
arXiv Detail & Related papers (2020-06-07T15:38:35Z)
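One common way to realize the "know what you don't know" behaviour described above is an evidential-style head (a generic construction, not necessarily this paper's model): treat non-negative network outputs as Dirichlet evidence, so uncertainty is high exactly when little evidence supports any class.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """Evidential-style uncertainty: softplus of the logits serves as
    Dirichlet evidence; total uncertainty u = K / sum(alpha) is near 1
    when no class has support and falls toward 0 as evidence accumulates."""
    evidence = F.softplus(logits)                # non-negative evidence per class
    alpha = evidence + 1.0                       # Dirichlet parameters
    k = logits.shape[-1]
    probs = alpha / alpha.sum(-1, keepdim=True)  # expected class probabilities
    u = k / alpha.sum(-1)                        # epistemic uncertainty in (0, 1]
    return probs, u

logits = torch.tensor([[4.0, 0.1, 0.2], [0.1, 0.1, 0.1]])
probs, u = dirichlet_uncertainty(logits)
print(u)  # lower for the confident first input, higher for the uninformed one
```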
- Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.