Uncertainty Modelling in Risk-averse Supply Chain Systems Using
Multi-objective Pareto Optimization
- URL: http://arxiv.org/abs/2004.13836v1
- Date: Fri, 24 Apr 2020 21:04:25 GMT
- Title: Uncertainty Modelling in Risk-averse Supply Chain Systems Using
Multi-objective Pareto Optimization
- Authors: Heerok Banerjee, V. Ganapathy and V. M. Shenbagaraman
- Abstract summary: One of the arduous tasks in supply chain modelling is to build robust models against irregular variations.
We have introduced a novel methodology, namely Pareto Optimization, to handle uncertainties and bound the entropy of such uncertainties by explicitly modelling them under a priori assumptions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the arduous tasks in supply chain modelling is to build robust models
against irregular variations. With the proliferation of time-series analysis
and machine learning models, several modifications have been proposed, such as
acceleration of the classical Levenberg-Marquardt algorithm, weight decay
and normalization, which introduced an algorithmic optimization approach to
this problem. In this paper, we introduce a novel methodology, namely
Pareto Optimization, to handle uncertainties and bound the entropy of such
uncertainties by explicitly modelling them under a priori assumptions. We
implement Pareto Optimization using a genetic approach and compare the
results with classical genetic algorithms and Mixed-Integer Linear Programming
(MILP) models. Our results yield empirical evidence suggesting that Pareto
Optimization can avoid such non-deterministic errors and is a formal approach
towards producing robust and reactive supply chain models.
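As a rough, self-contained illustration of the dominance-based genetic search described in the abstract (not the authors' implementation), the following Python sketch searches for Pareto-optimal order quantities in a hypothetical single-echelon problem with discrete demand scenarios. The cost parameters, demand values, and mutation scheme are illustrative assumptions only.

```python
# Minimal sketch of multi-objective Pareto optimization with a genetic approach
# on a toy ordering problem. All constants below are hypothetical stand-ins.
import random

random.seed(0)

DEMAND_SCENARIOS = [80, 100, 120, 150]            # assumed discrete demand outcomes
ORDER_COST, HOLD_COST, SHORTAGE_PENALTY = 2.0, 0.5, 5.0

def objectives(q):
    """Return (expected cost, expected shortage) for order quantity q."""
    cost, shortage = 0.0, 0.0
    for d in DEMAND_SCENARIOS:
        cost += ORDER_COST * q + HOLD_COST * max(q - d, 0) + SHORTAGE_PENALTY * max(d - q, 0)
        shortage += max(d - q, 0)
    n = len(DEMAND_SCENARIOS)
    return cost / n, shortage / n

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    """Keep solutions not dominated by any other solution in the population."""
    scored = [(q, objectives(q)) for q in pop]
    return [q for q, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

# Simple genetic loop: retain the non-dominated set, refill the population by mutation.
pop = [random.randint(50, 200) for _ in range(30)]
for _ in range(50):
    front = pareto_front(pop)
    children = [max(0, random.choice(front) + random.randint(-10, 10))
                for _ in range(len(pop) - len(front))]
    pop = front + children

for q in sorted(set(pareto_front(pop))):
    c, s = objectives(q)
    print(f"order quantity {q:4d}: expected cost {c:7.1f}, expected shortage {s:5.1f}")
```

The dominance test is what distinguishes this from a single weighted objective: instead of collapsing cost and shortage risk into one score, the search retains every solution that no other solution beats on both criteria, yielding a trade-off front rather than a single point estimate.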
Related papers
- Proximal Interacting Particle Langevin Algorithms [0.0]
We introduce Proximal Interacting Particle Langevin Algorithms (PIPLA) for inference and learning in latent variable models.
We propose several variants within the novel proximal IPLA family, tailored to the problem of estimating parameters in a non-differentiable statistical model.
Our theory and experiments together show that the PIPLA family can be the de facto choice for parameter estimation in non-differentiable latent variable models.
arXiv Detail & Related papers (2024-06-20T13:16:41Z)
- Multi-Response Heteroscedastic Gaussian Process Models and Their Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
arXiv Detail & Related papers (2023-08-29T15:06:47Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Combining Gaussian processes and polynomial chaos expansions for stochastic nonlinear model predictive control [0.0]
We introduce a new algorithm to explicitly consider time-invariant uncertainties in optimal control problems.
The main novelty in this paper is to use this combination in an efficient fashion to obtain mean and variance estimates of nonlinear transformations.
It is shown how to formulate both chance-constraints and a probabilistic objective for the optimal control problem.
arXiv Detail & Related papers (2021-03-09T14:25:08Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences arising from its use.