Sensitivity Analysis on Policy-Augmented Graphical Hybrid Models with Shapley Value Estimation
- URL: http://arxiv.org/abs/2411.13396v1
- Date: Wed, 20 Nov 2024 15:33:39 GMT
- Title: Sensitivity Analysis on Policy-Augmented Graphical Hybrid Models with Shapley Value Estimation
- Authors: Junkai Zhao, Wei Xie, Jun Luo
- Abstract summary: We propose a comprehensive sensitivity analysis framework for general nonlinear policy-augmented knowledge graphical (pKG) hybrid models.
Our proposed framework supports efficient interpretation and stable optimal process control in biomanufacturing.
- Score: 5.077690675309647
- Abstract: Driven by the critical challenges in biomanufacturing, including high complexity and high uncertainty, we propose a comprehensive and computationally efficient sensitivity analysis framework for general nonlinear policy-augmented knowledge graphical (pKG) hybrid models that characterize the risk- and science-based understandings of underlying stochastic decision process mechanisms. The criticality of each input (i.e., random factors, policy parameters, and model parameters) is measured by applying Shapley value (SV) sensitivity analysis to pKG (called SV-pKG), accounting for process causal interdependences. To quickly assess the SV for heavily instrumented bioprocesses, we approximate their dynamics with linear Gaussian pKG models and improve the SV estimation efficiency by utilizing the linear Gaussian properties. In addition, we propose an effective permutation sampling method with TFWW transformation and variance reduction techniques, namely the quasi-Monte Carlo and antithetic sampling methods, to further improve the sampling efficiency and estimation accuracy of SV for both general nonlinear and linear Gaussian pKG models. Our proposed framework can benefit efficient interpretation and support stable optimal process control in biomanufacturing.
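To make the core estimator concrete, below is a minimal sketch (not the authors' SV-pKG implementation) of Shapley value estimation via permutation sampling with antithetic permutations; the toy additive value function, the number of inputs, and all settings are illustrative assumptions rather than anything taken from the paper.

```python
import numpy as np

def shapley_permutation(value_fn, n_inputs, n_perms=2000, antithetic=True, seed=0):
    """Estimate Shapley values by sampling permutations of the inputs.

    value_fn(S) returns the value of coalition S (a frozenset of input indices).
    Each sampled permutation credits every input with its marginal contribution;
    antithetic sampling pairs each permutation with its reverse to reduce variance.
    """
    rng = np.random.default_rng(seed)
    sv = np.zeros(n_inputs)
    perms = []
    for _ in range(n_perms // (2 if antithetic else 1)):
        p = rng.permutation(n_inputs)
        perms.append(p)
        if antithetic:
            perms.append(p[::-1])              # antithetic (reversed) permutation
    for p in perms:
        coalition = []
        prev = value_fn(frozenset(coalition))
        for i in p:
            coalition.append(i)
            cur = value_fn(frozenset(coalition))
            sv[i] += cur - prev                # marginal contribution of input i
            prev = cur
    return sv / len(perms)

# Toy additive value function: variance explained by a coalition of independent inputs.
weights = np.array([0.5, 1.0, 2.0])
def value_fn(S):
    return float(sum(weights[list(S)] ** 2))

print(shapley_permutation(value_fn, n_inputs=3))   # -> [0.25, 1.0, 4.0]
```

For an additive value function like this one every permutation gives the same marginal contributions, so the estimate is exact; the sampling machinery only matters once the value function couples the inputs, as it does in a pKG model.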
Related papers
- Adjoint Sensitivity Analysis on Multi-Scale Bioprocess Stochastic Reaction Network [2.6130735302655554]
We introduce an adjoint sensitivity approach to expedite the learning of mechanistic model parameters.
In this paper, we consider sensitivity analysis (SA) of an enzymatic stochastic reaction network representing a multi-scale bioprocess mechanistic model.
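As a rough illustration of the adjoint idea only (a guess at the flavor, not the paper's multi-scale stochastic reaction network formulation), the sketch below differentiates a terminal loss of a single-parameter kinetic ODE using the discrete adjoint of a forward-Euler scheme and checks the result against finite differences; the model, loss, and step sizes are placeholders.

```python
import numpy as np

def simulate(theta, x0=1.0, h=0.01, steps=500):
    """Forward Euler for dx/dt = -theta * x (a stand-in first-order kinetic model)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + h * (-theta * xs[-1]))
    return np.array(xs)

def adjoint_gradient(theta, x0=1.0, h=0.01, steps=500):
    """Gradient of L = x(T)^2 with respect to theta via the discrete adjoint recursion."""
    xs = simulate(theta, x0, h, steps)
    lam = 2.0 * xs[-1]                 # dL/dx at the final time
    grad = 0.0
    for k in range(steps - 1, -1, -1):
        # forward step: x_{k+1} = x_k + h * f(x_k, theta) with f = -theta * x
        grad += lam * h * (-xs[k])     # df/dtheta contribution at step k
        lam = lam * (1.0 - h * theta)  # propagate the adjoint through dx_{k+1}/dx_k
    return grad

theta = 0.8
g_adjoint = adjoint_gradient(theta)
eps = 1e-6                             # central finite-difference check
g_fd = (simulate(theta + eps)[-1] ** 2 - simulate(theta - eps)[-1] ** 2) / (2 * eps)
print(g_adjoint, g_fd)                 # the two values should agree closely
```

The appeal of the adjoint pass is that one backward sweep yields the gradient with respect to every model parameter at roughly the cost of one extra simulation, which is what makes it attractive for learning mechanistic model parameters.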
arXiv Detail & Related papers (2024-05-07T05:06:45Z)
- Using evolutionary machine learning to characterize and optimize co-pyrolysis of biomass feedstocks and polymeric wastes [14.894507238371768]
Co-pyrolysis is a promising strategy for improving the quantity and quality parameters of the resulting liquid fuel.
Machine learning (ML) provides capabilities to cope with such issues by leveraging on existing data.
This work aims to introduce an evolutionary ML approach to quantify the (by)products of the biomass-polymer co-pyrolysis process.
arXiv Detail & Related papers (2023-05-24T19:59:21Z)
- Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks [59.142826407441106]
We study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability.
We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds.
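As a toy sketch of the training procedures the analysis covers (none of the data, architecture, or hyper-parameters below come from the paper), a one-hidden-layer ReLU network is trained on synthetic regression data with full-batch GD and with single-sample SGD, and the train/test gap, the quantity the excess risk bounds control, is printed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 200, 5, 50
w_true = rng.normal(size=d)
X, Xte = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y, yte = np.sin(X @ w_true), np.sin(Xte @ w_true)   # synthetic teacher signal

def predict(W, v, X):
    return np.maximum(X @ W, 0.0) @ v               # shallow (one hidden layer) ReLU net

def grads(W, v, Xb, yb):
    """Gradients of 0.5 * mean squared error on the batch (Xb, yb)."""
    H = np.maximum(Xb @ W, 0.0)
    r = H @ v - yb
    gv = H.T @ r / len(yb)
    gW = Xb.T @ ((r[:, None] * v[None, :]) * (H > 0)) / len(yb)
    return gW, gv

def train(batch, lr, epochs):
    W = rng.normal(size=(d, width)) / np.sqrt(d)
    v = rng.normal(size=width) / np.sqrt(width)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch):                 # batch = n gives full-batch GD
            b = idx[s:s + batch]
            gW, gv = grads(W, v, X[b], y[b])
            W, v = W - lr * gW, v - lr * gv
    return W, v

for name, batch, lr, epochs in [("GD", n, 0.05, 2000), ("SGD", 1, 0.02, 20)]:
    W, v = train(batch, lr, epochs)
    tr = np.mean((predict(W, v, X) - y) ** 2)
    te = np.mean((predict(W, v, Xte) - yte) ** 2)
    print(f"{name}: train MSE {tr:.3f}  test MSE {te:.3f}  gap {te - tr:.3f}")
```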
arXiv Detail & Related papers (2022-09-19T18:48:00Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Adaptive Latent Factor Analysis via Generalized Momentum-Incorporated Particle Swarm Optimization [6.2303427193075755]
A stochastic gradient descent (SGD) algorithm is an effective learning strategy for building a latent factor analysis (LFA) model on a high-dimensional and incomplete (HDI) matrix.
A particle swarm optimization (PSO) algorithm is commonly adopted to make an SGD-based LFA model's hyper-parameters, i.e., the learning rate and regularization coefficient, self-adaptive.
This paper incorporates more historical information into each particle's evolutionary process to avoid premature convergence.
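A toy sketch of the idea, not the paper's exact generalized-momentum update: a small particle swarm with an extra momentum term on the velocity searches over (learning rate, regularization coefficient) for an SGD-trained latent factor model on a synthetic incomplete matrix; every name and setting here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic high-dimensional and incomplete (HDI) matrix: ~10% of entries observed.
n_users, n_items, rank = 50, 40, 3
P, Q = rng.normal(size=(n_users, rank)), rng.normal(size=(n_items, rank))
mask = rng.random((n_users, n_items)) < 0.1
obs = [(u, i, P[u] @ Q[i] + 0.1 * rng.normal()) for u, i in zip(*np.nonzero(mask))]

def lfa_rmse(eta, lam, epochs=20):
    """Fitness: RMSE of an SGD-trained latent factor model with learning rate eta
    and regularization coefficient lam (training RMSE is used only for brevity)."""
    U = 0.1 * rng.normal(size=(n_users, rank))
    V = 0.1 * rng.normal(size=(n_items, rank))
    for _ in range(epochs):
        for u, i, r in obs:
            e = U[u] @ V[i] - r
            U[u], V[i] = U[u] - eta * (e * V[i] + lam * U[u]), V[i] - eta * (e * U[u] + lam * V[i])
    return np.sqrt(np.mean([(U[u] @ V[i] - r) ** 2 for u, i, r in obs]))

# Particle swarm over (eta, lam); the beta * prev_vel term injects extra
# historical (momentum) information into each particle's velocity update.
n_particles, iters, w, c1, c2, beta = 8, 10, 0.7, 1.5, 1.5, 0.3
pos = rng.uniform([0.001, 0.0], [0.05, 0.1], size=(n_particles, 2))
vel, prev_vel = np.zeros_like(pos), np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([lfa_rmse(*p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    new_vel = (w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
               + beta * prev_vel)
    prev_vel, vel = vel, new_vel
    pos = np.clip(pos + vel, [1e-4, 0.0], [0.1, 0.2])
    f = np.array([lfa_rmse(*p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
print("best (eta, lambda):", gbest, " best RMSE:", pbest_f.min())
```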
arXiv Detail & Related papers (2022-08-04T03:15:07Z)
- Dynamic Bayesian Network Auxiliary ABC-SMC for Hybrid Model Bayesian Inference to Accelerate Biomanufacturing Process Mechanism Learning and Robust Control [2.727760379582405]
We present a knowledge graph hybrid model characterizing complex causal interdependencies of underlying bioprocessing mechanisms.
It can faithfully capture the important properties, including nonlinear reactions, partially observed state, and nonstationary dynamics.
We derive a posterior distribution that quantifies model uncertainty, which can facilitate mechanism learning and support robust process control.
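As a hedged illustration of the inference step alone (the paper's dynamic-Bayesian-network-assisted ABC-SMC scheme is not reproduced here), the sketch below uses plain rejection ABC to recover the growth-rate parameter of a toy stochastic bioprocess simulator; the model, prior, and acceptance rule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, steps=30, x0=1.0):
    """Toy stochastic bioprocess: noisy logistic-like growth with rate theta."""
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        x[t] = x[t - 1] + theta * x[t - 1] * (1 - x[t - 1] / 10.0) + 0.05 * rng.normal()
    return x

theta_true = 0.3
observed = simulate(theta_true)

# Rejection ABC: sample theta from the prior, simulate, and keep the draws whose
# trajectories are closest to the observed one (the closest 1% here).
thetas = rng.uniform(0.0, 1.0, size=5000)
dists = np.array([np.linalg.norm(simulate(t) - observed) for t in thetas])
keep = thetas[dists <= np.quantile(dists, 0.01)]
print(f"approximate posterior: mean {keep.mean():.3f}, std {keep.std():.3f}  (true 0.3)")
```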
arXiv Detail & Related papers (2022-05-05T02:54:21Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
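For context, the sketch below computes a standard discrete-time LQR gain by iterating the Riccati equation, the kind of controller one would apply to dynamics linearized from a learned GP model; it is not the paper's probabilistic robust synthesis, and the system matrices are illustrative.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative linearization (e.g., mean Jacobians of a learned GP dynamics model).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[0.1]]))
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))   # inside the unit circle
```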
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Yield Optimization using Hybrid Gaussian Process Regression and a Genetic Multi-Objective Approach [0.0]
We propose a hybrid approach combining the reliability and accuracy of a Monte Carlo analysis with the efficiency of a surrogate model based on Gaussian Process Regression.
We present two optimization approaches: an adaptive Newton-MC to reduce the impact of uncertainty, and a genetic multi-objective approach to optimize performance and robustness at the same time.
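A minimal sketch of the surrogate-plus-Monte-Carlo idea, assuming a scikit-learn GP regressor, a toy performance function, and a made-up specification threshold (none of which come from the paper): the GP is fit on a small design of experiments, and yield is then estimated by Monte Carlo on the cheap surrogate instead of the expensive simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

def performance(x):
    """Stand-in for an expensive simulation of a performance measure."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1) Fit a GP surrogate on a small design of experiments.
X_train = rng.uniform(-1, 1, size=(30, 2))
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, performance(X_train))

# 2) Monte Carlo yield estimate: fraction of manufacturing variations around the
#    nominal design whose (surrogate-predicted) performance meets the specification.
nominal, sigma, spec = np.array([0.3, 0.2]), 0.15, 0.8
samples = nominal + sigma * rng.normal(size=(20000, 2))
yield_estimate = np.mean(gp.predict(samples) <= spec)
print(f"estimated yield ~ {yield_estimate:.3f}")
```

In the paper's hybrid approach the surrogate is combined with Monte Carlo re-evaluation to keep the reliability of a full MC analysis; only the purely surrogate-based estimate is sketched here.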
arXiv Detail & Related papers (2020-10-08T14:44:37Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
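The simplest possible illustration of that idea (with a random rather than learned reparameterization, and a synthetic quadratic objective, both assumptions for the example): the large decision vector is re-expressed through a low-dimensional basis and the optimization is carried out in that smaller space.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# A "large" optimization problem: 200 decision variables, strongly convex quadratic.
n, k = 200, 5
A = rng.normal(size=(n, n))
A = A @ A.T / n + np.eye(n)
b = rng.normal(size=n)
f = lambda x: 0.5 * x @ A @ x - b @ x

# Low-dimensional surrogate: optimize over x = W z with a k-dimensional basis W.
W, _ = np.linalg.qr(rng.normal(size=(n, k)))
res = minimize(lambda z: f(W @ z), np.zeros(k))

x_star = np.linalg.solve(A, b)          # exact optimum, for reference
print("surrogate objective:", f(W @ res.x))
print("full-dimensional optimum:", f(x_star))
```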
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of stochasticity in its success is still unclear.
We show that multiplicative noise, as it commonly arises from minibatch variance, leads to heavy-tailed behaviour in the model parameters.
A detailed analysis describes the influence of key factors, including step size, batch size, and the data, and state-of-the-art neural network models exhibit similar behaviour.
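A small simulation of the mechanism, assuming a scalar linear stochastic recurrence as a stand-in for step-size-scaled updates with minibatch noise (the constants are arbitrary): with the chosen multiplicative noise the stationary distribution is heavy-tailed, which the excess kurtosis and the 5-sigma tail probability make visible.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear stochastic recurrence x_{k+1} = a_k * x_k + b_k with a random multiplicative
# factor a_k; such recursions are known to have heavy-tailed stationary distributions.
n_chains, burn_in = 100_000, 2_000
x = np.zeros(n_chains)
for _ in range(burn_in):
    a = 0.9 + 0.3 * rng.normal(size=n_chains)   # multiplicative noise
    b = 0.1 * rng.normal(size=n_chains)         # additive noise
    x = a * x + b

excess_kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2 - 3
print("excess kurtosis:", excess_kurtosis)                   # far above the Gaussian value of 0
print("P(|x| > 5 std):", np.mean(np.abs(x) > 5 * x.std()))   # a Gaussian would give ~6e-7
```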
arXiv Detail & Related papers (2020-06-11T09:58:01Z)