Sensitivity Analysis of High-Dimensional Models with Correlated Inputs
- URL: http://arxiv.org/abs/2306.00555v1
- Date: Wed, 31 May 2023 14:48:54 GMT
- Title: Sensitivity Analysis of High-Dimensional Models with Correlated Inputs
- Authors: Juraj Kardos, Wouter Edeling, Diana Suleimenova, Derek Groen, Olaf
Schenk
- Abstract summary: The sensitivity of correlated parameters can not only differ in magnitude, but even the sign of the derivative-based index can be inverted.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sensitivity analysis is an important tool used in many domains of
computational science to either gain insight into the mathematical model and
interaction of its parameters or study the uncertainty propagation through the
input-output interactions. In many applications, the inputs are stochastically
dependent, which violates one of the essential assumptions in the
state-of-the-art sensitivity analysis methods. Consequently, the results
obtained ignoring the correlations provide values which do not reflect the true
contributions of the input parameters. This study proposes an approach to
address the parameter correlations using a polynomial chaos expansion method
and Rosenblatt and Cholesky transformations to reflect the parameter
dependencies. Treatment of the correlated variables is discussed in context of
variance and derivative-based sensitivity analysis. We demonstrate that the
sensitivity of the correlated parameters can not only differ in magnitude, but
even the sign of the derivative-based index can be inverted, thus significantly
altering the model behavior compared to the prediction of the analysis
disregarding the correlations. Numerous experiments are conducted using
workflow automation tools within the VECMA toolkit.
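The sign-inversion effect described above can be illustrated with a minimal sketch. Assuming a toy linear model and an illustrative correlation coefficient (neither taken from the paper's experiments, which use the VECMA toolkit), the Cholesky factor of a 2x2 correlation matrix maps independent standard normals to correlated inputs, and the chain rule then shows how a derivative-based index that is positive under the independence assumption can become negative once the correlation is reflected:

```python
import math

# Hypothetical toy model: f(x1, x2) = x1 - 2*x2. Treating x1 and x2 as
# independent, the derivative-based index for x1 is simply df/dx1 = +1.
def f(x1, x2):
    return x1 - 2.0 * x2

rho = 0.8  # assumed correlation between x1 and x2 (illustrative value)

# Cholesky factor of the correlation matrix [[1, rho], [rho, 1]] gives the map
#   x1 = z1
#   x2 = rho*z1 + sqrt(1 - rho^2)*z2
# from independent standard normals (z1, z2) to correlated inputs (x1, x2).
def to_correlated(z1, z2, rho):
    return z1, rho * z1 + math.sqrt(1.0 - rho * rho) * z2

# Derivative-based index of the decorrelated variable z1, by the chain rule:
#   df/dz1 = (df/dx1)*(dx1/dz1) + (df/dx2)*(dx2/dz1) = 1 + rho*(-2)
df_dx1, df_dx2 = 1.0, -2.0
df_dz1 = df_dx1 * 1.0 + df_dx2 * rho

print(f"index ignoring correlation:   {df_dx1:+.2f}")  # +1.00
print(f"index reflecting correlation: {df_dz1:+.2f}")  # 1 - 2*0.8 = -0.60
```

With rho = 0.8 the index flips from +1.0 to -0.6: a perturbation of x1 drags the correlated x2 along with it, and the induced effect through x2 dominates. This is the mechanism behind the abstract's claim that ignoring correlations can invert the sign of a derivative-based index.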
Related papers
- Causality Pursuit from Heterogeneous Environments via Neural Adversarial Invariance Learning [12.947265104477237]
Pursuing causality from data is a fundamental problem in scientific discovery, treatment intervention, and transfer learning.
The proposed Focused Adversarial Invariant Regularization (FAIR) framework utilizes an innovative minimax optimization approach.
It is shown that FAIR-NN can find the invariant variables and quasi-causal variables under a minimal identification condition.
arXiv Detail & Related papers (2024-05-07T23:37:40Z) - Determining the significance and relative importance of parameters of a
simulated quenching algorithm using statistical tools [0.0]
In this paper the ANOVA (ANalysis Of VAriance) method is used to carry out an exhaustive analysis of a simulated annealing based method.
The significance and relative importance of the parameters regarding the obtained results, as well as suitable values for each of these, were obtained using ANOVA and post-hoc Tukey HSD test.
arXiv Detail & Related papers (2024-02-08T16:34:00Z) - Challenges in Variable Importance Ranking Under Correlation [6.718144470265263]
We present a comprehensive simulation study investigating the impact of feature correlation on the assessment of variable importance.
While knockoff variables are constructed to have no correlation with their corresponding predictor variables, we prove that this correlation grows linearly once the correlation between the predictor variables exceeds a certain threshold.
arXiv Detail & Related papers (2024-02-05T19:02:13Z) - A Neural Framework for Generalized Causal Sensitivity Analysis [78.71545648682705]
We propose NeuralCSA, a neural framework for causal sensitivity analysis.
We provide theoretical guarantees that NeuralCSA is able to infer valid bounds on the causal query of interest.
arXiv Detail & Related papers (2023-11-27T17:40:02Z) - Towards stable real-world equation discovery with assessing
differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of applicability to problems, similar to the real ones, and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z) - Identifiable Latent Polynomial Causal Models Through the Lens of Change [85.67870425656368]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z) - Towards Inferential Reproducibility of Machine Learning Research [16.223631948455797]
Several sources of nondeterminism can be regarded as measurement noise.
Current tendencies to remove noise in order to enforce reproducibility of research results neglect the nondeterminism inherent at the implementation level.
We propose to incorporate several sources of variance, including their interaction with data properties, into an analysis of significance and reliability of machine learning evaluation.
arXiv Detail & Related papers (2023-02-08T13:47:00Z) - Weight-variant Latent Causal Models [79.79711624326299]
Causal representation learning exposes latent high-level causal variables behind low-level observations.
In this work we focus on identifying latent causal variables.
We show that the transitivity severely hinders the identifiability of latent causal variables.
We propose a novel method, termed Structural caUsAl Variational autoEncoder, which directly learns latent causal variables.
arXiv Detail & Related papers (2022-08-30T11:12:59Z) - Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
arXiv Detail & Related papers (2022-08-29T16:16:22Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian
Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z) - Numerical Solution of the Parametric Diffusion Equation by Deep Neural
Networks [2.2731658205414025]
We study the machine-learning-based solution of parametric partial differential equations.
We find strong support for the hypothesis that approximation-theoretical effects heavily influence the practical behavior of learning problems in numerical analysis.
arXiv Detail & Related papers (2020-04-25T12:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.