Efficient Sensitivity Analysis for Parametric Robust Markov Chains
- URL: http://arxiv.org/abs/2305.01473v1
- Date: Mon, 1 May 2023 08:23:55 GMT
- Title: Efficient Sensitivity Analysis for Parametric Robust Markov Chains
- Authors: Thom Badings, Sebastian Junges, Ahmadreza Marandi, Ufuk Topcu, Nils
Jansen
- Abstract summary: We provide a novel method for sensitivity analysis of robust Markov chains.
We measure sensitivity in terms of partial derivatives with respect to the uncertain transition probabilities.
We embed the results within an iterative learning scheme that profits from having access to a dedicated sensitivity analysis.
- Score: 23.870902923521335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We provide a novel method for sensitivity analysis of parametric robust
Markov chains. These models incorporate parameters and sets of probability
distributions to alleviate the often unrealistic assumption that precise
probabilities are available. We measure sensitivity in terms of partial
derivatives of measures such as the expected reward with respect to the
uncertain transition probabilities. As our main contribution, we present an
efficient method to compute these partial derivatives. To scale our approach to
models with thousands of parameters, we present an extension of this method
that selects the subset of $k$ parameters with the highest partial derivative.
Our methods are based on linear programming and differentiating these programs
around a given value for the parameters. The experiments show the applicability
of our approach on models with over a million states and thousands of
parameters. Moreover, we embed the results within an iterative learning scheme
that profits from having access to a dedicated sensitivity analysis.
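As a rough illustration of the underlying idea (not the paper's actual algorithm, which differentiates linear programs over uncertainty sets), the sketch below computes the expected reward of a tiny non-robust parametric Markov chain and its partial derivative with respect to a single transition parameter p, by implicit differentiation of the underlying linear system. The chain, rewards, and parameter value are invented for illustration.

```python
import numpy as np

# Tiny non-robust parametric Markov chain: states s0, s1 are transient,
# the absorbing state is omitted.  From s0 we move to s1 with probability
# p (absorbed otherwise); from s1 we return to s0 with probability 0.3.
def P(p):
    return np.array([[0.0, p],
                     [0.3, 0.0]])

dP_dp = np.array([[0.0, 1.0],    # entry-wise derivative of P w.r.t. p
                  [0.0, 0.0]])

r = np.array([1.0, 2.0])         # reward per visit to s0 and s1
p, I = 0.5, np.eye(2)

# Expected total reward before absorption solves (I - P(p)) x = r.
x = np.linalg.solve(I - P(p), r)

# Implicit differentiation of (I - P(p)) x(p) = r yields
#     dx/dp = (I - P)^{-1} (dP/dp) x,
# i.e. one extra solve with the same system matrix.
dx_dp = np.linalg.solve(I - P(p), dP_dp @ x)

# Sanity check against a central finite difference.
eps = 1e-6
fd = (np.linalg.solve(I - P(p + eps), r)
      - np.linalg.solve(I - P(p - eps), r)) / (2 * eps)
print(x, dx_dp, fd)
```

The point mirrored here is that each partial derivative costs only one additional solve with the same matrix, which hints at why computing many partial derivatives (and selecting the top-$k$) can remain cheap at scale.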
Related papers
- Finite Sample Confidence Regions for Linear Regression Parameters Using
Arbitrary Predictors [1.6860963320038902]
We explore a novel methodology for constructing confidence regions for parameters of linear models, using predictions from any arbitrary predictor.
The derived confidence regions can be cast as constraints within a Mixed Linear Programming framework, enabling optimisation of linear objectives.
Unlike previous methods, the confidence region can be empty, which can be used for hypothesis testing.
arXiv Detail & Related papers (2024-01-27T00:15:48Z) - Towards stable real-world equation discovery with assessing
differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real-world ones and their ability to ensure the convergence of equation discovery algorithms.
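For context, a minimal sketch of the finite-differences baseline these authors propose to replace, on synthetic noisy data (all values invented for illustration); noise in the signal is amplified in the derivative estimate:

```python
import numpy as np

# Central finite differences amplify measurement noise when estimating
# derivatives, which destabilizes downstream equation discovery.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
u = np.sin(t) + rng.normal(scale=0.01, size=t.size)  # noisy signal

du_fd = np.gradient(u, t)        # finite-difference derivative estimate
err = np.max(np.abs(du_fd - np.cos(t)))
print(f"max error of finite-difference derivative: {err:.3f}")
```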
arXiv Detail & Related papers (2023-11-09T23:32:06Z) - Kernel Debiased Plug-in Estimation: Simultaneous, Automated Debiasing without Influence Functions for Many Target Parameters [1.5999407512883512]
We propose a novel method named kernel debiased plug-in estimation (KDPE).
We show that KDPE simultaneously debiases all pathwise differentiable target parameters that satisfy our regularity conditions.
We numerically illustrate the use of KDPE and validate our theoretical results.
arXiv Detail & Related papers (2023-06-14T15:58:50Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediate distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
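For reference, a minimal AIS run with a fixed geometric bridging schedule, i.e. the kind of hand-set bridges this paper optimizes; the 1-D densities and schedule below are toy choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

log_f0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # N(0,1), normalized
log_f1 = lambda x: -0.5 * (x - 3.0) ** 2                  # unnormalized target
# True normalizer of f1 is sqrt(2*pi) ~ 2.5066.

betas = np.linspace(0.0, 1.0, 51)      # fixed geometric annealing schedule
n_chains = 2000
x = rng.normal(size=n_chains)          # exact samples from f0
log_w = np.zeros(n_chains)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # AIS weight update: ratio of consecutive bridges f_b / f_{b_prev}.
    log_w += (b - b_prev) * (log_f1(x) - log_f0(x))
    # One Metropolis step targeting the bridge f0^(1-b) * f1^b.
    prop = x + rng.normal(scale=0.5, size=n_chains)
    log_ratio = ((1 - b) * (log_f0(prop) - log_f0(x))
                 + b * (log_f1(prop) - log_f1(x)))
    accept = np.log(rng.uniform(size=n_chains)) < log_ratio
    x = np.where(accept, prop, x)

print("AIS estimate:", np.mean(np.exp(log_w)), "truth:", np.sqrt(2 * np.pi))
```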
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Data-Driven Influence Functions for Optimization-Based Causal Inference [105.5385525290466]
We study a constructive algorithm that approximates Gateaux derivatives for statistical functionals by finite differencing.
We study the case where probability distributions are not known a priori but need to be estimated from data.
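A hedged sketch of the finite-differencing idea on a toy functional (the variance of a discrete distribution, not the causal functionals studied in the paper): perturb the distribution toward a point mass and difference the functional values.

```python
import numpy as np

# Distribution P as weights over fixed support points.
support = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.1, 0.4, 0.3, 0.2])

def T(w):
    # Toy statistical functional: the variance under P.
    return np.sum(w * support**2) - np.sum(w * support) ** 2

def gateaux(w, x_index, eps=1e-5):
    # Perturb toward the point mass at support[x_index]:
    # P_eps = (1 - eps) * P + eps * delta_x, then finite-difference.
    delta = np.zeros_like(w)
    delta[x_index] = 1.0
    return (T((1 - eps) * w + eps * delta) - T(w)) / eps

# For the variance, the influence function is (x - mean)^2 - var;
# the finite-difference approximation should match it closely.
mean, var = np.sum(weights * support), T(weights)
for i, xval in enumerate(support):
    print(xval, gateaux(weights, i), (xval - mean) ** 2 - var)
```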
arXiv Detail & Related papers (2022-08-29T16:16:22Z) - Evaluating Sensitivity to the Stick-Breaking Prior in Bayesian
Nonparametrics [85.31247588089686]
We show that variational Bayesian methods can yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models.
We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
arXiv Detail & Related papers (2021-07-08T03:40:18Z) - Support estimation in high-dimensional heteroscedastic mean regression [2.28438857884398]
We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm.
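As an illustration of a strictly convex, smooth Huber-type loss, here is the well-known pseudo-Huber loss; the paper's exact variant and its parameter-dependent tuning rule may differ.

```python
import numpy as np

def pseudo_huber(u, delta):
    # Quadratic for |u| << delta, linear growth for |u| >> delta,
    # but infinitely differentiable everywhere (unlike the Huber loss).
    return delta**2 * (np.sqrt(1.0 + (u / delta) ** 2) - 1.0)

u = np.linspace(-3, 3, 7)
print(pseudo_huber(u, delta=1.0))
```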
arXiv Detail & Related papers (2020-11-03T09:46:31Z) - Variable selection for Gaussian process regression through a sparse
projection [0.802904964931021]
This paper presents a new variable selection approach integrated with Gaussian process (GP) regression.
The choice of tuning parameters and the accuracy of the estimation are evaluated in simulations against some chosen benchmark approaches.
arXiv Detail & Related papers (2020-08-25T01:06:10Z) - On the Estimation of Derivatives Using Plug-in Kernel Ridge Regression
Estimators [4.392844455327199]
We propose a simple plug-in kernel ridge regression (KRR) estimator in nonparametric regression.
We provide a non-asymptotic analysis to study the behavior of the proposed estimator in a unified manner.
The proposed estimator achieves the optimal rate of convergence with the same choice of tuning parameter for any order of derivatives.
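A minimal sketch of the plug-in idea for a Gaussian kernel in one dimension (toy data; the paper's setting and tuning analysis are more general): fit KRR, then differentiate the fitted kernel expansion analytically, reusing the same ridge parameter for the derivative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(X) + rng.normal(scale=0.05, size=X.size)

ell, lam = 0.5, 1e-3                      # kernel width, ridge penalty

def k(a, b):
    # Gaussian (RBF) kernel matrix between two 1-D point sets.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell**2))

# KRR fit: f_hat(x) = sum_i alpha_i k(x, X_i).
alpha = np.linalg.solve(k(X, X) + lam * np.eye(X.size), y)

def f_hat_prime(x):
    # Analytic d/dx of the kernel expansion, same alpha (same tuning).
    K = k(x, X)
    return ((-(x[:, None] - X[None, :]) / ell**2) * K) @ alpha

grid = np.linspace(1.0, 5.0, 5)
print(f_hat_prime(grid))                  # should approximate cos(grid)
```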
arXiv Detail & Related papers (2020-06-02T02:32:39Z)