Robustness of Model Predictions under Extension
- URL: http://arxiv.org/abs/2012.04723v1
- Date: Tue, 8 Dec 2020 20:21:03 GMT
- Title: Robustness of Model Predictions under Extension
- Authors: Tineke Blom and Joris M. Mooij
- Abstract summary: A caveat to using models for analysis is that predicted causal effects and conditional independences may not be robust under model extensions.
We show how to use the technique of causal ordering to efficiently assess the robustness of qualitative model predictions.
For dynamical systems at equilibrium, we demonstrate how novel insights help to select appropriate model extensions.
- Score: 3.766702945560518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Often, mathematical models of the real world are simplified representations
of complex systems. A caveat to using models for analysis is that predicted
causal effects and conditional independences may not be robust under model
extensions, and therefore applicability of such models is limited. In this
work, we consider conditions under which qualitative model predictions are
preserved when two models are combined. We show how to use the technique of
causal ordering to efficiently assess the robustness of qualitative model
predictions and characterize a large class of model extensions that preserve
these predictions. For dynamical systems at equilibrium, we demonstrate how
novel insights help to select appropriate model extensions and to reason about
the presence of feedback loops. We apply our ideas to a viral infection model
with immune responses.
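The causal ordering technique mentioned in the abstract can be illustrated with a minimal sketch in the spirit of Simon's classic causal ordering procedure (this is an illustrative reconstruction, not the paper's exact algorithm; the system, function, and variable names below are hypothetical). The idea: given a self-contained system of equations, repeatedly extract the smallest subset of k equations that mention exactly k not-yet-determined variables; singleton clusters yield a causal order, while larger clusters indicate variables determined jointly, e.g. through a feedback loop.

```python
from itertools import combinations

def causal_ordering(system):
    """Compute a causal ordering for a self-contained system.

    system: dict mapping an equation name to the set of variable
    names appearing in that equation.

    Returns a list of frozensets of variables, in causal order.
    Each frozenset is a cluster of variables determined together;
    clusters of size > 1 indicate a feedback loop.
    """
    remaining = {eq: set(vs) for eq, vs in system.items()}
    determined = set()   # variables already fixed by earlier clusters
    order = []
    while remaining:
        found = None
        # Search for a minimal self-contained subset: k equations
        # involving exactly k undetermined variables.
        for k in range(1, len(remaining) + 1):
            for subset in combinations(remaining, k):
                undetermined = (
                    set().union(*(remaining[eq] for eq in subset))
                    - determined
                )
                if len(undetermined) == k:
                    found = (subset, undetermined)
                    break
            if found:
                break
        if found is None:
            raise ValueError("system is not self-contained")
        subset, cluster = found
        order.append(frozenset(cluster))
        determined |= cluster
        for eq in subset:
            del remaining[eq]
    return order

# A triangular system orders into singleton clusters:
acyclic = {"f1": {"x"}, "f2": {"x", "y"}, "f3": {"y", "z"}}
print(causal_ordering(acyclic))
# Two equations sharing two variables form one joint cluster,
# the signature of a feedback loop:
looped = {"g1": {"x", "y"}, "g2": {"x", "y"}}
print(causal_ordering(looped))
```

Note that the subset search above is exponential in the worst case; practical implementations instead compute the ordering efficiently via bipartite matching between equations and variables (a Dulmage-Mendelsohn-style decomposition), which is what makes the robustness assessment in the paper efficient.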
Related papers
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Leveraging Model-based Trees as Interpretable Surrogate Models for Model Distillation [3.5437916561263694]
Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models.
This paper focuses on using model-based trees as surrogate models which partition the feature space into interpretable regions via decision rules.
Four model-based tree algorithms, namely SLIM, GUIDE, MOB, and CTree, are compared regarding their ability to generate such surrogate models.
arXiv Detail & Related papers (2023-10-04T19:06:52Z)
- Representer Point Selection for Explaining Regularized High-dimensional Models [105.75758452952357]
We introduce a class of sample-based explanations we term high-dimensional representers.
Our workhorse is a novel representer theorem for general regularized high-dimensional models.
We study the empirical performance of our proposed methods on three real-world binary classification datasets and two recommender system datasets.
arXiv Detail & Related papers (2023-05-31T16:23:58Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate that no single model works best in all cases.
By choosing an appropriate bias model, we can obtain better robustness than baselines that use a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero- and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Unifying Epidemic Models with Mixtures [28.771032745045428]
The COVID-19 pandemic has emphasized the need for a robust understanding of epidemic models.
Here, we introduce a simple mixture-based model which bridges the two approaches.
Although the model is non-mechanistic, we show that it arises as the natural outcome of a process based on a networked SIR framework.
arXiv Detail & Related papers (2022-01-07T19:42:05Z)
- Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack [13.28881502612207]
In some scenarios, AI models are trained proprietarily, where neither pre-trained models nor sufficient in-distribution data is publicly available.
We find that the effectiveness of existing techniques is significantly affected by the absence of pre-trained models.
We formulate model extraction attacks into an adaptive framework that captures these factors with deep reinforcement learning.
arXiv Detail & Related papers (2021-04-13T03:46:59Z)
- On the model-based stochastic value gradient for continuous reinforcement learning [50.085645237597056]
We show that simple model-based agents can outperform state-of-the-art model-free agents in terms of both sample-efficiency and final reward.
Our findings suggest that model-based policy evaluation deserves closer attention.
arXiv Detail & Related papers (2020-08-28T17:58:29Z)
- A comprehensive study on the prediction reliability of graph neural networks for virtual screening [0.0]
We investigate the effects of model architectures, regularization methods, and loss functions on the prediction performance and reliability of classification results.
Our results highlight that the correct choice of regularization and inference methods is important for achieving a high success rate.
arXiv Detail & Related papers (2020-03-17T10:13:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.