Structural Regularization
- URL: http://arxiv.org/abs/2004.12601v4
- Date: Fri, 12 Jun 2020 11:59:17 GMT
- Title: Structural Regularization
- Authors: Jiaming Mao and Zhesheng Zheng
- Abstract summary: We propose a novel method for modeling data by using structural models based on economic theory as regularizers for statistical models.
We show that our method can outperform both the (misspecified) structural model and un-structural-regularized statistical models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel method for modeling data by using structural models based
on economic theory as regularizers for statistical models. We show that even if
a structural model is misspecified, as long as it is informative about the
data-generating mechanism, our method can outperform both the (misspecified)
structural model and un-structural-regularized statistical models. Our method
permits a Bayesian interpretation of theory as prior knowledge and can be used
both for statistical prediction and causal inference. It contributes to
transfer learning by showing how incorporating theory into statistical modeling
can significantly improve out-of-domain predictions and offers a way to
synthesize reduced-form and structural approaches for causal effect estimation.
Simulation experiments demonstrate the potential of our method in various
settings, including first-price auctions, dynamic models of entry and exit, and
demand estimation with instrumental variables. Our method has potential
applications not only in economics, but in other scientific disciplines whose
theoretical models offer important insight but are subject to significant
misspecification concerns.
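To make the core idea concrete, here is a minimal sketch of how a structural model's output might serve as a regularizer for a flexible statistical fit: the statistical model is penalized for deviating from the structural model's predictions, and the penalty weight controls how much trust is placed in the theory. This is an illustrative reading of the abstract, not the paper's actual estimator; the basis expansion, the squared-error penalty, and the name `fit_structurally_regularized` are assumptions introduced here.
```python
# Minimal sketch (not the paper's estimator): fit a flexible statistical model
# while penalizing deviations of its fitted values from the predictions of a
# (possibly misspecified) structural model.
import numpy as np

def fit_structurally_regularized(X, y, structural_pred, lam, n_basis=20, seed=0):
    """Ridge-style fit whose predictions are shrunk toward `structural_pred`.

    X               : (n, d) covariates
    y               : (n,)   outcomes
    structural_pred : (n,)   predictions from the structural (theory) model
    lam             : penalty weight; lam -> infinity reproduces the structural
                      fit, lam -> 0 gives the unregularized statistical fit
    """
    rng = np.random.default_rng(seed)
    # A simple flexible statistical model: random nonlinear basis expansion.
    W = rng.normal(size=(X.shape[1], n_basis))
    Phi = np.tanh(X @ W)                              # (n, n_basis) features
    # Objective: ||y - Phi b||^2 + lam * ||Phi b - structural_pred||^2
    # Normal equations: (1 + lam) Phi'Phi b = Phi'(y + lam * structural_pred)
    A = (1.0 + lam) * Phi.T @ Phi + 1e-8 * np.eye(n_basis)
    b = np.linalg.solve(A, Phi.T @ (y + lam * structural_pred))
    return lambda X_new: np.tanh(X_new @ W) @ b

# The weight lam would typically be chosen by cross-validation; under a
# Gaussian likelihood the penalty acts like a prior centered on the structural
# model's predictions, which matches the Bayesian reading of theory as prior
# knowledge described in the abstract.
```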
Related papers
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory [9.771997770574947]
We analyze how model reconstruction using counterfactuals can be improved.
Our main contribution is to derive novel theoretical relationships between the error in model reconstruction and the number of counterfactual queries.
arXiv Detail & Related papers (2024-05-08T18:52:47Z)
- Simulation-Based Prior Knowledge Elicitation for Parametric Bayesian Models [2.9172603864294024]
We focus on translating domain expert knowledge into corresponding prior distributions over model parameters, a process known as prior elicitation.
A major challenge for existing elicitation methods is how to effectively utilize expert knowledge expressed in these different formats in order to formulate prior distributions that align with the expert's expectations, regardless of the model structure.
Our results support the claim that our method is largely independent of the underlying model structure and adaptable to various elicitation techniques, including quantile-based, moment-based, and histogram-based methods.
arXiv Detail & Related papers (2023-08-22T10:43:05Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning [56.50123642237106]
Common practice in model-based reinforcement learning is to learn models that model every aspect of the agent's environment.
We argue that such models are not particularly well-suited for performing scalable and robust planning in lifelong reinforcement learning scenarios.
We propose new kinds of models that only model the relevant aspects of the environment, which we call "minimal value-equivalent partial models".
arXiv Detail & Related papers (2023-01-24T16:40:01Z)
- Planning with Diffusion for Flexible Behavior Synthesis [125.24438991142573]
We consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem.
The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories.
arXiv Detail & Related papers (2022-05-20T07:02:03Z)
- Causality and Generalizability: Identifiability and Learning Methods [0.0]
This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust prediction methods.
We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization.
We propose a general framework for distributional robustness with respect to intervention-induced distributions.
arXiv Detail & Related papers (2021-10-04T13:12:11Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data (a generic sketch of this class of ratio estimators appears after this list).
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Ensemble Learning with Statistical and Structural Models [0.0]
We propose a set of novel methods for combining statistical and structural models for improved prediction and causal inference.
Our first proposed estimator is doubly robust in that it only requires the correct specification of either the statistical or the structural model.
arXiv Detail & Related papers (2020-06-07T13:36:50Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
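As a concrete illustration of the simulation-based inference idea summarized in the MINIMALIST entry above, the sketch below shows a generic classifier-based estimator of the likelihood-to-evidence ratio; it is not the MINIMALIST algorithm itself, and the toy simulator, the linear classifier, and all names are illustrative assumptions.
```python
# Minimal sketch (generic, not the MINIMALIST algorithm): amortized estimation
# of the likelihood-to-evidence ratio r(x|theta) = p(x|theta) / p(x) with a
# binary classifier trained to separate joint from marginal (theta, x) pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulator(theta, rng):
    """Toy simulator: Gaussian data whose mean is the parameter."""
    return theta + rng.normal(scale=0.5, size=theta.shape)

# 1. Simulate (theta, x) pairs from the joint distribution.
theta = rng.uniform(-2.0, 2.0, size=(5000, 1))
x = simulator(theta, rng)

# 2. Build "joint" examples (label 1) and "marginal" examples (label 0),
#    the latter by pairing each x with an independently drawn theta.
theta_shuffled = rng.permutation(theta)
features = np.vstack([np.hstack([theta, x]), np.hstack([theta_shuffled, x])])
labels = np.concatenate([np.ones(len(theta)), np.zeros(len(theta))])

# 3. The logit of an optimal classifier equals log r(x|theta); training it is
#    closely related to maximizing a lower bound on the mutual information
#    between parameters and simulated data. In practice a more expressive
#    classifier (e.g., a neural network) would replace this linear one.
clf = LogisticRegression().fit(features, labels)

def log_ratio(theta_query, x_obs):
    z = np.hstack([np.atleast_2d(theta_query), np.atleast_2d(x_obs)])
    return clf.decision_function(z)   # approx. log p(x|theta) - log p(x)

print(log_ratio(np.array([[0.3]]), np.array([[0.25]])))
```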