Identification and Estimation of Simultaneous Equation Models Using Higher-Order Cumulant Restrictions
- URL: http://arxiv.org/abs/2501.06777v1
- Date: Sun, 12 Jan 2025 11:27:39 GMT
- Title: Identification and Estimation of Simultaneous Equation Models Using Higher-Order Cumulant Restrictions
- Authors: Ziyu Jiang
- Abstract summary: Identifying structural parameters in linear simultaneous equation models is a fundamental challenge in economics and related fields.
We show that under any diagonal higher-cumulant condition, the structural parameter matrix can be identified by solving an eigenvector problem.
When uncorrelatedness remains plausible, our framework offers a transparent way to test for it, all within the same higher-order orthogonality setting employed by earlier studies.
- Score: 9.10992754495906
- License:
- Abstract: Identifying structural parameters in linear simultaneous equation models is a fundamental challenge in economics and related fields. Recent work leverages higher-order distributional moments, exploiting the fact that non-Gaussian data carry more structural information than the Gaussian framework. While many of these contributions still require zero-covariance assumptions for structural errors, this paper shows that such an assumption can be dispensed with. Specifically, we demonstrate that under any diagonal higher-cumulant condition, the structural parameter matrix can be identified by solving an eigenvector problem. This yields a direct identification argument and motivates a simple sample-analogue estimator that is both consistent and asymptotically normal. Moreover, when uncorrelatedness may still be plausible -- such as in vector autoregression models -- our framework offers a transparent way to test for it, all within the same higher-order orthogonality setting employed by earlier studies. Monte Carlo simulations confirm desirable finite-sample performance, and we further illustrate the method's practical value in two empirical applications.
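The abstract's eigenvector-based identification can be illustrated with a classical procedure in the same higher-order-cumulant spirit: FOBI-style diagonalization of a weighted fourth-order moment matrix. This is a sketch of the general idea, not the paper's estimator; the mixing matrix, sample size, and error distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative independent, non-Gaussian structural errors with unit variance
# and distinct kurtoses (Laplace vs. uniform) -- assumed, not from the paper.
e = np.vstack([
    rng.laplace(scale=1 / np.sqrt(2), size=n),     # excess kurtosis +3
    rng.uniform(-np.sqrt(3), np.sqrt(3), size=n),  # excess kurtosis -1.2
])

# Hypothetical structural mixing matrix: observed data x = A e.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x = A @ e

# Step 1: whiten the observations (uses second-order information only).
d, U = np.linalg.eigh(np.cov(x))
W = U @ np.diag(d ** -0.5) @ U.T
z = W @ x

# Step 2: eigendecompose the weighted fourth-order moment matrix
# E[||z||^2 z z'] (the FOBI construction). With distinct kurtoses,
# its eigenvectors recover the remaining unknown rotation.
C = (z * (z ** 2).sum(axis=0)) @ z.T / n
_, V = np.linalg.eigh(C)

# Recovered structural errors (up to permutation, sign, and scale).
e_hat = V.T @ z
corr = np.abs(np.corrcoef(np.vstack([e_hat, e]))[:2, 2:])
print(corr.round(3))  # each row should have one entry near 1
```

The eigenvector step here plays the role the abstract describes: once second-order information is exhausted by whitening, a diagonal higher-cumulant condition pins down the remaining rotation, and hence the structural parameters, through an eigendecomposition.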
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - On the Complexity of Identification in Linear Structural Causal Models [3.44747819522562]
We give a new sound and complete algorithm for generic identification which runs in polynomial space.
The paper also presents evidence that identification is computationally hard in general.
arXiv Detail & Related papers (2024-07-17T13:11:26Z) - Diffeomorphic Measure Matching with Kernels for Generative Modeling [1.2058600649065618]
This article presents a framework for transport of probability measures towards minimum divergence generative modeling and sampling using ordinary differential equations (ODEs) and Reproducing Kernel Hilbert Spaces (RKHSs).
A theoretical analysis of the proposed method is presented, giving a priori error bounds in terms of the complexity of the model, the number of samples in the training set, and model misspecification.
arXiv Detail & Related papers (2024-02-12T21:44:20Z) - Representation Disentaglement via Regularization by Causal Identification [3.9160947065896803]
We propose the use of a causal collider structured model to describe the underlying data generative process assumptions in disentangled representation learning.
For this, we propose regularization by identification (ReI), a modular regularization engine designed to align the behavior of large scale generative models with the disentanglement constraints imposed by causal identification.
arXiv Detail & Related papers (2023-02-28T23:18:54Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Hessian Eigenspectra of More Realistic Nonlinear Models [73.31363313577941]
We make a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models.
Our analysis takes a step forward to identify the origin of many striking features observed in more complex machine learning models.
arXiv Detail & Related papers (2021-03-02T06:59:52Z) - Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or: How to Prove Kabashima's Replica Formula) [23.15629681360836]
We prove an analytical formula for the reconstruction performance of convex generalized linear models.
We show that an analytical continuation may be carried out to extend the result to convex (non-strongly convex) problems.
We illustrate our claim with numerical examples on mainstream learning methods.
arXiv Detail & Related papers (2020-06-11T16:26:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.