Nonparametric Identification and Estimation of Earnings Dynamics using a
Hidden Markov Model: Evidence from the PSID
- URL: http://arxiv.org/abs/2306.01760v2
- Date: Fri, 1 Sep 2023 01:45:00 GMT
- Title: Nonparametric Identification and Estimation of Earnings Dynamics using a
Hidden Markov Model: Evidence from the PSID
- Authors: Tong Zhou
- Abstract summary: This paper presents a hidden Markov model designed to investigate the complex nature of earnings persistence.
We find that the earnings process displays nonlinear persistence, conditional skewness, and conditional kurtosis.
Our empirical findings also reveal the presence of ARCH effects in earnings at horizons ranging from 2 to 8 years.
- Score: 9.788039182463768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a hidden Markov model designed to investigate the complex
nature of earnings persistence. The proposed model assumes that the residuals
of log-earnings consist of a persistent component and a transitory component,
both following general Markov processes. Nonparametric identification is
achieved through spectral decomposition of linear operators, and a modified
stochastic EM algorithm is introduced for model estimation. Applying the
framework to the Panel Study of Income Dynamics (PSID) dataset, we find that
the earnings process displays nonlinear persistence, conditional skewness, and
conditional kurtosis. Additionally, the transitory component is found to
possess non-Gaussian properties, resulting in a significantly asymmetric
distributional impact when high-earning households face negative shocks or
low-earning households encounter positive shocks. Our empirical findings also
reveal the presence of ARCH effects in earnings at horizons ranging from 2 to 8
years, further highlighting the complex dynamics of earnings persistence.
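As a minimal illustrative sketch of the earnings process described in the abstract (not the paper's estimator): the persistence rule and the mixture parameters below are hypothetical choices, used only to show how a persistent Markov component plus a non-Gaussian transitory component can generate heavy-tailed earnings growth.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_panel(n=2000, T=10):
    """Simulate log-earnings residuals y_it = eta_it + eps_it.

    eta: persistent Markov component; eps: non-Gaussian transitory
    component. All functional forms and parameters are hypothetical.
    """
    eta = rng.normal(0.0, 0.4, size=n)
    y = np.empty((n, T))
    for t in range(T):
        v = rng.normal(0.0, 0.25, size=n)
        # Toy nonlinear persistence: a negative shock erodes more of the
        # persistent component for currently high-earning households.
        rho = np.where((eta > 0) & (v < 0), 0.7, 0.95)
        eta = rho * eta + v
        # Two-component normal mixture: skewed, heavy-tailed transitory shocks.
        tail = rng.random(n) < 0.1
        eps = np.where(tail,
                       rng.normal(-0.5, 0.6, size=n),
                       rng.normal(0.05, 0.15, size=n))
        y[:, t] = eta + eps
    return y

panel = simulate_panel()
growth = np.diff(panel, axis=1).ravel()  # year-over-year changes
kurt = ((growth - growth.mean()) ** 4).mean() / growth.var() ** 2
# Heavy tails: kurtosis well above the Gaussian benchmark of 3.
```

A purely Gaussian process would give kurtosis near 3; the mixture transitory component pushes it substantially higher, mirroring the conditional-kurtosis pattern reported in the abstract.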
Related papers
- On the Benefits of Over-parameterization for Out-of-Distribution Generalization [28.961538657831788]
We investigate the performance of a machine learning model in terms of Out-of-Distribution (OOD) loss under benign overfitting conditions.
We show that further increasing the model's parameterization can significantly reduce the OOD loss.
These insights explain the empirical phenomenon of enhanced OOD generalization through model ensembles.
arXiv Detail & Related papers (2024-03-26T11:01:53Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [85.67870425656368]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Siamese Representation Learning for Unsupervised Relation Extraction [5.776369192706107]
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text.
Existing URE models that use contrastive learning, which attracts positive samples and repulses negative samples to promote better separation, have achieved decent results.
We propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that simply leverages positive pairs for representation learning.
arXiv Detail & Related papers (2023-10-01T02:57:43Z)
- Uncertain Facial Expression Recognition via Multi-task Assisted Correction [43.02119884581332]
We propose MTAC, a novel multi-task assisted correction method for addressing uncertain facial expression recognition.
Specifically, a confidence estimation block and a weighted regularization module are applied to highlight solid samples and suppress uncertain samples in every batch.
Experiments on RAF-DB, AffectNet, and AffWild2 datasets demonstrate that the MTAC obtains substantial improvements over baselines when facing synthetic and real uncertainties.
arXiv Detail & Related papers (2022-12-14T10:28:08Z)
- Unsupervised representation learning with recognition-parametrised probabilistic models [12.865596223775649]
We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM).
Under the key assumption that observations are conditionally independent given latents, the RPM combines parametric prior and observation-conditioned latent distributions with non-parametric observation factors.
The RPM provides a powerful framework to discover meaningful latent structure underlying observational data, a function critical to both animal and artificial intelligence.
arXiv Detail & Related papers (2022-09-13T00:33:21Z)
- On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL [54.55689632571575]
We study reward-free reinforcement learning (RL) under general non-linear function approximation.
We propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration.
arXiv Detail & Related papers (2022-06-21T23:17:43Z)
- Formal Verification of Unknown Dynamical Systems via Gaussian Process Regression [11.729744197698718]
Leveraging autonomous systems in safety-critical scenarios requires verifying their behaviors in the presence of uncertainties.
We develop a framework for verifying discrete-time dynamical systems with unmodelled dynamics and noisy measurements.
arXiv Detail & Related papers (2021-12-31T05:10:05Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of noise in its success remains unclear.
We show that multiplicative noise commonly arises in the parameter dynamics due to minibatch variance, producing heavy-tailed behavior.
A detailed analysis describes how key factors, including step size and data, shape these dynamics, with similar results observed on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.