Estimating Linear Mixed Effects Models with Truncated Normally
Distributed Random Effects
- URL: http://arxiv.org/abs/2011.04538v9
- Date: Sat, 31 Jul 2021 00:46:43 GMT
- Authors: Hao Chen, Lanshan Han and Alvin Lim
- Abstract summary: Inference can be conducted using a maximum likelihood
approach if Normal distributions are assumed on the random effects.
In this paper we extend the classical (unconstrained) LME models to allow for sign constraints on their overall coefficients.
- Score: 5.4052819252055055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear Mixed Effects (LME) models have been widely applied in clustered data
analysis in many areas, including marketing research, clinical trials, and
biomedical studies. Inference can be conducted using a maximum likelihood
approach if Normal distributions are assumed on the random effects. However, in
many applications in economics, business, and medicine, it is often essential to
impose constraints on the regression parameters that reflect their real-world
interpretations. Therefore, in this paper we extend the classical
(unconstrained) LME models to allow for sign constraints on their overall
coefficients. We propose assuming a symmetric doubly truncated Normal (SDTN)
distribution on the random effects instead of the unconstrained Normal
distribution commonly found in the classical literature. With this
change, the difficulty increases dramatically, as the exact
distribution of the dependent variable becomes analytically intractable. We
then develop likelihood-based approaches that estimate the unknown model
parameters by approximating this exact distribution. Simulation
studies show that the proposed constrained model not only improves the
real-world interpretability of the results but also achieves satisfactory
model fit compared to the existing unconstrained model.
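To make the sign-constraint idea concrete, here is a minimal sketch of how clustered data with SDTN random effects could be simulated. All names and numeric values (`beta`, `sigma_b`, `c`, cluster sizes) are illustrative assumptions, not taken from the paper; the point is only that truncating the random effect to a symmetric interval `[-c, c]` with `c <= beta` keeps every cluster-level slope `beta + b_i` nonnegative.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Hypothetical setup (values not from the paper): one fixed slope beta with
# a cluster-level random effect b_i ~ SDTN(0, sigma_b^2) truncated to [-c, c].
# Choosing c <= beta guarantees the overall coefficient beta + b_i >= 0.
beta, sigma_b, c = 1.5, 0.8, 1.2
n_clusters, n_per, sigma_eps = 50, 20, 0.5

# scipy's truncnorm takes the truncation bounds in standardized units (lo / scale)
b = truncnorm.rvs(-c / sigma_b, c / sigma_b, loc=0.0, scale=sigma_b,
                  size=n_clusters, random_state=rng)

x = rng.normal(size=(n_clusters, n_per))
y = (beta + b[:, None]) * x + rng.normal(scale=sigma_eps, size=x.shape)

print(np.all(np.abs(b) <= c))    # True: effects respect the truncation
print(np.min(beta + b) >= 0.0)   # True: the sign constraint holds
```

Estimating the model parameters from `(x, y)` is the hard part the abstract refers to, since the marginal distribution of `y` under SDTN random effects is analytically intractable.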
Related papers
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results lead to the convergence rate of a sieve maximum likelihood estimator for estimating the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noise added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models that achieve effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Dropout Regularization in Extended Generalized Linear Models based on Double Exponential Families [0.0]
We study dropout regularization in extended generalized linear models based on double exponential families.
A theoretical analysis shows that dropout regularization prefers rare but important features in both the mean and dispersion.
arXiv Detail & Related papers (2023-05-11T07:54:11Z)
- Optimal regularizations for data generation with probabilistic graphical models [0.0]
Empirically, well-chosen regularization schemes dramatically improve the quality of the inferred models.
We consider the particular case of L2 and L1 regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models.
arXiv Detail & Related papers (2021-12-02T14:45:16Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation setting and on real data from large scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Achieving Efficiency in Black Box Simulation of Distribution Tails with Self-structuring Importance Samplers [1.6114012813668934]
The paper presents a novel Importance Sampling (IS) scheme for estimating the distribution of performance measures modeled with a rich set of tools, such as linear programs, integer linear programs, piecewise linear/quadratic objectives, and feature maps specified with deep neural networks.
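The core idea behind importance sampling for distribution tails can be illustrated with a textbook mean-shift example; this is a generic sketch, not the paper's self-structuring scheme, and the threshold and sample size are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Estimate the Gaussian tail probability P(X > t) for X ~ N(0, 1).
# Sampling from the shifted proposal N(t, 1) concentrates draws in the rare
# region; the likelihood ratio phi(x) / phi(x - t) corrects the bias.
t, n = 4.0, 100_000
x = rng.normal(loc=t, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=t)
est = np.mean((x > t) * weights)

exact = norm.sf(t)   # about 3.17e-5; naive Monte Carlo would need ~1e7 draws
print(est, exact)
```

With the naive estimator, almost no samples land above `t = 4`; the shifted proposal makes roughly half the draws informative, which is why IS can estimate tail probabilities with orders of magnitude fewer samples.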
arXiv Detail & Related papers (2021-02-14T03:37:22Z)
- Flexible mean field variational inference using mixtures of non-overlapping exponential families [6.599344783327053]
I show that using standard mean field variational inference can fail to produce sensible results for models with sparsity-inducing priors.
I show that any mixture of a diffuse exponential family and a point mass at zero to model sparsity forms an exponential family.
arXiv Detail & Related papers (2020-10-14T01:46:56Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.