Inference for Network Regression Models with Community Structure
- URL: http://arxiv.org/abs/2106.04271v1
- Date: Tue, 8 Jun 2021 12:04:31 GMT
- Title: Inference for Network Regression Models with Community Structure
- Authors: Mengjie Pan, Tyler H. McCormick, Bailey K. Fosdick
- Abstract summary: We present a novel regression modeling framework that models the errors as resulting from a community-based dependence structure.
We exploit the subsequent exchangeability properties of the error distribution to obtain parsimonious standard errors for regression parameters.
- Score: 1.7188280334580197
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Network regression models, where the outcome comprises the valued edges in a network and the predictors are actor- or dyad-level covariates, are used extensively in the social and biological sciences. Valid inference relies on accurately modeling the residual dependencies among the relations. Homogeneity assumptions are frequently placed on the errors; these assumptions are often incorrect and ignore critical, natural clustering of the actors. In this work, we present a novel regression modeling framework that models the errors as resulting from a community-based dependence structure and exploits the subsequent exchangeability properties of the error distribution to obtain parsimonious standard errors for the regression parameters.
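The abstract above outlines the setup at a high level. The following is a minimal sketch, assuming known community labels and using a block-robust sandwich variance estimate as a stand-in for the authors' exchangeability-based estimator, of what community-aware inference in a dyadic regression can look like. The simulation and estimator form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch, NOT the paper's estimator: dyadic (network) regression with
# community-structured errors, comparing naive OLS standard errors with a
# block-robust sandwich estimate. Community labels z and the simulation are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, K = 30, 2                                # actors and hypothetical communities
z = rng.integers(K, size=n)                 # known community label per actor
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
m = len(pairs)

# Dyad-level design: intercept plus one covariate.
X = np.column_stack([np.ones(m), rng.normal(size=m)])
beta_true = np.array([1.0, 0.5])

# Errors share a common shock within each community pair, mimicking a
# community-based dependence structure among the relations.
shock = rng.normal(size=(K, K))
shock = (shock + shock.T) / 2               # symmetric block effects
eps = np.array([shock[z[i], z[j]] + 0.5 * rng.normal() for i, j in pairs])
y = X @ beta_true + eps

# OLS point estimates remain consistent under exogeneity.
bread = np.linalg.inv(X.T @ X)
beta_hat = bread @ X.T @ y
resid = y - X @ beta_hat

# Sandwich covariance with dyads grouped by the unordered community pair of
# their endpoints; within-block exchangeability keeps the grouping coarse.
blocks = np.array([sorted((z[i], z[j])) for i, j in pairs])
meat = np.zeros((X.shape[1], X.shape[1]))
for b in np.unique(blocks, axis=0):
    mask = np.all(blocks == b, axis=1)
    s = X[mask].T @ resid[mask]             # per-block score sum
    meat += np.outer(s, s)
cov = bread @ meat @ bread

print("beta_hat:", beta_hat)
print("naive OLS SEs:   ", np.sqrt(np.diag(bread) * resid.var()))
print("block-robust SEs:", np.sqrt(np.diag(cov)))
```

Because the grouping runs over community pairs rather than individual dyads, only K(K+1)/2 blocks enter the variance estimate, which is one way to read the abstract's emphasis on parsimonious standard errors.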
Related papers
- Model Reconstruction Using Counterfactual Explanations: A Perspective From Polytope Theory [9.771997770574947]
We analyze how model reconstruction using counterfactuals can be improved.
Our main contribution is to derive novel theoretical relationships between the error in model reconstruction and the number of counterfactual queries.
arXiv Detail & Related papers (2024-05-08T18:52:47Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or processes governed by a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Strong identifiability and parameter learning in regression with heterogeneous response [5.503319042839695]
We investigate conditions of strong identifiability, rates of convergence for conditional density and parameter estimation, and the Bayesian posterior contraction behavior arising in finite mixture of regression models.
We provide simulation studies and data illustrations, which shed light on the parameter learning behavior of several popular regression mixture models reported in the literature; a toy mixture of regressions of this kind is sketched after this list.
arXiv Detail & Related papers (2022-12-08T05:58:13Z)
- Regularized Sequential Latent Variable Models with Adversarial Neural Networks [33.74611654607262]
We present different ways of using high-level latent random variables in RNNs to model the variability in sequential data.
We explore ways of using adversarial methods to train a variational RNN model.
arXiv Detail & Related papers (2021-08-10T08:05:14Z)
- Causality-aware counterfactual confounding adjustment as an alternative to linear residualization in anticausal prediction tasks based on linear learners [14.554818659491644]
We compare the linear residualization approach against the causality-aware confounding adjustment in anticausal prediction tasks.
We show that the causality-aware approach tends to (asymptotically) outperform the residualization adjustment in terms of predictive performance in linear learners.
arXiv Detail & Related papers (2020-11-09T17:59:57Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution offers a systematic approach to representing structural causal models as credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used for causal inference on realistically sized problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Semi-Structured Distributional Regression -- Extending Structured Additive Models by Arbitrary Deep Neural Networks and Data Modalities [0.0]
We propose a general framework to combine structured regression models and deep neural networks into a unifying network architecture.
We demonstrate the framework's efficacy in numerical experiments and illustrate its special merits in benchmarks and real-world applications.
arXiv Detail & Related papers (2020-02-13T21:01:26Z)
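One entry above studies finite mixtures of regressions ("Strong identifiability and parameter learning in regression with heterogeneous response"). As a purely illustrative companion, not code from that paper, the sketch below simulates a two-component mixture of linear regressions and fits it with a standard EM loop; all settings are assumptions.

```python
# Minimal sketch, assuming a two-component Gaussian mixture of linear
# regressions fit by EM; everything here is illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Heterogeneous response: each observation follows one of two lines.
comp = rng.integers(2, size=n)
betas_true = np.array([[0.0, 2.0], [1.0, -1.0]])
y = np.einsum("ij,ij->i", X, betas_true[comp]) + 0.3 * rng.normal(size=n)

# EM for the mixture of regressions (random init; real code would restart).
pi = np.array([0.5, 0.5])
betas = rng.normal(size=(2, 2))
sigma2 = 1.0
for _ in range(200):
    # E-step: responsibilities under Gaussian noise.
    res = y[:, None] - X @ betas.T                        # (n, 2) residuals
    dens = pi * np.exp(-0.5 * res**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component, then noise and weights.
    for k in range(2):
        w = r[:, k]
        betas[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    res = y[:, None] - X @ betas.T
    sigma2 = float((r * res**2).sum() / n)
    pi = r.mean(axis=0)

print("estimated mixing weights:", pi)
print("estimated coefficients:\n", betas)
```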
This list is automatically generated from the titles and abstracts of the papers on this site.