Treatment Effect Estimation for Exponential Family Outcomes using Neural Networks with Targeted Regularization
- URL: http://arxiv.org/abs/2502.07295v1
- Date: Tue, 11 Feb 2025 06:36:20 GMT
- Title: Treatment Effect Estimation for Exponential Family Outcomes using Neural Networks with Targeted Regularization
- Authors: Jiahong Li, Zeqin Yang, Jiayi Dan, Jixing Xu, Zhichao Zou, Peng Zhen, Jiecheng Guo
- Abstract summary: We show how to design NN-based estimators with desirable properties, such as low bias and double robustness.
We develop an NN-based estimator for ADCF by generalizing functional targeted regularization to exponential families, and provide the corresponding theoretical convergence rate.
- Score: 5.124955904159979
- Abstract: Neural Networks (NNs) have become a natural choice for treatment effect estimation due to their strong approximation capabilities. Nevertheless, how to design NN-based estimators with desirable properties, such as low bias and double robustness, still remains a significant challenge. A common approach to address this is targeted regularization, which modifies the objective function of NNs. However, existing works on targeted regularization are limited to Gaussian-distributed outcomes, significantly restricting their applicability in real-world scenarios. In this work, we aim to bridge this gap by extending the framework to the broader class of exponential family outcomes. Specifically, we first derive the von Mises expansion of the Average Dose function of Canonical Functions (ADCF), which suggests how to construct a doubly robust estimator with good properties. Based on this, we develop an NN-based estimator for ADCF by generalizing functional targeted regularization to exponential families, and provide the corresponding theoretical convergence rate. Extensive experimental results demonstrate the effectiveness of our proposed model.
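As a rough illustration of the mechanism, the sketch below applies targeted regularization to a Poisson outcome (one exponential-family member), simplified to a binary treatment; the paper's actual estimator targets the ADCF for dose-type treatments, and all names here (`TarRegNet`, `eps`, `alpha`) are hypothetical.

```python
# Minimal sketch: targeted regularization with a Poisson (exponential-family)
# outcome and binary treatment. Hypothetical names; not the authors' code,
# which handles the ADCF for dose-type treatments.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TarRegNet(nn.Module):
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.eta0 = nn.Linear(d_hidden, 1)        # canonical parameter, t = 0
        self.eta1 = nn.Linear(d_hidden, 1)        # canonical parameter, t = 1
        self.prop = nn.Linear(d_hidden, 1)        # propensity logit
        self.eps = nn.Parameter(torch.zeros(1))   # targeting perturbation

    def forward(self, x, t):
        # t: float tensor of 0/1 treatment indicators, shape (batch,)
        z = self.body(x)
        eta = torch.where(t > 0.5, self.eta1(z).squeeze(-1),
                          self.eta0(z).squeeze(-1))
        pi = torch.sigmoid(self.prop(z)).squeeze(-1)
        return eta, pi

def poisson_nll(eta, y):
    # Exponential-family NLL with canonical log link: A(eta) - y * eta,
    # where A(eta) = exp(eta) for Poisson (constant in y dropped).
    return (torch.exp(eta) - y * eta).mean()

def loss_fn(model, x, t, y, alpha=1.0):
    eta, pi = model(x, t)
    # Targeting direction: perturb the canonical parameter along the
    # clipped inverse-propensity weights, then score the perturbed fit.
    h = torch.where(t > 0.5, 1.0 / pi.clamp_min(0.01),
                    -1.0 / (1.0 - pi).clamp_min(0.01))
    eta_star = eta + model.eps * h
    return (poisson_nll(eta, y)
            + F.binary_cross_entropy(pi, t)
            + alpha * poisson_nll(eta_star, y))
```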
Related papers
- Bridge the Inference Gaps of Neural Processes via Expectation Maximization [27.92039393053804]
The neural process (NP) is a family of computationally efficient models for learning distributions over functions.
We propose a surrogate objective of the target log-likelihood of the meta dataset within the expectation maximization framework.
The resulting model, referred to as the Self-normalized Importance weighted Neural Process (SI-NP), can learn a more accurate functional prior (the self-normalized weighting is sketched after this entry).
arXiv Detail & Related papers (2025-01-04T03:28:21Z)
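The self-normalized importance weighting that gives SI-NP its name can be sketched generically as below; `log_joint` and `log_proposal` are placeholder inputs, not the paper's actual objective.

```python
import math
import torch

def snis_log_marginal(log_joint, log_proposal):
    # Self-normalized importance weighting over n proposal samples:
    # log (1/n) * sum_i p(z_i) / q(z_i), a stock estimator of the log
    # marginal likelihood. Placeholder inputs, not the paper's objective.
    log_w = log_joint - log_proposal
    return torch.logsumexp(log_w, dim=0) - math.log(log_w.shape[0])
```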
- Revisiting the Equivalence of Bayesian Neural Networks and Gaussian Processes: On the Importance of Learning Activations [1.0468715529145969]
We show that trainable activations are crucial for effective mapping of GP priors to wide BNNs.
We also introduce trainable periodic activations that ensure global stationarity by design (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-10-21T08:42:10Z)
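A minimal sketch of a trainable periodic activation, assuming a per-unit learnable frequency and phase (the paper's exact parameterization may differ):

```python
import torch
import torch.nn as nn

class TrainablePeriodic(nn.Module):
    # Sine activation with learnable per-unit frequency and phase; a sketch
    # of the "trainable periodic activations" idea, not the paper's design.
    def __init__(self, n_units):
        super().__init__()
        self.freq = nn.Parameter(torch.ones(n_units))
        self.phase = nn.Parameter(torch.zeros(n_units))

    def forward(self, x):
        return torch.sin(self.freq * x + self.phase)
```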
- Implicit Generative Prior for Bayesian Neural Networks [8.013264410621357]
We propose a novel neural adaptive empirical Bayes (NA-EB) framework for complex data structures.
The proposed NA-EB framework combines variational inference with a gradient ascent algorithm.
We demonstrate the practical applications of our framework through extensive evaluations on a variety of tasks.
arXiv Detail & Related papers (2024-04-27T21:00:38Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Subject-specific Deep Neural Networks for Count Data with High-cardinality Categorical Features [1.2289361708127877]
We propose a novel hierarchical likelihood learning framework for introducing gamma random effects into a Poisson deep neural network.
The proposed method simultaneously yields maximum likelihood estimators for fixed parameters and best unbiased predictors for random effects.
State-of-the-art network architectures can be easily implemented into the proposed h-likelihood framework (a minimal sketch of the objective follows this entry).
arXiv Detail & Related papers (2023-10-18T01:54:48Z)
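A minimal sketch of such an h-likelihood objective, assuming a Poisson outcome with a multiplicative gamma random effect of unit mean (u_i = exp(v_i), variance parameter `lam`); this is one standard parameterization, not necessarily the paper's:

```python
import torch

def neg_h_likelihood(f, v, y, subj, lam=1.0):
    # f: (n,) NN output (log mean), v: (n_subjects,) log random effects,
    # y: (n,) counts, subj: (n,) long tensor of subject indices.
    eta = f + v[subj]                       # log rate with random effect
    nll = (torch.exp(eta) - y * eta).sum()  # Poisson NLL (constant dropped)
    # u = exp(v) ~ Gamma(1/lam, 1/lam) (mean 1); written in v with the
    # Jacobian, -log f(v) = (exp(v) - v) / lam + const.
    prior = ((torch.exp(v) - v) / lam).sum()
    return nll + prior
```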
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization [69.80141512683254]
We introduce Tabular Neural Gradient Orthogonalization and Specialization (TANGOS); a sketch of the orthogonalization idea follows this entry.
TANGOS is a novel framework for regularization in the tabular setting built on latent unit attributions.
We demonstrate that our approach can lead to improved out-of-sample generalization performance, outperforming other popular regularization methods.
arXiv Detail & Related papers (2023-03-09T18:57:13Z)
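A sketch of a TANGOS-style orthogonalization term, using plain input gradients as stand-ins for the paper's latent unit attributions:

```python
import torch
import torch.nn.functional as F

def orthogonalization_penalty(encoder, x):
    # Sketch of a TANGOS-style penalty: take each latent unit's gradient
    # with respect to the input as its attribution and push attributions
    # toward pairwise orthogonality. Not the authors' implementation.
    x = x.clone().requires_grad_(True)
    h = encoder(x)                                  # (batch, n_units)
    grads = [torch.autograd.grad(h[:, j].sum(), x, create_graph=True)[0]
             for j in range(h.shape[1])]
    G = torch.stack(grads, dim=1)                   # (batch, units, d_in)
    G = F.normalize(G, dim=-1)
    cos = G @ G.transpose(1, 2)                     # pairwise cosines
    eye = torch.eye(cos.shape[-1], device=cos.device)
    return (cos * (1 - eye)).abs().mean()           # off-diagonal overlap
```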
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs (that baseline is sketched after this entry).
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
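For reference, the interval bound propagation baseline mentioned above can be sketched layer by layer for a feedforward network; the paper's reachability analysis for implicit layers is more involved than this:

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    # Propagate the box [lo, hi] through x -> W @ x + b.
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def ibp_relu(lo, hi):
    # ReLU is monotone, so the box maps endpoint-wise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```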
- Causal Inference of General Treatment Effects using Neural Networks with A Diverging Number of Confounders [12.105996764226227]
Under the unconfoundedness condition, adjustment for confounders requires estimating the nuisance functions relating outcome or treatment to confounders nonparametrically.
This paper considers a generalized optimization framework for efficient estimation of general treatment effects using artificial neural networks (ANNs) to approximate the unknown nuisance function of growing-dimensional confounders (the binary-treatment special case is sketched after this entry).
arXiv Detail & Related papers (2020-09-15T13:07:24Z)
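A textbook doubly robust (AIPW) estimator for the binary-treatment special case, with ANN-estimated nuisances plugged in; the paper's framework covers more general treatment effects than this:

```python
import numpy as np

def aipw_ate(y, t, mu0, mu1, e):
    # Doubly robust ATE from nuisance estimates: mu0/mu1 are outcome-model
    # predictions under t=0/1 and e is the estimated propensity score,
    # all of which could come from fitted ANNs.
    e = np.clip(e, 0.01, 0.99)  # clip propensities away from 0 and 1
    return np.mean(mu1 - mu0
                   + t * (y - mu1) / e
                   - (1 - t) * (y - mu0) / (1 - e))
```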