Generalized Balancing Weights via Deep Neural Networks
- URL: http://arxiv.org/abs/2211.07533v6
- Date: Fri, 29 Sep 2023 10:04:11 GMT
- Title: Generalized Balancing Weights via Deep Neural Networks
- Authors: Yoshiaki Kitazawa
- Abstract summary: Estimating causal effects from observational data is a central problem in many domains.
We present generalized balancing weights, Neural Balancing Weights (NBW), to estimate the causal effects of an arbitrary mixture of discrete and continuous interventions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating causal effects from observational data is a central problem in
many domains. A general approach is to balance covariates with weights such
that the distribution of the data mimics randomization. We present generalized
balancing weights, Neural Balancing Weights (NBW), to estimate the causal
effects of an arbitrary mixture of discrete and continuous interventions. The
weights are obtained by directly estimating the density ratio between the
source and balanced distributions through optimization of the variational
representation of $f$-divergence. We select $\alpha$-divergence because it
admits efficient optimization: its estimator has a sample complexity that is
independent of the ground-truth divergence value and yields unbiased
mini-batch gradients; moreover, it mitigates the vanishing-gradient problem.
In addition, we provide two methods for estimating the balancing weights: one
improves their generalization performance, and the other checks the balance of
the distribution after weighting. Finally, we discuss the sample size
requirements for the weights as a general instance of the curse of
dimensionality in balancing multidimensional data. Our study
provides a basic approach for estimating the balancing weights of
multidimensional data using variational $f$-divergences.
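To make the approach concrete: for an $f$-divergence between the balanced distribution $q$ and the source distribution $p$, the variational (Nguyen-Wainwright-Jordan) representation gives $D_f(q\|p) \ge \sup_T \mathbb{E}_q[T(x)] - \mathbb{E}_p[f^*(T(x))]$, where $f^*$ is the convex conjugate of $f$, and the maximizing critic satisfies $T(x) = f'(q(x)/p(x))$; a trained critic therefore yields the density ratio, and hence the balancing weights, directly. The following is a minimal sketch of this recipe, not the paper's NBW implementation: it uses the Pearson $\chi^2$ member of the $\alpha$-divergence family ($f(u)=(u-1)^2$, so $f^*(t)=t+t^2/4$ and $r(x)=T(x)/2+1$) with a small PyTorch MLP critic; all names and hyperparameters are illustrative.

```python
# Minimal sketch (not the paper's code): density-ratio estimation via the
# variational bound  D_f(q||p) >= sup_T E_q[T(x)] - E_p[f*(T(x))],
# instantiated for the Pearson chi^2 divergence f(u) = (u - 1)^2,
# a member of the alpha-divergence family, with f*(t) = t + t^2 / 4.
import torch
import torch.nn as nn

def f_star(t):
    # Convex conjugate of f(u) = (u - 1)^2.
    return t + 0.25 * t ** 2

class Critic(nn.Module):
    """Small MLP playing the role of the variational function T."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def fit_density_ratio(x_source, x_balanced, epochs=500, lr=1e-3):
    """Return an estimator of r(x) = q_balanced(x) / p_source(x)."""
    critic = Critic(x_source.shape[1])
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        # Negative variational lower bound; minimizing tightens the bound.
        loss = -(critic(x_balanced).mean() - f_star(critic(x_source)).mean())
        loss.backward()
        opt.step()
    # Invert the optimality condition T = f'(r) = 2(r - 1), clipping at 0.
    return lambda x: torch.clamp(critic(x).detach() / 2.0 + 1.0, min=0.0)

# Toy check: source N(0,1), balanced target N(1,1); the true ratio
# exp(x - 0.5) is larger at x = 1 than at x = 0.
if __name__ == "__main__":
    torch.manual_seed(0)
    xs = torch.randn(4096, 1)
    xb = torch.randn(4096, 1) + 1.0
    ratio = fit_density_ratio(xs, xb)
    print(ratio(torch.tensor([[0.0], [1.0]])))
```

Other $\alpha$-divergences follow the same pattern with a different conjugate $f^*$ and inverse link $(f')^{-1}$; the paper's generalization-improvement and balance-checking procedures are layered on top of this basic estimator.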
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators that converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimation and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - Nearest Neighbor Sampling for Covariate Shift Adaptation [7.940293148084844]
We propose a new covariate shift adaptation method which avoids estimating the weights.
The basic idea is to directly work on unlabeled target data, labeled according to the $k$-nearest neighbors in the source dataset.
Our experiments show that it achieves a drastic reduction in running time with remarkable accuracy (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-12-15T17:28:09Z) - Anomaly Detection with Variance Stabilized Density Estimation [49.46356430493534]
We present a variance-stabilized density estimation problem for maximizing the likelihood of the observed samples.
To obtain a reliable anomaly detector, we introduce a spectral ensemble of autoregressive models for learning the variance-stabilized distribution.
We have conducted an extensive benchmark with 52 datasets, demonstrating that our method leads to state-of-the-art results.
arXiv Detail & Related papers (2023-06-01T11:52:58Z) - Augmented balancing weights as linear regression [3.877356414450364]
We provide a novel characterization of augmented balancing weights, also known as automatic debiased machine learning (AutoDML).
We show that the augmented estimator is equivalent to a single linear model whose coefficients combine those of the original outcome model with those of an unpenalized ordinary least squares (OLS) fit on the same data.
Our framework opens the black box on this increasingly popular class of estimators.
arXiv Detail & Related papers (2023-04-27T21:53:54Z) - Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z) - Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent algorithm and provide an improved analysis under a more nuanced condition on the gradient noise.
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - Continuous Weight Balancing [0.0]
We propose a simple method by which to choose sample weights for problems with highly imbalanced or skewed traits.
We derive sample weights from the transfer function between an estimated source distribution and a specified target distribution.
Our method outperforms both unweighted and discretely-weighted models on both regression and classification tasks (a sketch of this weighting scheme also appears after this list).
arXiv Detail & Related papers (2021-03-30T18:03:12Z) - Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Interpolation and Learning with Scale Dependent Kernels [91.41836461193488]
We study the learning properties of nonparametric ridgeless least squares.
We consider the common case of estimators defined by scale dependent kernels.
arXiv Detail & Related papers (2020-06-17T16:43:37Z) - The Heavy-Tail Phenomenon in SGD [7.366405857677226]
We show that, depending on the structure of the Hessian of the loss at the minimum, the SGD iterates converge to a heavy-tailed stationary distribution.
We translate our results into insights about the behavior of SGD in deep learning.
arXiv Detail & Related papers (2020-06-08T16:43:56Z)
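Two of the related entries above also lend themselves to short illustrations. First, for "Nearest Neighbor Sampling for Covariate Shift Adaptation": the sketch below is an assumption-laden reading of the summary, not the authors' algorithm. It pseudo-labels target points by their $k$ nearest neighbors in the labeled source set and fits the downstream model on the pseudo-labeled target data, sidestepping weight estimation entirely; the choice of scikit-learn models is illustrative.

```python
# Hypothetical sketch of k-NN covariate shift adaptation (not the authors'
# code): avoid importance weights by labeling target points from their
# k nearest neighbors in the source data, then training on those labels.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def knn_shift_adaptation(x_src, y_src, x_tgt, k=5):
    """Fit a classifier adapted to the target covariate distribution."""
    # 1. Pseudo-label each target point from its k nearest source neighbors.
    knn = KNeighborsClassifier(n_neighbors=k).fit(x_src, y_src)
    y_tgt_pseudo = knn.predict(x_tgt)
    # 2. Train the final model on target covariates plus pseudo-labels,
    #    so it sees the target input distribution directly.
    return LogisticRegression(max_iter=1000).fit(x_tgt, y_tgt_pseudo)
```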
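Second, for "Continuous Weight Balancing": one plausible minimal reading is to estimate the source density of a continuous trait, specify a target density, and weight each sample by the target-to-source density ratio. The helper below is hypothetical (the paper's transfer-function construction may differ), and the uniform default target is an assumption.

```python
# Hypothetical sketch of continuous sample weighting (not the authors'
# code): weights reshape the empirical distribution of a 1-D trait from
# an estimated source density toward a specified target density.
import numpy as np
from scipy.stats import gaussian_kde, uniform

def continuous_balance_weights(trait, target_dist=None):
    """Per-sample weights w(x) ~ p_target(x) / p_source(x)."""
    source_pdf = gaussian_kde(trait)  # KDE estimate of the source density
    if target_dist is None:           # default target: uniform over range
        target_dist = uniform(trait.min(), trait.max() - trait.min())
    w = target_dist.pdf(trait) / np.maximum(source_pdf(trait), 1e-12)
    return w / w.mean()               # normalize to mean 1
```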