Stable Probability Weighting: Large-Sample and Finite-Sample Estimation
and Inference Methods for Heterogeneous Causal Effects of Multivalued
Treatments Under Limited Overlap
- URL: http://arxiv.org/abs/2301.05703v1
- Date: Fri, 13 Jan 2023 18:52:18 GMT
- Authors: Ganesh Karapakula
- Abstract summary: I propose new practical large-sample and finite-sample methods for estimating and inferring heterogeneous causal effects.
I develop a general principle called "Stable Probability Weighting" (SPW).
I also propose new finite-sample inference methods for testing a general class of weak null hypotheses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, I try to tame "Basu's elephants" (data with extreme selection
on observables). I propose new practical large-sample and finite-sample methods
for estimating and inferring heterogeneous causal effects (under
unconfoundedness) in the empirically relevant context of limited overlap. I
develop a general principle called "Stable Probability Weighting" (SPW) that
can be used as an alternative to the widely used Inverse Probability Weighting
(IPW) technique, which relies on strong overlap. I show that IPW (or its
augmented version), when valid, is a special case of the more general SPW (or
its doubly robust version), which adjusts for the extremeness of the
conditional probabilities of the treatment states. The SPW principle can be
implemented using several existing large-sample parametric, semiparametric, and
nonparametric procedures for conditional moment models. In addition, I provide
new finite-sample results that apply when unconfoundedness is plausible within
fine strata. Since IPW estimation relies on the problematic reciprocal of the
estimated propensity score, I develop a "Finite-Sample Stable Probability
Weighting" (FPW) set-estimator that is unbiased in a sense. I also propose new
finite-sample inference methods for testing a general class of weak null
hypotheses. The associated computationally convenient methods, which can be
used to construct valid confidence sets and to bound the finite-sample
confidence distribution, are of independent interest. My large-sample and
finite-sample frameworks extend to the setting of multivalued treatments.
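The abstract contrasts SPW with IPW's reliance on the reciprocal of the estimated propensity score. The SPW construction itself is not spelled out in the abstract, but the instability it is designed to address can be illustrated with a minimal simulated sketch of plain IPW under limited overlap (the data-generating process and all variable names below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Covariate and a propensity score that approaches 0 or 1 in the tails
# (limited overlap: P(D=1|X) is extreme for part of the population).
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-4.0 * x))           # true propensity score
d = rng.binomial(1, e)                        # treatment indicator
y = 1.0 + 0.5 * d + x + rng.normal(size=n)    # outcome; true ATE = 0.5

# Horvitz-Thompson / IPW estimate of the ATE: it relies on 1/e and
# 1/(1-e), so near-degenerate propensities produce huge weights and
# an unstable (high-variance) estimate.
ipw = np.mean(d * y / e - (1 - d) * y / (1 - e))

# The largest inverse-probability weight quantifies the instability.
max_weight = max((d / e).max(), ((1 - d) / (1 - e)).max())
print(f"IPW ATE estimate: {ipw:.3f} (true value 0.5)")
print(f"largest inverse-probability weight: {max_weight:.1f}")
```

Here the true propensity score is known, so IPW remains unbiased but its variance is inflated by the extreme weights; with an *estimated* propensity score the reciprocal becomes even more problematic, which is the situation the SPW and FPW proposals target.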
Related papers
- Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Distribution Estimation under the Infinity Norm [19.997465098927858]
We present novel bounds for estimating discrete probability distributions under the $\ell_\infty$ norm.
Our data-dependent convergence guarantees for the maximum likelihood estimator significantly improve upon the currently known results.
arXiv Detail & Related papers (2024-02-13T12:49:50Z)
- A Semi-Bayesian Nonparametric Estimator of the Maximum Mean Discrepancy Measure: Applications in Goodness-of-Fit Testing and Generative Adversarial Networks [3.623570119514559]
We propose a semi-Bayesian nonparametric (semi-BNP) procedure for the goodness-of-fit (GOF) test.
Our method introduces a novel Bayesian estimator for the maximum mean discrepancy (MMD) measure.
We demonstrate that our proposed test outperforms frequentist MMD-based methods by achieving lower false rejection and false acceptance rates for the null hypothesis.
arXiv Detail & Related papers (2023-03-05T10:36:21Z)
- Bayesian Hierarchical Models for Counterfactual Estimation [12.159830463756341]
We propose a probabilistic paradigm to estimate a diverse set of counterfactuals.
We treat the perturbations as random variables endowed with prior distribution functions.
A gradient based sampler with superior convergence characteristics efficiently computes the posterior samples.
arXiv Detail & Related papers (2023-01-21T00:21:11Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Selective Probabilistic Classifier Based on Hypothesis Testing [14.695979686066066]
We propose a simple yet effective method to deal with the violation of the Closed-World Assumption for a classifier.
The proposed method is a rejection option based on hypothesis testing with probabilistic networks.
It is shown that the proposed method achieves a broader operating range and a lower False Positive Ratio than the alternative.
arXiv Detail & Related papers (2021-05-09T08:55:56Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Confidence Sets and Hypothesis Testing in a Likelihood-Free Inference Setting [5.145741425164947]
ACORE is a frequentist approach to LFI that first formulates the classical likelihood ratio test (LRT) as a parametrized classification problem.
ACORE is based on the key observation that the test statistic, the rejection probability of the test, and the coverage of the confidence set are conditional distribution functions.
arXiv Detail & Related papers (2020-02-24T17:34:49Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.