Double Robust Semi-Supervised Inference for the Mean: Selection Bias
under MAR Labeling with Decaying Overlap
- URL: http://arxiv.org/abs/2104.06667v2
- Date: Thu, 18 May 2023 12:10:21 GMT
- Title: Double Robust Semi-Supervised Inference for the Mean: Selection Bias
under MAR Labeling with Decaying Overlap
- Authors: Yuqian Zhang, Abhishek Chakrabortty and Jelena Bradic
- Abstract summary: Semi-supervised (SS) inference has received much attention in recent years.
Most of the SS literature implicitly assumes L and U to be equally distributed.
Inferential challenges under missing at random (MAR) labeling that allows for selection bias are inevitably exacerbated by the decaying nature of the propensity score (PS).
- Score: 11.758346319792361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised (SS) inference has received much attention in recent years.
Apart from a moderate-sized labeled dataset, L, the SS setting is characterized
by an additional, much larger, unlabeled dataset, U. The setting |U| >> |L|
makes SS inference unique and distinct from standard missing data problems,
owing to the natural violation of the so-called "positivity" or "overlap"
assumption. However, most of the SS literature implicitly assumes L and U to be
equally distributed, i.e., no selection bias in the labeling. Inferential
challenges under missing at random (MAR) labeling that allows for selection
bias are inevitably exacerbated by the decaying nature of the propensity score
(PS). We address this gap for a prototype problem: the estimation of the
response's mean. We propose a double robust SS (DRSS) mean estimator and give a
complete characterization of its asymptotic properties. The proposed estimator
is consistent as long as either the outcome or the PS model is correctly
specified. When both models are correctly specified, we provide inference
results with a non-standard consistency rate that depends on the smaller size
|L|. The results are also extended to causal inference with imbalanced
treatment groups. Further, we provide several novel choices of models and
estimators of the decaying PS, including a novel offset logistic model and a
stratified labeling model. We present their properties under both high and low
dimensional settings. These may be of independent interest. Lastly, we present
extensive simulations and a real data application.
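To fix ideas, the following is a minimal NumPy sketch of an AIPW-type doubly robust mean estimator in the spirit of the DRSS estimator described above. The function name drss_mean, the plug-in interfaces m_hat and pi_hat, and the clipping floor eps are illustrative assumptions rather than the paper's exact construction; in particular, this sketch does not implement the offset logistic or stratified labeling PS models.

```python
import numpy as np

def drss_mean(X, Y, R, m_hat, pi_hat, eps=1e-8):
    """AIPW-type doubly robust estimate of E[Y] under MAR labeling.

    X      : (n, d) covariates for labeled and unlabeled points combined
    Y      : (n,) outcomes; only entries with R == 1 are used
    R      : (n,) labeling indicators (1 = labeled, 0 = unlabeled)
    m_hat  : fitted outcome model, m_hat(X) approximating E[Y | X]
    pi_hat : fitted propensity model, pi_hat(X) approximating P(R = 1 | X),
             which may decay towards 0 as the unlabeled pool grows
    eps    : numerical floor for the decaying PS (a safeguard, not theory)
    """
    m = m_hat(X)                        # outcome predictions on everyone
    pi = np.clip(pi_hat(X), eps, 1.0)   # guard against vanishing PS values
    # The correction term uses labeled points only; the estimator is
    # consistent if either m_hat or pi_hat is correctly specified.
    resid = np.where(R == 1, Y - m, 0.0)
    return np.mean(m + resid / pi)
```

Note that the correction term draws information only from the roughly |L| labeled points, which is one way to see why the convergence rate depends on the smaller size |L| when both models are correctly specified.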
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Covariate Assisted Entity Ranking with Sparse Intrinsic Scores [3.2839905453386162]
We introduce novel model identification conditions and examine the statistical rates of the regularized penalized maximum likelihood estimator.
We also apply our method to the goodness-of-fit test for models with no latent intrinsic scores.
arXiv Detail & Related papers (2024-07-09T19:58:54Z)
- Revisiting the Dataset Bias Problem from a Statistical Perspective [72.94990819287551]
We study the "dataset bias" problem from a statistical standpoint.
We identify the main cause of the problem as the strong correlation between a class attribute u and a non-class attribute b.
We propose to mitigate dataset bias by either weighting the objective of each sample $n$ by $\frac{1}{p(u_n \mid b_n)}$ or sampling that sample with a weight proportional to $\frac{1}{p(u_n \mid b_n)}$.
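For intuition, here is a hedged sketch of that reweighting: each sample receives weight $\frac{1}{\hat{p}(u_n \mid b_n)}$, estimated below by smoothed empirical co-occurrence counts. The helper name inverse_bias_weights and the count-based estimator are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def inverse_bias_weights(u, b, smoothing=1.0):
    """Per-sample weights 1 / p_hat(u_n | b_n) for discrete attributes.

    u : (n,) class-attribute labels
    b : (n,) non-class (bias) attribute labels
    """
    u_vals, u_idx = np.unique(u, return_inverse=True)
    b_vals, b_idx = np.unique(b, return_inverse=True)
    counts = np.zeros((len(b_vals), len(u_vals)))
    np.add.at(counts, (b_idx, u_idx), 1.0)   # co-occurrence counts of (b, u)
    counts += smoothing                       # Laplace smoothing
    p_u_given_b = counts / counts.sum(axis=1, keepdims=True)
    return 1.0 / p_u_given_b[b_idx, u_idx]

# A weighted training objective is then np.mean(weights * per_sample_losses),
# which downweights (u, b) combinations that the data over-represents.
```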
arXiv Detail & Related papers (2024-02-05T22:58:06Z)
- Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference [5.924780594614676]
We show that the error of estimating a single coordinate can be enlarged by a multiple of $\sqrt{d}$ when data are allowed to be arbitrarily adaptive.
We propose a novel estimator for single coordinate inference via solving a Two-stage Adaptive Linear Estimating equation (TALE).
arXiv Detail & Related papers (2023-10-01T00:45:09Z)
- The Decaying Missing-at-Random Framework: Doubly Robust Causal Inference with Partially Labeled Data [10.021381302215062]
In real-world scenarios, data collection limitations often result in partially labeled datasets, leading to difficulties in drawing reliable causal inferences.
Traditional approaches in the semi-supervised (SS) and missing data literature may not adequately handle these complexities, leading to biased estimates.
The decaying missing-at-random (MAR) framework tackles missing outcomes in high-dimensional settings and accounts for selection bias.
arXiv Detail & Related papers (2023-05-22T07:37:12Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Pseudo-labeling for Kernel Ridge Regression under Covariate Shift [2.7920304852537536]
We learn a regression function with small mean squared error over a target distribution, based on unlabeled data drawn from it and labeled data that may have a different feature distribution.
We propose to split the labeled data into two subsets and conduct kernel ridge regression on them separately to obtain a collection of candidate models and an imputation model.
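As a rough illustration of the split-and-impute step in that summary, the sketch below uses scikit-learn's KernelRidge. The even split, the RBF kernel, and the single candidate model (the paper obtains a collection of candidates) are assumptions made for brevity.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def split_and_impute(X_lab, y_lab, X_unlab, alpha=1.0, seed=0):
    """Split the labeled data in two, fit kernel ridge regression separately
    on each half, and pseudo-label the unlabeled pool with one of the fits."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_lab))
    half = len(X_lab) // 2
    fit_idx, imp_idx = idx[:half], idx[half:]

    candidate = KernelRidge(alpha=alpha, kernel="rbf")
    candidate.fit(X_lab[fit_idx], y_lab[fit_idx])
    imputer = KernelRidge(alpha=alpha, kernel="rbf")
    imputer.fit(X_lab[imp_idx], y_lab[imp_idx])

    # Pseudo-labels for the (possibly covariate-shifted) unlabeled data
    y_pseudo = imputer.predict(X_unlab)
    return candidate, imputer, y_pseudo
```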
arXiv Detail & Related papers (2023-02-20T18:46:12Z)
- Bayesian Self-Supervised Contrastive Learning [16.903874675729952]
This paper proposes a new self-supervised contrastive loss called the BCL loss.
The key idea is to design the desired sampling distribution for sampling hard true negative samples under the Bayesian framework.
Experiments validate the effectiveness and superiority of the BCL loss.
arXiv Detail & Related papers (2023-01-27T12:13:06Z)
- Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling [77.15766509677348]
Conditional generative models often inherit spurious correlations from the training dataset.
This can result in label-conditional distributions that are imbalanced with respect to another latent attribute.
We propose a general two-step strategy to mitigate this issue.
arXiv Detail & Related papers (2022-12-05T08:09:33Z)
- How Does Pseudo-Labeling Affect the Generalization Error of the Semi-Supervised Gibbs Algorithm? [73.80001705134147]
We provide an exact characterization of the expected generalization error (gen-error) for semi-supervised learning (SSL) with pseudo-labeling via the Gibbs algorithm.
The gen-error is expressed in terms of the symmetrized KL information between the output hypothesis, the pseudo-labeled dataset, and the labeled dataset.
arXiv Detail & Related papers (2022-10-15T04:11:56Z)
- The Gap on GAP: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets [58.53269361115974]
Diagnostic datasets that can detect biased models are an important prerequisite for bias reduction within natural language processing.
However, undesired patterns in the collected data can make such tests incorrect.
We introduce a theoretically grounded method for weighting test samples to cope with such patterns in the test data.
arXiv Detail & Related papers (2020-11-03T16:50:13Z)