Robust and flexible learning of a high-dimensional classification rule
using auxiliary outcomes
- URL: http://arxiv.org/abs/2011.05493v3
- Date: Wed, 22 Mar 2023 15:29:48 GMT
- Title: Robust and flexible learning of a high-dimensional classification rule
using auxiliary outcomes
- Authors: Muxuan Liang, Jaeyoung Park, Qing Lu, Xiang Zhong
- Abstract summary: We develop a transfer learning approach to estimating a high-dimensional linear decision rule with the presence of auxiliary outcomes.
We show that the final estimator can achieve a lower estimation error than the one using only the single outcome of interest.
- Score: 2.92281985958308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Correlated outcomes are common in many practical problems. In some settings,
one outcome is of particular interest, and others are auxiliary. To leverage
information shared by all the outcomes, traditional multi-task learning (MTL)
minimizes an averaged loss function over all the outcomes, which may lead to
biased estimation for the target outcome, especially when the MTL model is
mis-specified. In this work, based on a decomposition of estimation bias into
two types, within-subspace and against-subspace, we develop a robust transfer
learning approach to estimating a high-dimensional linear decision rule for the
outcome of interest in the presence of auxiliary outcomes. The proposed
method includes an MTL step using all outcomes to gain efficiency, and a
subsequent calibration step using only the outcome of interest to correct both
types of biases. We show that the final estimator can achieve a lower
estimation error than the one using only the single outcome of interest.
Simulations and real data analysis demonstrate the superiority of the
proposed method.
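The two-step idea in the abstract can be illustrated with a minimal sketch. This is not the paper's estimator: it uses scikit-learn's `MultiTaskLasso` as a stand-in for the MTL step and a plain lasso refit on the target-outcome residuals as a stand-in for the calibration step; the penalties, the simulated outcome model, and all parameter values below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso, Lasso

# Simulated data: one target outcome plus two correlated auxiliary outcomes.
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0  # sparse true signal for the target outcome
Y = np.column_stack([
    X @ beta + rng.normal(size=n),          # target outcome
    X @ (beta + 0.1) + rng.normal(size=n),  # auxiliary: shifted coefficients
    X @ (0.8 * beta) + rng.normal(size=n),  # auxiliary: scaled coefficients
])

# Step 1 (MTL): fit all outcomes jointly to gain efficiency from shared structure.
mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)
beta_mtl = mtl.coef_[0]  # row 0 holds the coefficients for the target outcome

# Step 2 (calibration): refit on the target-outcome residuals only,
# so the correction uses just the outcome of interest.
resid = Y[:, 0] - X @ beta_mtl
cal = Lasso(alpha=0.05).fit(X, resid)
beta_final = beta_mtl + cal.coef_
```

The calibration correction `cal.coef_` is itself sparse, which mirrors the intuition that the MTL estimate is already close and only a low-complexity bias correction is needed.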
Related papers
- Off-policy estimation with adaptively collected data: the power of online learning [20.023469636707635]
We consider estimation of a linear functional of the treatment effect using adaptively collected data.
We propose a general reduction scheme that allows one to produce a sequence of estimates for the treatment effect via online learning.
arXiv Detail & Related papers (2024-11-19T10:18:27Z)
- Dissecting Misalignment of Multimodal Large Language Models via Influence Function [12.832792175138241]
We introduce the Extended Influence Function for Contrastive Loss (ECIF), an influence function crafted for contrastive loss.
ECIF considers both positive and negative samples and provides a closed-form approximation of contrastive learning models.
Building upon ECIF, we develop a series of algorithms for data evaluation in MLLM, misalignment detection, and misprediction trace-back tasks.
arXiv Detail & Related papers (2024-11-18T15:45:41Z)
- Assumption-Lean Post-Integrated Inference with Negative Control Outcomes [0.0]
We introduce a robust post-integrated inference (PII) method that adjusts for latent heterogeneity using negative control outcomes.
Our method extends to projected direct effect estimands, accounting for hidden mediators, confounders, and moderators.
The proposed doubly robust estimators are consistent and efficient under minimal assumptions and potential misspecification.
arXiv Detail & Related papers (2024-10-07T12:52:38Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data [102.16105233826917]
Learning from preference labels plays a crucial role in fine-tuning large language models.
There are several distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning (RL), and contrastive learning.
arXiv Detail & Related papers (2024-04-22T17:20:18Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Learning with Multiclass AUC: Theory and Algorithms [141.63211412386283]
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we start an early trial to consider the problem of learning multiclass scoring functions via optimizing multiclass AUC metrics.
arXiv Detail & Related papers (2021-07-28T05:18:10Z)
- Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be learned as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Counterfactual Propagation for Semi-Supervised Individual Treatment Effect Estimation [21.285425135761795]
Individual treatment effect (ITE) represents the expected improvement in the outcome of taking a particular action to a particular target.
In this study, we consider a semi-supervised ITE estimation problem that exploits more easily-available unlabeled instances.
We propose counterfactual propagation, which is the first semi-supervised ITE estimation method.
arXiv Detail & Related papers (2020-05-11T13:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.