Distributionally Robust Machine Learning with Multi-source Data
- URL: http://arxiv.org/abs/2309.02211v2
- Date: Tue, 26 Sep 2023 18:05:43 GMT
- Title: Distributionally Robust Machine Learning with Multi-source Data
- Authors: Zhenyu Wang, Peter Bühlmann, Zijian Guo
- Abstract summary: We introduce a group distributionally robust prediction model to optimize an adversarial reward about explained variance with respect to a class of target distributions.
Compared to classical empirical risk minimization, the proposed robust prediction model improves the prediction accuracy for target populations with distribution shifts.
We demonstrate the performance of our proposed group distributionally robust method on simulated and real data with random forests and neural networks as base-learning algorithms.
- Score: 6.383451076043423
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Classical machine learning methods may lead to poor prediction performance
when the target distribution differs from the source populations. This paper
utilizes data from multiple sources and introduces a group distributionally
robust prediction model defined to optimize an adversarial reward about
explained variance with respect to a class of target distributions. Compared to
classical empirical risk minimization, the proposed robust prediction model
improves the prediction accuracy for target populations with distribution
shifts. We show that our group distributionally robust prediction model is a
weighted average of the source populations' conditional outcome models. We
leverage this key identification result to robustify arbitrary machine learning
algorithms, including, for example, random forests and neural networks. We
devise a novel bias-corrected estimator to estimate the optimal aggregation
weight for general machine-learning algorithms and demonstrate its improvement
in the convergence rate. Our proposal can be seen as a distributionally robust
federated learning approach that is computationally efficient and easy to
implement using arbitrary machine learning base algorithms, satisfies some
privacy constraints, and has a nice interpretation of different sources'
importance for predicting a given target covariate distribution. We demonstrate
the performance of our proposed group distributionally robust method on
simulated and real data with random forests and neural networks as
base-learning algorithms.
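As a concrete illustration of the aggregation idea, the sketch below combines pre-trained per-source regressors with simplex weights chosen to maximize the worst-case explained variance across the sources. The scikit-learn-style `.predict` interface and the generic SLSQP optimization are assumptions for illustration; this is not the authors' implementation, which additionally uses a bias-corrected estimator of the optimal weights.

```python
# Minimal sketch of group distributionally robust aggregation
# (illustrative assumptions: per-source models expose .predict,
# and the simplex weights are found with generic SLSQP).
import numpy as np
from scipy.optimize import minimize

def r2(y, yhat):
    """Explained variance (R^2) of predictions yhat for outcomes y."""
    return 1.0 - np.mean((y - yhat) ** 2) / np.var(y)

def robust_weights(models, datasets):
    """models: fitted per-source learners; datasets: list of (X_k, y_k)."""
    K = len(models)
    # Each model's predictions on each source dataset, precomputed once.
    preds = [[m.predict(X) for m in models] for X, _ in datasets]

    def neg_worst_case_r2(q):
        scores = []
        for (_, y), P in zip(datasets, preds):
            agg = sum(qk * p for qk, p in zip(q, P))  # weighted-average model
            scores.append(r2(y, agg))
        return -min(scores)  # the adversary picks the worst source

    cons = ({"type": "eq", "fun": lambda q: q.sum() - 1.0},)
    res = minimize(neg_worst_case_r2, np.full(K, 1.0 / K),
                   bounds=[(0.0, 1.0)] * K, constraints=cons)
    return res.x  # aggregation weights on the probability simplex
```

The returned weights double as the per-source importances that the abstract highlights as an interpretation of the robust model.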
Related papers
- Ranking and Combining Latent Structured Predictive Scores without Labeled Data [2.5064967708371553]
This paper introduces a novel structured unsupervised ensemble learning model (SUEL).
It exploits the dependency between a set of predictors with continuous predictive scores, ranks the predictors without labeled data, and combines them into a weighted ensemble score.
The efficacy of the proposed methods is rigorously assessed through both simulation studies and real-world application of risk genes discovery.
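For intuition only, here is a minimal spectral sketch in the same unsupervised-ensemble spirit (not the SUEL model itself): if the predictors are roughly conditionally independent, the leading eigenvector of their correlation matrix orders them by reliability and supplies combination weights.

```python
# Illustrative spectral ranking/combination of unlabeled predictors;
# the conditional-independence assumption is ours, not the paper's.
import numpy as np

def rank_and_combine(scores):
    """scores: (n_samples, n_predictors) continuous predictive scores."""
    Z = (scores - scores.mean(0)) / scores.std(0)  # standardize predictors
    C = np.corrcoef(Z, rowvar=False)               # predictor correlations
    v = np.abs(np.linalg.eigh(C)[1][:, -1])        # leading eigenvector
    weights = v / v.sum()
    ranking = np.argsort(-v)                       # most reliable first
    return ranking, Z @ weights                    # combined ensemble score
```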
arXiv Detail & Related papers (2024-08-14T20:14:42Z) - Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or complex mixtures of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the induced tessellation and approximate the multiple-hypotheses target distribution.
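Multiple-hypotheses predictors of this kind are typically trained with a winner-takes-all (oracle) loss, sketched generically below; this is the standard loss for the setting, not this paper's specific RBF construction.

```python
# Generic winner-takes-all loss for M-hypothesis regression (illustrative).
import numpy as np

def wta_loss(hypotheses, y):
    """hypotheses: (n, M, d) candidate predictions; y: (n, d) targets."""
    errs = np.sum((hypotheses - y[:, None, :]) ** 2, axis=-1)  # (n, M)
    return errs.min(axis=1).mean()  # only the closest hypothesis is penalized
```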
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is lack of generalization.
We propose novel methods that estimate uncertainty of particular predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
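To illustrate the general idea (the per-class Gaussian on the maximum logit is our simplification, not the paper's exact procedure), one can calibrate confidence against the empirical logit distribution on held-out data:

```python
# Illustrative stand-in: per-class Gaussians over the max logit,
# fit on held-out data, replace the raw softmax as a confidence score.
import numpy as np
from scipy.stats import norm

def fit_logit_gaussians(val_logits, n_classes):
    """val_logits: (n, n_classes). Per-predicted-class (mean, std) of max logit."""
    pred, top = val_logits.argmax(1), val_logits.max(1)
    return {c: (top[pred == c].mean(), top[pred == c].std() + 1e-8)
            for c in range(n_classes)}

def confidence(logits, stats):
    """Percentile of a test max-logit under its predicted class's fit."""
    mu, sigma = stats[int(logits.argmax())]
    return norm.cdf(logits.max(), loc=mu, scale=sigma)
```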
arXiv Detail & Related papers (2022-10-14T12:32:50Z) - Aggregating distribution forecasts from deep ensembles [0.0]
We propose a general quantile aggregation framework for deep ensembles.
We show that combining forecast distributions from deep ensembles can substantially improve the predictive performance.
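The standard form of quantile aggregation (Vincentization) averages the members' quantile functions at fixed probability levels; a minimal sketch, assuming each member's forecast distribution is represented by samples:

```python
# Equally weighted Vincentization of ensemble forecast distributions.
import numpy as np

def aggregate_quantiles(member_samples, levels=np.linspace(0.05, 0.95, 19)):
    """member_samples: list of 1-D sample arrays, one per ensemble member."""
    Q = np.stack([np.quantile(s, levels) for s in member_samples])
    return levels, Q.mean(axis=0)  # aggregated quantile function
```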
arXiv Detail & Related papers (2022-04-05T15:42:51Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
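The recipe is compact enough to sketch directly; max softmax probability as the confidence score is one choice the paper considers.

```python
# Average Thresholded Confidence (ATC), sketched.
import numpy as np

def atc_estimate(src_conf, src_correct, tgt_conf):
    """src_conf/tgt_conf: per-example confidences (e.g. max softmax);
    src_correct: boolean array of source-prediction correctness."""
    # Threshold t such that P_source(confidence > t) = source accuracy.
    t = np.quantile(src_conf, 1.0 - src_correct.mean())
    # Predicted target accuracy: fraction of target examples above t.
    return float((tgt_conf > t).mean())
```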
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Robustness via Cross-Domain Ensembles [0.5801044612920816]
We present a method for making neural network predictions robust to shifts from the training data distribution.
The proposed method is based on making predictions via a diverse set of cues and ensembling them into one strong prediction.
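As a simple stand-in for the paper's uncertainty-weighted merging of cue-specific predictions (the inverse-variance rule below is our illustrative choice, with cue models and variance estimates assumed given):

```python
# Inverse-variance merging of predictions from diverse cues (illustrative).
import numpy as np

def merge_cues(preds, variances):
    """preds, variances: lists of same-shape arrays, one pair per cue."""
    P, V = np.stack(preds), np.stack(variances)
    w = 1.0 / (V + 1e-8)        # uncertain cues receive small weight
    w /= w.sum(axis=0)          # normalize across cues at each location
    return (w * P).sum(axis=0)  # ensembled prediction
```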
arXiv Detail & Related papers (2021-03-19T17:28:03Z) - Uncertainty Estimation and Sample Selection for Crowd Counting [87.29137075538213]
We present a method for image-based crowd counting that can predict a crowd density map together with the uncertainty values pertaining to the predicted density map.
A key advantage of our method over existing crowd counting methods is its ability to quantify the uncertainty of its predictions.
We show that our sample selection strategy drastically reduces the amount of labeled data needed to adapt a counting network trained on a source domain to the target domain.
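In sketch form, the sample-selection step amounts to sending the least certain target images for annotation; image-level mean uncertainty as the acquisition score is an illustrative simplification.

```python
# Uncertainty-based sample selection for domain adaptation (illustrative).
import numpy as np

def select_for_labeling(uncertainty_maps, budget):
    """Pick the `budget` target images with the highest mean uncertainty."""
    scores = np.array([u.mean() for u in uncertainty_maps])
    return np.argsort(-scores)[:budget]  # indices to send for annotation
```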
arXiv Detail & Related papers (2020-09-30T03:40:07Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z) - Scalable Approximate Inference and Some Applications [2.6541211006790983]
In this thesis, we propose a new framework for approximate inference.
The four proposed algorithms are motivated by recent computational progress on Stein's method.
Results on simulated and real datasets indicate the statistical efficiency and wide applicability of our algorithm.
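As one concrete instance of Stein's-method-based inference (Stein variational gradient descent, shown for illustration; the thesis proposes its own four algorithms), a single particle update looks like this:

```python
# One Stein variational gradient descent (SVGD) update with an RBF kernel.
import numpy as np

def svgd_step(X, grad_logp, step=0.1):
    """X: (n, d) particles; grad_logp: maps (n, d) to score values (n, d)."""
    diffs = X[:, None, :] - X[None, :, :]         # pairwise x_i - x_j
    sq = np.sum(diffs ** 2, axis=-1)              # squared distances
    h = np.median(sq) / np.log(X.shape[0] + 1.0)  # median-heuristic bandwidth
    K = np.exp(-sq / h)                           # RBF kernel matrix
    attract = K @ grad_logp(X)                    # kernel-smoothed scores
    repulse = (2.0 / h) * (K.sum(axis=1, keepdims=True) * X - K @ X)  # spread
    return X + step * (attract + repulse) / X.shape[0]
```

Iterating `svgd_step` with, say, `grad_logp = lambda X: -X` transports the particles toward a standard Gaussian target.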
arXiv Detail & Related papers (2020-03-07T04:33:27Z)