Domain Adaptive Bootstrap Aggregating
- URL: http://arxiv.org/abs/2001.03988v2
- Date: Tue, 16 Jun 2020 04:51:55 GMT
- Title: Domain Adaptive Bootstrap Aggregating
- Authors: Meimei Liu and David B. Dunson
- Abstract summary: Bootstrap aggregating, or bagging, is a popular method for improving the stability of predictive algorithms.
This article proposes a domain adaptive bagging method coupled with a new iterative nearest neighbor sampler.
- Score: 5.444459446244819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When there is a distributional shift between data used to train a predictive
algorithm and current data, performance can suffer. This is known as the domain
adaptation problem. Bootstrap aggregating, or bagging, is a popular method for
improving stability of predictive algorithms, while reducing variance and
protecting against over-fitting. This article proposes a domain adaptive
bagging method coupled with a new iterative nearest neighbor sampler. The key
idea is to draw bootstrap samples from the training data in such a manner that
their distribution equals that of new testing data. The proposed approach
provides a general ensemble framework that can be applied to arbitrary
classifiers. We further modify the method to allow anomalous samples in the
test data corresponding to outliers in the training data. Theoretical support
is provided, and the approach is compared to alternatives in simulations and
real data applications.
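As a rough illustration of the key idea, here is a minimal sketch of a nearest-neighbor-steered bagging ensemble. It is not the paper's exact iterative sampler: the single-pass k-NN resampling, the base learner, and the parameters `n_estimators` and `k` are all illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def domain_adaptive_bag(X_train, y_train, X_test, n_estimators=50, k=5, seed=None):
    """Bagging ensemble whose bootstrap draws are steered toward the test data."""
    rng = np.random.default_rng(seed)
    # For every test point, find its k nearest training neighbors once.
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, neigh = nn.kneighbors(X_test)                  # shape (n_test, k)
    n, models = len(X_train), []
    for _ in range(n_estimators):
        # Each bootstrap draw: pick a random test point, then one of its k
        # neighbors, so the resampled set concentrates where the test data live.
        t = rng.integers(0, len(X_test), size=n)
        j = rng.integers(0, k, size=n)
        idx = neigh[t, j]
        models.append(DecisionTreeClassifier().fit(X_train[idx], y_train[idx]))
    return models

def predict_majority(models, X):
    # Majority vote; assumes labels are non-negative integers.
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

Anchoring every bootstrap draw at a random test point makes the resampled training sets mimic the test distribution; the paper's iterative sampler refines this matching further.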
Related papers
- Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach [14.958884168060097]
We present a novel approach for test-time adaptation via online self-training.
Our approach combines concepts in betting martingales and online learning to form a detection tool capable of reacting to distribution shifts.
Experimental results demonstrate that our approach improves test-time accuracy under distribution shifts while maintaining accuracy and calibration in their absence.
arXiv Detail & Related papers (2024-08-14T12:40:57Z)
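The entry above combines betting martingales with online learning; its exact construction is not reproduced here, but a generic betting-style test martingale on calibration p-values looks roughly like this (the score choice, λ, and the rejection threshold are illustrative assumptions):

```python
import numpy as np

def pvalue(score, calibration_scores):
    # Fraction of calibration scores at least as extreme (with a +1 smoothing term).
    return (np.sum(calibration_scores >= score) + 1) / (len(calibration_scores) + 1)

def betting_shift_detector(test_scores, calibration_scores, lam=1.0, threshold=100.0):
    """Wealth grows only if p-values stop looking uniform, signalling a shift.

    Under no shift, p-values are (approximately) uniform, so the expected
    wealth multiplier 1 + lam * (0.5 - p) equals 1 and wealth stays bounded.
    """
    wealth = 1.0
    for s in test_scores:
        p = pvalue(s, calibration_scores)
        wealth *= 1.0 + lam * (0.5 - p)   # bet that p-values are small (shifted)
        if wealth >= threshold:
            return True, wealth           # reject exchangeability: shift detected
    return False, wealth
```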
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
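For context on the entry above, here is a minimal single-site inverse propensity score estimate of an average treatment effect; the collaborative, multi-site aggregation that is the paper's contribution is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ips_ate(X, treatment, outcome):
    """Classic inverse propensity score estimate of the average treatment effect."""
    # Estimate the propensity e(x) = P(T=1 | X=x) with a simple logistic model.
    e = LogisticRegression().fit(X, treatment).predict_proba(X)[:, 1]
    e = np.clip(e, 0.01, 0.99)  # avoid extreme weights
    t, y = treatment, outcome
    return np.mean(t * y / e - (1 - t) * y / (1 - e))
```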
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard-sample mining.
Our method outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets [24.551465814633325]
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
Recent studies have found that directly training with out-of-distribution data in a semi-supervised manner can harm generalization performance.
We propose a novel method called Open-sampling, which utilizes open-set noisy labels to re-balance the class priors of the training dataset.
arXiv Detail & Related papers (2022-06-17T14:29:52Z)
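A rough sketch of the rebalancing idea in the Open-sampling entry above: open-set samples receive labels drawn from a distribution that complements the long-tailed class prior. The complementary distribution used here is an assumption for illustration, not the paper's exact recipe.

```python
import numpy as np

def complementary_label_distribution(class_counts):
    """Put more open-set mass on rare classes so the overall prior flattens."""
    counts = np.asarray(class_counts, dtype=float)
    inv = counts.max() - counts + 1.0     # rare classes get the largest weight
    return inv / inv.sum()

def assign_open_set_labels(n_open, class_counts, seed=None):
    rng = np.random.default_rng(seed)
    probs = complementary_label_distribution(class_counts)
    # Labels for open-set (OOD) images, to be mixed into the training set.
    return rng.choice(len(class_counts), size=n_open, p=probs)
```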
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
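One way to write a class-aware feature alignment loss in the spirit of the CAFA entry above is to align pseudo-labeled test features to per-class source statistics; the diagonal-covariance distance and the use of pseudo-labels here are illustrative assumptions, not necessarily the paper's exact loss.

```python
import numpy as np

def class_aware_alignment_loss(feats, pseudo_labels, class_means, class_vars):
    """Mean Mahalanobis-style distance of each feature to its class center.

    feats: (n, d) test-batch features; pseudo_labels: (n,) predicted classes;
    class_means/class_vars: (C, d) per-class source statistics (diagonal covariance).
    """
    mu = class_means[pseudo_labels]              # (n, d) center for each sample
    var = class_vars[pseudo_labels] + 1e-6       # (n, d) scale for each sample
    return np.mean(np.sum((feats - mu) ** 2 / var, axis=1))
```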
- Sampling Bias Correction for Supervised Machine Learning: A Bayesian Inference Approach with Practical Applications [0.0]
We discuss the problem of a dataset being subject to intentional sample bias, such as label imbalance, and derive a Bayesian inference solution.
We then apply this solution to binary logistic regression.
This technique is widely applicable for statistical inference on big data, from the medical sciences to image recognition to marketing.
arXiv Detail & Related papers (2022-03-11T20:46:37Z)
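A standard intercept (prior) correction for logistic regression trained under label imbalance, related in spirit to, though not necessarily identical with, the Bayesian treatment in the entry above. Here `pop_rate` is the assumed true positive rate in the population and `sample_rate` the positive rate in the biased sample.

```python
import numpy as np

def correct_intercept(intercept, sample_rate, pop_rate):
    """Shift a logistic regression intercept from the biased sampling rate of
    positives back to the known population rate (classic prior correction)."""
    return intercept - np.log((sample_rate / (1 - sample_rate)) *
                              ((1 - pop_rate) / pop_rate))
```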
- Adaptive Conformal Inference Under Distribution Shift [0.0]
We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion.
Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method.
We test our method, adaptive conformal inference, on two real world datasets and find that its predictions are robust to visible and significant distribution shifts.
arXiv Detail & Related papers (2021-06-01T01:37:32Z)
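The core of adaptive conformal inference is a simple online update of the miscoverage level: after each point, nudge alpha_t toward the target alpha depending on whether the point was covered. A minimal sketch (the fixed calibration set and the clipping of the quantile level are simplifications):

```python
import numpy as np

def adaptive_conformal_stream(alpha, gamma, calib_scores, test_scores):
    """Track miscoverage online: the prediction set at step t is
    {y : score <= (1 - alpha_t)-quantile of calibration scores}."""
    alpha_t = alpha
    coverage = []
    for s in test_scores:
        q = np.quantile(calib_scores, min(max(1 - alpha_t, 0.0), 1.0))
        err = float(s > q)                 # 1 if this point fell outside the set
        coverage.append(1 - err)
        alpha_t += gamma * (alpha - err)   # a miss shrinks alpha_t -> wider sets
    return alpha_t, np.mean(coverage)
```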
- Domain Impression: A Source Data Free Domain Adaptation Method [27.19677042654432]
Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels.
This paper proposes a domain adaptation technique that does not need any source data.
Instead of the source data, we are only provided with a classifier that is trained on the source data.
arXiv Detail & Related papers (2021-02-17T19:50:49Z)
- Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics [85.75352990739154]
We propose a novel domain adaptation method for the source-free setting.
We use batch normalization statistics stored in the pretrained model to approximate the distribution of unobserved source data.
Our method achieves competitive performance with state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-01-19T14:22:33Z)
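A toy version of the distribution-matching idea in the entry above: treat each BatchNorm channel as a Gaussian with the stored running mean/variance and penalize the KL divergence of the target batch statistics from the stored source statistics. Layer traversal and loss weighting here are illustrative, not the paper's exact objective.

```python
import numpy as np

def gaussian_kl(mu_t, var_t, mu_s, var_s):
    """Per-channel KL( N(mu_t, var_t) || N(mu_s, var_s) ), summed over channels."""
    var_t, var_s = var_t + 1e-6, var_s + 1e-6
    return np.sum(0.5 * (np.log(var_s / var_t)
                         + (var_t + (mu_t - mu_s) ** 2) / var_s - 1.0))

def bn_matching_loss(target_stats, source_stats):
    # Each entry: (mean, var) arrays of per-channel statistics for one BN layer;
    # source_stats come from the pretrained model's stored running statistics.
    return sum(gaussian_kl(mt, vt, ms, vs)
               for (mt, vt), (ms, vs) in zip(target_stats, source_stats))
```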
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
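The BREEDS construction groups subclasses under superclasses and shows a model different subpopulations at train and test time. A toy version of that split follows; the two-superclass hierarchy is a made-up example, whereas the paper derives its splits from ImageNet's class structure.

```python
import numpy as np

def subpopulation_split(hierarchy, seed=None):
    """Send half of each superclass's subclasses to train and half to test, so
    train and test share labels (superclasses) but not subpopulations."""
    rng = np.random.default_rng(seed)
    train, test = {}, {}
    for superclass, subclasses in hierarchy.items():
        order = rng.permutation(len(subclasses))
        half = len(subclasses) // 2
        train[superclass] = [subclasses[i] for i in order[:half]]
        test[superclass] = [subclasses[i] for i in order[half:]]
    return train, test

# Toy hierarchy (illustrative only):
hierarchy = {"dog": ["beagle", "husky", "poodle", "corgi"],
             "cat": ["siamese", "persian", "tabby", "sphynx"]}
train_split, test_split = subpopulation_split(hierarchy, seed=0)
```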
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.