A Batch Normalization Classifier for Domain Adaptation
- URL: http://arxiv.org/abs/2103.11642v1
- Date: Mon, 22 Mar 2021 08:03:44 GMT
- Title: A Batch Normalization Classifier for Domain Adaptation
- Authors: Matthew R. Behrend and Sean M. Robinson
- Abstract summary: Adapting a model to perform well on unforeseen data outside its training set is a common problem that continues to motivate new approaches.
We demonstrate that application of batch normalization in the output layer, prior to softmax activation, results in improved generalization across visual data domains in a refined ResNet model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adapting a model to perform well on unforeseen data outside its training set
is a common problem that continues to motivate new approaches. We demonstrate
that application of batch normalization in the output layer, prior to softmax
activation, results in improved generalization across visual data domains in a
refined ResNet model. The approach adds negligible computational complexity yet
outperforms many domain adaptation methods that explicitly learn to align data
domains. We benchmark this technique on the Office-Home dataset and show that
batch normalization is competitive with other leading methods. We show that
this method is not sensitive to the presence of source data during adaptation,
and we further show that the trained tensor distributions tend toward sparsity.
Code is available at https://github.com/matthewbehrend/BNC
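A minimal PyTorch sketch of the idea described above: batch normalization applied to the output (logit) layer of a ResNet classifier, prior to the softmax. The ResNet-50 backbone and the 65-class head (Office-Home) are assumptions for illustration; the authors' implementation is in the linked repository.

```python
import torch
import torch.nn as nn
from torchvision import models

class BNClassifier(nn.Module):
    """ResNet feature extractor with batch normalization on the output
    layer, applied prior to softmax activation."""
    def __init__(self, num_classes: int = 65):  # Office-Home has 65 classes
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.bn = nn.BatchNorm1d(num_classes)   # normalize the logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)
        return self.bn(self.fc(f))  # softmax / cross-entropy applied downstream
```

Because BatchNorm maintains running estimates of the logit mean and variance, forwarding unlabeled target batches in training mode re-centers the decision statistics for the new domain without an explicit alignment loss, which is consistent with the abstract's claim that the method is insensitive to whether source data is present during adaptation.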
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
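The summary gives no implementation details; below is a loose, hypothetical sketch of one way a data-driven, domain-independent augmentation can operate in the span of a batch: jointly decompose inputs and targets, keep the dominant singular directions, and randomly shrink the rest. The function name, the rank cutoff k, and the Beta-distributed scaling are our assumptions, not necessarily FOMA's exact procedure.

```python
import torch

def foma_style_augment(X: torch.Tensor, Y: torch.Tensor, k: int = 8,
                       alpha: float = 1.0):
    """Hypothetical sketch: jointly decompose a batch of inputs X (B, ...)
    and regression targets Y (B, d_y), keep the top-k singular directions,
    and shrink the remaining ones by a random factor lambda, so augmented
    samples stay near the batch's own manifold."""
    A = torch.cat([X.flatten(1), Y.flatten(1)], dim=1)   # (B, d_x + d_y)
    U, S, Vh = torch.linalg.svd(A, full_matrices=False)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    S_scaled = S.clone()
    S_scaled[k:] = lam * S_scaled[k:]                    # shrink tail directions
    A_aug = U @ torch.diag(S_scaled) @ Vh
    d_x = X.flatten(1).shape[1]
    return A_aug[:, :d_x].reshape(X.shape), A_aug[:, d_x:].reshape(Y.shape)
```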
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- Continuous Unsupervised Domain Adaptation Using Stabilized Representations and Experience Replay [23.871860648919593]
We introduce an algorithm for tackling the problem of unsupervised domain adaptation (UDA) in continual learning (CL) scenarios.
Our solution is based on stabilizing the learned internal distribution to enhance the model's generalization to new domains.
We leverage experience replay to overcome the problem of catastrophic forgetting, where the model loses previously acquired knowledge when learning new tasks.
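Experience replay itself is a standard mechanism; the sketch below shows a minimal reservoir-sampling buffer whose stored samples from earlier domains can be mixed into each new-domain batch. It illustrates the general idea only, not this paper's specific algorithm, and it assumes all stored tensors share one shape.

```python
import random
import torch

class ReplayBuffer:
    """Minimal reservoir-sampling replay buffer: a bounded memory of past
    samples used to mitigate catastrophic forgetting."""
    def __init__(self, capacity: int = 1000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x: torch.Tensor) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(x)
        else:
            j = random.randrange(self.seen)      # keep each sample w.p. cap/seen
            if j < self.capacity:
                self.data[j] = x

    def sample(self, n: int) -> torch.Tensor:
        # assumes the buffer is non-empty and tensors share one shape
        return torch.stack(random.sample(self.data, min(n, len(self.data))))
```

During training on a new domain, each batch would be concatenated with buffer.sample(m) before the update, so gradients keep reflecting earlier domains.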
arXiv Detail & Related papers (2024-01-31T05:09:14Z)
- Domain Generalization by Rejecting Extreme Augmentations [13.114457707388283]
We show that for out-of-domain and domain generalization settings, data augmentation can provide a conspicuous and robust improvement in performance.
We propose a simple training procedure: (i) use uniform sampling over standard data augmentation transformations; (ii) increase the strength of the transformations to account for the higher data variance expected when working out-of-domain; and (iii) devise a new reward function to reject extreme transformations that can harm training.
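The three-step recipe can be made concrete; the sketch below is a hypothetical rendering in which each transform is assumed to accept a strength argument and the reward is the model's confidence on the true label. The reward definition, threshold, and retry count are assumptions; the paper's reward function may differ.

```python
import random
import torch

def augment_with_rejection(x, y, transforms, model,
                           strength=1.5, min_reward=0.1, tries=10):
    """Hypothetical sketch of steps (i)-(iii): uniformly sample a transform,
    apply it at increased strength, and reject it when a reward (here, the
    model's probability of the true class y) indicates the transform has
    destroyed the label-relevant content."""
    for _ in range(tries):
        t = random.choice(transforms)                 # (i) uniform sampling
        x_aug = t(x, strength)                        # (ii) stronger transform
        with torch.no_grad():
            probs = model(x_aug.unsqueeze(0)).softmax(dim=-1)
        if probs[0, y].item() >= min_reward:          # (iii) reject extremes
            return x_aug
    return x                                          # fall back: clean sample
```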
arXiv Detail & Related papers (2023-10-10T14:46:22Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
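The exact CAFA loss is not given in this summary; the following is a loose sketch of class-aware alignment in general: pseudo-label each target feature by its nearest source class centroid and pull it toward that centroid, so adaptation respects class structure. The nearest-centroid formulation is our assumption, not necessarily the published loss.

```python
import torch
import torch.nn.functional as F

def class_aware_alignment_loss(target_feats: torch.Tensor,
                               class_means: torch.Tensor) -> torch.Tensor:
    """Loose sketch: assign pseudo-labels from distances to per-class source
    centroids, then penalize each target feature's distance to its assigned
    centroid so alignment stays class-discriminative."""
    dists = torch.cdist(target_feats, class_means)  # (B, num_classes)
    pseudo = dists.argmin(dim=1)                    # nearest-centroid labels
    return F.mse_loss(target_feats, class_means[pseudo])
```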
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
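In the source-free setting the source feature distribution must be approximated from the target side. The sketch below fits a simple class-conditional Gaussian to pseudo-labeled target features as a stand-in; the Gaussian form and the pseudo-labeling are illustrative assumptions, not the exact SFDA-DE estimator.

```python
import torch

def estimate_class_conditional_gaussians(feats: torch.Tensor,
                                         pseudo_labels: torch.Tensor,
                                         num_classes: int):
    """Illustrative sketch: approximate each (inaccessible) source class
    distribution with a Gaussian fitted to pseudo-labeled target features;
    surrogate 'source' features can then be sampled for alignment.
    Assumes every class receives at least one pseudo-label."""
    d = feats.shape[1]
    means, covs = [], []
    for c in range(num_classes):
        fc = feats[pseudo_labels == c]
        means.append(fc.mean(dim=0))
        covs.append(torch.cov(fc.T) + 1e-4 * torch.eye(d))  # regularized
    return torch.stack(means), torch.stack(covs)
```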
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- The Norm Must Go On: Dynamic Unsupervised Domain Adaptation by Normalization [10.274423413222763]
Domain adaptation is crucial to adapt a learned model to new scenarios, such as domain shifts or changing data distributions.
Current approaches usually require a large amount of labeled or unlabeled data from the shifted domain.
We propose Dynamic Unsupervised Adaptation (DUA) to overcome this problem.
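Normalization-based adaptation needs no labels and very little data. The sketch below freezes all learned weights and lets only the BatchNorm running statistics absorb the target domain, with a momentum that decays as batches arrive; the schedule constants are assumptions rather than DUA's published values.

```python
import torch
import torch.nn as nn

def adapt_bn_statistics(model: nn.Module, target_batches,
                        momentum0: float = 0.1, decay: float = 0.94):
    """Sketch: update only BatchNorm running means/variances on unlabeled
    target batches, decaying the momentum so early batches matter most."""
    bn_layers = [m for m in model.modules()
                 if isinstance(m, nn.modules.batchnorm._BatchNorm)]
    model.eval()
    momentum = momentum0
    with torch.no_grad():
        for x in target_batches:
            for bn in bn_layers:
                bn.train()            # enable running-stat updates
                bn.momentum = momentum
            model(x)                  # forward pass refreshes the statistics
            for bn in bn_layers:
                bn.eval()
            momentum = max(momentum * decay, 0.005)
    return model
```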
arXiv Detail & Related papers (2021-12-01T12:43:41Z)
- VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Domain Impression: A Source Data Free Domain Adaptation Method [27.19677042654432]
Unsupervised domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels.
This paper proposes a domain adaptation technique that does not need any source data.
Instead of the source data, we are only provided with a classifier that is trained on the source data.
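With only a frozen source classifier available, one family of approaches synthesizes proxy "source-like" data from the classifier itself. The sketch below uses plain model inversion for illustration; the paper's own method is generative and more elaborate, so treat this as a stand-in for the general idea only.

```python
import torch
import torch.nn.functional as F

def synthesize_proxy_samples(classifier, num_classes: int,
                             shape=(3, 224, 224), steps: int = 200,
                             lr: float = 0.1) -> torch.Tensor:
    """Illustrative model-inversion sketch: optimize random inputs until the
    frozen source classifier confidently assigns one label per class,
    yielding proxy samples that stand in for the missing source data."""
    x = torch.randn(num_classes, *shape, requires_grad=True)
    labels = torch.arange(num_classes)
    opt = torch.optim.Adam([x], lr=lr)
    classifier.eval()
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(classifier(x), labels).backward()
        opt.step()
    return x.detach()
```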
arXiv Detail & Related papers (2021-02-17T19:50:49Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
- Unshuffling Data for Improved Generalization [65.57124325257409]
Generalization beyond the training distribution is a core challenge in machine learning.
We show that partitioning the data into well-chosen, non-i.i.d. subsets treated as multiple training environments can guide the learning of models with better out-of-distribution generalization.
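The key operation is the partition itself; the sketch below groups training samples into non-i.i.d. environments by a metadata key (a hypothetical field such as annotator ID or collection site), after which training can penalize behavior that varies across environments. The penalty choice is left open and is not the paper's exact objective.

```python
from collections import defaultdict

def partition_into_environments(samples: list, env_key: str) -> list:
    """Sketch: split the training set into well-chosen, non-i.i.d. subsets
    ('environments') keyed on metadata. `env_key` is a hypothetical metadata
    field; each sample is assumed to be a dict-like record containing it."""
    envs = defaultdict(list)
    for sample in samples:
        envs[sample[env_key]].append(sample)
    return list(envs.values())
```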
arXiv Detail & Related papers (2020-02-27T03:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.