Practicality of generalization guarantees for unsupervised domain
adaptation with neural networks
- URL: http://arxiv.org/abs/2303.08720v1
- Date: Wed, 15 Mar 2023 16:05:05 GMT
- Title: Practicality of generalization guarantees for unsupervised domain
adaptation with neural networks
- Authors: Adam Breitholtz and Fredrik D. Johansson
- Abstract summary: We evaluate existing bounds from the literature with potential to satisfy our desiderata on domain adaptation image classification tasks.
We find that all bounds are vacuous and that sample generalization terms account for much of the observed looseness.
We find that, when domain overlap can be assumed, a simple importance weighting extension of previous work provides the tightest estimable bound.
- Score: 7.951847862547378
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding generalization is crucial to confidently engineer and deploy
machine learning models, especially when deployment implies a shift in the data
domain. For such domain adaptation problems, we seek generalization bounds
which are tractably computable and tight. If these desiderata can be met,
the bounds can serve as guarantees for adequate performance in deployment.
However, in applications where deep neural networks are the models of choice,
deriving results which fulfill these desiderata remains an unresolved
challenge; most existing bounds are either vacuous or have non-estimable
terms, even in favorable conditions. In this work, we evaluate existing
bounds from the
literature with potential to satisfy our desiderata on domain adaptation image
classification tasks, where deep neural networks are preferred. We find that
all bounds are vacuous and that sample generalization terms account for much of
the observed looseness, especially when these terms interact with measures of
domain shift. To overcome this and arrive at the tightest possible results, we
combine each bound with recent data-dependent PAC-Bayes analysis, greatly
improving the guarantees. We find that, when domain overlap can be assumed, a
simple importance weighting extension of previous work provides the tightest
estimable bound. Finally, we study which terms dominate the bounds and identify
possible directions for further improvement.
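For intuition, the kind of guarantee sought here can be written schematically as an importance-weighted PAC-Bayes bound. The following is a generic illustration under a bounded weighted loss, not the paper's exact statement: with probability at least 1 - \delta over a source sample of size n, for every posterior Q over hypotheses with prior P,

    R_T(Q) \le \frac{1}{n} \sum_{i=1}^{n} w(x_i) \, \mathbb{E}_{h \sim Q}\big[\ell(h(x_i), y_i)\big]
              + b \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}},
    \qquad w(x) = \frac{p_T(x)}{p_S(x)},

where R_T is the target-domain risk, the first term is the importance-weighted empirical source risk, and b upper-bounds the weighted loss w(x)\ell(\cdot). The weights w are well defined only when the target distribution is absolutely continuous with respect to the source, which is the domain-overlap assumption under which the importance-weighting extension is tightest. A sketch of how such weights can be estimated in practice appears after the related-papers list.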
Related papers
- Set Valued Predictions For Robust Domain Generalization [10.517157722122452]
We argue that set-valued predictors could be leveraged to enhance robustness across unseen domains.
We introduce a theoretical framework defining successful set prediction in the Domain Generalization setting.
We propose a practical optimization method compatible with modern learning architectures that balances robust performance on unseen domains with small prediction set sizes.
arXiv Detail & Related papers (2025-07-03T19:57:09Z)
- Robust Unsupervised Domain Adaptation by Retaining Confident Entropy via Edge Concatenation [7.953644697658355]
Unsupervised domain adaptation can mitigate the need for extensive pixel-level annotations to train semantic segmentation networks.
We introduce a novel approach to domain adaptation, leveraging the synergy of internal and external information within entropy-based adversarial networks.
We devised a probability-sharing network that integrates diverse information for more effective segmentation.
arXiv Detail & Related papers (2023-10-11T02:50:16Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
- Shallow Features Guide Unsupervised Domain Adaptation for Semantic Segmentation at Class Boundaries [21.6953660626021]
Deep neural networks fail to generalize towards new domains when performing synthetic-to-real adaptation.
In this work, we present a novel low-level adaptation strategy that allows us to obtain sharp predictions.
We also introduce an effective data augmentation that alleviates the noise typically present at semantic boundaries when employing pseudo-labels for self-training.
arXiv Detail & Related papers (2021-10-06T15:05:48Z)
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of transfer experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- SAND-mask: An Enhanced Gradient Masking Strategy for the Discovery of Invariances in Domain Generalization [7.253255826783766]
We propose a masking strategy that determines a continuous weight based on the agreement of the gradients flowing through each edge of the network.
SAND-mask is validated over the Domainbed benchmark for domain generalization.
arXiv Detail & Related papers (2021-06-04T05:20:54Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs [2.1320960069210484]
Neural networks are commonly used in safety-critical real-world applications, where adversarial examples can cause failures.
Proving that no such adversarial examples exist, or providing a concrete instance when they do, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
arXiv Detail & Related papers (2020-06-16T10:00:33Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore domain-wise convolutional channel activation for deep domain adaptation networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
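As a practical complement to the schematic bound above, the density ratio w(x) = p_T(x)/p_S(x) is commonly estimated with a probabilistic domain classifier. Below is a minimal Python sketch of this standard density-ratio trick, assuming the domain-overlap condition from the abstract; the function name and the clipping constant are illustrative choices, not details taken from the paper.

    # Estimate w(x) = p_T(x) / p_S(x) on the source points via a domain
    # classifier. A standard density-ratio estimator, not the paper's own.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target, clip=50.0):
        """Train a classifier to distinguish target (label 1) from source
        (label 0); by Bayes' rule,
            p_T(x) / p_S(x) = (n_S / n_T) * P(T | x) / P(S | x).
        """
        X = np.vstack([X_source, X_target])
        d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        clf = LogisticRegression(max_iter=1000).fit(X, d)
        p_t = clf.predict_proba(X_source)[:, 1]  # P(target | x) per source point
        ratio = (len(X_source) / len(X_target)) * p_t / (1.0 - p_t)
        # Clip so the weighted loss stays bounded, as the constant b in the
        # schematic bound requires.
        return np.clip(ratio, 0.0, clip)

The weighted empirical source risk is then the mean of w(x_i) times the per-example loss, which is the quantity the importance-weighted bound controls.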
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.