An In-Depth Analysis of Adversarial Discriminative Domain Adaptation for Digit Classification
- URL: http://arxiv.org/abs/2412.19391v2
- Date: Tue, 07 Jan 2025 03:15:49 GMT
- Authors: Eugene Choi, Julian Rodriguez, Edmund Young
- Abstract summary: We implement a specific adversarial learning technique known as Adversarial Discriminative Domain Adaptation (ADDA).
Our results demonstrate that ADDA significantly improves accuracy across certain domain shifts with minimal impact on in-domain performance.
- Abstract: Domain adaptation is an active area of research driven by the growing demand for robust machine learning models that perform well on real-world data. Adversarial learning for deep neural networks (DNNs) has emerged as a promising approach to improving generalization ability, particularly for image classification. In this paper, we implement a specific adversarial learning technique known as Adversarial Discriminative Domain Adaptation (ADDA) and replicate digit classification experiments from the original ADDA paper. We extend their findings by examining a broader range of domain shifts and provide a detailed analysis of in-domain classification accuracy post-ADDA. Our results demonstrate that ADDA significantly improves accuracy across certain domain shifts with minimal impact on in-domain performance. Furthermore, we provide qualitative analysis and propose potential explanations for ADDA's limitations in less successful domain shifts. Code is available at https://github.com/eugenechoi2004/COS429_FINAL.
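The ADDA procedure the abstract describes has two stages: first, a source encoder and classifier are pretrained on labeled source data; then, with the source encoder frozen, a target encoder (initialized from the source one) is trained adversarially against a domain discriminator so that target features become indistinguishable from source features, after which the source classifier is reused on the adapted target features. The toy NumPy sketch below illustrates only the adversarial stage; the linear encoders, the logistic discriminator, the synthetic Gaussian data, and all hyperparameters are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the target domain is a shifted, rescaled version of the source.
n, d_in, d_feat = 200, 5, 3
Xs = rng.normal(0.0, 1.0, (n, d_in))   # source inputs
Xt = rng.normal(0.5, 1.2, (n, d_in))   # target inputs (domain shift)

# Stage 1 (assumed already done): a pretrained, frozen source encoder M_s.
Ms = rng.normal(size=(d_in, d_feat))

# Stage 2: adversarial adaptation.
Mt = Ms.copy()                          # target encoder, initialized from M_s
w = rng.normal(size=d_feat) * 0.1       # logistic domain discriminator
b = 0.0
lr = 0.05

for step in range(300):
    zs, zt = Xs @ Ms, Xt @ Mt           # features from both encoders
    # Discriminator update: label source features 1, target features 0.
    ps, pt = sigmoid(zs @ w + b), sigmoid(zt @ w + b)
    grad_w = (zs.T @ (ps - 1.0) + zt.T @ pt) / (2 * n)
    grad_b = (np.sum(ps - 1.0) + np.sum(pt)) / (2 * n)
    w -= lr * grad_w
    b -= lr * grad_b
    # Target-encoder update with the inverted label (target = 1),
    # pushing M_t to fool the discriminator (the GAN-style ADDA objective).
    pt = sigmoid((Xt @ Mt) @ w + b)
    grad_Mt = Xt.T @ np.outer(pt - 1.0, w) / n
    Mt -= lr * grad_Mt

# After adaptation, C(M_t(x_t)) would be evaluated with the frozen
# source classifier C; here we just inspect the discriminator's output.
pt_final = sigmoid((Xt @ Mt) @ w + b)
```

If adaptation succeeds, the discriminator's output on target features drifts toward chance (0.5), meaning target features are no longer separable from source features; the inverted-label encoder loss is the standard GAN trick for stable gradients used in the original ADDA formulation.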
Related papers
- AD-Aligning: Emulating Human-like Generalization for Cognitive Domain Adaptation in Deep Learning [3.3543468626874486]
Domain adaptation is pivotal for enabling deep learning models to generalize across diverse domains.
We introduce AD-Aligning, a novel approach that combines adversarial training with source-target domain alignment.
Our findings highlight AD-Aligning's ability to emulate the nuanced cognitive processes inherent in human perception.
arXiv Detail & Related papers (2024-05-15T02:34:06Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify these classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z)
- ELDA: Using Edges to Have an Edge on Semantic Segmentation Based UDA [11.985940818257786]
We introduce Edge Learning based Domain Adaptation (ELDA), a framework which incorporates edge information into its training process to serve as a type of domain invariant information.
We show that ELDA is able to better separate the feature distributions of different classes.
arXiv Detail & Related papers (2022-11-16T12:49:33Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Information-theoretic regularization for Multi-source Domain Adaptation [5.444459446244819]
The adversarial learning strategy has demonstrated remarkable performance on single-source Domain Adaptation (DA) problems.
It has recently been applied to Multi-source DA (MDA) problems.
Here we adopt an information-theoretic approach to identify and resolve the potential adverse effect of the multiple domain discriminators on MDA.
arXiv Detail & Related papers (2021-04-04T09:11:35Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A²KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
- Multi-source Domain Adaptation in the Deep Learning Era: A Systematic Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.