Incremental Open-set Domain Adaptation
- URL: http://arxiv.org/abs/2409.00530v1
- Date: Sat, 31 Aug 2024 19:37:54 GMT
- Title: Incremental Open-set Domain Adaptation
- Authors: Sayan Rakshit, Hmrishav Bandyopadhyay, Nibaran Das, Biplab Banerjee
- Abstract summary: Catastrophic forgetting makes neural network models unstable when learning visual domains consecutively.
We develop a forgetting-resistant incremental learning strategy for image classification.
- Score: 27.171935835686117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Catastrophic forgetting makes neural network models unstable when learning visual domains consecutively: when trained on new domains, a model's performance on previously learnt domains degrades sharply. We highlight this weakness of current neural network models and develop a forgetting-resistant incremental learning strategy. Specifically, we propose a new unsupervised incremental open-set domain adaptation (IOSDA) problem for image classification. Open-set domain adaptation adds complexity to the incremental domain adaptation problem, since each target domain has more classes than the source domain. In IOSDA, the model trains on a stream of domains phase by phase over incremental time steps. At inference, test data come from all target domains without revealing their domain identities. We propose IOSDA-Net, a two-stage learning pipeline, to solve this problem. The first module replicates prior domains from random noise using a generative framework, creating a pseudo source domain. In the second stage, this pseudo source is adapted to the current target domain. We evaluate our model on Office-Home, DomainNet, and UPRN-RSDA, a newly curated optical remote sensing dataset.
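The abstract describes the two-stage pipeline only at a high level. As a rough illustration, here is a minimal sketch of the idea (replay prior domains from noise with a generative model, then adapt the pseudo source to the current target); the module names, loss choices, and PyTorch framing are assumptions for illustration, not the paper's actual IOSDA-Net implementation.

```python
# Hypothetical sketch. Stage 1: a conditional generator replays earlier
# domains from noise to form a pseudo source. Stage 2: the classifier trains
# on the replayed pseudo source while adapting to the unlabeled target batch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayGenerator(nn.Module):
    """(noise, class label) -> feature vector mimicking prior domains."""
    def __init__(self, noise_dim=64, n_classes=10, feat_dim=256):
        super().__init__()
        self.embed = nn.Embedding(n_classes, noise_dim)
        self.net = nn.Sequential(nn.Linear(2 * noise_dim, 512), nn.ReLU(),
                                 nn.Linear(512, feat_dim))

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def adapt_phase(gen, clf, target_feats, n_classes=10, noise_dim=64, steps=100):
    """One incremental phase: replay, then adapt. The entropy term is a
    stand-in for the paper's unspecified open-set adaptation losses."""
    opt = torch.optim.Adam(clf.parameters(), lr=1e-4)
    for _ in range(steps):
        y = torch.randint(0, n_classes, (64,))
        z = torch.randn(64, noise_dim)
        pseudo_src = gen(z, y).detach()             # stage 1: pseudo source
        ce = F.cross_entropy(clf(pseudo_src), y)    # retain prior domains
        p = F.softmax(clf(target_feats), dim=1)     # unlabeled target batch
        entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
        loss = ce + 0.1 * entropy                   # stage 2: adaptation
        opt.zero_grad(); loss.backward(); opt.step()
```

For example, `adapt_phase(ReplayGenerator(), nn.Linear(256, 10), torch.randn(64, 256))` runs one phase on dummy features standing in for a real feature extractor and data stream.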
Related papers
- Domain Adaptation from Scratch [24.612696638386623]
We present a new learning setup, "domain adaptation from scratch", which we believe to be crucial for extending the reach of NLP to sensitive domains.
In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain.
Our study compares several approaches for this challenging setup, ranging from data selection and domain adaptation algorithms to active learning paradigms.
arXiv Detail & Related papers (2022-09-02T05:55:09Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
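The title points to domain-incremental self-distillation; a generic sketch of that idea (illustrative, not the paper's benchmark code) regularizes the current counting model toward the frozen previous-phase model:

```python
# The student fits the new domain's ground-truth density while being pulled
# toward the frozen teacher (previous-phase model) to reduce forgetting.
import torch.nn.functional as F

def lifelong_counting_loss(student_map, teacher_map, gt_density, alpha=0.5):
    """All inputs are density maps of shape (B, 1, H, W)."""
    count_loss = F.mse_loss(student_map, gt_density)               # new domain
    distill_loss = F.mse_loss(student_map, teacher_map.detach())   # forget less
    return count_loss + alpha * distill_loss
```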
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we generate target-specific pseudo labels while suppressing high-entropy regions.
In the second stage, we adapt the network for task-specific representation.
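A minimal sketch of the first-stage idea, entropy-based suppression when forming pseudo labels (the thresholding scheme here is an assumption, not the paper's settings):

```python
# Form target pseudo labels, masking out high-entropy (uncertain) pixels.
import torch
import torch.nn.functional as F

def pseudo_labels_with_entropy_mask(logits, frac=0.5):
    """logits: (B, C, H, W) from the source-trained model on target images.
    Returns hard pseudo labels and a mask excluding uncertain pixels."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # (B, H, W)
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))  # uniform case
    keep = entropy < frac * max_entropy        # suppress high-entropy regions
    return probs.argmax(dim=1), keep           # labels (B, H, W), mask
```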
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Domain Adaptation via Prompt Learning [39.97105851723885]
Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain.
We introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL).
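A rough sketch of the prompt-learning idea: learnable domain-specific context tokens combined with shared class embeddings and scored against image features, CLIP-style. The shapes and the pooled stand-in for a text encoder are assumptions, not DAPL's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainPrompt(nn.Module):
    def __init__(self, n_domains=2, n_classes=10, ctx_len=4, dim=512):
        super().__init__()
        # Learnable per-domain context; class embeddings are shared.
        self.ctx = nn.Parameter(torch.randn(n_domains, ctx_len, dim) * 0.02)
        self.cls_emb = nn.Parameter(torch.randn(n_classes, dim) * 0.02)

    def forward(self, img_feat, domain_id):
        # Pool the domain's context tokens and offset every class embedding,
        # standing in for a text encoder over [context; class-name] tokens.
        text_feat = self.ctx[domain_id].mean(dim=0) + self.cls_emb  # (C, dim)
        img = F.normalize(img_feat, dim=-1)
        txt = F.normalize(text_feat, dim=-1)
        return img @ txt.t()    # (B, C) similarity logits
```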
arXiv Detail & Related papers (2022-02-14T13:25:46Z)
- Domain-Invariant Proposals based on a Balanced Domain Classifier for Object Detection [8.583307102907295]
Object recognition from images means automatically finding objects of interest and returning their category and location.
Benefiting from deep learning research, such as convolutional neural networks (CNNs) and generative adversarial networks, performance in this field has improved significantly.
However, mismatched distributions, i.e., domain shifts, lead to a significant performance drop.
arXiv Detail & Related papers (2022-02-12T00:21:27Z)
- FRIDA -- Generative Feature Replay for Incremental Domain Adaptation [34.00059350161178]
We propose a novel framework called Feature Replay based Incremental Domain Adaptation (FRIDA).
For domain alignment, we propose a simple extension of the popular domain adversarial neural network (DANN) called DANN-IB.
Experiment results on the Office-Home, Office-CalTech, and DomainNet datasets confirm that FRIDA maintains a better stability-plasticity trade-off than existing methods.
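DANN, which DANN-IB extends, rests on a gradient reversal layer (GRL): the domain discriminator's gradient is flipped before it reaches the feature extractor, pushing features to become domain-confusing. A standard GRL sketch in PyTorch (the DANN-IB extension itself is not sketched here):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flipped, scaled gradient

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# The discriminator sees reversed features, so training it to classify
# domains simultaneously trains the feature extractor to confuse them:
disc = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))
feats = torch.randn(8, 256, requires_grad=True)
domain_logits = disc(grad_reverse(feats))
```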
arXiv Detail & Related papers (2021-12-28T22:24:32Z)
- Self-supervised Autoregressive Domain Adaptation for Time Series Data [9.75443057146649]
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications.
However, these approaches may have limited performance on time series data.
We propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework to address these limitations.
arXiv Detail & Related papers (2021-11-29T08:17:23Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, where model parameters are adapted to individual samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
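"Parameters adapted to samples" suggests input-conditioned weights; a minimal mixture-of-experts layer in that spirit (purely illustrative, not the paper's dynamic transfer architecture):

```python
import torch
import torch.nn as nn

class DynamicLinear(nn.Module):
    """A router mixes expert layers per sample, giving sample-dependent weights."""
    def __init__(self, dim=256, n_classes=10, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, n_classes)
                                      for _ in range(n_experts)])
        self.router = nn.Linear(dim, n_experts)

    def forward(self, x):                        # x: (B, dim)
        gate = self.router(x).softmax(dim=-1)    # (B, E) mixing weights
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C)
        return (gate.unsqueeze(-1) * outs).sum(dim=1)            # (B, C)
```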
arXiv Detail & Related papers (2021-03-19T01:22:12Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
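Schematically, domain-adversarial image generation trains a perturbation network to transform inputs so that a domain classifier is fooled while the label classifier still predicts correctly. A sketch of such an objective (module names and weighting are assumptions, not DDAIG's code):

```python
import torch.nn.functional as F

def ddaig_style_loss(x, y, domain_y, perturb_net, label_clf, domain_clf, eps=0.3):
    x_new = x + eps * perturb_net(x)                   # synthesized "new domain"
    cls_loss = F.cross_entropy(label_clf(x_new), y)    # preserve the label
    fool_loss = -F.cross_entropy(domain_clf(x_new), domain_y)  # confuse domains
    return cls_loss + fool_loss
```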
arXiv Detail & Related papers (2020-03-12T23:17:47Z)