Multi-Domain Incremental Learning for Semantic Segmentation
- URL: http://arxiv.org/abs/2110.12205v1
- Date: Sat, 23 Oct 2021 12:21:42 GMT
- Title: Multi-Domain Incremental Learning for Semantic Segmentation
- Authors: Prachi Garg, Rohit Saluja, Vineeth N Balasubramanian, Chetan Arora,
Anbumani Subramanian, C.V. Jawahar
- Abstract summary: We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features.
We demonstrate the effectiveness of our proposed solution on domain incremental settings pertaining to real-world driving scenes from roads of Germany (Cityscapes), the United States (BDD100k), and India (IDD).
- Score: 42.30646442211311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent efforts in multi-domain learning for semantic segmentation attempt to
learn multiple geographical datasets in a universal, joint model. A simple
fine-tuning experiment performed sequentially on three popular road scene
segmentation datasets demonstrates that existing segmentation frameworks fail
at incrementally learning on a series of visually disparate geographical
domains. When learning a new domain, the model catastrophically forgets
previously learned knowledge. In this work, we pose the problem of multi-domain
incremental learning for semantic segmentation. Given a model trained on a
particular geographical domain, the goal is to (i) incrementally learn a new
geographical domain, (ii) while retaining performance on the old domain, (iii)
given that the previous domain's dataset is not accessible. We propose a
dynamic architecture that assigns universally shared, domain-invariant
parameters to capture homogeneous semantic features present in all domains,
while dedicated domain-specific parameters learn the statistics of each domain.
Our novel optimization strategy helps achieve a good balance between retention
of old knowledge (stability) and acquiring new knowledge (plasticity). We
demonstrate the effectiveness of our proposed solution on domain incremental
settings pertaining to real-world driving scenes from roads of Germany
(Cityscapes), the United States (BDD100k), and India (IDD).
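The abstract's core idea, shared domain-invariant parameters plus lightweight domain-specific parameters, can be illustrated with a minimal sketch. This is not the paper's implementation; the layer, names, and affine-style domain parameters are illustrative assumptions, in the spirit of domain-specific normalization layers:

```python
import numpy as np

class MultiDomainLayer:
    """Toy sketch of the shared/domain-specific parameter split:
    one shared (domain-invariant) weight matrix reused by all domains,
    plus small per-domain affine parameters (names are illustrative)."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Shared, domain-invariant weights: trained once, reused everywhere.
        self.shared_w = rng.standard_normal((dim, dim))
        # Domain name -> (scale, shift): the only per-domain capacity.
        self.domain_params = {}

    def add_domain(self, name):
        # Incrementally adding a domain only allocates new lightweight
        # parameters; the shared weights are left untouched.
        dim = self.shared_w.shape[0]
        self.domain_params[name] = (np.ones(dim), np.zeros(dim))

    def forward(self, x, domain):
        scale, shift = self.domain_params[domain]
        return (x @ self.shared_w) * scale + shift

layer = MultiDomainLayer(dim=4)
layer.add_domain("cityscapes")
x = np.ones(4)
y_old = layer.forward(x, "cityscapes")

# Add a new domain incrementally: because only domain-specific
# parameters are added, the old domain's outputs are unchanged.
layer.add_domain("idd")
y_new = layer.forward(x, "cityscapes")
assert np.array_equal(y_old, y_new)
```

The sketch shows why this design sidesteps catastrophic forgetting for the frozen-shared case: old-domain predictions depend only on parameters that adding a new domain never modifies. The paper's actual optimization strategy additionally updates shared parameters while balancing stability and plasticity, which this toy example does not model.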
Related papers
- Boosting Large Language Models with Continual Learning for Aspect-based Sentiment Analysis [33.86086075084374]
Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis.
We propose a Large Language Model-based Continual Learning (LLM-CL) model for ABSA.
arXiv Detail & Related papers (2024-05-09T02:00:07Z)
- Benchmarking Multi-Domain Active Learning on Image Classification [16.690755621494215]
We introduce a multi-domain active learning benchmark to bridge the gap between research on single-source data and real-world data.
Our benchmark demonstrates that traditional single-domain active learning strategies are often less effective than random selection in multi-domain scenarios.
Analysis on our benchmark shows that all multi-domain strategies exhibit significant tradeoffs, with no strategy outperforming across all datasets or all metrics.
arXiv Detail & Related papers (2023-12-01T06:11:14Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
- Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each source domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.