Benchmarking Multi-Domain Active Learning on Image Classification
- URL: http://arxiv.org/abs/2312.00364v1
- Date: Fri, 1 Dec 2023 06:11:14 GMT
- Title: Benchmarking Multi-Domain Active Learning on Image Classification
- Authors: Jiayi Li, Rohan Taori, Tatsunori B. Hashimoto
- Abstract summary: We introduce a multi-domain active learning benchmark to bridge the gap between research on single-source data and real-world data.
Our benchmark demonstrates that traditional single-domain active learning strategies are often less effective than random selection in multi-domain scenarios.
Analysis on our benchmark shows that all multi-domain strategies exhibit significant tradeoffs, with no strategy outperforming across all datasets or all metrics.
- Score: 16.690755621494215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning aims to enhance model performance by strategically labeling
informative data points. While extensively studied, its effectiveness on
large-scale, real-world datasets remains underexplored. Existing research
primarily focuses on single-source data, ignoring the multi-domain nature of
real-world data. We introduce a multi-domain active learning benchmark to
bridge this gap. Our benchmark demonstrates that traditional single-domain
active learning strategies are often less effective than random selection in
multi-domain scenarios. We also introduce CLIP-GeoYFCC, a novel large-scale
image dataset built around geographical domains, in contrast to existing
genre-based domain datasets. Analysis on our benchmark shows that all
multi-domain strategies exhibit significant tradeoffs, with no strategy
outperforming across all datasets or all metrics, emphasizing the need for
future research.
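To make the benchmark's central comparison concrete, below is a minimal sketch of one active-learning round over a pooled multi-domain unlabeled set, contrasting random selection with entropy-based uncertainty sampling. The pool, domain labels, and model predictions are synthetic stand-ins; this illustrates the general setup, not the paper's benchmark code.

```python
# Minimal sketch of one active-learning round over a pooled multi-domain
# unlabeled set. The "predictions" are synthetic stand-ins; in practice they
# would come from a classifier trained on the labels acquired so far.
import numpy as np

rng = np.random.default_rng(0)
n_pool, n_classes, n_domains, budget = 10_000, 10, 5, 256
domains = rng.integers(0, n_domains, size=n_pool)        # domain id per point
probs = rng.dirichlet(np.ones(n_classes), size=n_pool)   # stand-in class probabilities

def random_selection(budget):
    """Baseline: label a uniformly random subset of the pool."""
    return rng.choice(n_pool, size=budget, replace=False)

def entropy_selection(budget):
    """Classic single-domain strategy: label the highest-entropy points."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]

for name, select in [("random", random_selection), ("entropy", entropy_selection)]:
    chosen = select(budget)
    print(name, "labels per domain:", np.bincount(domains[chosen], minlength=n_domains))
```

One plausible failure mode this exposes: an uncertainty heuristic can concentrate most of its budget on a single hard or noisy domain, which is one way a single-domain strategy can fall behind random selection in a multi-domain pool.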
Related papers
- Perturbation-Based Two-Stage Multi-Domain Active Learning [31.073745612552926]
We propose a perturbation-based two-stage multi-domain active learning (P2S-MDAL) method incorporated into the well-regarded ASP-MTL model.
P2S-MDAL involves allocating budgets for domains and establishing regions for diversity selection.
A perturbation metric is introduced to evaluate the robustness of the model's shared feature extractor (see the sketch after this entry).
arXiv Detail & Related papers (2023-06-19T04:58:32Z)
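The abstract above does not spell out the perturbation metric's exact form; the snippet below sketches one plausible reading, scoring the shared feature extractor by how much its features move under small input noise. The extractor, noise model, and scale are hypothetical, not the P2S-MDAL or ASP-MTL implementation.

```python
# Hypothetical reading of a perturbation-based robustness score: compare the
# shared extractor's features on clean vs. slightly perturbed inputs.
# The extractor below is an arbitrary stand-in, not the ASP-MTL model.
import torch
import torch.nn as nn

shared_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

def perturbation_score(x, eps=0.01):
    """Mean feature shift under small Gaussian input noise (lower = more robust)."""
    with torch.no_grad():
        clean = shared_extractor(x)
        noisy = shared_extractor(x + eps * torch.randn_like(x))
    return (clean - noisy).norm(dim=1).mean().item()

x = torch.randn(16, 128)   # a batch of inputs from one domain
print(perturbation_score(x))
```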
- Few-Shot Classification in Unseen Domains by Episodic Meta-Learning Across Visual Domains [36.98387822136687]
Few-shot classification aims to carry out classification given only a few labeled examples for the categories of interest.
In this paper, we present a unique learning framework for domain-generalized few-shot classification.
By advancing meta-learning strategies, our learning framework exploits data across multiple source domains to capture domain-invariant features.
arXiv Detail & Related papers (2021-12-27T06:54:11Z)
- Improving Multi-Domain Generalization through Domain Re-labeling [31.636953426159224]
We study the important link between pre-specified domain labels and generalization performance.
We introduce a general approach for multi-domain generalization, MulDEns, that uses an ERM-based deep ensembling backbone.
We show that MulDEns does not require tailoring the augmentation strategy or the training process specific to a dataset.
arXiv Detail & Related papers (2021-12-17T23:21:50Z)
- Multi-Domain Incremental Learning for Semantic Segmentation [42.30646442211311]
We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features.
We demonstrate the effectiveness of our proposed solution on domain-incremental settings pertaining to real-world driving scenes from roads in Germany (Cityscapes), the United States (BDD100k), and India (IDD). A rough sketch of the shared-plus-domain-specific idea follows this entry.
arXiv Detail & Related papers (2021-10-23T12:21:42Z)
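A minimal sketch of the shared-plus-domain-specific parameter idea described above: one backbone is reused across driving domains, and each new domain adds only a small private head. The architecture, layer sizes, and class count are illustrative assumptions, not the paper's dynamic architecture.

```python
# Rough sketch of shared, domain-invariant parameters plus per-domain heads.
# Adding a domain creates only the domain-specific parameters; the backbone
# is reused. Names and sizes are illustrative, not the paper's model.
import torch
import torch.nn as nn

class MultiDomainNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.domain_heads = nn.ModuleDict()   # domain name -> private head
        self.n_classes = n_classes

    def add_domain(self, name):
        # Incremental step: only domain-specific parameters are created.
        self.domain_heads[name] = nn.Conv2d(16, self.n_classes, 1)

    def forward(self, x, domain):
        return self.domain_heads[domain](self.shared(x))

net = MultiDomainNet(n_classes=19)
for d in ("cityscapes", "bdd100k", "idd"):
    net.add_domain(d)
out = net(torch.randn(1, 3, 64, 64), "idd")   # per-pixel logits: (1, 19, 64, 64)
```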
- Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, collecting a new large-scale dataset is widely perceived as intractable because labelling is laborious.
We resort to multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet).
This is achieved mainly through a novel Variational Attention (VA) technique that explicitly models the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT), which uses dual classifier heads, one of which is a graph convolutional network (GCN) that aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z)
- Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average (see the sketch after this entry).
arXiv Detail & Related papers (2020-04-23T08:17:22Z)
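A rough sketch of the relevance-weighted fusion idea from the DF-Net entry above: per-domain representations are scored against the target-domain representation and combined by softmax weights. The dimensions and dot-product scoring are assumptions; the actual DF-Net fusion may differ.

```python
# Rough sketch of relevance-weighted fusion across domains: score each source
# domain's representation against the target representation and fuse them by
# the resulting attention weights. Shapes and scoring are illustrative only.
import torch
import torch.nn.functional as F

d = 64
target_repr = torch.randn(1, d)     # encoding of the target-domain input
domain_reprs = torch.randn(5, d)    # one representation per source domain

relevance = domain_reprs @ target_repr.t() / d ** 0.5       # (5, 1) similarity scores
weights = F.softmax(relevance, dim=0)                        # normalized relevance
fused = (weights * domain_reprs).sum(dim=0, keepdim=True)    # (1, d) fused feature
print(weights.squeeze().tolist())
```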
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.