From Multi-label Learning to Cross-Domain Transfer: A Model-Agnostic
Approach
- URL: http://arxiv.org/abs/2207.11742v1
- Date: Sun, 24 Jul 2022 13:37:25 GMT
- Title: From Multi-label Learning to Cross-Domain Transfer: A Model-Agnostic
Approach
- Authors: Jesse Read
- Abstract summary: We develop an approach for transfer learning that challenges the long-held assumption that transferability of tasks comes from measurements of similarity between the source and target domains or models.
We show that, in essence, we can create task-dependence based on source-model capacity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-label learning, a particular case of multi-task learning where a
single data point is associated with multiple target labels, it was widely
assumed in the literature that, to obtain the best accuracy, the dependence among
the labels should be explicitly modeled. This premise led to a proliferation of
methods offering techniques to learn and predict labels together, for example
where the prediction for one label influences predictions for other labels.
Even though it is now acknowledged that in many contexts a model of dependence
is not required for optimal performance, such models continue to outperform
independent models in some of those very contexts, suggesting alternative
explanations for their performance beyond label dependence, which the
literature is only recently beginning to unravel. Leveraging and extending
recent discoveries, we turn the original premise of multi-label learning on its
head, and approach the problem of joint-modeling specifically under the absence
of any measurable dependence among task labels; for example, when task labels
come from separate problem domains. We shift insights from this study towards
building an approach for transfer learning that challenges the long-held
assumption that transferability of tasks comes from measurements of similarity
between the source and target domains or models. This allows us to design and
test a method for transfer learning that is model-driven rather than purely
data-driven; furthermore, it is black-box and model-agnostic (any base model
class can be considered). We show that, in essence, we can create
task-dependence based on source-model capacity. The results we obtain have
important implications and provide clear directions for future work, both in
the areas of multi-label and transfer learning.
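To make the idea concrete, here is a minimal sketch of one plausible reading of the method: the source model is treated as a black box whose predictions are appended to the target task's inputs, in the spirit of stacking in multi-label learning. The stacking mechanism, model choices, and data below are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch only: stacking-style transfer under the assumption that
# "black box" means we may query the source model's predictions but not its
# internals. Data, models, and the feature-stacking step are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source and target tasks from unrelated domains: no measurable label dependence.
X_src, y_src = rng.normal(size=(500, 10)), rng.integers(0, 2, size=500)
X_tgt, y_tgt = rng.normal(size=(100, 10)), rng.integers(0, 2, size=100)

# Source-model capacity (e.g. n_estimators) is the knob the abstract highlights.
source_model = RandomForestClassifier(n_estimators=200, random_state=0)
source_model.fit(X_src, y_src)

# Black-box transfer: append the source model's prediction as an extra feature.
z_tgt = source_model.predict_proba(X_tgt)[:, 1:]
X_aug = np.hstack([X_tgt, z_tgt])

# Model-agnostic: any base model class can be used for the target task.
target_model = LogisticRegression(max_iter=1000).fit(X_aug, y_tgt)
```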
Related papers
- Propensity-driven Uncertainty Learning for Sample Exploration in Source-Free Active Domain Adaptation
Source-free active domain adaptation (SFADA) addresses the challenge of adapting a pre-trained model to new domains without access to source data.
This scenario is particularly relevant in real-world applications where data privacy, storage limitations, or labeling costs are significant concerns.
We propose the Propensity-driven Uncertainty Learning (ProULearn) framework to effectively select more informative samples without frequently requesting human annotations.
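A hedged sketch of the selection step follows; the summary does not specify the propensity model, so plain predictive entropy stands in for it, and select_informative is a hypothetical helper name.

```python
# Entropy-based stand-in for ProULearn's propensity-driven uncertainty:
# selection uses only the pre-trained model's outputs on unlabeled target
# data, so no source data is needed (an assumed simplification).
import numpy as np

def select_informative(probs: np.ndarray, budget: int) -> np.ndarray:
    """probs: (n_samples, n_classes) softmax outputs on the target domain.
    Returns indices of the `budget` highest-entropy samples to annotate."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-budget:]
```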
arXiv Detail & Related papers (2025-01-23T10:05:25Z)
- Multi-Label Contrastive Learning: A Comprehensive Study
Multi-label classification has emerged as a key area in both research and industry.
Applying contrastive learning to multi-label classification presents unique challenges.
We conduct an in-depth study of contrastive learning loss for multi-label classification across diverse settings.
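As an illustration of why this is non-trivial: two samples can share some labels and differ on others, so "positive pair" becomes a matter of degree. The Jaccard-weighted loss below is one common formulation, assumed here rather than taken from the paper.

```python
# Jaccard-weighted supervised contrastive loss: an assumed formulation, shown
# only to make the multi-label difficulty concrete.
import torch

def multilabel_contrastive(z: torch.Tensor, Y: torch.Tensor, tau: float = 0.1):
    """z: (n, d) L2-normalised embeddings; Y: (n, n_labels) binary labels."""
    Y = Y.float()
    sim = z @ z.T / tau
    inter = Y @ Y.T                                   # labels shared per pair
    union = Y.sum(1, keepdim=True) + Y.sum(1) - inter
    w = inter / union.clamp(min=1)                    # soft "positiveness"
    w.fill_diagonal_(0)                               # no self-pairs
    sim.fill_diagonal_(-1e9)                          # mask self in the softmax
    log_p = sim.log_softmax(dim=1)
    return -(w * log_p).sum(1).div(w.sum(1).clamp(min=1e-12)).mean()
```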
arXiv Detail & Related papers (2024-11-27T20:20:06Z)
- Multi-Label Bayesian Active Learning with Inter-Label Relationships
We propose a new multi-label active learning strategy to address both challenges.
Our method incorporates progressively updated positive and negative correlation matrices to capture co-occurrence and disjoint relationships.
Our strategy consistently achieves more reliable and superior performance, compared to several established methods.
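A sketch of such co-occurrence statistics follows, under the assumption that they are estimated from the labeled pool; the paper's exact estimator and update rule are not given in this summary.

```python
# Assumed formulation of the positive/negative correlation matrices.
import numpy as np

def label_correlations(Y: np.ndarray):
    """Y: (n_samples, n_labels) binary matrix of the currently labeled pool.
    P[i, j]: frequency with which labels i and j co-occur.
    N[i, j]: frequency with which exactly one of them occurs (disjointness)."""
    n = len(Y)
    P = (Y.T @ Y) / n
    N = (Y.T @ (1 - Y) + (1 - Y).T @ Y) / n
    return P, N

# "Progressively updated": recompute P and N each acquisition round as newly
# annotated samples join the labeled pool.
```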
arXiv Detail & Related papers (2024-11-26T23:28:54Z)
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, since an instance contains multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
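A minimal sketch of per-label pseudo-labeling, which replaces the single argmax; the fixed thresholds stand in for the paper's metric-adaptive ones, whose exact form is not given here.

```python
# Per-label thresholding instead of a single argmax (assumed simplification).
import numpy as np

def pseudo_labels(probs: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """probs: (n_samples, n_labels) sigmoid scores on unlabeled data.
    thresholds: (n_labels,) one cutoff per label, since an instance may be
    positive for several labels at once."""
    return (probs >= thresholds).astype(int)
```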
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Frugal Reinforcement-based Active Learning
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
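A hedged sketch of the mixed acquisition score: only the weighted combination of the three criteria comes from the summary above, while the criteria definitions and the reinforcement-learning controller that updates the weights are left abstract.

```python
# Assumed shape of the mixed acquisition score; the RL controller that adapts
# the weights `w` per iteration is abstracted away here.
import numpy as np

def acquisition(diversity, representativity, uncertainty, w):
    """Each criterion: (n_samples,) scores in [0, 1]; w: (3,) non-negative
    weights that the reinforcement-learning mechanism would re-balance."""
    scores = w[0] * diversity + w[1] * representativity + w[2] * uncertainty
    return int(np.argmax(scores))  # index of the next sample to label
```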
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method named SMILE, i.e., Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels
SemiMatch is a semi-supervised solution for establishing dense correspondences across semantically similar images.
Our framework generates pseudo-labels from the model's own predictions between the source and a weakly-augmented target, and then uses these pseudo-labels to re-train the model between the source and a strongly-augmented target.
In experiments, SemiMatch achieves state-of-the-art performance on various benchmarks, especially on PF-Willow by a large margin.
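A compressed sketch of that weak-to-strong loop, with placeholder augmentation functions and a placeholder matching model rather than SemiMatch's actual API:

```python
# Placeholder sketch: `model(a, b)` is assumed to output a dense correspondence
# map between images a and b; weak_aug/strong_aug are assumed augmentations.
import torch
import torch.nn.functional as F

def semimatch_step(model, src, tgt, weak_aug, strong_aug):
    with torch.no_grad():
        pseudo = model(src, weak_aug(tgt))   # pseudo-label from the easy pair
    pred = model(src, strong_aug(tgt))       # prediction on the hard pair
    return F.mse_loss(pred, pseudo)          # consistency drives the re-training
```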
arXiv Detail & Related papers (2022-03-30T03:52:50Z)
- Active Refinement for Multi-Label Learning: A Pseudo-Label Approach
Multi-label learning (MLL) aims to associate a given instance with its relevant labels from a set of concepts.
Previous work on MLL mainly focused on the setting where the concept set is assumed to be fixed.
Many real-world applications require introducing new concepts into the set to meet new demands.
arXiv Detail & Related papers (2021-09-29T19:17:05Z)
- Few-shot Learning via Dependency Maximization and Instance Discriminant Analysis
We study the few-shot learning problem, where a model learns to recognize new objects with extremely few labeled data per category.
We propose a simple approach to exploit unlabeled data accompanying the few-shot task for improving few-shot performance.
arXiv Detail & Related papers (2021-09-07T02:19:01Z)
- Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain Adaptation
We propose a novel Self-Supervised Noisy Label Learning method.
Our method can easily achieve state-of-the-art results and surpass other methods by a very large margin.
arXiv Detail & Related papers (2021-02-23T10:51:45Z)
- Improving Classification through Weak Supervision in Context-specific Conversational Agent Development for Teacher Education
Developing a conversational agent for a specific educational scenario is time-consuming.
Previous approaches to modeling annotations have relied on labeling thousands of examples and calculating inter-annotator agreement and majority votes.
We propose using a multi-task weak supervision method combined with active learning to address these concerns.
arXiv Detail & Related papers (2020-10-23T23:39:40Z)
- Adversarial Knowledge Transfer from Unlabeled Data
We present a novel Adversarial Knowledge Transfer framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier.
An important novel aspect of our method is that the unlabeled source data can be of different classes from those of the labeled target data, and there is no need to define a separate pretext task.
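A hedged sketch of one generic adversarial alignment setup consistent with this description; the paper's precise losses and architecture are not given in the summary, so everything below is an assumption.

```python
# Generic GAN-style feature alignment; `disc` and both losses are assumptions.
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(f_unlabeled, f_labeled):
    """Features from unlabeled source and labeled target data. The
    discriminator separates the two sets; the encoder is trained to fool it,
    so transfer needs no shared classes and no separate pretext task."""
    d_loss = bce(disc(f_labeled), torch.ones(len(f_labeled), 1)) + \
             bce(disc(f_unlabeled.detach()), torch.zeros(len(f_unlabeled), 1))
    g_loss = bce(disc(f_unlabeled), torch.ones(len(f_unlabeled), 1))
    return d_loss, g_loss
```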
arXiv Detail & Related papers (2020-08-13T08:04:27Z)
- Unsupervised Transfer Learning with Self-Supervised Remedy
Generalising deep networks to novel domains without manual labels is challenging for deep learning.
Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains.
In this work, we aim to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related domains.
arXiv Detail & Related papers (2020-06-08T16:42:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.