Task-Level Contrastiveness for Cross-Domain Few-Shot Learning
- URL: http://arxiv.org/abs/2510.03509v1
- Date: Fri, 03 Oct 2025 20:49:04 GMT
- Title: Task-Level Contrastiveness for Cross-Domain Few-Shot Learning
- Authors: Kristi Topollai, Anna Choromanska
- Abstract summary: We introduce the notion of task-level contrastiveness, a novel approach designed to address issues of existing methods. Our method is lightweight and can be easily integrated within existing few-shot/meta-learning algorithms. We demonstrate the effectiveness of our approach through different experiments on the MetaDataset benchmark.
- Score: 10.325245543844245
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot classification and meta-learning methods typically struggle to generalize across diverse domains, as most approaches focus on a single dataset, failing to transfer knowledge across various seen and unseen domains. Existing solutions often suffer from low accuracy, high computational costs, and rely on restrictive assumptions. In this paper, we introduce the notion of task-level contrastiveness, a novel approach designed to address issues of existing methods. We start by introducing simple ways to define task augmentations, and thereafter define a task-level contrastive loss that encourages unsupervised clustering of task representations. Our method is lightweight and can be easily integrated within existing few-shot/meta-learning algorithms while providing significant benefits. Crucially, it leads to improved generalization and computational efficiency without requiring prior knowledge of task domains. We demonstrate the effectiveness of our approach through different experiments on the MetaDataset benchmark, where it achieves superior performance without additional complexity.
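The task-level contrastive loss described in the abstract can be pictured as a standard NT-Xent-style objective applied to task representations rather than instance embeddings. The sketch below is an illustrative reconstruction, not the authors' implementation: it assumes each task is summarized by an embedding vector and that two augmented "views" of the same task (e.g. different episode samples from the same dataset) form a positive pair.

```python
import numpy as np

def task_contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss over task representations.

    z1, z2: (n_tasks, d) arrays; row i of z1 and z2 are two
    augmented views of the same task. Views of the same task
    are pulled together, views of different tasks pushed apart,
    which encourages unsupervised clustering of tasks by domain.
    """
    def normalize(z):
        return z / np.linalg.norm(z, axis=1, keepdims=True)

    n = z1.shape[0]
    z = np.concatenate([normalize(z1), normalize(z2)], axis=0)  # (2n, d)
    sim = z @ z.T / temperature          # (2n, 2n) scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    # the positive for row i is its other view: i+n (mod 2n)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Because the loss acts on whole-task summaries, it adds only one extra term per batch of tasks, which is consistent with the paper's claim of low computational overhead.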
Related papers
- Forget Less, Retain More: A Lightweight Regularizer for Rehearsal-Based Continual Learning [51.07663354001582]
Deep neural networks suffer from catastrophic forgetting, where performance on previous tasks degrades after training on a new task. We present a novel approach to address this challenge, focusing on the intersection of memory-based methods and regularization approaches. We formulate a regularization strategy, termed Information Maximization (IM) regularizer, for memory-based continual learning methods.
arXiv Detail & Related papers (2025-12-01T15:56:00Z) - Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task Embeddings for Coreference Resolution [23.375053899418504]
Instead of probing frozen representations from a complex source task, we explore the effectiveness of embeddings from multiple simple source tasks on a single target task. Our findings reveal that task embeddings vary significantly in utility for coreference resolution, with semantic similarity tasks proving most beneficial.
arXiv Detail & Related papers (2025-01-31T17:12:53Z) - Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement [69.51496713076253]
In this paper, we focus on the aforementioned efficiency aspects of existing MTL methods.
We first carry out large-scale experiments of the methods with smaller backbones and on the MetaGraspNet dataset as a new test ground.
We also propose a Feature Disentanglement measure as a novel and efficient identifier of the challenges in MTL.
arXiv Detail & Related papers (2024-02-05T22:15:55Z) - Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
In the task level, we aim to find the optimal task order to minimize the total cross-task interference risk.
In the instance level, we measure the difficulty of all instances per task, then divide them into the easy-to-difficult mini-batches for training.
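The instance-level step of this curriculum can be sketched as sorting each task's instances by a precomputed difficulty score and chunking them into easy-to-difficult mini-batches. This is an illustrative reconstruction of the idea, not the Data-CUBE implementation; the difficulty scores are assumed given.

```python
def easy_to_difficult_batches(difficulty, batch_size):
    """Order instance indices from easiest to hardest, then split
    them into consecutive mini-batches for curriculum training.

    difficulty: per-instance difficulty scores (lower = easier).
    Returns a list of index lists, easiest batch first.
    """
    order = sorted(range(len(difficulty)), key=lambda i: difficulty[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
```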
arXiv Detail & Related papers (2024-01-07T18:12:20Z) - Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Clustering-based Domain-Incremental Learning [4.835091081509403]
A key challenge in continual learning is the so-called "catastrophic forgetting" problem.
We propose an online clustering-based approach on a dynamically updated finite pool of samples or gradients.
We demonstrate the effectiveness of the proposed strategy and its promising performance compared to state-of-the-art methods.
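The online clustering over a bounded pool mentioned above can be pictured as incremental nearest-centroid assignment with FIFO eviction. This is a minimal illustrative sketch under assumed details (fixed number of clusters, Euclidean distance, per-cluster capacity), not the paper's algorithm.

```python
import numpy as np

class OnlineClusterPool:
    """Assigns each incoming sample (or gradient) to its nearest
    centroid, updates that centroid as a running mean, and keeps at
    most `capacity` recent samples per cluster (FIFO eviction)."""

    def __init__(self, centroids, capacity=10):
        self.centroids = [np.asarray(c, dtype=float) for c in centroids]
        self.counts = [1] * len(self.centroids)
        self.pools = [[] for _ in self.centroids]
        self.capacity = capacity

    def add(self, x):
        x = np.asarray(x, dtype=float)
        # nearest centroid by Euclidean distance
        k = min(range(len(self.centroids)),
                key=lambda i: np.linalg.norm(x - self.centroids[i]))
        # incremental running-mean update of the chosen centroid
        self.counts[k] += 1
        self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
        self.pools[k].append(x)
        if len(self.pools[k]) > self.capacity:
            self.pools[k].pop(0)  # evict the oldest sample
        return k
```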
arXiv Detail & Related papers (2023-09-21T13:49:05Z) - Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning across various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z) - Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve performance generalization.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
arXiv Detail & Related papers (2022-04-14T01:52:34Z) - Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
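The interpolation step described above can be sketched as mixup applied at the task level: sample a mixing coefficient from a Beta distribution and blend the features and (one-hot) labels of two sampled tasks. This is an illustrative sketch in the spirit of MLTI, with assumed shapes and hyperparameters, not the paper's exact procedure.

```python
import numpy as np

def interpolate_tasks(x_a, y_a, x_b, y_b, alpha=0.5, rng=None):
    """Synthesize a new task by convexly mixing two sampled tasks.

    x_a, x_b: (n, d) feature arrays; y_a, y_b: (n, c) one-hot labels.
    lam ~ Beta(alpha, alpha) controls the mixing ratio.
    Returns the mixed features, mixed labels, and lam.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_a + (1 - lam) * x_b
    y_mix = lam * y_a + (1 - lam) * y_b
    return x_mix, y_mix, lam
```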
arXiv Detail & Related papers (2021-06-04T20:15:34Z) - Submodular Meta-Learning [43.15332631500541]
We introduce a discrete variant of the meta-learning framework to improve performance on future tasks.
Our approach aims at using prior data, i.e., previously visited tasks, to train a proper initial solution set.
We show that our framework leads to a significant reduction in computational complexity in solving the new tasks while incurring a small performance loss.
arXiv Detail & Related papers (2020-07-11T21:02:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.