Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification
- URL: http://arxiv.org/abs/2410.02267v2
- Date: Sun, 13 Oct 2024 12:18:48 GMT
- Title: Unsupervised Meta-Learning via Dynamic Head and Heterogeneous Task Construction for Few-Shot Classification
- Authors: Yunchuan Guan, Yu Liu, Ketong Liu, Ke Zhou, Zhiqi Shen
- Abstract summary: We propose DHM-UHT, a dynamic head meta-learning algorithm with unsupervised heterogeneous task construction.
On several unsupervised zero-shot and few-shot datasets, DHM-UHT obtains state-of-the-art performance.
- Score: 8.167963214348203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning has been widely used in recent years in areas such as few-shot learning and reinforcement learning, yet why and when it outperforms other algorithms in few-shot classification remains to be explored. In this paper, we perform preliminary experiments that vary the proportion of label noise and the degree of task heterogeneity in the dataset. We use Singular Vector Canonical Correlation Analysis (SVCCA) to quantify the representation stability of the neural network and thereby compare the behavior of meta-learning and classical learning algorithms. We find that, owing to its bi-level optimization strategy, meta-learning is more robust to label noise and heterogeneous tasks. Based on this conclusion, we argue for a promising future for meta-learning in the unsupervised setting, and thus propose DHM-UHT, a dynamic head meta-learning algorithm with unsupervised heterogeneous task construction. The core idea of DHM-UHT is to use DBSCAN and a dynamic head to construct heterogeneous tasks and to meta-learn the whole process of unsupervised heterogeneous task construction. On several unsupervised zero-shot and few-shot datasets, DHM-UHT obtains state-of-the-art performance. The code is released at https://github.com/tuantuange/DHM-UHT.
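The abstract relies on SVCCA to measure representation stability but does not spell out the metric. As a rough illustration, here is a compact NumPy sketch of the standard SVCCA recipe (a simplification for this page, not code from the paper): reduce each activation matrix to the top singular directions that explain most of its variance, then average the canonical correlations between the two reduced subspaces.

```python
import numpy as np

def svcca(X, Y, keep=0.99, eps=1e-10):
    """Mean SVCCA correlation between two activation matrices of shape
    (n_samples, n_units); values near 1 mean the representations align."""
    def top_directions(A):
        A = A - A.mean(axis=0)                      # center each unit
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        # Smallest k whose singular directions explain `keep` of the variance.
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep)) + 1
        return U[:, :k] * s[:k]
    def orthonormal_basis(A):
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        return U[:, s > eps]                        # whitened (orthonormal) basis
    Qx = orthonormal_basis(top_directions(X))
    Qy = orthonormal_basis(top_directions(Y))
    # Canonical correlations are the singular values of Qx^T Qy.
    corr = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.mean(np.clip(corr, 0.0, 1.0)))
```

Stability in the paper's sense can then be probed by comparing, for example, a layer's activations before and after training with injected label noise; a score close to 1 indicates a stable representation.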
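The other ingredient, unsupervised heterogeneous task construction, is likewise only named in the abstract. The sketch below shows one way DBSCAN pseudo-labels can be turned into few-shot episodes; it assumes a frozen encoder and scikit-learn, and the helper `build_episode` plus every hyperparameter value are illustrative choices, not the released DHM-UHT implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

def build_episode(embeddings, n_way=5, k_shot=1, q_queries=15,
                  eps=0.5, min_samples=5):
    """Cluster unlabeled embeddings with DBSCAN and sample one N-way episode."""
    # Pseudo-labels: one integer per sample; -1 marks DBSCAN's noise points.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
    # Keep only clusters large enough to fill both support and query sets.
    valid = [c for c in set(labels) - {-1}
             if np.sum(labels == c) >= k_shot + q_queries]
    if len(valid) < n_way:
        return None                       # not enough dense clusters for N ways
    chosen = rng.choice(valid, size=n_way, replace=False)
    support, query = [], []
    for episode_label, c in enumerate(chosen):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support += [(int(i), episode_label) for i in idx[:k_shot]]
        query += [(int(i), episode_label) for i in idx[k_shot:k_shot + q_queries]]
    return support, query                 # (sample index, episode label) pairs

# Usage: embeddings would come from any frozen or self-supervised encoder.
emb = rng.normal(size=(1000, 64)).astype(np.float32)
episode = build_episode(emb)              # None if DBSCAN finds too few clusters
```

Because DBSCAN discovers a data-dependent number of clusters of varying density, episodes sampled this way differ in composition and difficulty, which is one simple reading of the "heterogeneous tasks" in the abstract; the paper's dynamic head presumably adapts the classifier to the varying number of discovered pseudo-classes.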
Related papers
- Is Meta-Learning Out? Rethinking Unsupervised Few-Shot Classification with Limited Entropy [32.59271703559131]
We show that meta-learning is more efficient with limited entropy and is more robust to label noise and heterogeneous tasks.
We propose MINO, a meta-learning framework designed to enhance unsupervised performance.
arXiv Detail & Related papers (2025-09-16T15:39:03Z)
- Imbalanced Regression Pipeline Recommendation [5.863538874435322]
This work proposes the Meta-learning for Imbalanced Regression (Meta-IR) framework.
It trains meta-classifiers to recommend the best pipeline composed of the resampling strategy and learning model per task.
Compared with AutoML frameworks, Meta-IR obtained better results.
arXiv Detail & Related papers (2025-07-16T04:34:02Z)
- BLUR: A Bi-Level Optimization Approach for LLM Unlearning [105.98410883830596]
We argue that it is important to model the hierarchical structure of the unlearning problem.
We propose a novel algorithm, termed Bi-Level UnleaRning (BLUR), which delivers superior performance.
arXiv Detail & Related papers (2025-06-09T19:23:05Z)
- Meta-Learning with Heterogeneous Tasks [42.695853959923625]
We propose Heterogeneous Tasks Robust Meta-learning (HeTRoM), an efficient iterative optimization algorithm based on bi-level optimization.
Results demonstrate that our method provides flexibility, enabling users to adapt to diverse task settings.
arXiv Detail & Related papers (2024-10-24T16:32:23Z)
- Data-Efficient and Robust Task Selection for Meta-Learning [1.4557421099695473]
We propose the Data-Efficient and Robust Task Selection (DERTS) algorithm, which can be incorporated into both gradient and metric-based meta-learning algorithms.
DERTS selects weighted subsets of tasks from task pools by minimizing the approximation error of the full gradient of task pools in the meta-training stage.
Unlike existing algorithms, DERTS does not require any architecture modification for training and can handle noisy label data in both the support and query sets.
arXiv Detail & Related papers (2024-05-11T19:47:27Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the need of Vision Transformer networks for very large fully-annotated datasets.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves the classification accuracy by approximately 2% (20-way 1-shot task setting) on the Omniglot dataset.
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
- Does MAML Only Work via Feature Re-use? A Data Centric Perspective [19.556093984142418]
We provide empirical results that shed some light on how meta-learned MAML representations function.
We show that it is possible to define a family of synthetic benchmarks that result in a low degree of feature re-use.
We conjecture the core challenge of re-thinking meta-learning is in the design of few-shot learning data sets and benchmarks.
arXiv Detail & Related papers (2021-12-24T20:18:38Z)
- Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
arXiv Detail & Related papers (2021-05-31T08:15:44Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models [11.943374020641214]
We describe an approach that generates meta-tasks using generative models.
We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines.
arXiv Detail & Related papers (2020-06-18T02:10:56Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)