Tuning the Right Foundation Models is What you Need for Partial Label Learning
- URL: http://arxiv.org/abs/2506.05027v1
- Date: Thu, 05 Jun 2025 13:37:33 GMT
- Title: Tuning the Right Foundation Models is What you Need for Partial Label Learning
- Authors: Kuang He, Wei Tang, Tong Wei, Min-Ling Zhang
- Abstract summary: Partial label learning seeks to train generalizable classifiers from datasets with inexact supervision. In this work, we empirically conduct evaluations of 11 foundation models across 13 approaches on 8 benchmark datasets under 3 scenarios. Our findings reveal that current approaches tend to 1) achieve significant performance gains when using foundation models, 2) exhibit remarkably similar performance to each other, 3) maintain stable performance across varying ambiguity levels, yet 4) remain susceptible to foundation model selection and adaptation strategies.
- Score: 55.61644150441799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Partial label learning (PLL) seeks to train generalizable classifiers from datasets with inexact supervision, a common challenge in real-world applications. Existing studies have developed numerous approaches to progressively refine and recover ground-truth labels by training convolutional neural networks. However, limited attention has been given to foundation models that offer transferable representations. In this work, we empirically conduct comprehensive evaluations of 11 foundation models across 13 PLL approaches on 8 benchmark datasets under 3 PLL scenarios. We further propose PartialCLIP, an efficient fine-tuning framework for foundation models in PLL. Our findings reveal that current PLL approaches tend to 1) achieve significant performance gains when using foundation models, 2) exhibit remarkably similar performance to each other, 3) maintain stable performance across varying ambiguity levels, yet 4) remain susceptible to foundation model selection and adaptation strategies. Additionally, we demonstrate the efficacy of text-embedding classifier initialization and effective candidate label filtering using zero-shot CLIP. Our experimental results and analysis underscore the limitations of current PLL approaches and provide valuable insights for developing more generalizable PLL models. The source code can be found at https://github.com/SEU-hk/PartialCLIP.
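The abstract's two CLIP-specific components, text-embedding classifier initialization and zero-shot candidate label filtering, can be sketched compactly. The snippet below is a minimal illustration built on the OpenAI `clip` package; the prompt template and the filtering threshold `tau` are illustrative assumptions, not PartialCLIP's exact recipe (see the repository linked above for that).

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def text_embedding_weights(class_names):
    """L2-normalized CLIP text embedding per class name."""
    tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    with torch.no_grad():
        w = model.encode_text(tokens).float()
    return w / w.norm(dim=-1, keepdim=True)

class_names = ["cat", "dog", "bird"]  # placeholder label space
W = text_embedding_weights(class_names)

# (1) Text-embedding classifier initialization: the linear head starts at the
# zero-shot CLIP solution instead of a random init.
head = torch.nn.Linear(W.shape[1], len(class_names), bias=False)
head.weight.data.copy_(W)

# (2) Candidate label filtering: drop candidates that zero-shot CLIP considers
# implausible. `tau` is a hypothetical threshold, not taken from the paper.
def filter_candidates(image: Image.Image, candidates, tau=0.05):
    x = preprocess(image).unsqueeze(0).to(device)
    with torch.no_grad():
        f = model.encode_image(x).float()
    f = f / f.norm(dim=-1, keepdim=True)
    logits = 100.0 * (f @ W.T).squeeze(0)          # CLIP's usual logit scaling
    probs = logits[list(candidates)].softmax(dim=-1)
    return [c for c, p in zip(candidates, probs.tolist()) if p >= tau]
```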
Related papers
- Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs [51.21041884010009]
Ring-lite is a Mixture-of-Experts (MoE)-based large language model optimized via reinforcement learning (RL). Our approach matches the performance of state-of-the-art (SOTA) small-scale reasoning models on challenging benchmarks.
arXiv Detail & Related papers (2025-06-17T17:12:34Z) - SPARC: Score Prompting and Adaptive Fusion for Zero-Shot Multi-Label Recognition in Vision-Language Models [74.40683913645731]
Zero-shot multi-label recognition (MLR) with Vision-Language Models (VLMs) faces significant challenges without training data, model tuning, or architectural modifications. Our work proposes a novel solution treating VLMs as black boxes, leveraging scores without training data or ground truth. Analysis of these prompt scores reveals VLM biases and "AND"/"OR" signal ambiguities, notably that maximum scores are surprisingly suboptimal compared to second-highest scores.
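A toy illustration of the quoted finding that the maximum prompt score can be a worse fusion signal than the second-highest score. The fusion rules here are simplified stand-ins, not SPARC's full adaptive-fusion scheme.

```python
import torch

# scores[p, l]: black-box VLM score for label l under prompt p (fabricated data)
scores = torch.tensor([
    [0.91, 0.12, 0.40],   # prompt 1 (an outlier inflates label 0)
    [0.55, 0.10, 0.38],   # prompt 2
    [0.52, 0.09, 0.35],   # prompt 3
])

max_fused = scores.max(dim=0).values                          # sensitive to one outlier prompt
second_fused = scores.sort(dim=0, descending=True).values[1]  # more robust, per the paper's observation
print("max fusion:    ", max_fused)
print("second-highest:", second_fused)
```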
arXiv Detail & Related papers (2025-02-24T07:15:05Z) - Realistic Evaluation of Deep Partial-Label Learning Algorithms [94.79036193414058]
Partial-label learning (PLL) is a weakly supervised learning problem in which each example is associated with multiple candidate labels and only one is the true label. In recent years, many deep algorithms have been developed to improve model performance. Some early-developed algorithms are often underestimated and can outperform many later algorithms with complicated designs.
arXiv Detail & Related papers (2025-02-14T14:22:16Z) - Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve the model alignment of different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
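A minimal sketch of the adaptive label-smoothing step described above: each sample's smoothing strength scales with its uncertainty estimate. The linear mapping and the `max_smooth` cap are assumptions, not UAL's published schedule.

```python
import torch
import torch.nn.functional as F

def ual_loss(logits, targets, uncertainty, max_smooth=0.2):
    """Cross-entropy with per-sample smoothing eps_i = uncertainty_i * max_smooth."""
    n, k = logits.shape
    eps = (uncertainty * max_smooth).unsqueeze(1)   # (n, 1), one value per sample
    one_hot = F.one_hot(targets, k).float()
    soft = one_hot * (1 - eps) + eps / k            # smoothed target distribution
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
uncertainty = torch.rand(4)                         # placeholder per-sample estimates in [0, 1]
print(ual_loss(logits, targets, uncertainty))
```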
arXiv Detail & Related papers (2024-06-07T11:37:45Z) - PLOOD: Partial Label Learning with Out-of-distribution Objects [37.23754625256131]
Existing Partial Label Learning (PLL) methods posit that training and test data adhere to the same distribution. We introduce the DPLL paradigm to tackle this significant yet underexplored issue. Our newly proposed PLOOD framework enables simulating OOD objects through Positive-Negative Sample Augmentation (PNSA) feature learning and Partial Energy (PE)-based label refinement.
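The summary names the modules without defining them, so the following is a speculative reading of a "partial energy" score: the standard energy score, -logsumexp over the logits, restricted to the candidate label set, with high energy flagging OOD-like samples. This is one interpretation, not PLOOD's published definition.

```python
import torch

def partial_energy(logits, candidate_mask, T=1.0):
    """Energy computed over candidate labels only; higher energy = more OOD-like."""
    masked = logits.masked_fill(~candidate_mask, float("-inf"))
    return -T * torch.logsumexp(masked / T, dim=1)

logits = torch.randn(4, 10)
mask = torch.zeros(4, 10, dtype=torch.bool)
mask[:, :3] = True                        # toy candidate sets {0, 1, 2}
energy = partial_energy(logits, mask)
print(energy, energy > energy.mean())     # placeholder OOD threshold
```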
arXiv Detail & Related papers (2024-03-11T12:56:36Z) - ARNet: Automatic Refinement Network for Noisy Partial Label Learning [41.577081851679765]
We propose a novel framework called "Automatic Refinement Network (ARNet)".
Our method consists of multiple rounds. In each round, we purify the noisy samples through two key modules, i.e., noisy sample detection and label correction.
We prove that our method is able to reduce the noise level of the dataset and eventually approximate the Bayes optimal classifier.
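The round structure reads as detect-then-correct; a schematic version is below. The detection rule (no candidate label receives confidence above `tau`) and the correction rule (reassign to the model's top prediction) are illustrative stand-ins for ARNet's actual modules.

```python
import torch

def purify_round(probs, candidate_mask, tau=0.1):
    """One round: flag noisy samples, then correct their candidate sets."""
    cand_probs = probs * candidate_mask            # zero out non-candidate labels
    noisy = cand_probs.max(dim=1).values < tau     # detection: no candidate fits well
    corrected = candidate_mask.clone()
    top = probs[noisy].argmax(dim=1)               # correction: model's top prediction
    corrected[noisy] = False
    corrected[noisy, top] = True
    return corrected, noisy
```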
arXiv Detail & Related papers (2022-11-09T10:01:25Z) - Progressive Purification for Instance-Dependent Partial Label Learning [37.65717805892473]
Partial label learning (PLL) aims to train multiclass classifiers from the examples each annotated with a set of candidate labels where a fixed but unknown candidate label is correct.
The candidate labels are always instance-dependent in practice, and there is no theoretical guarantee that a model trained on instance-dependent examples can converge to an ideal one.
In this paper, a theoretically grounded and practically effective approach named POP, i.e. PrOgressive Purification, is proposed. Specifically, POP updates the learning model and purifies each candidate label set progressively in every epoch.
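A sketch of the per-epoch purification loop: candidate labels whose predicted probability falls below a threshold (tightened as training progresses) are pruned. The linear schedule below is an assumption; POP's actual purification criterion is the one that carries the theoretical guarantees mentioned above.

```python
import torch

def purify(candidate_mask, probs, epoch, step=0.02, max_thresh=0.5):
    """Drop candidates with prob < thresh * (best candidate prob)."""
    thresh = min(step * epoch, max_thresh)                 # tightens every epoch
    cand_probs = probs * candidate_mask
    cutoff = thresh * cand_probs.max(dim=1, keepdim=True).values
    # the top candidate always survives since thresh < 1, so sets never go empty
    return candidate_mask & (cand_probs >= cutoff)
```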
arXiv Detail & Related papers (2022-06-02T02:07:12Z) - Few-Shot Partial-Label Learning [25.609766770479265]
Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples.
Existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner.
In this paper, we introduce an approach called FsPLL (Few-shot PLL).
arXiv Detail & Related papers (2021-06-02T07:03:54Z) - Provably Consistent Partial-Label Learning [120.4734093544867]
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
In this paper, we propose the first generation model of candidate label sets, and develop two novel methods that are guaranteed to be consistent.
Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two methods.
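The uniform generation process commonly associated with this line of work can be written in a few lines: given true label y, every candidate set containing y is equally likely, which is realized by adding each other label independently with probability 1/2. Whether this matches the paper's exact generation model is an assumption on my part.

```python
import random

def uniform_candidate_set(y, num_classes):
    """All 2^(k-1) label sets containing y are equally likely: each other
    label joins the set independently with probability 1/2."""
    s = {y}
    for c in range(num_classes):
        if c != y and random.random() < 0.5:
            s.add(c)
    return s

print(uniform_candidate_set(y=2, num_classes=5))
```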
arXiv Detail & Related papers (2020-07-17T12:19:16Z) - Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework of classifier learning with flexibility on the model and optimization algorithm.
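A minimal sketch of progressive identification: maintain a weight over each candidate set, train with a weighted cross-entropy, and refresh the weights from the model's own predictions so mass concentrates on the likely true label. This is a framework-level illustration consistent with the summary, not the paper's exact risk estimator.

```python
import torch
import torch.nn.functional as F

def weighted_ce(logits, weights):
    """Cross-entropy against the current per-candidate weight distribution."""
    return -(weights * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

def update_weights(logits, candidate_mask):
    """Re-estimate weights: model probabilities restricted to candidates, renormalized."""
    probs = logits.softmax(dim=1) * candidate_mask
    return probs / probs.sum(dim=1, keepdim=True)

# toy run: 8 samples, 5 classes; any model/optimizer works (the framework is model-agnostic)
torch.manual_seed(0)
x = torch.randn(8, 16)
mask = torch.zeros(8, 5); mask[:, :2] = 1.0        # candidate sets {0, 1}
net = torch.nn.Linear(16, 5)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
weights = mask / mask.sum(dim=1, keepdim=True)     # start from uniform weights
for _ in range(50):
    logits = net(x)
    loss = weighted_ce(logits, weights)
    opt.zero_grad(); loss.backward(); opt.step()
    weights = update_weights(logits.detach(), mask)  # progressive re-identification
```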
arXiv Detail & Related papers (2020-02-19T08:35:15Z)