A Robust Multilabel Method Integrating Rule-based Transparent Model,
Soft Label Correlation Learning and Label Noise Resistance
- URL: http://arxiv.org/abs/2301.03283v3
- Date: Mon, 25 Sep 2023 13:58:57 GMT
- Title: A Robust Multilabel Method Integrating Rule-based Transparent Model,
Soft Label Correlation Learning and Label Noise Resistance
- Authors: Qiongdan Lou, Zhaohong Deng, Kup-Sze Choi, Shitong Wang
- Abstract summary: We propose a robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS) with three mechanisms.
First, we design a soft label learning mechanism to reduce the effect of label noise by explicitly measuring the interactions between labels.
Second, the rule-based TSK FS is used as the base model to efficiently model the inference relationship between features and soft labels.
Third, to further improve the performance of multilabel learning, we build a correlation enhancement learning mechanism based on the soft label space and the fuzzy feature space.
- Score: 24.18699701732533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model transparency, label correlation learning and the robustness to label
noise are crucial for multilabel learning. However, few existing methods study
these three characteristics simultaneously. To address this challenge, we
propose the robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS) with
three mechanisms. First, we design a soft label learning mechanism to reduce
the effect of label noise by explicitly measuring the interactions between
labels, which is also the basis of the other two mechanisms. Second, the
rule-based TSK FS is used as the base model to efficiently model the inference
relationship between features and soft labels in a more transparent way than
many existing multilabel models. Third, to further improve the performance of
multilabel learning, we build a correlation enhancement learning mechanism
based on the soft label space and the fuzzy feature space. Extensive
experiments are conducted to demonstrate the superiority of the proposed
method.
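The rule-based TSK base model described above can be illustrated with a minimal sketch. The following shows a zero-order TSK fuzzy inference step mapping a feature vector to soft label scores, assuming Gaussian membership functions and product-norm firing strengths; the function name, shapes, and parameter choices are illustrative assumptions, not the authors' R-MLTSK-FS implementation.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, consequents):
    """Zero-order TSK fuzzy inference (illustrative sketch).

    x:           (d,) input feature vector
    centers:     (K, d) Gaussian membership centers, one row per rule
    sigmas:      (K, d) Gaussian membership widths
    consequents: (K, L) constant per-rule outputs, L = number of labels
    Returns (L,) soft label scores.
    """
    # Per-dimension Gaussian membership degrees for each rule.
    memberships = np.exp(-((x - centers) ** 2) / (2 * sigmas ** 2))
    # Firing strength of each rule: product across feature dimensions.
    firing = memberships.prod(axis=1)            # (K,)
    # Normalize firing strengths and take the weighted rule outputs.
    weights = firing / firing.sum()              # (K,)
    return weights @ consequents                 # (L,)
```

Each rule contributes in proportion to how well the input matches its antecedent, which is what makes the inference path from features to label scores inspectable, in contrast to most black-box multilabel models.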
Related papers
- Bridging Weakly-Supervised Learning and VLM Distillation: Noisy Partial Label Learning for Efficient Downstream Adaptation [51.67328507400985]
In noisy partial label learning (NPLL), each training sample is associated with a set of candidate labels annotated by multiple noisy annotators. This paper focuses on learning from partial labels annotated by pre-trained vision-language models. It proposes an innovative collaborative consistency regularization (Co-Reg) method.
arXiv Detail & Related papers (2025-06-03T12:48:54Z) - 3DResT: A Strong Baseline for Semi-Supervised 3D Referring Expression Segmentation [73.877177695218]
3D Referring Expression (3D-RES) typically requires extensive instance-level annotations, which are time-consuming and costly.
Semi-supervised learning (SSL) mitigates this by using limited labeled data alongside abundant unlabeled data, improving performance while reducing annotation costs.
In this paper, we introduce the first semi-supervised learning framework for 3D-RES, presenting a robust baseline method named 3DResT.
arXiv Detail & Related papers (2025-04-17T02:50:52Z) - Dual-Label Learning With Irregularly Present Labels [14.817794592309637]
This work focuses on the two-label learning task, and proposes a novel training and inference framework, Dual-Label Learning (DLL).
Our method makes consistently better predictions than baseline approaches by up to a 10% gain in F1-score or MAPE.
Remarkably, our method provided with data at a label missing rate as high as 60% can achieve similar or even better results than baseline approaches at a label missing rate of only 10%.
arXiv Detail & Related papers (2024-10-18T11:07:26Z) - A Multi-Task and Multi-Label Classification Model for Implicit Discourse Relation Recognition [0.23020018305241333]
We propose a novel multi-label classification approach to implicit discourse relation recognition (IDRR). Our approach features a multi-task model that jointly learns multi-label representations of implicit discourse relations across all three sense levels in the PDTB 3.0 framework. We conduct extensive experiments to identify optimal model configurations and loss functions in both settings.
arXiv Detail & Related papers (2024-08-16T18:47:08Z) - Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike semi-supervised learning, one cannot select the most probable label as the pseudo-label in SSMLL due to multiple semantics contained in an instance.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z) - Exploring Homogeneous and Heterogeneous Consistent Label Associations for Unsupervised Visible-Infrared Person ReID [62.81466902601807]
Unsupervised visible-infrared person re-identification (USL-VI-ReID) aims to retrieve pedestrian images of the same identity from different modalities without annotations.
We introduce a Modality-Unified Label Transfer (MULT) module that simultaneously accounts for both homogeneous and heterogeneous fine-grained instance-level structures.
It models both homogeneous and heterogeneous affinities, leveraging them to define the inconsistency for the pseudo-labels and then minimize it, leading to pseudo-labels that maintain alignment across modalities and consistency within intra-modality structures.
arXiv Detail & Related papers (2024-02-01T15:33:17Z) - Multi-Label Takagi-Sugeno-Kang Fuzzy System [22.759310690164227]
We propose a new multi-label classification method, called Multi-Label Takagi-Sugeno-Kang Fuzzy System (ML-TSK FS).
The structure of ML-TSK FS is designed using fuzzy rules to model the relationship between features and labels.
The proposed ML-TSK FS is evaluated experimentally on 12 benchmark multi-label datasets.
arXiv Detail & Related papers (2023-09-20T17:09:09Z) - Channel-Wise Contrastive Learning for Learning with Noisy Labels [60.46434734808148]
We introduce channel-wise contrastive learning (CWCL) to distinguish authentic label information from noise.
Unlike conventional instance-wise contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient features aligned with the authentic labels.
Our strategy is twofold: firstly, using CWCL to extract pertinent features to identify cleanly labeled samples, and secondly, progressively fine-tuning using these samples.
arXiv Detail & Related papers (2023-08-14T06:04:50Z) - Learning Disentangled Label Representations for Multi-label Classification [39.97251974500034]
One-shared-Feature-for-Multiple-Labels (OFML) is not conducive to learning discriminative label features.
We introduce the One-specific-Feature-for-One-Label (OFOL) mechanism and propose a novel disentangled label feature learning framework.
We achieve state-of-the-art performance on eight datasets.
arXiv Detail & Related papers (2022-12-02T21:49:34Z) - Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
Ensemble labels are adopted as a pseudo-label updating strategy to stabilize the training of deep neural networks with noisy labels.
arXiv Detail & Related papers (2022-06-13T14:04:57Z) - Heterogeneous Semantic Transfer for Multi-label Recognition with Partial Labels [70.45813147115126]
Multi-label image recognition with partial labels (MLR-PL) may greatly reduce the cost of annotation and thus facilitate large-scale MLR.
We find that strong semantic correlations exist within each image and across different images.
These correlations can help transfer the knowledge possessed by the known labels to retrieve the unknown labels.
arXiv Detail & Related papers (2022-05-23T08:37:38Z) - S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
At the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR-10 and CIFAR-100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
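The neighborhood label-consistency idea behind the S3 sample-selection mechanism can be sketched as follows: a sample is treated as likely clean when its annotated label agrees with most of its nearest neighbors in feature space. The function name, the agreement threshold, and the brute-force distance computation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_clean(features, labels, k=2, agreement=0.5):
    """Return a boolean mask marking samples whose label matches at least
    `agreement` fraction of their k nearest neighbors' labels."""
    # Pairwise Euclidean distances (small-n sketch; use a KD-tree at scale).
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-distances
    nn = np.argsort(d, axis=1)[:, :k]            # indices of k nearest neighbors
    neighbor_labels = labels[nn]                 # (n, k)
    # Fraction of neighbors agreeing with each sample's annotated label.
    frac = (neighbor_labels == labels[:, None]).mean(axis=1)
    return frac >= agreement
```

For example, a point embedded in a cluster whose neighbors carry a different label is flagged as likely mislabeled and excluded from the clean set.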
This list is automatically generated from the titles and abstracts of the papers on this site.