Incomplete Multi-View Weak-Label Learning with Noisy Features and
Imbalanced Labels
- URL: http://arxiv.org/abs/2201.01079v5
- Date: Tue, 29 Aug 2023 08:10:29 GMT
- Title: Incomplete Multi-View Weak-Label Learning with Noisy Features and
Imbalanced Labels
- Authors: Zhiwei Li, Zijian Yang, Lu Sun, Mineichi Kudo, Keigo Kimura
- Abstract summary: We propose a novel method to overcome key limitations of current multi-view multi-label learning methods.
It embeds incomplete views and weak labels into a low-dimensional subspace with adaptive weights.
It adaptively learns view-wise importance for embedding to detect noisy views, and mitigates the label imbalance problem by focal loss.
- Score: 4.800187500079582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A variety of modern applications exhibit multi-view multi-label learning,
where each sample has multi-view features, and multiple labels are correlated
via common views. Current methods usually fail to directly deal with the
setting where only a subset of features and labels are observed for each
sample, and ignore the presence of noisy views and imbalanced labels in
real-world problems. In this paper, we propose a novel method to overcome the
limitations. It jointly embeds incomplete views and weak labels into a
low-dimensional subspace with adaptive weights, and encourages the embedding
weight matrices of different views to differ, via an auto-weighted
Hilbert-Schmidt Independence Criterion (HSIC), to reduce redundancy. Moreover, it adaptively
learns view-wise importance for embedding to detect noisy views, and mitigates
the label imbalance problem by focal loss. Experimental results on four
real-world multi-view multi-label datasets demonstrate the effectiveness of the
proposed method.
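Two generic building blocks named in the abstract, the HSIC penalty used to reduce redundancy between views and the focal loss used to counter label imbalance, are standard enough to sketch. Below is a minimal NumPy illustration, not the authors' implementation; the linear-kernel choice and all names are assumptions.

```python
# Hedged sketches of two generic components the abstract names: the
# Hilbert-Schmidt Independence Criterion (HSIC), used here as a penalty
# between two views' embedding weight matrices, and the binary focal loss.
import numpy as np

def hsic(K: np.ndarray, L: np.ndarray) -> float:
    """Empirical (biased) HSIC between two n x n kernel matrices:
    HSIC(K, L) = trace(K H L H) / (n - 1)^2, with centering matrix
    H = I - (1/n) 1 1^T. Penalizing HSIC between kernels built from two
    views' weight matrices (e.g., linear kernels K_v = W_v @ W_v.T)
    discourages redundant embeddings."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

def focal_loss(p: np.ndarray, y: np.ndarray,
               alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss FL(p_t) = -alpha (1 - p_t)^gamma log(p_t),
    where p_t is the predicted probability of the true class; the
    (1 - p_t)^gamma factor down-weights easy labels, so scarce positive
    labels are not swamped by the many easy negatives."""
    eps = 1e-12
    p_t = np.where(y == 1, p, 1.0 - p)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

In the auto-weighted variant the abstract describes, each pairwise HSIC term would additionally carry a learned view-importance weight; that weighting is the paper's contribution and is not reproduced here.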
Related papers
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the promise of VC learning for dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Towards Imbalanced Large Scale Multi-label Classification with Partially Annotated Labels [8.977819892091]
Multi-label classification is a widely encountered problem in daily life, where an instance can be associated with multiple classes.
In this work, we address the issue of label imbalance and investigate how to train neural networks using partial labels.
arXiv Detail & Related papers (2023-07-31T21:50:48Z)
- Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition [70.00984078351927]
This paper focuses on reducing noise based on some inherent properties of multi-label classification and long-tailed learning under noisy cases.
We propose a Stitch-Up augmentation to synthesize a cleaner sample, which directly reduces multi-label noise.
A Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions.
arXiv Detail & Related papers (2023-07-03T09:20:28Z)
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We move beyond the fixed view-level weights inherent in existing methods and propose a quality-aware sub-network that dynamically assigns quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
- DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification [20.892833511657166]
Multi-view multi-label data in the real world is commonly incomplete due to the uncertain factors of data collection and manual annotation.
We propose a deep instance-level contrastive network, namely DICNet, to deal with the double incomplete multi-view multi-label classification problem.
Our DICNet is adept at capturing consistent discriminative representations of multi-view multi-label data and avoiding the negative effects of missing views and missing labels.
arXiv Detail & Related papers (2023-03-15T04:24:01Z)
- An Effective Approach for Multi-label Classification with Missing Labels [8.470008570115146]
We propose a pseudo-label based approach to reduce the cost of annotation without bringing additional complexity to the classification networks.
By designing a novel loss function, we are able to relax the requirement that each instance must contain at least one positive label.
We show that our method can handle the imbalance between positive labels and negative labels, while still outperforming existing missing-label learning approaches.
arXiv Detail & Related papers (2022-10-24T23:13:57Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement (SMILE), is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Acknowledging the Unknown for Multi-label Learning with Single Positive Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose an entropy-maximization (EM) loss to maximize the entropy of the predicted probabilities for all unannotated labels (a rough sketch appears after this list).
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z)
- Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces [64.23172847182109]
We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels.
We provide a unified means to explicitly tackle both sampling bias, arising from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance.
arXiv Detail & Related papers (2021-05-12T15:40:13Z)
- A Concise yet Effective model for Non-Aligned Incomplete Multi-view and Missing Multi-label Learning [29.827794317616497]
Learning from multi-view multi-label data inevitably confronts three challenges: missing labels, incomplete views, and non-aligned views.
Existing methods mainly concern the first two and commonly need multiple assumptions to attack them.
In this paper, we aim to meet all three challenges under the fewest assumptions by building a concise yet effective model with just one hyperparameter.
arXiv Detail & Related papers (2020-05-03T03:38:24Z)
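As a rough illustration of the entropy-maximization loss from "Acknowledging the Unknown for Multi-label Learning with Single Positive Labels" above (the sketch promised in that entry): a minimal PyTorch version, assuming sigmoid outputs and a 0/1 mask of observed positives; all names are illustrative, not the paper's code.

```python
# Hedged sketch of entropy maximization for single-positive multi-label
# learning: log-loss on the one observed positive, maximum entropy on
# every unannotated label instead of assuming it is negative.
import torch

def em_loss(logits: torch.Tensor, pos_mask: torch.Tensor) -> torch.Tensor:
    # logits: (batch, n_labels) raw scores
    # pos_mask: (batch, n_labels), 1.0 at the observed positive, else 0.0
    p = torch.sigmoid(logits)
    eps = 1e-12
    # Standard log-loss on the observed positive labels.
    pos_term = -(pos_mask * torch.log(p + eps)).sum(dim=1)
    # Binary entropy of each unannotated label's prediction; minimizing
    # its negative maximizes entropy, keeping the model non-committal.
    ent = -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))
    em_term = -((1 - pos_mask) * ent).sum(dim=1)
    return (pos_term + em_term).mean()
```

The asymmetric pseudo-labeling (APL) component described in that entry adds asymmetric-tolerance strategies and a self-paced schedule on top of this loss and is omitted here.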