Single-Stage Broad Multi-Instance Multi-Label Learning (BMIML) with
Diverse Inter-Correlations and its application to medical image
classification
- URL: http://arxiv.org/abs/2209.02625v2
- Date: Wed, 14 Jun 2023 15:40:52 GMT
- Authors: Qi Lai, Jianhang Zhou, Yanfen Gan, Chi-Man Vong, Deshuang Huang
- Abstract summary: Existing MIML methods suffer from relatively low accuracy and training efficiency due to several issues.
BMIML is highly competitive to (or even better than) existing methods in accuracy and much faster than most MIML methods even for large medical image data sets.
- Score: 10.403614735252503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-instance multi-label learning (MIML), an object is
described by multiple instances (e.g., image patches) and simultaneously
associated with multiple labels. Existing MIML methods are useful in many
applications, but most suffer from relatively low accuracy and training
efficiency due to several issues: i) the inter-label correlations (i.e., the
probabilistic correlations between the multiple labels corresponding to an
object) are neglected; ii) the inter-instance correlations (i.e., the
probabilistic correlations of different instances in predicting the object
label) cannot be learned directly (or jointly) with other types of correlations
due to the missing instance labels; iii) diverse inter-correlations (e.g.,
inter-label correlations, inter-instance correlations) can only be learned in
multiple stages. To resolve these issues, a new single-stage framework called
broad multi-instance multi-label learning (BMIML) is proposed. In BMIML, there
are three innovative modules: i) an auto-weighted label enhancement learning
(AWLEL) scheme based on the broad learning system (BLS) is designed, which
simultaneously and efficiently captures the inter-label correlations that
traditional BLS cannot; ii) a specific MIML neural network called scalable
multi-instance probabilistic regression (SMIPR) is constructed to effectively
estimate the inter-instance correlations using only the object label, which
provides additional probabilistic information for learning; iii) an interactive
decision optimization (IDO) module is designed to combine and optimize the
results from AWLEL and SMIPR, forming a single-stage framework. Experiments show that
BMIML is highly competitive to (or even better than) existing methods in
accuracy and much faster than most MIML methods even for large medical image
data sets (> 90K images).
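To make the MIML setting concrete, the sketch below shows a generic bag-of-instances representation and a simple noisy-OR pooling of per-instance label probabilities into object-level predictions. This is purely illustrative, not the paper's BMIML code: the feature dimensions, the random linear scorer `W`, and the noisy-OR pooling are all assumptions standing in for a trained model.

```python
import numpy as np

# Illustrative MIML setup (not the paper's BMIML implementation): each object
# (e.g., a medical image) is a "bag" of instances (patch feature vectors) and
# carries only an object-level multi-hot label vector over L classes.
rng = np.random.default_rng(0)

n_instances, n_features, n_labels = 8, 16, 4
bag = rng.normal(size=(n_instances, n_features))   # 8 patches, 16-d features each
bag_labels = np.array([1, 0, 1, 0])                # object-level labels only;
                                                   # instance labels are unobserved

# Hypothetical per-instance scorer standing in for a trained model. In MIML,
# supervision exists only at the bag level, so instance probabilities must be
# tied back to the object label somehow.
W = 0.1 * rng.normal(size=(n_features, n_labels))
inst_probs = 1.0 / (1.0 + np.exp(-bag @ W))        # shape (instances, labels)

# Noisy-OR pooling: the bag is positive for a label if at least one instance
# is. This is one common way to link instance predictions to the object label
# when instance labels are missing.
bag_probs = 1.0 - np.prod(1.0 - inst_probs, axis=0)

print(bag_probs.shape)  # one probability per label
```

A bag-level loss (e.g., binary cross-entropy between `bag_probs` and `bag_labels`) could then train the scorer end-to-end; the noisy-OR pooling guarantees the bag probability is at least as large as the strongest instance probability for each label.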
Related papers
- MLCBART: Multilabel Classification with Bayesian Additive Regression Trees [0.6117371161379209]
Multilabel Classification deals with the simultaneous classification of multiple binary labels. BART is a nonparametric and flexible model structure capable of uncovering complex relationships within the data. Our adaptation, MLCBART, assumes that labels arise from thresholding an underlying numeric scale.
arXiv Detail & Related papers (2026-01-13T20:17:45Z)
- Instance Relation Learning Network with Label Knowledge Propagation for Few-shot Multi-label Intent Detection [26.403716144346756]
Few-shot Multi-label Intent Detection (MID) is crucial for dialogue systems, aiming to detect multiple intents of utterances. We propose a multi-label joint learning method for few-shot MID in an end-to-end manner. Experiments show that we outperform strong baselines by an average of 9.54% AUC and 11.19% Macro-F1 in 1-shot scenarios.
arXiv Detail & Related papers (2025-10-09T04:47:06Z)
- Partially Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation [53.723234136550055]
We term this new learning paradigm Partially Supervised Unpaired Multi-Modal Learning (PSUMML).
We propose a novel Decomposed partial class adaptation with snapshot Ensembled Self-Training (DEST) framework for it.
Our framework consists of a compact segmentation network with modality-specific normalization layers for learning with partially labeled unpaired multi-modal data.
arXiv Detail & Related papers (2025-03-07T07:22:42Z)
- Robust Multi-View Learning via Representation Fusion of Sample-Level Attention and Alignment of Simulated Perturbation [61.64052577026623]
Real-world multi-view datasets are often heterogeneous and imperfect.
We propose a novel robust MVL method (namely RML) with simultaneous representation fusion and alignment.
In experiments, we employ it in unsupervised multi-view clustering, noise-label classification, and as a plug-and-play module for cross-modal hashing retrieval.
arXiv Detail & Related papers (2025-03-06T07:01:08Z)
- Cross-Modality Clustering-based Self-Labeling for Multimodal Data Classification [2.666791490663749]
Cross-Modality Clustering-based Self-Labeling (CMCSL) groups instances belonging to each modality in the deep feature space and then propagates known labels within the resulting clusters.
Experimental evaluation is conducted on 20 datasets derived from the MM-IMDb dataset.
arXiv Detail & Related papers (2024-08-05T15:43:56Z)
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike semi-supervised learning, one cannot select the most probable label as the pseudo-label in SSMLL due to multiple semantics contained in an instance.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- POGEMA: A Benchmark Platform for Cooperative Multi-Agent Navigation [76.67608003501479]
We introduce and specify an evaluation protocol defining a range of domain-related metrics computed on the basis of the primary evaluation indicators.
The results of such a comparison, which involves a variety of state-of-the-art MARL, search-based, and hybrid methods, are presented.
arXiv Detail & Related papers (2024-07-20T16:37:21Z)
- Dynamic Correlation Learning and Regularization for Multi-Label Confidence Calibration [60.95748658638956]
This paper introduces the Multi-Label Confidence task, aiming to provide well-calibrated confidence scores in multi-label scenarios.
Existing single-label calibration methods fail to account for category correlations, which are crucial for addressing semantic confusion.
We propose the Dynamic Correlation Learning and Regularization algorithm, which leverages multi-grained semantic correlations to better model semantic confusion.
arXiv Detail & Related papers (2024-07-09T13:26:21Z)
- MMRel: A Relation Understanding Benchmark in the MLLM Era [72.95901753186227]
Multi-Modal Relation Understanding (MMRel) is a benchmark that features large-scale, high-quality, and diverse data on inter-object relations.
MMRel is ideal for evaluating MLLMs on relation understanding, as well as for fine-tuning MLLMs to enhance relation comprehension capability.
arXiv Detail & Related papers (2024-06-13T13:51:59Z)
- Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications [90.6849884683226]
We study the challenge of interaction quantification in a semi-supervised setting with only labeled unimodal data.
Using a precise information-theoretic definition of interactions, our key contribution is the derivation of lower and upper bounds.
We show how these theoretical results can be used to estimate multimodal model performance, guide data collection, and select appropriate multimodal models for various tasks.
arXiv Detail & Related papers (2023-06-07T15:44:53Z)
- Graph based Label Enhancement for Multi-instance Multi-label learning [20.178466198202376]
Multi-instance multi-label (MIML) learning is widely applied in numerous domains.
This paper proposes a novel MIML framework based on graph label enhancement, namely GLEMIML, to improve the classification performance of MIML.
arXiv Detail & Related papers (2023-04-21T02:24:49Z)
- One Positive Label is Sufficient: Single-Positive Multi-Label Learning with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z)
- Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community to address the problem of assigning single samples to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z)
- Gaussian Mixture Variational Autoencoder with Contrastive Learning for Multi-Label Classification [27.043136219527767]
We propose a novel contrastive learning boosted multi-label prediction model.
By using contrastive learning in the supervised setting, we can exploit label information effectively.
We show that the learnt embeddings provide insights into the interpretation of label-label interactions.
arXiv Detail & Related papers (2021-12-02T04:23:34Z)
- Multi-label Few/Zero-shot Learning with Knowledge Aggregated from Multiple Label Graphs [8.44680447457879]
We present a simple multi-graph aggregation model that fuses knowledge from multiple label graphs encoding different semantic label relationships.
We show that methods equipped with the multi-graph knowledge aggregation achieve significant performance improvement across almost all the measures on few/zero-shot labels.
arXiv Detail & Related papers (2020-10-15T01:15:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.