Deep Learning for Multi-Label Learning: A Comprehensive Survey
- URL: http://arxiv.org/abs/2401.16549v3
- Date: Tue, 25 Jun 2024 18:20:40 GMT
- Title: Deep Learning for Multi-Label Learning: A Comprehensive Survey
- Authors: Adane Nega Tarekegn, Mohib Ullah, Faouzi Alaya Cheikh
- Abstract summary: Multi-label learning is a rapidly growing research area that aims to predict multiple labels from a single input data point.
Inherent difficulties in MLC include dealing with high-dimensional data, addressing label correlations, and handling partial labels.
Recent years have witnessed a notable increase in adopting deep learning (DL) techniques to address these challenges more effectively in MLC.
- Score: 6.571492336879553
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-label learning is a rapidly growing research area that aims to predict multiple labels from a single input data point. In the era of big data, tasks involving multi-label classification (MLC) or ranking present significant and intricate challenges, capturing considerable attention in diverse domains. Inherent difficulties in MLC include dealing with high-dimensional data, addressing label correlations, and handling partial labels, for which conventional methods prove ineffective. Recent years have witnessed a notable increase in adopting deep learning (DL) techniques to address these challenges more effectively in MLC. Notably, there is a burgeoning effort to harness the robust learning capabilities of DL for improved modelling of label dependencies and other challenges in MLC. However, it is noteworthy that comprehensive studies specifically dedicated to DL for multi-label learning are limited. Thus, this survey aims to thoroughly review recent progress in DL for multi-label learning, along with a summary of open research problems in MLC. The review consolidates existing research efforts in DL for MLC, including deep neural networks, transformers, autoencoders, and convolutional and recurrent architectures. Finally, the study presents a comparative analysis of the existing methods to provide insightful observations and stimulate future research directions in this domain.
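The abstract frames MLC as predicting several labels for a single input. As an illustrative sketch only (not a method from the survey), the simplest formulation is binary relevance: one independent sigmoid score per label, each thresholded separately, which is exactly the baseline that DL methods modelling label correlations aim to improve on. The weights here are random placeholders.

```python
import numpy as np

# Binary-relevance baseline for multi-label classification: each label gets
# its own independent sigmoid score, thresholded separately. This ignores
# label correlations, which is the key limitation DL-based MLC methods target.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_labels(x, W, b, threshold=0.5):
    """Return a binary label vector for one input x.

    W: (n_labels, n_features) weights, b: (n_labels,) biases.
    Each row of W scores one label independently of the others.
    """
    probs = sigmoid(W @ x + b)
    return (probs >= threshold).astype(int)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # one input with 4 features
W = rng.normal(size=(3, 4))   # 3 candidate labels
b = np.zeros(3)
y_hat = predict_labels(x, W, b)
print(y_hat)                  # a 0/1 vector; several labels may be active at once
```

Unlike single-label classification, the output is a 0/1 vector rather than one class index, so any subset of labels can be predicted simultaneously.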
Related papers
- Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data [54.934578742209716]
In real-world NLP applications, Large Language Models (LLMs) offer promising solutions due to their extensive training on vast datasets.
LLKD is an adaptive sample selection method that incorporates signals from both the teacher and student.
Our comprehensive experiments show that LLKD achieves superior performance across various datasets with higher data efficiency.
arXiv Detail & Related papers (2024-11-12T18:57:59Z) - From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z) - Exploring Contrastive Learning for Long-Tailed Multi-Label Text Classification [48.81069245141415]
We introduce a novel contrastive loss function for multi-label text classification.
It attains Micro-F1 scores that either match or surpass those obtained with other frequently employed loss functions.
It demonstrates a significant improvement in Macro-F1 scores across three multi-label datasets.
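The entry above reports both Micro-F1 and Macro-F1, and the distinction matters for long-tailed data: Micro-F1 pools true/false positives over all labels and is dominated by frequent labels, while Macro-F1 averages per-label F1 and so is sensitive to rare labels. A small sketch with toy data (illustrative, not from the paper):

```python
import numpy as np

# Micro-F1 pools TP/FP/FN across all labels; Macro-F1 averages per-label F1.
# On long-tailed data, Macro-F1 exposes poor performance on rare labels.

def f1(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def micro_macro_f1(y_true, y_pred):
    """y_true, y_pred: (n_samples, n_labels) binary arrays."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=0)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=0)
    micro = float(f1(tp.sum(), fp.sum(), fn.sum()))
    macro = float(np.mean([f1(*t) for t in zip(tp, fp, fn)]))
    return micro, macro

y_true = np.array([[1, 0, 1], [1, 1, 0], [1, 0, 0]])
y_pred = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 1]])
micro, macro = micro_macro_f1(y_true, y_pred)
print(round(micro, 3), round(macro, 3))  # 0.8 0.667
```

Here the third (rarest) label is predicted entirely wrong: Micro-F1 barely notices (0.8), while Macro-F1 drops to 0.667 because that label contributes a per-label F1 of 0.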
arXiv Detail & Related papers (2024-04-12T11:12:16Z) - Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition [70.00984078351927]
This paper focuses on reducing noise based on some inherent properties of multi-label classification and long-tailed learning under noisy cases.
We propose a Stitch-Up augmentation to synthesize a cleaner sample, which directly reduces multi-label noise.
A Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions.
arXiv Detail & Related papers (2023-07-03T09:20:28Z) - Label-Efficient Deep Learning in Medical Image Analysis: Challenges and Future Directions [10.502964056448283]
Training models in medical imaging analysis typically require expensive and time-consuming collection of labeled data.
We extensively investigated over 300 recent papers to provide a comprehensive overview of progress on label-efficient learning strategies in MIA.
Specifically, we provide an in-depth investigation, covering not only canonical semi-supervised, self-supervised, and multi-instance learning schemes, but also recently emerged active and annotation-efficient learning strategies.
arXiv Detail & Related papers (2023-03-22T11:51:49Z) - FLAG: Fast Label-Adaptive Aggregation for Multi-label Classification in Federated Learning [1.4280238304844592]
This study proposes a new multi-label federated learning framework with a Clustering-based Multi-label Data Allocation (CMDA) and a novel aggregation method, Fast Label-Adaptive Aggregation (FLAG).
The experimental results demonstrate that our methods only need less than 50% of training epochs and communication rounds to surpass the performance of state-of-the-art federated learning methods.
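The FLAG aggregation rule itself is not detailed in this summary; as context, the generic FedAvg-style baseline it improves on averages client model weights on the server, weighted by each client's sample count. The arrays and counts below are hypothetical placeholders.

```python
import numpy as np

# Generic FedAvg-style server aggregation (NOT the FLAG method, which is not
# specified here): average per-client weight arrays, weighted by data size.

def fedavg(client_weights, n_samples):
    """Weighted average of per-client parameter arrays."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(client_weights, n_samples))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]  # two clients' params
counts = [1, 3]                                          # their sample counts
global_w = fedavg(clients, counts)
print(global_w)  # [2.5 3.5]
```

The weighting means the second client (three times the data) pulls the global model three times as hard, which is the communication-round dynamic that adaptive aggregation schemes like FLAG aim to accelerate.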
arXiv Detail & Related papers (2023-02-27T08:16:39Z) - Knowledge Restore and Transfer for Multi-label Class-Incremental Learning [34.378828633726854]
We propose a knowledge restore and transfer (KRT) framework for multi-label class-incremental learning (MLCIL).
KRT includes a dynamic pseudo-label (DPL) module to restore the old class knowledge and an incremental cross-attention (ICA) module to save session-specific knowledge and transfer old class knowledge to the new model sufficiently.
Experimental results on MS-COCO and PASCAL VOC datasets demonstrate the effectiveness of our method for improving recognition performance and mitigating forgetting.
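The DPL module's exact mechanics are not given in this summary; as a hedged illustration of the general pseudo-labeling idea it builds on: when new-session images lack old-class annotations, the previous model's confident sigmoid scores can stand in as labels for those classes. The threshold and scores below are hypothetical.

```python
import numpy as np

# Generic pseudo-labeling for old classes (illustrative only; not KRT's
# actual DPL module): binarize the previous model's confident sigmoid
# scores so new-session data retains supervision for old classes.

def pseudo_labels(old_model_scores, threshold=0.7):
    """Binarize old-model sigmoid scores above a confidence threshold."""
    return (old_model_scores >= threshold).astype(int)

scores = np.array([[0.9, 0.2, 0.75],   # per-old-class sigmoid scores
                   [0.1, 0.8, 0.30]])  # for two new-session samples
print(pseudo_labels(scores))  # [[1 0 1]
                              #  [0 1 0]]
```

Only predictions above the confidence threshold become labels, trading coverage of old classes against the risk of training on the old model's mistakes.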
arXiv Detail & Related papers (2023-02-26T15:34:05Z) - A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z) - The Emerging Trends of Multi-Label Learning [45.63795570392158]
Exabytes of data are generated daily by humans, leading to the growing need for new efforts in dealing with the grand challenges for multi-label learning brought by big data.
There is a lack of systemic studies that focus explicitly on analyzing the emerging trends and new challenges of multi-label learning in the era of big data.
It is imperative to call for a comprehensive survey to fulfill this mission and delineate future research directions and new applications.
arXiv Detail & Related papers (2020-11-23T03:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.