FLAG: Fast Label-Adaptive Aggregation for Multi-label Classification in
Federated Learning
- URL: http://arxiv.org/abs/2302.13571v1
- Date: Mon, 27 Feb 2023 08:16:39 GMT
- Title: FLAG: Fast Label-Adaptive Aggregation for Multi-label Classification in
Federated Learning
- Authors: Shih-Fang Chang, Benny Wei-Yun Hsu, Tien-Yu Chang, Vincent S. Tseng
- Abstract summary: This study proposes a new multi-label federated learning framework with a Clustering-based Multi-label Data Allocation (CMDA) scheme and a novel aggregation method, Fast Label-Adaptive Aggregation (FLAG).
The experimental results demonstrate that our methods need fewer than 50% of the training epochs and communication rounds to surpass the performance of state-of-the-art federated learning methods.
- Score: 1.4280238304844592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning aims to maximize the utility of decentralized
private data without leaking privacy. Previous federated learning research
mainly focuses on multi-class classification problems. However, multi-label
classification, which better reflects the properties of real-world data, is a
crucial research problem, and only a limited number of federated learning
studies have explored it. Existing studies of multi-label federated learning
did not consider the characteristics of multi-label data: they evaluated their
methods under multi-class classification assumptions, so the methods do not
transfer to real-world applications. Therefore, this study proposes a new
multi-label federated learning framework with a Clustering-based Multi-label
Data Allocation (CMDA) scheme and a novel aggregation method, Fast
Label-Adaptive Aggregation (FLAG), for multi-label classification in the
federated learning environment. The experimental results demonstrate that our
methods need fewer than 50% of the training epochs and communication rounds to
surpass the performance of state-of-the-art federated learning methods.
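The abstract does not spell out how label-adaptive aggregation works, so the following is only a minimal sketch of the general idea, not the FLAG algorithm itself: each label's classifier parameters are averaged across clients with weights derived from per-client label frequencies, in the spirit of a FedAvg-style variant. All function and variable names here are hypothetical.

```python
# Hedged sketch of label-adaptive aggregation for multi-label federated
# learning. NOT the paper's FLAG method; it only illustrates weighting each
# label's parameters by how often that label appears on each client.

def label_adaptive_aggregate(client_heads, client_label_counts):
    """Aggregate per-label classifier heads across clients.

    client_heads: list (per client) of dicts {label: parameter vector}
    client_label_counts: list (per client) of dicts {label: positive count}
    Returns a dict {label: aggregated parameter vector}.
    """
    labels = set()
    for counts in client_label_counts:
        labels.update(counts)

    aggregated = {}
    for label in labels:
        total = sum(c.get(label, 0) for c in client_label_counts)
        if total == 0:
            continue  # no client observed this label; nothing to aggregate
        dim = len(next(h[label] for h in client_heads if label in h))
        vec = [0.0] * dim
        for head, counts in zip(client_heads, client_label_counts):
            weight = counts.get(label, 0) / total  # label-frequency weight
            if label in head:
                for i, p in enumerate(head[label]):
                    vec[i] += weight * p
        aggregated[label] = vec
    return aggregated

# Two clients with skewed label distributions:
heads = [
    {"cat": [1.0, 1.0], "dog": [0.0, 0.0]},
    {"cat": [3.0, 3.0], "dog": [2.0, 2.0]},
]
counts = [{"cat": 30, "dog": 0}, {"cat": 10, "dog": 40}]
agg = label_adaptive_aggregate(heads, counts)
print(agg["cat"])  # [1.5, 1.5] — weighted 0.75 toward client 0
print(agg["dog"])  # [2.0, 2.0] — comes entirely from client 1
```

The point of the per-label weights is that a client holding many positives for a label contributes more to that label's parameters, instead of every client contributing equally as in plain FedAvg.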
Related papers
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
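The "set of binary classification problems" idea can be made concrete with a small sketch. The code below is an illustrative reading of that decomposition, not the paper's exact loss: each label's logit is treated as a standalone Bernoulli problem, and the teacher's softened per-label probability is distilled into the student with a binary KL term. The function names and the temperature value are assumptions.

```python
# Hedged sketch: one-vs-all decomposition for multi-label knowledge
# distillation. Each label contributes an independent binary KL term
# between the teacher's and student's sigmoid probabilities.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_kl(p, q, eps=1e-7):
    """KL divergence between two Bernoulli distributions with means p, q."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def multilabel_distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Average per-label binary KL between teacher and student logits."""
    losses = []
    for t, s in zip(teacher_logits, student_logits):
        pt = sigmoid(t / temperature)  # softened teacher probability
        ps = sigmoid(s / temperature)  # softened student probability
        losses.append(binary_kl(pt, ps))
    return sum(losses) / len(losses)

teacher = [2.0, -1.0, 0.5]  # per-label logits from the teacher
student = [2.0, -1.0, 0.5]
print(multilabel_distill_loss(teacher, student))  # 0.0 when they match
```

Unlike a softmax-based distillation loss, this decomposition does not force the label probabilities to sum to one, which is what makes it compatible with multi-label targets.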
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We break through the view-level weights inherent in existing methods and propose a quality-aware sub-network to dynamically assign quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z)
- Knowledge Distillation from Single to Multi Labels: an Empirical Study [14.12487391004319]
We introduce a novel distillation method based on Class Activation Maps (CAMs).
Our findings indicate that the logit-based method is not well-suited for multi-label classification.
We propose that a suitable dark knowledge should incorporate class-wise information and be highly correlated with the final classification results.
arXiv Detail & Related papers (2023-03-15T04:39:01Z)
- Exploiting Diversity of Unlabeled Data for Label-Efficient Semi-Supervised Active Learning [57.436224561482966]
Active learning is a research area that addresses the issues of expensive labeling by selecting the most important samples for labeling.
We introduce a new diversity-based initial dataset selection algorithm to select the most informative set of samples for initial labeling in the active learning setting.
Also, we propose a novel active learning query strategy, which uses diversity-based sampling on consistency-based embeddings.
arXiv Detail & Related papers (2022-07-25T16:11:55Z)
- Class-Incremental Lifelong Learning in Multi-Label Classification [3.711485819097916]
This paper studies Lifelong Multi-Label (LML) classification, which builds an online class-incremental classifier in a sequential multi-label classification data stream.
To solve the problem, the study proposes an Augmented Graph Convolutional Network (AGCN) with a built Augmented Correlation Matrix (ACM) across sequential partial-label tasks.
arXiv Detail & Related papers (2022-07-16T05:14:07Z)
- Evolving Multi-Label Fuzzy Classifier [5.53329677986653]
Multi-label classification has attracted much attention in the machine learning community to address the problem of assigning single samples to more than one class at the same time.
We propose an evolving multi-label fuzzy classifier (EFC-ML) which is able to self-adapt and self-evolve its structure with new incoming multi-label samples in an incremental, single-pass manner.
arXiv Detail & Related papers (2022-03-29T08:01:03Z)
- Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have shown to be effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z)
- Active Refinement for Multi-Label Learning: A Pseudo-Label Approach [84.52793080276048]
Multi-label learning (MLL) aims to associate a given instance with its relevant labels from a set of concepts.
Previous works of MLL mainly focused on the setting where the concept set is assumed to be fixed.
Many real-world applications require introducing new concepts into the set to meet new demands.
arXiv Detail & Related papers (2021-09-29T19:17:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.