Exploring Semantic Attributes from A Foundation Model for Federated
Learning of Disjoint Label Spaces
- URL: http://arxiv.org/abs/2208.13465v2
- Date: Tue, 28 Nov 2023 16:49:39 GMT
- Authors: Shitong Sun, Chenyang Si, Guile Wu, Shaogang Gong
- Abstract summary: In this work, we consider transferring mid-level semantic knowledge (such as attributes), which is not sensitive to specific objects of interest.
We formulate a new Federated Zero-Shot Learning (FZSL) paradigm to learn mid-level semantic knowledge at multiple local clients.
To improve model discriminative ability, we propose to explore semantic knowledge augmentation from external knowledge.
- Score: 46.59992662412557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional centralised deep learning paradigms are not feasible when data
from different sources cannot be shared due to data privacy or transmission
limitations. To resolve this problem, federated learning has been introduced to
transfer knowledge across multiple sources (clients) with non-shared data while
optimising a globally generalised central model (server). Existing federated
learning paradigms mostly focus on transferring holistic high-level knowledge
(such as classes) across models; such knowledge is closely tied to specific
objects of interest and so may suffer from model inversion attacks. In contrast,
in this work we consider transferring mid-level semantic knowledge (such as
attributes), which is not sensitive to specific objects of interest and is
therefore more privacy-preserving and scalable. To this end, we formulate a new
Federated Zero-Shot Learning (FZSL) paradigm to learn mid-level semantic
knowledge at multiple local clients with non-shared local data and cumulatively
aggregate a globally generalised central model for deployment. To improve model
discriminative ability, we propose to explore semantic knowledge augmentation
from external knowledge to enrich the mid-level semantic space in FZSL.
Extensive experiments on five zero-shot learning benchmark datasets validate the
effectiveness of our approach for optimising a generalisable federated learning
model with mid-level semantic knowledge transfer.
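The cumulative server-side aggregation the abstract describes builds on federated averaging. Below is a minimal sketch, assuming a plain FedAvg-style weighted average and illustrative parameter names; the paper's actual aggregation rule for FZSL may differ.

```python
# Minimal FedAvg-style aggregation sketch; parameter names are illustrative
# and the actual FZSL aggregation rule may differ.

def aggregate(client_weights, client_sizes):
    """Weighted average of per-client parameter dicts into a global model."""
    total = sum(client_sizes)
    return {
        name: sum(w[name] * n for w, n in zip(client_weights, client_sizes)) / total
        for name in client_weights[0]
    }

# Two clients with disjoint label spaces can still share the parameters
# of a mid-level attribute predictor:
w1 = {"attr_head": 1.0}
w2 = {"attr_head": 3.0}
print(aggregate([w1, w2], [10, 30]))  # {'attr_head': 2.5}
```

Because only attribute-predictor parameters (not class-specific heads) would be exchanged, clients never need overlapping label spaces.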
Related papers
- Cross-Training with Multi-View Knowledge Fusion for Heterogenous Federated Learning [13.796783869133531]
This paper presents a novel approach that enhances federated learning through a cross-training scheme incorporating multi-view information.
Specifically, the proposed method, termed FedCT, includes three main modules, where the consistency-aware knowledge broadcasting module aims to optimize model assignment strategies.
The multi-view knowledge-guided representation learning module leverages fused knowledge from both global and local views to enhance the preservation of local knowledge before and after model exchange.
The mixup-based feature augmentation module aggregates rich information to further increase the diversity of feature spaces, which enables the model to better discriminate complex samples.
arXiv Detail & Related papers (2024-05-30T13:27:30Z)
- Personalized Federated Learning with Feature Alignment and Classifier Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
A common approach in deep neural network based tasks is to employ a shared feature representation and learn a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework that is easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates weights as an agreement on decision boundaries on feature spaces.
We demonstrate that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
- Meta Knowledge Condensation for Federated Learning [65.20774786251683]
Existing federated learning paradigms usually extensively exchange distributed models at a central solver to achieve a more powerful model.
This would incur severe communication burden between a server and multiple clients especially when data distributions are heterogeneous.
Unlike existing paradigms, we introduce an alternative perspective to significantly decrease the communication cost in federated learning.
arXiv Detail & Related papers (2022-09-29T15:07:37Z)
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
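The local-memorization idea can be sketched as interpolating the shared global model's class scores with a nearest-neighbour vote over representations memorised at each client. All names, and the simple linear interpolation below, are assumptions for illustration rather than the paper's exact method.

```python
import numpy as np

def knn_personalized_scores(global_scores, query, bank_feats, bank_labels,
                            n_classes, k=3, lam=0.5):
    """Blend global-model scores with a local k-NN vote (illustrative sketch)."""
    # distance from the query feature to every memorised local feature
    dists = np.linalg.norm(bank_feats - query, axis=1)
    nn = np.argsort(dists)[:k]
    # class distribution among the k nearest memorised examples
    local = np.bincount(bank_labels[nn], minlength=n_classes) / k
    # interpolate local memory with the shared global model's scores
    return lam * local + (1 - lam) * global_scores
```

Each client keeps its own `bank_feats`/`bank_labels`, so personalisation requires no extra communication beyond training the shared model.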
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- Decentralised Person Re-Identification with Selective Knowledge Aggregation [56.40855978874077]
Existing person re-identification (Re-ID) methods mostly follow a centralised learning paradigm which shares all training data to a collection for model learning.
Two recent works have introduced decentralised (federated) Re-ID learning for constructing a globally generalised model (server).
However, these methods poorly address how to adapt the generalised model to maximise its performance on individual client-domain Re-ID tasks.
We present a new Selective Knowledge Aggregation approach to decentralised person Re-ID to optimise the trade-off between model personalisation and generalisation.
arXiv Detail & Related papers (2021-10-21T18:09:53Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
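The shared-representation scheme above can be sketched with a common linear representation and a per-client head; the helper names and the least-squares head fit are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def client_fit_head(rep, X, y):
    """Fit a per-client least-squares head on the frozen shared representation."""
    Z = X @ rep                      # project raw features into the shared space
    head, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return head

def server_average_reps(rep_updates):
    """Average clients' representation updates into the new shared representation."""
    return np.mean(rep_updates, axis=0)
```

Clients alternate between fitting their own head locally and sending only representation updates, so the personal heads never leave the client.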
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
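A hedged sketch of one multi-center aggregation step, under assumed names and a plain Euclidean assignment rule (the paper's actual matching objective may differ): assign each client update to its nearest center, then re-average the members of each cluster.

```python
import numpy as np

# Illustrative multi-center aggregation step; the nearest-center
# assignment rule here is an assumption, not the paper's objective.

def multi_center_step(client_updates, centers):
    """Assign each client update to its nearest center, then re-average clusters."""
    assign = [int(np.argmin([np.linalg.norm(u - c) for c in centers]))
              for u in client_updates]
    new_centers = []
    for j, c in enumerate(centers):
        members = [u for u, a in zip(client_updates, assign) if a == j]
        # keep the old center if no client was matched to it this round
        new_centers.append(np.mean(members, axis=0) if members else c)
    return assign, new_centers
```

Iterating this step jointly refines the user-to-center matching and the multiple global models learned from non-IID data.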
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.