Self-supervised Cross-silo Federated Neural Architecture Search
- URL: http://arxiv.org/abs/2101.11896v1
- Date: Thu, 28 Jan 2021 09:57:30 GMT
- Title: Self-supervised Cross-silo Federated Neural Architecture Search
- Authors: Xinle Liang, Yang Liu, Jiahuan Luo, Yuanqin He, Tianjian Chen, Qiang Yang
- Abstract summary: We present Self-supervised Vertical Federated Neural Architecture Search (SS-VFNAS) for automating Vertical Federated Learning (VFL).
In the proposed framework, each party first conducts NAS using a self-supervised approach to find a locally optimal architecture with its own data.
We demonstrate experimentally that our approach has superior performance, communication efficiency and privacy compared to Federated NAS.
- Score: 13.971827232338716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) provides both model performance and data privacy for
machine learning tasks where samples or features are distributed among
different parties. In the training process of FL, no party has a global view of
data distributions or model architectures of other parties. Thus, manually
designed architectures may not be optimal. In the past, Neural
Architecture Search (NAS) has been applied to FL to address this critical
issue. However, existing Federated NAS approaches require prohibitive
communication and computation effort, as well as the availability of
high-quality labels. In this work, we present Self-supervised Vertical
Federated Neural Architecture Search (SS-VFNAS) for automating FL where
participants hold feature-partitioned data, a common cross-silo scenario called
Vertical Federated Learning (VFL). In the proposed framework, each party first
conducts NAS using a self-supervised approach to find a locally optimal
architecture with its own data. Then, parties collaboratively improve the
locally optimal architecture in a VFL framework with supervision. We demonstrate
experimentally that our approach has superior performance, communication
efficiency and privacy compared to Federated NAS and is capable of generating
high-performance and highly transferable heterogeneous architectures even with
insufficient overlapping samples, providing automation for those parties
without deep learning expertise.
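To make the two-stage protocol concrete, below is a minimal, hypothetical sketch. The names Party, local_ss_nas, and vfl_finetune are illustrative, not the authors' code; a real implementation would run a DARTS-style supernet search per party under a self-supervised (e.g., contrastive) objective in stage one, and exchange only intermediate embeddings and their gradients in stage two.

```python
# Hypothetical sketch of the SS-VFNAS two-stage flow; all names and helper
# functions are illustrative stand-ins, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Party:
    """One silo holding a vertical (feature-partitioned) slice of the data."""
    name: str
    features: List[List[float]]
    arch: Dict[str, str] = field(default_factory=dict)

def local_ss_nas(party: Party, steps: int = 3) -> None:
    """Stage 1: label-free local search. A real system would optimize a
    DARTS-style supernet under a self-supervised (e.g., contrastive) loss."""
    for step in range(steps):
        party.arch[f"edge_{step}"] = "sep_conv_3x3"  # toy architecture decision

def vfl_finetune(parties: List[Party], labels: List[int], rounds: int = 2) -> None:
    """Stage 2: supervised joint refinement. Parties exchange intermediate
    embeddings only; raw features never leave a silo."""
    for _ in range(rounds):
        embeddings = [[sum(row) for row in p.features] for p in parties]  # stand-in forward pass
        # the label holder would compute the supervised loss here and send
        # back gradients w.r.t. each party's embedding (omitted)
        assert len(embeddings[0]) == len(labels)

parties = [Party("bank", [[0.1, 0.2], [0.3, 0.4]]),
           Party("retailer", [[1.0], [2.0]])]
for p in parties:
    local_ss_nas(p)                      # stage 1: self-supervised local NAS
vfl_finetune(parties, labels=[0, 1])     # stage 2: supervised VFL refinement
```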
Related papers
- Handling Data Heterogeneity via Architectural Design for Federated Visual Recognition [16.50490537786593]
We study 19 visual recognition models from five different architectural families on four challenging FL datasets.
Our findings emphasize the importance of architectural design for computer vision tasks in practical scenarios.
arXiv Detail & Related papers (2023-10-23T17:59:16Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
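The quantization ingredient can be illustrated in isolation. The sketch below is a simplification, not the paper's joint transceiver and fronthaul design: it only shows how uplink bit-width trades reconstruction error for fronthaul traffic when embeddings are uniformly quantized before aggregation.

```python
# Toy illustration (not the paper's system): uniformly quantize uplink
# embeddings before aggregation; fewer bits mean less traffic, more error.
import numpy as np

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform quantizer over [lo, hi] with 2**bits levels."""
    levels = 2 ** bits - 1
    xc = np.clip(x, lo, hi)
    return np.round((xc - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

rng = np.random.default_rng(3)
embedding = rng.uniform(-1, 1, size=1000)   # stand-in for a party's embedding
for bits in (2, 4, 8):
    err = np.mean((quantize(embedding, bits) - embedding) ** 2)
    print(f"{bits}-bit uplink: MSE {err:.2e}")
```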
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
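As a rough illustration of this matching idea, the toy sketch below (not the authors' implementation) fits a small synthetic set so its statistics approximate a client's private data; FedDM matches the loss landscape itself, for which the first-moment match here is merely a stand-in.

```python
# Toy sketch of the client-side idea: learn a tiny synthetic set that
# mimics the private data, then share only the synthetic set upstream.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=2.0, scale=1.0, size=(1000, 4))   # client's private data
synth = rng.normal(size=(10, 4))                        # learnable synthetic set

for _ in range(200):
    # match first moments as a crude stand-in for loss-landscape matching
    grad = synth.mean(axis=0) - real.mean(axis=0)
    synth -= 0.5 * grad          # gradient step on the matching objective

# the server would then train on the union of clients' synthetic sets
print(np.allclose(synth.mean(axis=0), real.mean(axis=0), atol=1e-2))
```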
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- FedorAS: Federated Architecture Search under system heterogeneity [7.187123335023895]
Federated learning (FL) has recently gained considerable attention due to its ability to use decentralised data while preserving privacy.
It also poses additional challenges related to the heterogeneity of the participating devices, in terms of their computational capabilities and contributed data.
We design our system, FedorAS, to discover and train promising architectures when dealing with devices of varying capabilities holding non-IID distributed data.
arXiv Detail & Related papers (2022-06-22T17:36:26Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
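A toy sketch of the transfer step follows, assuming linear stand-in models and a public transfer set; the names are hypothetical and this is not the paper's API. The server model is trained to match a weighted ensemble of the small client models' predictions.

```python
# Toy sketch of ensemble knowledge transfer: small client models vote on a
# public set; the larger server model distills from the weighted ensemble.
import numpy as np

rng = np.random.default_rng(2)
public_x = rng.normal(size=(8, 3))                 # unlabeled transfer set

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

client_models = [rng.normal(size=(3, 4)) for _ in range(3)]  # small linear models
weights = np.array([0.5, 0.3, 0.2])                # e.g. data-size weights

ensemble = sum(w * softmax(public_x @ m) for w, m in zip(weights, client_models))

# server "large model" (here just another linear map) fits the ensemble targets
server = rng.normal(size=(3, 4))
for _ in range(100):
    pred = softmax(public_x @ server)
    grad = public_x.T @ (pred - ensemble) / len(public_x)  # soft cross-entropy grad
    server -= 0.5 * grad
```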
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- SPIDER: Searching Personalized Neural Architecture for Federated Learning [17.61748275091843]
Federated learning (FL) assists machine learning when data cannot be shared with a centralized server due to privacy and regulatory restrictions.
Recent advancements in FL use predefined architecture-based learning for all the clients.
We introduce SPIDER, an algorithmic framework that aims to Search Personalized neural architecture for federated learning.
arXiv Detail & Related papers (2021-12-27T23:42:15Z)
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
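The exchange can be sketched with a toy example (not the authors' implementation): two heterogeneous models share only class posteriors on a public seed set, and each would add a distillation term against the peer's posteriors to its local loss.

```python
# Toy sketch of mutual distillation over a shared seed set: only posterior
# matrices cross the wire, so the two parties' models may differ freely.
import numpy as np

rng = np.random.default_rng(1)
seed_set = rng.normal(size=(5, 3))        # shared unlabeled seed examples

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# two "models" standing in for heterogeneous architectures
W_a = rng.normal(size=(3, 4))
W_b = rng.normal(size=(3, 4))

post_a = softmax(seed_set @ W_a)          # party A's posteriors, shared
post_b = softmax(seed_set @ W_b)          # party B's posteriors, shared

# each party would add a KL(peer || self) distillation term to its loss
kl_ab = np.sum(post_a * (np.log(post_a) - np.log(post_b)))
print(f"distillation signal (KL): {kl_ab:.3f}")
```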
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We present the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism that builds a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
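As a toy illustration of this stage structure (not the paper's search code), the sketch below models a network as stages of same-resolution layers and shows how adding layers per stage inflates a crude FLOPs estimate.

```python
# Hypothetical stage-wise network description: each stage is a group of
# layers at one resolution; deeper stages help accuracy but raise cost.
from dataclasses import dataclass
from typing import List

@dataclass
class Stage:
    resolution: int   # feature-map side length within the stage
    num_layers: int   # layers operating at this resolution
    channels: int

def approx_flops(stages: List[Stage]) -> int:
    """Crude per-image estimate: layers * H * W * C^2 * 9 (3x3 convs)."""
    return sum(s.num_layers * s.resolution**2 * s.channels**2 * 9 for s in stages)

net    = [Stage(32, 4, 64), Stage(16, 4, 128), Stage(8, 4, 256)]
deeper = [Stage(32, 8, 64), Stage(16, 8, 128), Stage(8, 8, 256)]
print(approx_flops(net), approx_flops(deeper))  # doubling depth doubles cost
```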
arXiv Detail & Related papers (2020-04-23T14:16:39Z)
- Towards Non-I.I.D. and Invisible Data with FedNAS: Federated Deep Learning via Neural Architecture Search [15.714385295889944]
We propose a Federated NAS (FedNAS) algorithm to help scattered workers collaboratively search for a better architecture with higher accuracy.
Our experiments on a non-IID dataset show that the architecture searched by FedNAS can outperform the manually predefined architecture; a toy sketch of the collaborative search follows below.
arXiv Detail & Related papers (2020-04-18T08:04:44Z)
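A minimal sketch of the FedNAS-style collaboration, assuming DARTS-style architecture logits averaged FedAvg-style across workers; this is illustrative only, and the local bilevel NAS update is stubbed with noise.

```python
# Hypothetical sketch: each worker updates architecture parameters locally;
# the server averages them so workers jointly search one architecture.
import numpy as np

rng = np.random.default_rng(4)
num_workers, num_ops = 4, 5
alpha = np.zeros(num_ops)                # global architecture logits

for _ in range(10):                      # communication rounds
    local_alphas = []
    for _ in range(num_workers):
        a = alpha + 0.1 * rng.normal(size=num_ops)  # stand-in for a worker's
        local_alphas.append(a)                      # local bilevel NAS update
    alpha = np.mean(local_alphas, axis=0)           # server-side averaging

print("selected op:", int(np.argmax(alpha)))
```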
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.