FDNAS: Improving Data Privacy and Model Diversity in AutoML
- URL: http://arxiv.org/abs/2011.03372v1
- Date: Fri, 6 Nov 2020 14:13:42 GMT
- Title: FDNAS: Improving Data Privacy and Model Diversity in AutoML
- Authors: Chunhui Zhang, Yongyuan Liang, Xiaoming Yuan, and Lei Cheng
- Abstract summary: We propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows hardware-aware NAS from decentralized non-IID data of clients.
To further adapt to the various data distributions of clients, inspired by meta-learning, a Cluster Federated Direct Neural Architecture Search (CFDNAS) framework is proposed to achieve client-aware NAS.
- Score: 7.402044070683503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To prevent the leakage of private information while enabling automated
machine intelligence, there is an emerging trend to integrate federated
learning and Neural Architecture Search (NAS). Although promising, the coupling
of difficulties from the two paradigms makes algorithm development quite
challenging. In particular, how to efficiently search for the optimal neural
architecture directly from the massive non-IID data of clients in a federated
manner remains a hard nut to crack. To tackle this challenge, in this paper, by
leveraging advances in proxy-less NAS, we propose a Federated Direct Neural
Architecture Search (FDNAS) framework that allows hardware-aware NAS from
decentralized non-IID data of clients. To further adapt to the various data
distributions of clients, inspired by meta-learning, a Cluster Federated Direct
Neural Architecture Search (CFDNAS) framework is proposed to achieve
client-aware NAS, in the sense that each client can learn a tailored deep
learning model for its particular data distribution. Extensive experiments on
real-world non-IID datasets show state-of-the-art accuracy-efficiency
trade-offs for various hardware and data distributions of clients. Our code
will be released publicly upon paper acceptance.
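Since the authors' code is not yet released, the following is a minimal PyTorch-style sketch of the general pattern the abstract describes, not the paper's implementation: each client updates both the shared supernet weights and the architecture parameters on its local non-IID data, and the server aggregates both FedAvg-style. The names here (fed_avg, local_step, num_samples) are hypothetical.
```python
import copy

def fed_avg(state_dicts, sample_counts):
    """FedAvg-style weighted average of client state dicts."""
    total = float(sum(sample_counts))
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(state_dicts, sample_counts)
        )
    return avg

def fdnas_round(supernet, arch_params, clients):
    """One hypothetical FDNAS communication round.

    Every client updates both the shared supernet weights and the
    architecture parameters on its own non-IID data; the server then
    aggregates both, so the architecture is searched directly from
    decentralized data without ever collecting it centrally.
    """
    weight_states, arch_states, counts = [], [], []
    for client in clients:
        # client.local_step is a hypothetical method: a few epochs of
        # local training of weights and architecture parameters.
        w_state, a_state = client.local_step(
            copy.deepcopy(supernet.state_dict()),
            copy.deepcopy(arch_params.state_dict()),
        )
        weight_states.append(w_state)
        arch_states.append(a_state)
        counts.append(client.num_samples)
    supernet.load_state_dict(fed_avg(weight_states, counts))
    arch_params.load_state_dict(fed_avg(arch_states, counts))
```
Under the same assumptions, CFDNAS would run such rounds separately for each cluster of clients with similar data distributions, so each cluster converges to its own tailored architecture.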
Related papers
- Fair Differentiable Neural Network Architecture Search for Long-Tailed Data with Self-Supervised Learning [0.0]
This paper explores improving the search and training performance of NAS on long-tailed datasets.
We first discuss related work on NAS and deep learning methods for long-tailed datasets.
Then, we focus on an existing work, called SSF-NAS, which integrates self-supervised learning and fair differentiable NAS.
Finally, we conduct a series of experiments on the CIFAR10-LT dataset for performance evaluation.
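For context, a long-tailed benchmark such as CIFAR10-LT is commonly built by subsampling each class with exponentially decaying counts; the sketch below is an illustration of that usual recipe, not code from the paper. The imbalance factor is the ratio of the largest to the smallest class count.
```python
def long_tailed_counts(num_classes=10, max_count=5000, imbalance=100):
    """Per-class sample counts with exponential decay: class 0 keeps
    max_count samples, the last class keeps max_count / imbalance."""
    return [
        int(max_count * (1.0 / imbalance) ** (c / (num_classes - 1)))
        for c in range(num_classes)
    ]

print(long_tailed_counts())  # [5000, 2997, 1796, ..., 83, 50]
```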
arXiv Detail & Related papers (2024-06-19T12:39:02Z)
- Insights from the Use of Previously Unseen Neural Architecture Search Datasets [6.239015118429603]
We present eight new datasets created for a series of NAS Challenges: AddNIST, Language, MultNIST, CIFARTile, Gutenberg, Isabella, GeoClassing, and Chesseract.
These datasets and challenges are developed to direct attention to issues in NAS development and to encourage authors to consider how their models will perform on datasets unknown to them at development time.
arXiv Detail & Related papers (2024-04-02T16:48:34Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets [55.2118691522524]
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture.
We propose a distillation-aware meta accuracy prediction model, DaSS (Distillation-aware Student Search), which can predict a given architecture's final performance on a dataset.
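As a rough sketch of what such a meta predictor can look like (a generic stand-in under assumed encodings, not the DaSS architecture): it consumes an architecture encoding plus a dataset descriptor and regresses the final accuracy, so candidate students can be ranked without training them.
```python
import torch
import torch.nn as nn

class MetaAccuracyPredictor(nn.Module):
    """Hypothetical meta predictor: embeds an architecture encoding
    together with a dataset descriptor and regresses the final
    (post-distillation) accuracy of the candidate student."""
    def __init__(self, arch_dim, data_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim + data_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, arch_enc, data_enc):
        # Predicted accuracy in [0, 1] per (architecture, dataset) pair.
        joint = torch.cat([arch_enc, data_enc], dim=-1)
        return torch.sigmoid(self.mlp(joint)).squeeze(-1)
```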
arXiv Detail & Related papers (2023-05-26T14:00:35Z)
- DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models [56.584561770857306]
We propose a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG.
Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them.
We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS.
When integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset.
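To make the directed-graph view concrete, the sketch below encodes a NAS-Bench-style cell as an upper-triangular adjacency matrix plus per-node operation labels; this is a common encoding that a graph diffusion model could operate on, not necessarily DiffusionNAG's exact one.
```python
import numpy as np

OPS = ["conv3x3", "conv1x1", "maxpool", "skip"]

def random_cell(num_nodes=5, seed=0):
    """A cell as a directed graph: a strictly upper-triangular
    adjacency matrix (edges only flow forward, so the graph is
    acyclic) plus one operation label per node."""
    rng = np.random.default_rng(seed)
    adjacency = np.triu(rng.integers(0, 2, size=(num_nodes, num_nodes)), k=1)
    ops = rng.integers(0, len(OPS), size=num_nodes)
    return adjacency, ops
```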
arXiv Detail & Related papers (2023-05-26T13:58:18Z)
- UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z)
- Towards Tailored Models on Private AIoT Devices: Federated Direct Neural Architecture Search [22.69123714900226]
We propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows for hardware-friendly NAS from non-IID data across devices.
Experiments on non-IID datasets have shown the state-of-the-art accuracy-efficiency trade-offs achieved by the proposed solution.
arXiv Detail & Related papers (2022-02-23T13:10:01Z)
- Direct Federated Neural Architecture Search [0.0]
We present an effective, hardware-agnostic, and computationally lightweight approach for direct federated NAS: a one-stage method that searches for ready-to-deploy neural network models.
Our results show an order of magnitude reduction in resource consumption while edging out prior art in accuracy.
arXiv Detail & Related papers (2020-10-13T08:11:35Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce the huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS.
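The standard trick that makes such binarized networks trainable is sign() binarization with a straight-through gradient estimator; the sketch below shows that generic building block, which is an assumption on our part rather than BNAS's exact scheme.
```python
import torch

class SignSTE(torch.autograd.Function):
    """Weight binarization via sign(), with a straight-through
    gradient estimator so the full-precision weights still receive
    useful gradients during search and training."""
    @staticmethod
    def forward(ctx, weight):
        ctx.save_for_backward(weight)
        return torch.sign(weight)

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # Hard-tanh clip: gradients pass through only where |w| <= 1.
        return grad_output * (weight.abs() <= 1).float()

binarize = SignSTE.apply  # binarize(w) returns sign-valued weights
```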
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
- Federated Neural Architecture Search [19.573780215917477]
We propose incorporating automatic neural architecture search into decentralized training, as a new DNN training paradigm called Federated Neural Architecture Search.
We present FedNAS, a highly optimized framework for efficient federated NAS.
Tested on large-scale datasets and typical CNN architectures, FedNAS achieves model accuracy comparable to state-of-the-art NAS algorithms.
arXiv Detail & Related papers (2020-02-15T10:01:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences of its use.