Divide-and-Conquer the NAS puzzle in Resource Constrained Federated
Learning Systems
- URL: http://arxiv.org/abs/2305.07135v1
- Date: Thu, 11 May 2023 20:57:29 GMT
- Title: Divide-and-Conquer the NAS puzzle in Resource Constrained Federated
Learning Systems
- Authors: Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini
Panda
- Abstract summary: We propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.
We show that our approach outperforms several sampling strategies including Hadamard sampling, where the samples are maximally separated.
- Score: 5.53904487910875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a privacy-preserving distributed machine learning
approach geared towards applications in edge devices. However, the problem of
designing custom neural architectures in federated environments has not been tackled
from the perspective of overall system efficiency. In this paper, we propose
DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural
Architecture Search (NAS) in a federated system by systematically sampling the
search space. We propose a novel diversified sampling strategy that balances
exploration and exploitation of the search space by initially maximizing the
distance between the samples and progressively shrinking this distance as the
training progresses. We then perform channel pruning to further reduce the
training complexity at the devices. We show that our approach outperforms
several sampling strategies including Hadamard sampling, where the samples are
maximally separated. We evaluate our method on the CIFAR10, CIFAR100, EMNIST,
and TinyImagenet benchmarks and show a comprehensive analysis of different
aspects of federated learning such as scalability and non-IID data. DC-NAS
achieves near-iso accuracy compared to full-scale federated NAS while using 50%
fewer resources.
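To make the exploration-exploitation schedule concrete, here is a minimal sketch of a shrinking-distance sampler, assuming subnets are encoded as binary masks over supernet components. The Hamming metric, the linear schedule, and the `enc_len // 2` starting threshold are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary architecture encodings."""
    return int(np.sum(a != b))

def sample_diverse_subnets(k, enc_len, round_t, total_rounds,
                           max_tries=200, rng=None):
    """Draw k binary subnet encodings whose pairwise Hamming distance is at
    least a threshold that starts large (exploration) and shrinks to zero
    as federated training progresses (exploitation)."""
    rng = rng or np.random.default_rng()
    # Assumed linear schedule: enc_len // 2 at round 0, zero at the end.
    min_dist = int((enc_len // 2) * (1.0 - round_t / total_rounds))
    samples = [rng.integers(0, 2, enc_len)]
    while len(samples) < k:
        best, best_d = None, -1
        for _ in range(max_tries):
            cand = rng.integers(0, 2, enc_len)
            d = min(hamming(cand, s) for s in samples)
            if d >= min_dist:
                best = cand
                break
            if d > best_d:  # remember the most distant candidate so far
                best, best_d = cand, d
        samples.append(best)  # fall back to the best candidate found
    return samples
```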
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
However, NAS suffers from a key bottleneck: numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
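The pairwise-comparison idea can be sketched as a surrogate that learns which of two architectures is better, rather than regressing absolute accuracy. The difference-of-encodings feature and the logistic-regression classifier below are stand-in assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_pairwise_surrogate(encodings, accuracies):
    """Fit a classifier that predicts, for an ordered pair of architecture
    encodings (a, b), whether a outperforms b; features are encoding diffs."""
    X, y = [], []
    for i, (ei, ai) in enumerate(zip(encodings, accuracies)):
        for j, (ej, aj) in enumerate(zip(encodings, accuracies)):
            if i != j:
                X.append(ei - ej)
                y.append(int(ai > aj))
    return LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

def rank_by_wins(model, candidates):
    """Rank unseen candidates by how many pairwise 'wins' the surrogate
    predicts for each of them against all the others."""
    cands = np.asarray(candidates, dtype=float)
    wins = [model.predict(c - np.delete(cands, i, axis=0)).sum()
            for i, c in enumerate(cands)]
    return np.argsort(wins)[::-1]  # candidate indices, best first
```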
- The devil is in discretization discrepancy. Robustifying Differentiable NAS with Single-Stage Searching Protocol [2.4300749758571905]
Gradient-based methods suffer from discretization error, which can severely damage the process of obtaining the final architecture.
We introduce a novel single-stage searching protocol, which is not reliant on decoding a continuous architecture.
Our results demonstrate that this approach outperforms other DNAS methods by achieving 75.3% in the searching stage on the Cityscapes validation dataset.
arXiv Detail & Related papers (2024-05-26T15:44:53Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resource constraints.
We aim to go one step further in the search for efficiency by explicitly framing the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Towards Self-supervised and Weight-preserving Neural Architecture Search [38.497608743382145]
We propose the self-supervised and weight-preserving neural architecture search (SSWP-NAS) as an extension of the current NAS framework.
Experiments show that the architectures searched by the proposed framework achieve state-of-the-art accuracy on CIFAR-10, CIFAR-100, and ImageNet datasets.
arXiv Detail & Related papers (2022-06-08T18:48:05Z)
- Supernet Training for Federated Image Classification under System Heterogeneity [15.2292571922932]
In this work, we propose a novel framework to consider both scenarios, namely Federation of Supernet Training (FedSup).
It is inspired by how averaging parameters in the model aggregation stage of Federated Learning (FL) is similar to weight-sharing in supernet training.
Under our framework, we present an efficient algorithm (E-FedSup) by sending the sub-model to clients in the broadcast stage for reducing communication costs and training overhead.
arXiv Detail & Related papers (2022-06-03T02:21:01Z)
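A rough sketch of the E-FedSup broadcast step, assuming a slimmable-style supernet where a width-scaled sub-model is obtained by slicing the leading channels of each tensor. The slicing rule below is a simplification (real systems keep stem inputs and classifier outputs intact), not the paper's exact scheme.

```python
import torch

def extract_submodel_state(supernet_state, width_ratio):
    """Build a width-scaled sub-model state dict by slicing the leading
    channels of each supernet tensor (weight-sharing via slicing)."""
    sub = {}
    for name, w in supernet_state.items():
        if w.dim() >= 2:  # conv / linear weights: slice out and in channels
            out_c = max(1, int(w.shape[0] * width_ratio))
            in_c = max(1, int(w.shape[1] * width_ratio))
            sub[name] = w[:out_c, :in_c].clone()
        elif w.dim() == 1:  # biases / norm parameters: slice channels
            sub[name] = w[: max(1, int(w.shape[0] * width_ratio))].clone()
        else:
            sub[name] = w.clone()
    return sub

# Broadcast-stage sketch: each client receives only its sub-model's weights,
# e.g. payload = extract_submodel_state(supernet.state_dict(), width_ratio=0.5)
```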
- Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [117.4281417428145]
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS).
NAS has been extensively studied to automate the discovery of top-performing neural networks, but suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations.
We present a unified framework to understand and accelerate NAS, by disentangling the "TEG" (trainability, expressivity, generalization) characteristics of searched networks.
arXiv Detail & Related papers (2021-08-26T17:52:07Z)
- Weight Divergence Driven Divide-and-Conquer Approach for Optimal Federated Learning from non-IID Data [0.0]
Federated Learning allows models to be trained on data stored in distributed devices without centralizing the training data.
We propose a novel Divide-and-Conquer training methodology that enables the use of the popular FedAvg aggregation algorithm.
arXiv Detail & Related papers (2021-06-28T09:34:20Z)
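For reference, the FedAvg aggregation that this divide-and-conquer methodology builds on reduces to a data-size-weighted average of the clients' weights. This is a generic sketch of the standard update, not the paper's grouping strategy.

```python
import torch

def fedavg(client_states, client_sizes):
    """Standard FedAvg: average client state dicts, each weighted by the
    fraction of training samples the client holds."""
    total = float(sum(client_sizes))
    return {
        key: torch.stack([
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        ]).sum(dim=0)
        for key in client_states[0]
    }
```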
- ES-ENAS: Combining Evolution Strategies with Neural Architecture Search at No Extra Cost for Reinforcement Learning [46.4401207304477]
We introduce ES-ENAS, a simple neural architecture search (NAS) algorithm for the purpose of reinforcement learning (RL) policy design.
We achieve >90% network compression for multiple tasks, which may be of special interest in mobile robotics with limited storage and computational resources.
arXiv Detail & Related papers (2021-01-19T02:19:05Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
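The Dirichlet relaxation can be illustrated in a few lines: mixing weights are drawn with `rsample`, so pathwise gradients reach the learnable concentration parameters. This fragment only shows the sampling step; the quadratic loss is a stand-in for the supernet objective, not DrNAS itself.

```python
import torch
import torch.nn.functional as F

# Learnable concentration parameters over (say) 8 candidate operations.
log_conc = torch.zeros(8, requires_grad=True)

def sample_mixing_weights():
    """Reparameterized (pathwise) sample of architecture mixing weights
    from a Dirichlet whose concentrations are learned."""
    conc = F.softplus(log_conc) + 1e-4  # keep concentrations positive
    return torch.distributions.Dirichlet(conc).rsample()

weights = sample_mixing_weights()  # simplex-valued, differentiable
# mixed = sum(w * op(x) for w, op in zip(weights, candidate_ops))
loss = (weights ** 2).sum()        # stand-in for the supernet loss
loss.backward()                    # gradients flow back to log_conc
```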
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
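A minimal sketch of fitting sampled sub-network performance with a graph network, assuming each architecture arrives as a normalized adjacency matrix plus node features. The two-layer GCN and the mean-pool regression head are generic choices, not the paper's exact model.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    """Two-layer graph convolution (H <- relu(A_norm @ H @ W)) followed by
    mean pooling and a linear head that regresses a performance score."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.w1 = nn.Linear(feat_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, adj, feats):
        # adj: (n, n) normalized adjacency of the cell DAG; feats: (n, d)
        h = torch.relu(self.w1(adj @ feats))
        h = torch.relu(self.w2(adj @ h))
        return self.out(h.mean(dim=0))  # pooled graph-level score

# Trained with MSE against measured sub-network accuracies; predictor quality
# is then judged by rank correlation (e.g. scipy.stats.kendalltau) on held-out
# architectures.
```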