Towards Tailored Models on Private AIoT Devices: Federated Direct Neural
Architecture Search
- URL: http://arxiv.org/abs/2202.11490v1
- Date: Wed, 23 Feb 2022 13:10:01 GMT
- Title: Towards Tailored Models on Private AIoT Devices: Federated Direct Neural
Architecture Search
- Authors: Chunhui Zhang, Xiaoming Yuan, Qianyun Zhang, Guangxu Zhu, Lei Cheng,
and Ning Zhang
- Abstract summary: We propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows for hardware-friendly NAS from non-IID data across devices.
Experiments on non-IID datasets demonstrate the state-of-the-art accuracy-efficiency trade-offs achieved by the proposed solution.
- Score: 22.69123714900226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks often encounter stringent resource constraints when
deployed on edge devices. To tackle these problems with less human effort,
automated machine learning has become popular for finding neural architectures
that fit diverse Artificial Intelligence of Things (AIoT) scenarios. Recently,
to prevent the leakage of private information while enabling automated machine
intelligence, there is an emerging trend to integrate federated learning and
neural architecture search (NAS). Although promising as it may seem, the
coupling of difficulties from both fields makes algorithm development quite
challenging. In particular, how to efficiently search for the optimal neural
architecture directly from massive non-independent and identically distributed
(non-IID) data among AIoT devices in a federated manner remains an open
problem. In this paper, to tackle this challenge, we leverage the advances in
ProxylessNAS and propose a Federated Direct Neural Architecture Search (FDNAS)
framework that allows for hardware-friendly NAS from non-IID data across
devices. To further adapt to both various data distributions and different
types of devices with heterogeneous embedded hardware platforms, inspired by
meta-learning, a Cluster Federated Direct Neural Architecture Search (CFDNAS)
framework is proposed to achieve device-aware NAS, in the sense that each
device can learn a tailored deep learning model for its particular data
distribution and hardware constraint. Extensive experiments on non-IID
datasets show the state-of-the-art accuracy-efficiency trade-offs achieved by
the proposed solution in the presence of both data and device heterogeneity.
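Since the abstract describes the mechanism only at a high level, the following is a minimal sketch of the direct federated NAS idea it outlines: each client trains a shared supernet's weights together with per-layer architecture logits on its local non-IID data, and a server aggregates both by federated averaging. For brevity the sketch uses a DARTS-style softmax relaxation over candidate operations, whereas ProxylessNAS binarizes path gates; all names (MixedOp, SuperNet, local_step, fedavg, run_round) are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: federated search over a weight-sharing supernet.
# Clients optimize weights and architecture logits jointly here; the actual
# FDNAS procedure may schedule these updates differently.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable layer: softmax-weighted mix of candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 conv candidate
            nn.Conv2d(channels, channels, 5, padding=2),  # 5x5 conv candidate
            nn.Identity(),                                # skip candidate
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch logits

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class SuperNet(nn.Module):
    """Tiny searchable backbone: stem, two mixed layers, linear classifier."""
    def __init__(self, channels=8, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.cells = nn.Sequential(MixedOp(channels), MixedOp(channels))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        h = self.cells(self.stem(x))
        return self.head(h.mean(dim=(2, 3)))  # global average pooling

def local_step(model, loader, lr=0.01, epochs=1):
    """Client update: train weights and architecture logits on local data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg(states, sizes):
    """Server aggregation: dataset-size-weighted average of client states."""
    total = float(sum(sizes))
    return {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
            for k in states[0]}

def run_round(server_model, client_loaders):
    """One federated round: broadcast, local training, then aggregation."""
    states, sizes = [], []
    for loader in client_loaders:
        client = copy.deepcopy(server_model)   # broadcast global supernet
        states.append(local_step(client, loader))
        sizes.append(len(loader.dataset))
    server_model.load_state_dict(fedavg(states, sizes))
```

On this reading, CFDNAS would additionally group clients first (for example, by the similarity of their local updates) and run run_round per cluster, so that each cluster converges to an architecture matched to its own data distribution and hardware.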
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses these constraints by shifting data analysis to the edge.
Existing methods, however, struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework that jointly optimizes the neural network architecture and its edge deployment.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models using distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-space of the full search space.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z) - Multi-objective Differentiable Neural Architecture Search [58.67218773054753]
We propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics (a generic sketch of such a hardware-penalized objective appears after this list).
Our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets.
arXiv Detail & Related papers (2024-02-28T10:09:04Z) - IDENAS: Internal Dependency Exploration for Neural Architecture Search [0.0]
Internal dependency-based exploration for Neural Architecture Search (NAS) and feature selection has emerged as a promising approach.
This research proposes IDENAS, an Internal Dependency-based Exploration for Neural Architecture Search, integrating NAS with feature selection.
The methodology explores internal dependencies in the complete parameter space for classification tasks involving both 1D sensor and 2D image data.
arXiv Detail & Related papers (2023-10-26T08:58:29Z) - Ensemble Learning based Anomaly Detection for IoT Cybersecurity via
Bayesian Hyperparameters Sensitivity Analysis [2.3226893628361682]
The Internet of Things (IoT) connects billions of intelligent devices across the globe, each capable of communicating with other connected devices.
Data collected by IoT devices contain a tremendous amount of information useful for anomaly detection.
In this paper, we present a study on using ensemble machine learning methods for enhancing IoT cybersecurity via anomaly detection.
arXiv Detail & Related papers (2023-07-20T05:23:49Z) - Search-time Efficient Device Constraints-Aware Neural Architecture
Search [6.527454079441765]
Deep learning models for tasks such as computer vision and natural language processing can be computationally expensive and memory-intensive.
We automate the construction of task-specific deep learning architectures optimized for device constraints through Neural Architecture Search (NAS).
We present DCA-NAS, a principled method of fast neural network architecture search that incorporates edge-device constraints.
arXiv Detail & Related papers (2023-07-10T09:52:28Z) - FedorAS: Federated Architecture Search under system heterogeneity [7.187123335023895]
Federated learning (FL) has recently gained considerable attention due to its ability to use decentralised data while preserving privacy.
It also poses additional challenges related to the heterogeneity of the participating devices, in terms of their computational capabilities and contributed data.
We design our system, FedorAS, to discover and train promising architectures when dealing with devices of varying capabilities holding non-IID distributed data.
arXiv Detail & Related papers (2022-06-22T17:36:26Z) - D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most popular Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel approach that nests several neural networks at the cell level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z) - Rethinking Architecture Design for Tackling Data Heterogeneity in
Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z) - FDNAS: Improving Data Privacy and Model Diversity in AutoML [7.402044070683503]
We propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows hardware-aware NAS from decentralized non-IID data of clients.
To further adapt for various data distributions of clients, inspired by meta-learning, a cluster Federated Direct Neural Architecture Search (CFDNAS) framework is proposed to achieve client-aware NAS.
arXiv Detail & Related papers (2020-11-06T14:13:42Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
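Several entries above (the multi-objective differentiable NAS, DCA-NAS, and MS-RANAS papers) revolve around the same accuracy-versus-hardware-cost objective that FDNAS also targets. As a hedged illustration of that shared idea, not any paper's actual code, the sketch below adds a differentiable latency penalty to the task loss: each mixed layer's expected latency is the architecture-softmax-weighted sum of a per-operation latency table measured on the target device. The names, signatures, and numbers are all hypothetical.

```python
# Hypothetical sketch of a latency-penalized differentiable NAS objective;
# the function names and example numbers are assumptions, not the API of
# any paper listed above.
import torch
import torch.nn.functional as F

def expected_latency(alpha, op_latency_ms):
    """Expected per-layer latency: softmax(arch logits) dotted with a
    per-operation latency table measured on the target device."""
    return (F.softmax(alpha, dim=0) * op_latency_ms).sum()

def hardware_aware_loss(task_loss, alphas, latency_tables, lam=0.1):
    """Total objective: task loss plus a weighted expected-latency penalty."""
    latency = sum(expected_latency(a, t)
                  for a, t in zip(alphas, latency_tables))
    return task_loss + lam * latency

# Usage: four layers, three candidate ops each, with made-up latencies (ms).
alphas = [torch.zeros(3, requires_grad=True) for _ in range(4)]
tables = [torch.tensor([1.8, 3.2, 0.1]) for _ in range(4)]
loss = hardware_aware_loss(torch.tensor(0.9), alphas, tables, lam=0.05)
loss.backward()  # gradients also flow into the architecture logits
```

Increasing lam biases the search toward operations that are cheap on the measured device; setting it to zero recovers a purely accuracy-driven search.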