RoHNAS: A Neural Architecture Search Framework with Conjoint
Optimization for Adversarial Robustness and Hardware Efficiency of
Convolutional and Capsule Networks
- URL: http://arxiv.org/abs/2210.05276v1
- Date: Tue, 11 Oct 2022 09:14:56 GMT
- Title: RoHNAS: A Neural Architecture Search Framework with Conjoint
Optimization for Adversarial Robustness and Hardware Efficiency of
Convolutional and Capsule Networks
- Authors: Alberto Marchisio and Vojtech Mrazek and Andrea Massa and Beatrice
Bussolino and Maurizio Martina and Muhammad Shafique
- Abstract summary: RoHNAS is a novel framework that jointly optimizes for the adversarial robustness and hardware efficiency of Deep Neural Networks (DNNs).
To reduce the exploration time, RoHNAS analyzes and selects appropriate values of adversarial perturbation for each dataset to employ in the NAS flow.
- Score: 10.946374356026679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) algorithms aim at finding efficient Deep
Neural Network (DNN) architectures for a given application under given system
constraints. DNNs are computationally-complex as well as vulnerable to
adversarial attacks. In order to address multiple design objectives, we propose
RoHNAS, a novel NAS framework that jointly optimizes for adversarial-robustness
and hardware-efficiency of DNNs executed on specialized hardware accelerators.
Besides the traditional convolutional DNNs, RoHNAS additionally accounts for
complex types of DNNs such as Capsule Networks. To reduce the exploration
time, RoHNAS analyzes and selects appropriate values of adversarial
perturbation for each dataset to employ in the NAS flow. Extensive evaluations
on multi-Graphics Processing Unit (GPU) High Performance Computing (HPC)
nodes provide a set of Pareto-optimal solutions, exposing the tradeoff
between the above-discussed design objectives. For example, a Pareto-optimal
DNN for the CIFAR-10 dataset exhibits 86.07% accuracy, while having an energy
of 38.63 mJ, a memory footprint of 11.85 MiB, and a latency of 4.47 ms.
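To make the joint optimization concrete, the toy Python sketch below filters evaluated architectures down to a Pareto front over the five metrics the abstract reports (clean accuracy, robustness, energy, memory, latency). All names and the robustness placeholder are illustrative assumptions, not the RoHNAS code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    """One evaluated DNN architecture (fields are illustrative, not RoHNAS API)."""
    name: str
    accuracy: float         # clean test accuracy (%)
    robust_accuracy: float  # accuracy under adversarial perturbation (%)
    energy_mJ: float        # estimated energy per inference on the accelerator
    memory_MiB: float       # weight/activation memory footprint
    latency_ms: float       # estimated inference latency

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is no worse than `b` on every objective and strictly better
    on at least one (maximize the accuracies, minimize the hardware costs)."""
    no_worse = (a.accuracy >= b.accuracy and a.robust_accuracy >= b.robust_accuracy
                and a.energy_mJ <= b.energy_mJ and a.memory_MiB <= b.memory_MiB
                and a.latency_ms <= b.latency_ms)
    better = (a.accuracy > b.accuracy or a.robust_accuracy > b.robust_accuracy
              or a.energy_mJ < b.energy_mJ or a.memory_MiB < b.memory_MiB
              or a.latency_ms < b.latency_ms)
    return no_worse and better

def pareto_front(population: List[Candidate]) -> List[Candidate]:
    """Keep only the non-dominated candidates."""
    return [c for c in population
            if not any(dominates(o, c) for o in population if o is not c)]

# The CIFAR-10 Pareto point from the abstract; robust accuracy is a placeholder.
pop = [
    Candidate("rohnas-cifar10", 86.07, 60.0, 38.63, 11.85, 4.47),
    Candidate("bigger-net",     86.00, 55.0, 80.00, 25.00, 9.00),  # dominated
]
print([c.name for c in pareto_front(pop)])  # -> ['rohnas-cifar10']
```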
Related papers
- RNC: Efficient RRAM-aware NAS and Compilation for DNNs on Resource-Constrained Edge Devices [0.30458577208819987]
We aim to develop edge-friendly deep neural networks (DNNs) for accelerators based on resistive random-access memory (RRAM).
We propose an edge compilation and resource-constrained RRAM-aware neural architecture search (NAS) framework to search for optimized neural networks meeting specific hardware constraints.
The model produced by the speed-optimized NAS achieved a 5x-30x speedup.
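A resource-constrained, hardware-aware search of this kind typically discards candidates that exceed the target's budget before spending training time on them. The check below is a minimal sketch under made-up RRAM-style constraints, not the RNC implementation.

```python
# Hypothetical hardware budget for an RRAM-based edge accelerator;
# the constraint names and numbers are illustrative, not from the RNC paper.
BUDGET = {"crossbars": 64, "sram_kib": 512, "latency_ms": 10.0}

def is_feasible(arch_cost: dict, budget: dict = BUDGET) -> bool:
    """Reject a candidate architecture that violates any hardware constraint."""
    return all(arch_cost[k] <= budget[k] for k in budget)

candidates = [
    {"crossbars": 48, "sram_kib": 400, "latency_ms": 6.2},  # fits the budget
    {"crossbars": 96, "sram_kib": 300, "latency_ms": 4.0},  # too many crossbars
]
print([is_feasible(c) for c in candidates])  # -> [True, False]
```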
arXiv Detail & Related papers (2024-09-27T15:35:36Z)
- HASNAS: A Hardware-Aware Spiking Neural Architecture Search Framework for Neuromorphic Compute-in-Memory Systems [6.006032394972252]
Spiking Neural Networks (SNNs) have shown capabilities for solving diverse machine learning tasks with ultra-low-power/energy computation.
We propose HASNAS, a novel hardware-aware spiking neural architecture search framework for neuromorphic CIM systems.
arXiv Detail & Related papers (2024-06-30T09:51:58Z)
- LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z)
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
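For context, a 1-bit CNN stores each weight tensor as a sign pattern plus a real-valued scaling factor. The NumPy sketch below shows that quantization step in the common XNOR-Net style; it is illustrative background, not the DCP-NAS child-parent scheme.

```python
import numpy as np

def binarize_weights(w: np.ndarray):
    """Binarize a real-valued weight tensor to {-1, +1} with a per-tensor
    scaling factor (XNOR-Net-style illustration)."""
    alpha = float(np.mean(np.abs(w)))   # scale minimizing the L2 quantization error
    wb = np.where(w >= 0, 1.0, -1.0)    # 1-bit weights
    return alpha, wb

w = np.random.randn(16, 3, 3, 3)        # a small conv weight tensor
alpha, wb = binarize_weights(w)
print(alpha, np.unique(wb))             # scale plus the values {-1., 1.}
```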
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- HQNAS: Auto CNN deployment framework for joint quantization and architecture search [30.45926484863791]
We propose a novel neural network design framework called Hardware-aware Quantized Neural Architecture Search (HQNAS).
It takes only 4 GPU hours to discover an outstanding NN policy on CIFAR-10.
It also takes only 10% of the GPU time to generate a comparable model on ImageNet.
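A joint quantization/architecture search couples the two decisions in one search space, e.g. a per-layer (operator, bitwidth) pair. The sampler below is a minimal sketch of such an encoding; the operator set, bitwidths, and names are assumptions, not HQNAS's actual space.

```python
import random

# Toy joint search space: each layer picks an operator and a weight bitwidth.
OPS = ["conv3x3", "conv5x5", "dwconv3x3", "skip"]
BITS = [2, 4, 8]

def sample_policy(num_layers: int = 8):
    """Sample one joint architecture/quantization policy."""
    return [(random.choice(OPS), random.choice(BITS)) for _ in range(num_layers)]

random.seed(0)
print(sample_policy()[:3])  # first three (operator, bitwidth) choices
```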
arXiv Detail & Related papers (2022-10-16T08:32:18Z)
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- FLASH: Fast Neural Architecture Search with Hardware Optimization [7.263481020106725]
Neural architecture search (NAS) is a promising technique to design efficient and high-performance deep neural networks (DNNs).
This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform.
arXiv Detail & Related papers (2021-08-01T23:46:48Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models and of statistical models against the roofline model and a refined roofline model.
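The roofline baseline mentioned above is easy to picture: each layer's latency is bounded by either compute throughput or memory bandwidth, and the network estimate is the per-layer sum. The sketch below uses made-up peak numbers and layers; ANNETTE's fitted, stacked models are more refined than this.

```python
# Hypothetical accelerator peaks (illustrative numbers only).
PEAK_MACS_PER_MS = 5e8   # multiply-accumulates per millisecond
BYTES_PER_MS = 2e7       # memory traffic per millisecond

def layer_latency_ms(macs: float, bytes_moved: float) -> float:
    """Roofline-style bound: a layer is compute- or memory-bound."""
    return max(macs / PEAK_MACS_PER_MS, bytes_moved / BYTES_PER_MS)

layers = [(1.2e8, 8.0e5), (3.4e8, 1.6e6), (0.5e8, 4.0e5)]  # (MACs, bytes) per layer
print(f"{sum(layer_latency_ms(m, b) for m, b in layers):.3f} ms")  # -> 1.020 ms
```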
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- NASCaps: A Framework for Neural Architecture Search to Optimize the Accuracy and Hardware Efficiency of Convolutional Capsule Networks [10.946374356026679]
We propose NASCaps, an automated framework for the hardware-aware NAS of different types of Deep Neural Networks (DNNs).
We study the efficacy of deploying a multi-objective Genetic Algorithm (e.g., based on the NSGA-II algorithm).
Our framework is the first to model and support the specialized capsule layers and dynamic routing in the NAS flow.
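The core of an NSGA-II-style selection is fast non-dominated sorting, which partitions candidates into successive Pareto fronts. Below is a minimal self-contained sketch for minimization objectives, not the NASCaps/RoHNAS implementation.

```python
def non_dominated_sort(points):
    """Partition points (tuples of objectives to minimize) into Pareto fronts;
    returns a list of fronts, each a list of indices into `points`."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # indices that point i dominates
    dom_count = [0] * n                    # number of points dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if (all(a <= b for a, b in zip(points[i], points[j]))
                    and any(a < b for a, b in zip(points[i], points[j]))):
                dominated_by[i].append(j)      # i dominates j
            elif (all(b <= a for a, b in zip(points[i], points[j]))
                    and any(b < a for a, b in zip(points[i], points[j]))):
                dom_count[i] += 1              # j dominates i
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Two objectives to minimize, e.g. (error rate, energy in mJ):
print(non_dominated_sort([(0.14, 38.6), (0.14, 80.0), (0.10, 90.0)]))
# -> [[0, 2], [1]]
```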
arXiv Detail & Related papers (2020-08-19T14:29:36Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
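To make "patterns inside coarse-grained structures" concrete, the sketch below prunes each 3x3 kernel to one mask picked from a small pattern library, keeping the surviving positions regular enough for a compiler to exploit. The library and the magnitude-based selection rule are illustrative, not PatDNN's.

```python
import numpy as np

# A tiny library of 3x3 masks, each keeping 4 of 9 weights (illustrative only).
PATTERNS = [
    np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=bool),
    np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool),
    np.array([[0, 0, 0], [1, 1, 0], [1, 1, 0]], dtype=bool),
    np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]], dtype=bool),
]

def prune_kernel(k: np.ndarray) -> np.ndarray:
    """Keep the library pattern that preserves the most weight magnitude."""
    best = max(PATTERNS, key=lambda p: float(np.abs(k[p]).sum()))
    return k * best

rng = np.random.default_rng(0)
print(prune_kernel(rng.standard_normal((3, 3))))  # 4 surviving weights
```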
arXiv Detail & Related papers (2020-01-01T04:52:07Z)