XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural
Architectures for Non-ideal Xbars
- URL: http://arxiv.org/abs/2302.07769v2
- Date: Sat, 15 Apr 2023 18:51:32 GMT
- Title: XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural
Architectures for Non-ideal Xbars
- Authors: Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda
- Abstract summary: This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS.
It searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.
Experiments on crossbars with benchmark datasets show up to 8-16% improvement in the adversarial robustness of the searched Subnets.
- Score: 2.222917681321253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compute In-Memory platforms such as memristive crossbars are gaining focus as
they facilitate acceleration of Deep Neural Networks (DNNs) with high area and
compute-efficiencies. However, the intrinsic non-idealities associated with the
analog nature of computing in crossbars limit the performance of the deployed
DNNs. Furthermore, DNNs are shown to be vulnerable to adversarial attacks
leading to severe security threats in their large-scale deployment. Thus,
finding adversarially robust DNN architectures for non-ideal crossbars is
critical to the safe and secure deployment of DNNs on the edge. This work
proposes a two-phase algorithm-hardware co-optimization approach called
XploreNAS that searches for hardware-efficient & adversarially robust neural
architectures for non-ideal crossbar platforms. We use the one-shot Neural
Architecture Search (NAS) approach to train a large Supernet with
crossbar-awareness and sample adversarially robust Subnets therefrom,
maintaining competitive hardware-efficiency. Our experiments on crossbars with
benchmark datasets (SVHN, CIFAR10 & CIFAR100) show up to ~8-16% improvement in
the adversarial robustness of the searched Subnets against a baseline ResNet-18
model subjected to crossbar-aware adversarial training. We benchmark our robust
Subnets for Energy-Delay-Area-Products (EDAPs) using the Neurosim tool and find
that with additional hardware-efficiency driven optimizations, the Subnets
attain ~1.5-1.6x lower EDAPs than the ResNet-18 baseline.
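Below is a minimal, hedged sketch (in PyTorch) of the two-phase idea described in the abstract: a one-shot supernet whose candidate operations see a toy crossbar-noise model during adversarial training, from which robust subnets are later sampled. The noise model, operation set, FGSM attack, and hyperparameters are illustrative assumptions, not the authors' XploreNAS implementation.

```python
# Illustrative sketch only: toy crossbar-noise model, toy op set, FGSM attack.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossbarNoise(nn.Module):
    """Stand-in for crossbar non-idealities: multiplicative Gaussian noise
    on layer outputs during training (assumed noise model)."""
    def __init__(self, sigma=0.05):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x * (1 + self.sigma * torch.randn_like(x)) if self.training else x

class MixedOp(nn.Module):
    """One-shot NAS cell: candidate ops share training; architecture weights
    (alpha) later decide which op the sampled subnet keeps."""
    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.Conv2d(ch, ch, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))
        self.noise = CrossbarNoise()

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return self.noise(sum(wi * op(x) for wi, op in zip(w, self.ops)))

class TinySupernet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, padding=1)
        self.cells = nn.Sequential(MixedOp(16), MixedOp(16))
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = self.cells(x)
        return self.head(F.adaptive_avg_pool2d(x, 1).flatten(1))

def fgsm(model, x, y, eps=8 / 255):
    """Single-step adversarial example used for robustness-aware supernet training."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Phase 1 (sketch): train shared weights and alphas on adversarial inputs,
# so the search is both crossbar-aware and robustness-aware.
model = TinySupernet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y)
opt.zero_grad()
F.cross_entropy(model(x_adv), y).backward()
opt.step()

# Phase 2 (sketch): sample a subnet by taking argmax over each cell's alpha,
# then fine-tune it and benchmark hardware cost (e.g. EDAP) separately.
chosen_ops = [cell.alpha.argmax().item() for cell in model.cells]
```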
Related papers
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit
CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- RoHNAS: A Neural Architecture Search Framework with Conjoint
Optimization for Adversarial Robustness and Hardware Efficiency of
Convolutional and Capsule Networks [10.946374356026679]
RoHNAS is a novel framework that jointly optimizes for the adversarial robustness and hardware efficiency of Deep Neural Networks (DNNs).
To reduce exploration time, RoHNAS analyzes and selects appropriate adversarial-perturbation values for each dataset to employ in the NAS flow.
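As a rough illustration of such per-dataset selection, the sketch below sweeps candidate L-inf perturbation strengths and keeps the largest one whose accuracy drop stays under a budget. The candidate values, the budget, and the evaluate() helper are hypothetical, not RoHNAS's actual selection rule.

```python
# Hypothetical per-dataset epsilon selection: keep the largest epsilon whose
# accuracy drop on a reference model stays within a fixed budget.
def select_epsilon(evaluate, candidates=(2/255, 4/255, 8/255), max_drop=0.15):
    clean_acc = evaluate(eps=0.0)          # accuracy without perturbation
    chosen = candidates[0]
    for eps in candidates:
        if clean_acc - evaluate(eps=eps) <= max_drop:
            chosen = eps
    return chosen

# Example with a dummy evaluator whose accuracy degrades linearly with eps:
print(select_epsilon(lambda eps: 0.9 - 5.0 * eps))
```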
arXiv Detail & Related papers (2022-10-11T09:14:56Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
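As a concrete illustration of interval reachability, here is a minimal sketch of interval bound propagation through a single affine layer. This is standard IBP for an explicit layer, a simplification of the implicit-network analysis the paper performs; the ibp_linear helper is hypothetical.

```python
# Propagate an elementwise input box [x_lo, x_hi] through y = W x + b.
import torch

def ibp_linear(W, b, x_lo, x_hi):
    center = (x_lo + x_hi) / 2
    radius = (x_hi - x_lo) / 2
    y_center = W @ center + b
    y_radius = W.abs() @ radius      # worst-case growth uses |W|
    return y_center - y_radius, y_center + y_radius

W, b = torch.randn(4, 3), torch.zeros(4)
x = torch.randn(3)
lo, hi = ibp_linear(W, b, x - 0.1, x + 0.1)   # eps = 0.1 L-inf ball (assumed)
assert (lo <= hi).all()
```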
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and the hardware deployment on FPGA validate the great potential of SNNs.
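For context, here is a minimal sketch of binary weight quantization in the convolutional kernel space (sign plus per-kernel scale, XNOR-Net-style). The sub-bit step of restricting kernels to a learned pattern subset is only indicated in a comment; the binarize_kernels helper is an illustrative assumption, not the authors' implementation.

```python
import torch

def binarize_kernels(weight):
    """weight: (out_ch, in_ch, k, k). Returns sign(W) and a per-output-channel
    scale alpha = mean(|W|), so W is approximated by alpha * sign(W)."""
    alpha = weight.abs().mean(dim=(1, 2, 3), keepdim=True)
    return torch.sign(weight), alpha

w = torch.randn(16, 8, 3, 3)
b, alpha = binarize_kernels(w)
w_hat = alpha * b          # binary approximation of the full-precision kernels
# A sub-bit scheme would further restrict each 3x3 binary kernel to one of a
# small learned subset of patterns, so each kernel costs fewer than 9 bits.
```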
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- On the Noise Stability and Robustness of Adversarially Trained Networks
on NVM Crossbars [6.506883928959601]
We study the design of robust Deep Neural Networks (DNNs) through the amalgamation of adversarial training and intrinsic robustness of NVM crossbar-based analog hardware.
Our results indicate that implementing adversarially trained networks on analog hardware requires careful calibration between hardware non-idealities and $\epsilon_{train}$ for optimum robustness and performance.
arXiv Detail & Related papers (2021-09-19T04:59:39Z)
- NAX: Co-Designing Neural Network and Hardware Architecture for
Memristive Xbar based Computing Systems [7.481928921197249]
In-Memory Computing (IMC) hardware using Memristive Crossbar Arrays (MCAs) is gaining popularity for accelerating Deep Neural Networks (DNNs).
We propose NAX -- an efficient neural architecture search engine that co-designs neural network and IMC based hardware architecture.
arXiv Detail & Related papers (2021-06-23T02:27:00Z)
- Efficiency-driven Hardware Optimization for Adversarially Robust Neural
Networks [3.125321230840342]
We focus on how to address adversarial robustness for Deep Neural Networks (DNNs) through efficiency-driven hardware optimizations.
One such approach uses approximate digital CMOS memories with hybrid 6T-8T cells that enable supply-voltage (Vdd) scaling, yielding low-power operation.
Another memory optimization approach involves memristive crossbars that perform Matrix-Vector Multiplications (MVMs) efficiently with low energy and area requirements.
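The sketch below illustrates an analog crossbar MVM with a toy non-ideality model; the conductance range, Gaussian variation, and the crossbar_mvm helper are assumptions for illustration, not a calibrated device model.

```python
import torch

def crossbar_mvm(weight, x, g_min=1e-6, g_max=1e-4, sigma=0.02):
    """Map weights to conductances in [g_min, g_max], perturb them, and
    compute the dot products; then rescale back to the weight domain."""
    w_min, w_max = weight.min(), weight.max()
    g = g_min + (weight - w_min) / (w_max - w_min) * (g_max - g_min)
    g_noisy = g * (1.0 + sigma * torch.randn_like(g))       # device variation
    w_noisy = w_min + (g_noisy - g_min) / (g_max - g_min) * (w_max - w_min)
    return w_noisy @ x

W, x = torch.randn(64, 128), torch.rand(128)
y_ideal, y_analog = W @ x, crossbar_mvm(W, x)
print((y_ideal - y_analog).abs().mean())   # error introduced by non-idealities
```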
arXiv Detail & Related papers (2021-05-09T19:26:25Z)
- BossNAS: Exploring Hybrid CNN-transformers with Block-wisely
Self-supervised Neural Architecture Search [100.28980854978768]
We present Block-wisely Self-supervised Neural Architecture Search (BossNAS).
We factorize the search space into blocks and utilize a novel self-supervised training scheme, named ensemble bootstrapping, to train each block separately.
We also present HyTra search space, a fabric-like hybrid CNN-transformer search space with searchable down-sampling positions.
arXiv Detail & Related papers (2021-03-23T10:05:58Z)
- Rethinking Non-idealities in Memristive Crossbars for Adversarial
Robustness in Neural Networks [2.729253370269413]
Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks.
Crossbar non-idealities have conventionally been viewed as undesirable since they cause errors in performing MVMs.
We show that the intrinsic hardware non-idealities yield adversarial robustness to the mapped DNNs without any additional optimization.
arXiv Detail & Related papers (2020-08-25T22:45:34Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with
Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to re-gain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
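As an illustration of pattern-based kernel pruning, the sketch below keeps, for each 3x3 kernel, only the entries allowed by its best-matching mask from a small pattern set. The patterns and the pattern_prune helper are illustrative assumptions, not PatDNN's actual pattern library or compiler pipeline.

```python
import torch

patterns = torch.tensor([                      # four example 4-entry masks
    [[1, 1, 0], [1, 1, 0], [0, 0, 0]],
    [[0, 1, 1], [0, 1, 1], [0, 0, 0]],
    [[0, 0, 0], [1, 1, 0], [1, 1, 0]],
    [[0, 0, 0], [0, 1, 1], [0, 1, 1]],
], dtype=torch.float32)                        # shape (4, 3, 3)

def pattern_prune(weight):
    """weight: (out_ch, in_ch, 3, 3). Pick, per kernel, the pattern that
    preserves the most L1 magnitude, and zero out the other entries."""
    mag = weight.abs().unsqueeze(2)            # (out, in, 1, 3, 3)
    kept = (mag * patterns).sum(dim=(-1, -2))  # (out, in, num_patterns)
    best = kept.argmax(dim=-1)                 # best pattern index per kernel
    return weight * patterns[best]             # mask of shape (out, in, 3, 3)

w = torch.randn(8, 4, 3, 3)
w_pruned = pattern_prune(w)
```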
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.