Adversarially Robust and Explainable Model Compression with On-Device
Personalization for Text Classification
- URL: http://arxiv.org/abs/2101.05624v3
- Date: Wed, 20 Jan 2021 13:18:25 GMT
- Title: Adversarially Robust and Explainable Model Compression with On-Device
Personalization for Text Classification
- Authors: Yao Qiang, Supriya Tumkur Suresh Kumar, Marco Brocanelli and Dongxiao
Zhu
- Abstract summary: On-device Deep Neural Networks (DNNs) have recently gained more attention due to the increasing computing power of mobile devices and the growing number of applications in Computer Vision (CV) and Natural Language Processing (NLP).
In NLP applications, although model compression has seen initial success, there are at least three major challenges yet to be addressed: adversarial robustness, explainability, and personalization.
Here we attempt to tackle these challenges by designing a new training scheme for model compression and adversarial robustness, including the optimization of an explainable feature mapping objective.
The resulting compressed model is personalized using on-device private training data via fine-tuning.
- Score: 4.805959718658541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-device Deep Neural Networks (DNNs) have recently gained more attention due
to the increasing computing power of mobile devices and the growing number of
applications in Computer Vision (CV), Natural Language Processing (NLP), and
the Internet of Things (IoT). Unfortunately, the existing efficient convolutional
neural network (CNN) architectures designed for CV tasks are not directly
applicable to NLP tasks and the tiny Recurrent Neural Network (RNN)
architectures have been designed primarily for IoT applications. In NLP
applications, although model compression has seen initial success in on-device
text classification, there are at least three major challenges yet to be
addressed: adversarial robustness, explainability, and personalization. Here we
attempt to tackle these challenges by designing a new training scheme for model
compression and adversarial robustness, including the optimization of an
explainable feature mapping objective, a knowledge distillation objective, and
an adversarial robustness objective. The resulting compressed model is
personalized using on-device private training data via fine-tuning. We perform
extensive experiments to compare our approach with both compact RNN (e.g.,
FastGRNN) and compressed RNN (e.g., PRADO) architectures in both natural and
adversarial NLP test settings.
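To make the training scheme concrete, here is a minimal sketch of how the three objectives described in the abstract (knowledge distillation, adversarial robustness, and explainable feature mapping) could be combined for a teacher/student pair of text classifiers. The model interfaces (student, teacher, student.classify, the projection module proj), the loss weights, and the FGSM-style embedding perturbation are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: combines a knowledge distillation term, an
# adversarial robustness term, and an explainable feature mapping term,
# as described in the abstract. Loss weights, temperature, and the
# FGSM-style perturbation are assumptions made for the example.

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soft-target KL divergence between teacher and student predictions.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

def adversarial_loss(student, embeddings, labels, eps=0.01):
    # FGSM-style perturbation in embedding space (a common choice for text,
    # where discrete tokens cannot be perturbed directly).
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(student.classify(embeddings), labels)
    grad, = torch.autograd.grad(clean_loss, embeddings)
    adv_embeddings = embeddings + eps * grad.sign()
    return F.cross_entropy(student.classify(adv_embeddings), labels)

def feature_mapping_loss(student_features, teacher_features, proj):
    # Align student features with projected teacher features so the
    # compressed model inherits explainable intermediate representations.
    return F.mse_loss(student_features, proj(teacher_features))

def total_loss(batch, student, teacher, proj, alpha=0.5, beta=0.3, gamma=0.2):
    tokens, labels = batch
    s_logits, s_feat, s_emb = student(tokens)   # hypothetical model interface
    with torch.no_grad():
        t_logits, t_feat = teacher(tokens)
    return (F.cross_entropy(s_logits, labels)
            + alpha * distillation_loss(s_logits, t_logits)
            + beta * adversarial_loss(student, s_emb, labels)
            + gamma * feature_mapping_loss(s_feat, t_feat, proj))
```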
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses the computational constraints of IoVT devices by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework that jointly optimizes the neural network architecture and its edge deployment.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
We introduce an end-to-end, biologically inspired semantic segmentation approach that combines Spiking Neural Networks with event cameras.
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture built on Parametric Leaky Integrate-and-Fire neurons.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of MIoU.
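The Parametric Leaky Integrate-and-Fire neurons mentioned above keep a decaying membrane potential whose leak is itself learnable. A minimal sketch of that update rule follows; the sigmoid reparameterization, hard reset, and threshold value are assumptions rather than details from the paper.

```python
import torch
import torch.nn as nn

class ParametricLIF(nn.Module):
    """Sketch of a parametric leaky integrate-and-fire neuron: the membrane
    decay is a learnable parameter instead of a fixed constant. The sigmoid
    reparameterization and hard reset are illustrative choices only."""

    def __init__(self, init_tau=2.0, threshold=1.0):
        super().__init__()
        # Unconstrained learnable leak, squashed to (0, 1) in the forward pass.
        self.w = nn.Parameter(torch.logit(torch.tensor(1.0 / init_tau)))
        self.threshold = threshold

    def forward(self, current, mem):
        decay = torch.sigmoid(self.w)
        mem = mem + decay * (current - mem)       # leaky integration
        spike = (mem >= self.threshold).float()   # fire when threshold crossed
        # Training would replace this hard threshold with a surrogate
        # gradient; the non-differentiable step is kept here for brevity.
        mem = mem * (1.0 - spike)                 # hard reset after a spike
        return spike, mem
```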
arXiv Detail & Related papers (2024-06-20T10:36:24Z) - Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner.
We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials.
We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
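Quantizing both synaptic weights and membrane potentials, as the summary describes, can be sketched with a uniform quantizer and a straight-through estimator; the bit widths and the quantizer itself are assumptions, not the paper's exact scheme.

```python
import torch

def uniform_quantize(x, num_bits=2):
    # Symmetric uniform quantizer with a straight-through estimator so
    # gradients pass through the rounding step during training. This is an
    # assumption; the paper's exact quantizer may differ.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
    return x + (q - x).detach()   # forward pass: q, backward pass: identity

# Both the synaptic weights and the membrane potential of a spiking layer
# would pass through such a quantizer, e.g.:
#   w_q   = uniform_quantize(layer.weight, num_bits=2)
#   mem_q = uniform_quantize(mem, num_bits=4)
```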
arXiv Detail & Related papers (2024-06-19T16:23:26Z) - LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z) - Single Channel Speech Enhancement Using U-Net Spiking Neural Networks [2.436681150766912]
Speech enhancement (SE) is crucial for reliable communication devices or robust speech recognition systems.
We propose a novel approach to SE using a spiking neural network (SNN) based on a U-Net architecture.
SNNs are suitable for processing data with a temporal dimension, such as speech, and are known for their energy-efficient implementation on neuromorphic hardware.
arXiv Detail & Related papers (2023-07-26T19:10:29Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
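snnTorch is a publicly available Python package; the snippet below is a generic CPU/GPU usage sketch of its leaky integrate-and-fire layer, not the IPU-optimized release presented in the paper. The layer sizes and number of time steps are arbitrary.

```python
import torch
import torch.nn as nn
import snntorch as snn   # pip install snntorch

# A tiny two-layer spiking classifier written with snnTorch. Generic usage
# sketch only; not the IPU-specific release described above.
fc1, lif1 = nn.Linear(784, 128), snn.Leaky(beta=0.9)
fc2, lif2 = nn.Linear(128, 10), snn.Leaky(beta=0.9)

def forward(x, num_steps=25):
    mem1, mem2 = lif1.init_leaky(), lif2.init_leaky()
    spk_sum = 0
    for _ in range(num_steps):            # unroll the network over time
        spk1, mem1 = lif1(fc1(x), mem1)
        spk2, mem2 = lif2(fc2(spk1), mem2)
        spk_sum = spk_sum + spk2          # rate-coded class scores
    return spk_sum / num_steps

out = forward(torch.rand(32, 784))        # e.g. a flattened image batch
```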
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - NAR-Former: Neural Architecture Representation Learning towards Holistic
Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z) - Complexity-Driven CNN Compression for Resource-constrained Edge AI [1.6114012813668934]
We propose a novel and computationally efficient pruning pipeline by exploiting the inherent layer-level complexities of CNNs.
We define three modes of pruning, namely parameter-aware (PA), FLOPs-aware (FA), and memory-aware (MA), to introduce versatile compression of CNNs.
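The three pruning modes can be read as ranking layers by different notions of cost. A rough sketch of such per-layer scores follows; the formulas are standard approximations and the function layer_cost is hypothetical, not the paper's pipeline.

```python
import torch.nn as nn

def layer_cost(layer, x_spatial, mode="PA"):
    """Rough per-layer complexity scores for an nn.Conv2d layer.
    PA: parameter count, FA: approximate MACs at the given spatial size,
    MA: output feature-map size. Standard approximations, not the paper's
    exact complexity measures."""
    c_out, c_in, k, _ = layer.weight.shape
    params = c_out * c_in * k * k
    if mode == "PA":
        return params
    if mode == "FA":
        return params * x_spatial * x_spatial
    if mode == "MA":
        return c_out * x_spatial * x_spatial
    raise ValueError(mode)

# Layers with the highest cost under the chosen mode would receive the
# largest pruning ratios.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
print({m: layer_cost(conv, x_spatial=56, mode=m) for m in ("PA", "FA", "MA")})
```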
arXiv Detail & Related papers (2022-08-26T16:01:23Z) - FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software, full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point, partition point, and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation scheme can adapt well to complex environments, such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC services.
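In this co-inference setting the discrete action jointly selects an exit point, a partition point, and a compression level, and the reward trades accuracy off against latency. The sketch below is schematic: the action ranges, reward shaping, and latency budget are assumptions, and the learned SAC-d policy is replaced by a random placeholder.

```python
import itertools
import random

# Discrete action space: (exit point, partition point, compressing bits).
# The ranges below are placeholders, not values from the paper.
EXITS, PARTITIONS, BITS = range(4), range(6), (2, 4, 8)
ACTIONS = list(itertools.product(EXITS, PARTITIONS, BITS))

def reward(latency_ms, accuracy, latency_budget_ms=50.0, w=0.5):
    # Latency- and accuracy-aware reward: reward accuracy and penalize
    # latency beyond a budget. The exact shaping is an assumption.
    over = max(0.0, latency_ms - latency_budget_ms) / latency_budget_ms
    return accuracy - w * over

# A SAC-style agent would maintain a soft (entropy-regularized) policy over
# ACTIONS and update it from (state, action, reward) transitions collected
# across the device and the edge.
action = random.choice(ACTIONS)   # placeholder for the learned policy
```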
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
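A common ingredient of ANN-to-SNN conversion is copying the trained ANN weights and rebalancing firing thresholds using activations observed on a calibration batch. The sketch below illustrates only that generic step under assumed layer shapes; it is not the paper's progressive tandem learning procedure.

```python
import torch
import torch.nn as nn

def ann_to_snn_thresholds(ann_layers, calib_batch):
    """Weight-copying conversion sketch: set each layer's firing threshold to
    the maximum ReLU activation seen on a calibration batch so that spike
    rates approximate the ANN activations. Illustrative only; the paper's
    progressive tandem learning goes further with layer-wise fine-tuning."""
    thresholds, x = [], calib_batch
    with torch.no_grad():
        for layer in ann_layers:
            x = torch.relu(layer(x))
            thresholds.append(x.max().item())
    return thresholds

# Example with a hypothetical two-layer ANN:
thresholds = ann_to_snn_thresholds(
    [nn.Linear(784, 256), nn.Linear(256, 10)], torch.rand(64, 784))
```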
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.