VeriCompress: A Tool to Streamline the Synthesis of Verified Robust
Compressed Neural Networks from Scratch
- URL: http://arxiv.org/abs/2211.09945v7
- Date: Tue, 21 Nov 2023 18:03:06 GMT
- Authors: Sawinder Kaur, Yi Xiao, Asif Salekin
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI's widespread integration has led to neural networks (NNs) deployment on
edge and similar limited-resource platforms for safety-critical scenarios. Yet,
NNs' fragility raises concerns about reliable inference. Moreover, constrained
platforms demand compact networks. This study introduces VeriCompress, a tool
that automates the search and training of compressed models with robustness
guarantees. These models are well-suited for safety-critical applications and
adhere to predefined architecture and size limitations, making them deployable
on resource-restricted platforms. The method trains models 2-3 times faster
than the state-of-the-art approaches, surpassing relevant baseline approaches
by average accuracy and robustness gains of 15.1 and 9.8 percentage points,
respectively. When deployed on a resource-restricted generic platform, these
models require 5-8 times less memory and 2-4 times less inference time than
models used in verified robustness literature. Our comprehensive evaluation
across various model architectures and datasets, including MNIST, CIFAR, SVHN,
and a relevant pedestrian detection dataset, showcases VeriCompress's capacity
to identify compressed verified robust models with reduced computation overhead
compared to current standards. This underscores its potential as a valuable
tool for end users, such as developers of safety-critical applications on edge
or Internet of Things platforms, empowering them to create suitable models for
safety-critical, resource-constrained platforms in their respective domains.
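The abstract does not detail how VeriCompress establishes its robustness guarantees, but verified-robust training in this literature commonly relies on bound propagation, e.g. interval bound propagation (IBP). As a minimal illustrative sketch (hypothetical network and function names, not the paper's actual implementation), certifying a single input against an L-infinity perturbation might look like:

```python
import numpy as np

def interval_linear(l, u, W, b):
    # Propagate an input box [l, u] through x -> W @ x + b.
    # Positive weights map lower bounds to lower bounds; negative
    # weights swap the roles of the two bounds.
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    lo = W_pos @ l + W_neg @ u + b
    hi = W_pos @ u + W_neg @ l + b
    return lo, hi

def interval_relu(l, u):
    # ReLU is monotone, so it can be applied to each bound directly.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def certified(x, eps, layers, true_class):
    # layers: list of (W, b) pairs; ReLU between layers, none after the last.
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = interval_linear(l, u, W, b)
        if i < len(layers) - 1:
            l, u = interval_relu(l, u)
    # Certified robust if the true logit's lower bound exceeds every
    # other logit's upper bound over the whole input box.
    others = np.delete(u, true_class)
    return bool(l[true_class] > others.max())
```

Training against such bounds (rather than only checking them post hoc) is what makes the resulting compressed model verifiably robust rather than merely empirically robust.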
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- A Systematic Approach to Robustness Modelling for Deep Convolutional Neural Networks [0.294944680995069]
Recent work raises questions about the ability of even larger models to generalize to data outside the controlled train and test sets.
We provide a method that uses induced failures to model the probability of failure as a function of time.
We examine the various trade-offs between cost, robustness, latency, and reliability to find that larger models do not significantly aid in adversarial robustness.
arXiv Detail & Related papers (2024-01-24T19:12:37Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead arise from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories [26.067920958354]
One of the major threats to the privacy of Deep Neural Networks (DNNs) is model extraction attacks.
Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures).
We propose an advanced model extraction attack framework DeepSteal that effectively steals DNN weights with the aid of memory side-channel attack.
arXiv Detail & Related papers (2021-11-08T16:55:45Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Compact CNN Structure Learning by Knowledge Distillation [34.36242082055978]
We propose a framework that leverages knowledge distillation along with customizable block-wise optimization to learn a lightweight CNN structure.
Our method delivers state-of-the-art network compression while achieving better inference accuracy.
In particular, for the already compact network MobileNet_v2, our method offers up to 2x and 5.2x better model compression.
arXiv Detail & Related papers (2021-04-19T10:34:22Z)
- Dataless Model Selection with the Deep Frame Potential [45.16941644841897]
We quantify networks by their intrinsic capacity for unique and robust representations.
We propose the deep frame potential: a measure of coherence that is approximately related to representation stability but has minimizers that depend only on network structure.
We validate its use as a criterion for model selection and demonstrate correlation with generalization error on a variety of common residual and densely connected network architectures.
arXiv Detail & Related papers (2020-03-30T23:27:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.