DiverGet: A Search-Based Software Testing Approach for Deep Neural
Network Quantization Assessment
- URL: http://arxiv.org/abs/2207.06282v1
- Date: Wed, 13 Jul 2022 15:27:51 GMT
- Title: DiverGet: A Search-Based Software Testing Approach for Deep Neural
Network Quantization Assessment
- Authors: Ahmed Haj Yahmed, Houssem Ben Braiek, Foutse Khomh, Sonia Bouzidi,
Rania Zaatour
- Abstract summary: Quantization is one of the most applied Deep Neural Network (DNN) compression strategies.
We present DiverGet, a search-based testing framework for quantization assessment.
We evaluate the performance of DiverGet on state-of-the-art DNNs applied to hyperspectral remote sensing images.
- Score: 10.18462284491991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantization is one of the most applied Deep Neural Network (DNN) compression
strategies, when deploying a trained DNN model on an embedded system or a cell
phone. This is owing to its simplicity and adaptability to a wide range of
applications and circumstances, as opposed to specific Artificial Intelligence
(AI) accelerators and compilers that are often designed only for certain
specific hardware (e.g., Google Coral Edge TPU). With the growing demand for
quantization, ensuring the reliability of this strategy is becoming a critical
challenge. Traditional testing methods, which gather more and more genuine data
for better assessment, are often not practical because of the large size of the
input space and the high similarity between the original DNN and its quantized
counterpart. As a result, advanced assessment strategies have become of
paramount importance. In this paper, we present DiverGet, a search-based
testing framework for quantization assessment. DiverGet defines a space of
metamorphic relations that simulate naturally-occurring distortions on the
inputs. Then, it optimally explores these relations to reveal the disagreements
among DNNs of different arithmetic precision. We evaluate the performance of
DiverGet on state-of-the-art DNNs applied to hyperspectral remote sensing
images. We chose remote sensing DNNs as they are being increasingly deployed
at the edge (e.g., high-lift drones) in critical domains like climate change
research and astronomy. Our results show that DiverGet successfully challenges
the robustness of established quantization techniques against
naturally-occurring shifted data, and outperforms its most recent competitor,
DiffChaser, with a success rate that is (on average) four times higher.
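To make the idea concrete, the sketch below illustrates how a metamorphic relation that simulates a naturally-occurring distortion can expose disagreements between a full-precision model and its quantized counterpart. The prediction functions, the noise-based distortion, and the random search loop are illustrative assumptions only; DiverGet itself explores a richer space of relations with metaheuristic search.

```python
# Minimal sketch (not DiverGet's implementation): measure label disagreement
# between a full-precision model and a quantized one under a distortion-based
# metamorphic relation, and search for the distortion that maximizes it.
import numpy as np

def quantize_dequantize(w, bits=8):
    """Simulate symmetric uniform quantization (quantize then dequantize)."""
    max_abs = float(np.max(np.abs(w)))
    if max_abs == 0.0:
        return w
    scale = max_abs / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def add_sensor_noise(x, sigma, rng):
    """A naturally-occurring distortion (additive sensor noise) used as a metamorphic relation."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def disagreement_rate(predict_fp, predict_q, inputs, sigma, rng):
    """Fraction of distorted inputs on which the two precision levels disagree."""
    distorted = add_sensor_noise(inputs, sigma, rng)
    return float(np.mean(predict_fp(distorted) != predict_q(distorted)))

def search_distortion(predict_fp, predict_q, inputs, candidate_sigmas, trials=10, seed=0):
    """Random search (a stand-in for DiverGet's metaheuristic exploration) for
    the distortion magnitude that maximizes disagreement."""
    rng = np.random.default_rng(seed)
    best_sigma, best_rate = None, -1.0
    for sigma in candidate_sigmas:
        rate = float(np.mean([disagreement_rate(predict_fp, predict_q, inputs, sigma, rng)
                              for _ in range(trials)]))
        if rate > best_rate:
            best_sigma, best_rate = sigma, rate
    return best_sigma, best_rate
```

In use, `predict_fp` would wrap the original model's label predictions and `predict_q` the same model with weights passed through `quantize_dequantize` (or a real low-precision deployment); the distorted inputs on which the two disagree are the divergent test cases such a framework reports.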
Related papers
- Search-based DNN Testing and Retraining with GAN-enhanced Simulations [2.362412515574206]
In safety-critical systems, Deep Neural Networks (DNNs) are becoming a key component for computer vision tasks.
We propose to combine meta-heuristic search, used to explore the input space using simulators, with Generative Adversarial Networks (GANs) to transform the data generated by simulators into realistic input images.
arXiv Detail & Related papers (2024-06-19T09:05:16Z) - A Geometrical Approach to Evaluate the Adversarial Robustness of Deep
Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z) - An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z) - QVIP: An ILP-based Formal Verification Approach for Quantized Neural
Networks [14.766917269393865]
Quantization has emerged as a promising technique to reduce the size of neural networks while retaining accuracy comparable to their floating-point counterparts.
We propose a novel and efficient formal verification approach for QNNs.
In particular, we are the first to propose an encoding that reduces the verification problem of QNNs into the solving of integer linear constraints.
arXiv Detail & Related papers (2022-12-10T03:00:29Z) - Taming Reachability Analysis of DNN-Controlled Systems via
Abstraction-Based Training [14.787056022080625]
This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis.
We extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training.
We devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states.
arXiv Detail & Related papers (2022-11-21T00:11:50Z) - Deep Architecture Connectivity Matters for Its Convergence: A
Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - A Simple Approach to Improve Single-Model Deep Uncertainty via
Distance-Awareness [33.09831377640498]
We study approaches to improving the uncertainty properties of a single network, based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
arXiv Detail & Related papers (2022-05-01T05:46:13Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge
Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly-designed reward function that introduces some degree of bias designed to reduce variance and avoid unstable, possibly-unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups (a minimal sketch of the gradient-norm idea follows this list).
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.