An Abstraction-Refinement Approach to Verifying Convolutional Neural
Networks
- URL: http://arxiv.org/abs/2201.01978v1
- Date: Thu, 6 Jan 2022 08:57:43 GMT
- Title: An Abstraction-Refinement Approach to Verifying Convolutional Neural
Networks
- Authors: Matan Ostrovsky and Clark Barrett and Guy Katz
- Abstract summary: We present the Cnn-Abs framework, which is aimed at the verification of convolutional networks.
The core of Cnn-Abs is an abstraction-refinement technique, which simplifies the verification problem.
Cnn-Abs can significantly boost the performance of a state-of-the-art verification engine, reducing runtime by 15.7% on average.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks have gained vast popularity due to their
excellent performance in the fields of computer vision, image processing, and
others. Unfortunately, it is now well known that convolutional networks often
produce erroneous results - for example, minor perturbations of the inputs of
these networks can result in severe classification errors. Numerous
verification approaches have been proposed in recent years to prove the absence
of such errors, but these are typically geared for fully connected networks and
suffer from exacerbated scalability issues when applied to convolutional
networks. To address this gap, we present here the Cnn-Abs framework, which is
particularly aimed at the verification of convolutional networks. The core of
Cnn-Abs is an abstraction-refinement technique, which simplifies the
verification problem through the removal of convolutional connections in a way
that soundly creates an over-approximation of the original problem; and which
restores these connections if the resulting problem becomes too abstract.
Cnn-Abs is designed to use existing verification engines as a backend, and our
evaluation demonstrates that it can significantly boost the performance of a
state-of-the-art DNN verification engine, reducing runtime by 15.7% on average.
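To make the abstraction step concrete, here is a minimal, self-contained toy sketch. It is not the authors' code: the small dense stand-in layer, the input box, and the `interval_affine` helper are illustrative assumptions. It shows why deleting a connection and compensating with that connection's worst-case contribution soundly over-approximates the original problem.

```python
import numpy as np

# Toy illustration (not the authors' code) of the abstraction step: deleting
# a connection and widening the affected output by the deleted term's
# worst-case contribution yields a sound over-approximation.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))              # small dense stand-in for a conv layer
x_lo, x_hi = np.zeros(8), np.ones(8)     # the input region under verification

def interval_affine(W, lo, hi):
    """Exact output bounds of y = W @ x over the box lo <= x <= hi."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center
    r = np.abs(W) @ radius
    return c - r, c + r

lo_full, hi_full = interval_affine(W, x_lo, x_hi)   # bounds of the full layer

# Abstraction: delete the connection with weight W[0, 3], then pad the
# affected output with that term's worst-case magnitude.
W_abs = W.copy()
w = W_abs[0, 3]
W_abs[0, 3] = 0.0
lo_abs, hi_abs = interval_affine(W_abs, x_lo, x_hi)
slack = abs(w) * max(abs(x_lo[3]), abs(x_hi[3]))    # largest possible |w * x_3|
lo_abs[0] -= slack
hi_abs[0] += slack

# Soundness: the abstract bounds enclose the exact bounds, so a property
# proved on the abstraction (e.g., "output 0 stays below some threshold")
# also holds for the original layer.
assert np.all(lo_abs <= lo_full + 1e-9) and np.all(hi_abs + 1e-9 >= hi_full)
print("exact bounds on output 0:   ", (lo_full[0], hi_full[0]))
print("abstract bounds on output 0:", (lo_abs[0], hi_abs[0]))
```

Because the abstract bounds always enclose the exact ones, any property proved on the abstraction transfers to the original network; when the abstraction becomes too coarse to decide a query, Cnn-Abs restores some of the removed connections and re-submits the tightened problem to the backend verification engine.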
Related papers
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel smooth regularization for fine-tuning that rectifies these structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; a generic sketch of plain interval bound propagation appears after this list.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Tighter Abstract Queries in Neural Network Verification [0.0]
We present CEGARETTE, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously.
Our results are promising, demonstrating a significant performance improvement across multiple benchmarks.
arXiv Detail & Related papers (2022-10-23T22:18:35Z)
- Neural Network Verification using Residual Reasoning [0.0]
We present an enhancement to abstraction-based verification of neural networks using residual reasoning.
In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly.
arXiv Detail & Related papers (2022-08-05T10:39:04Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing our network's results against the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying bit-exact implementations of quantized neural networks against bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
arXiv Detail & Related papers (2020-12-15T10:05:37Z)
- ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks [10.745544780660165]
Residual Graph Convolutional Network (ResGCN) is an attention-based deep residual modeling approach.
We show that ResGCN can effectively detect anomalous nodes in attributed networks.
arXiv Detail & Related papers (2020-09-30T15:24:51Z)
- DeepAbstract: Neural Network Abstraction for Accelerating Verification [0.0]
We introduce an abstraction framework for fully connected feed-forward neural networks, based on clustering neurons that behave similarly on some inputs; a toy sketch of this clustering idea appears after this list.
We show how the abstraction reduces the size of the network, while preserving its accuracy, and how verification results on the abstract network can be transferred back to the original network.
arXiv Detail & Related papers (2020-06-24T13:51:03Z)
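For reference, here is a minimal sketch of plain (floating-point) interval bound propagation, the primitive that the QA-IBP entry above extends with quantization awareness. This is not the paper's code: the toy two-layer network, its sizes, and the epsilon are arbitrary assumptions, and actual IBP training additionally backpropagates a loss through these bounds.

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Propagate the box lo <= x <= hi through y = W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer classifier and an L-infinity ball around an input.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)
x, eps = rng.normal(size=8), 0.01

lo, hi = ibp_relu(*ibp_affine(W1, b1, x - eps, x + eps))
lo, hi = ibp_affine(W2, b2, lo, hi)

# Certifiably robust if the predicted logit's lower bound beats every other
# logit's upper bound over the whole input ball.
logits = W2 @ np.maximum(W1 @ x + b1, 0.0) + b2
pred = int(np.argmax(logits))
print("certified:", all(lo[pred] > hi[j] for j in range(3) if j != pred))
```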
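Finally, a toy sketch of the neuron-clustering idea behind the DeepAbstract entry above. The greedy distance-threshold clustering, the layer sizes, and the tolerance are illustrative assumptions, not the authors' algorithm; the merge rule used here, summing the outgoing weights of merged neurons, approximately preserves downstream sums whenever clustered activations agree.

```python
import numpy as np

# Toy neuron-clustering sketch (illustrative, not the authors' code).
rng = np.random.default_rng(2)
base = rng.normal(size=(8, 8))
# 32 hidden neurons: four noisy copies of each of 8 base neurons.
W1 = np.vstack([base + 0.01 * rng.normal(size=(8, 8)) for _ in range(4)])
W2 = rng.normal(size=(4, 32))
X = rng.normal(size=(256, 8))          # sample inputs
A = np.maximum(X @ W1.T, 0.0)          # hidden ReLU activations, shape (256, 32)

# Greedy clustering: a neuron joins the first representative whose
# activation vector over the samples is within `tol`; otherwise it
# starts a new cluster.
tol = 2.0
reps, assign = [], np.zeros(32, dtype=int)
for j in range(32):
    for k, r in enumerate(reps):
        if np.linalg.norm(A[:, j] - A[:, r]) <= tol:
            assign[j] = k
            break
    else:
        assign[j] = len(reps)
        reps.append(j)

# Reduced network: keep one representative per cluster and sum the outgoing
# weights of merged neurons, so downstream sums stay close to the original.
W1_small = W1[reps, :]
W2_small = np.zeros((4, len(reps)))
for j in range(32):
    W2_small[:, assign[j]] += W2[:, j]

# Quick fidelity check on a fresh input.
x = rng.normal(size=8)
y_full = W2 @ np.maximum(W1 @ x, 0.0)
y_small = W2_small @ np.maximum(W1_small @ x, 0.0)
print(f"hidden layer reduced from 32 to {len(reps)} neurons")
print("max output deviation:", np.abs(y_full - y_small).max())
```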