Verifying Generalization in Deep Learning
- URL: http://arxiv.org/abs/2302.05745v2
- Date: Tue, 9 May 2023 23:14:22 GMT
- Title: Verifying Generalization in Deep Learning
- Authors: Guy Amir, Osher Maayan, Tom Zelazny, Guy Katz and Michael Schapira
- Abstract summary: Deep neural networks (DNNs) are the workhorses of deep learning.
DNNs are notoriously prone to poor generalization, i.e., they may prove inadequate on inputs not encountered during training.
We propose a novel, verification-driven methodology for identifying DNN-based decision rules that generalize well to new input domains.
- Score: 3.4948705785954917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are the workhorses of deep learning, which
constitutes the state of the art in numerous application domains. However,
DNN-based decision rules are notoriously prone to poor generalization, i.e.,
they may prove inadequate on inputs not encountered during training. This limitation
poses a significant obstacle to employing deep learning for mission-critical
tasks, and also in real-world environments that exhibit high variability. We
propose a novel, verification-driven methodology for identifying DNN-based
decision rules that generalize well to new input domains. Our approach
quantifies generalization to an input domain by the extent to which decisions
reached by independently trained DNNs are in agreement for inputs in this
domain. We show how, by harnessing the power of DNN verification, our approach
can be efficiently and effectively realized. We evaluate our verification-based
approach on three deep reinforcement learning (DRL) benchmarks, including a
system for Internet congestion control. Our results establish the usefulness of
our approach. More broadly, our work puts forth a novel objective for formal
verification, with the potential for mitigating the risks associated with
deploying DNN-based systems in the wild.
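The agreement criterion at the heart of this approach can be made concrete with a short sketch. Note that the paper uses DNN verification to check agreement over entire input domains; the sketch below only estimates agreement on sampled inputs, and all names (pairwise_agreement, models, inputs) are illustrative, not the authors' code.

```python
import numpy as np

def pairwise_agreement(models, inputs):
    """Mean pairwise agreement of independently trained models on a domain.

    models: list of callables, each mapping a batch of inputs to an array
            of discrete decisions (e.g., DRL actions).
    inputs: array of inputs sampled from the target domain.
    """
    decisions = np.stack([m(inputs) for m in models])  # shape (k, n)
    k = decisions.shape[0]
    rates = [np.mean(decisions[i] == decisions[j])
             for i in range(k) for j in range(i + 1, k)]
    return float(np.mean(rates))
```

Under the paper's criterion, domains where this agreement score is high are the ones where the learned decision rule is more likely to generalize.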
Related papers
- Verifying the Generalization of Deep Learning to Out-of-Distribution Domains [1.5774380628229037]
Deep neural networks (DNNs) play a crucial role in the field of machine learning.
DNNs may occasionally exhibit challenges with generalization, i.e., they may fail to handle inputs that were not encountered during training.
This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks.
arXiv Detail & Related papers (2024-06-04T07:02:59Z)
- DeepKnowledge: Generalisation-Driven Deep Learning Testing [2.526146573337397]
DeepKnowledge is a systematic testing methodology for DNN-based systems.
It aims to enhance robustness and reduce the residual risk of 'black box' models.
We report improvements of up to 10 percentage points over state-of-the-art coverage criteria for detecting adversarial attacks.
arXiv Detail & Related papers (2024-03-25T13:46:09Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets, obtained via statistical prediction of tolerance limits (see the sketch after this list).
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study [0.0]
Deep neural networks (DNNs) have become a crucial instrument in the software development toolkit.
DNNs are highly opaque and can behave unexpectedly when they encounter unfamiliar inputs.
One promising approach is to extend DNN-based systems with hand-crafted override rules (see the sketch after this list).
arXiv Detail & Related papers (2023-01-19T15:06:32Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations (see the sketch after this list).
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs (see the sketch after this list).
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training [14.787056022080625]
This paper presents a novel abstraction-based approach that sidesteps the main difficulty in reachability analysis: over-approximating the DNN itself.
We extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training (see the sketch after this list).
We devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states.
arXiv Detail & Related papers (2022-11-21T00:11:50Z)
- Neuron Coverage-Guided Domain Generalization [37.77033512313927]
This paper focuses on the domain generalization task where domain knowledge is unavailable, and even worse, only samples from a single domain can be utilized during training.
Our motivation originates from recent progress in deep neural network (DNN) testing, which has shown that maximizing the neuron coverage of a DNN can help to expose its possible defects (see the sketch after this list).
arXiv Detail & Related papers (2021-02-27T14:26:53Z)
- A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications.
It is difficult to tell beforehand whether a DNN will deliver the correct output for a given input, since its decision criteria are usually nontransparent.
This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and in particular DNNs.
arXiv Detail & Related papers (2020-08-21T09:12:52Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
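A minimal sketch of the statistical tolerance limits behind epsilon-ProVe (referenced above): sampling outputs gives a deliberate underestimate of the reachable set, and a classical distribution-free (Wilks-style) bound quantifies the confidence that this interval covers a desired fraction of outputs. This is a generic illustration under those assumptions, not the paper's implementation; all names are hypothetical.

```python
import numpy as np

def sampled_reachable_interval(net, sample_input, n=10_000):
    """Underestimate the reachable set of a scalar output by sampling.

    net: callable mapping one input to one scalar output.
    sample_input: callable drawing one random input from the property's domain.
    """
    outs = np.array([net(sample_input()) for _ in range(n)])
    return outs.min(), outs.max()

def wilks_confidence(n, coverage):
    """Distribution-free confidence that [min, max] of n i.i.d. samples
    covers at least `coverage` of the true output distribution."""
    return 1.0 - n * coverage ** (n - 1) + (n - 1) * coverage ** n
```

For example, wilks_confidence(10_000, 0.999) is roughly 0.9995: ten thousand samples make the empirical min/max a high-confidence underestimate of 99.9% of the output distribution.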
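The override-rule idea from the scenario-based case study can be illustrated in a few lines; the rule format and names below are assumptions for illustration, not the paper's API.

```python
def guarded_policy(state, dnn_policy, override_rules):
    """Apply hand-crafted override rules before deferring to the DNN.

    override_rules: list of (condition, action) pairs; the first rule
    whose condition holds on the state overrides the DNN's decision.
    """
    for condition, action in override_rules:
        if condition(state):
            return action
    return dnn_policy(state)
```

The point of the design is that the rules constrain the system's behavior on the unfamiliar inputs where the opaque DNN is least trustworthy, while the DNN still decides everywhere else.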
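To make the counting problem of #DNN-Verification concrete, here is a brute-force counter over a tiny binary input space. This only illustrates what is being counted; the paper's exact-counting procedure is far more scalable than enumeration.

```python
import itertools
import numpy as np

def count_unsafe_inputs(net, n_bits, is_safe):
    """Exactly count inputs in {0,1}^n_bits that violate a safety property.

    Feasible only for tiny n_bits; illustrates the #DNN-Verification
    problem statement, not the paper's counting algorithm.
    """
    return sum(
        not is_safe(net(np.array(bits)))
        for bits in itertools.product([0.0, 1.0], repeat=n_bits)
    )
```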
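Interval bound propagation itself is standard; QA-IBP adapts it to quantized arithmetic. Below is a minimal real-valued sketch of the propagation step through an affine layer and a ReLU, with quantization deliberately omitted.

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate an input box [lower, upper] through x -> W @ x + b."""
    center, radius = (lower + upper) / 2.0, (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # |W| maps the box's half-widths
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)
```

Certified training then penalizes the worst-case logits implied by the output bounds; per the abstract, QA-IBP carries this scheme over to networks that will run in quantized arithmetic.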
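One possible reading of the abstraction layer from the abstraction-based-training paper: each real input is mapped to the grid interval containing it, and the rest of the network is trained on interval endpoints. The grid width and layout here are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def abstraction_layer(x, grid=0.1):
    """Abstract each real-valued feature to the width-`grid` interval
    containing it, returning the interval's endpoints.

    Downstream layers consume (lower, upper) instead of the concrete
    value, so the trained DNN can later be queried purely on abstract
    states during black-box reachability analysis.
    """
    lower = np.floor(x / grid) * grid
    return lower, lower + grid
```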
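Neuron coverage, the DNN-testing metric that motivates the domain-generalization paper, can be computed as below. The threshold and per-input scaling follow common practice in DNN testing (DeepXplore-style coverage), not necessarily this paper's exact definition.

```python
import numpy as np

def neuron_coverage(layer_activations, threshold=0.5):
    """Fraction of neurons whose scaled activation exceeds `threshold`
    on at least one input.

    layer_activations: list of (n_inputs, n_neurons) arrays, one per
    layer. Activations are min-max scaled per input within each layer.
    """
    covered = total = 0
    for acts in layer_activations:
        lo = acts.min(axis=1, keepdims=True)
        hi = acts.max(axis=1, keepdims=True)
        scaled = (acts - lo) / np.maximum(hi - lo, 1e-12)
        covered += int(np.sum(scaled.max(axis=0) > threshold))
        total += acts.shape[1]
    return covered / total
```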