PaRoT: A Practical Framework for Robust Deep Neural Network Training
- URL: http://arxiv.org/abs/2001.02152v3
- Date: Wed, 25 Mar 2020 11:17:37 GMT
- Title: PaRoT: A Practical Framework for Robust Deep Neural Network Training
- Authors: Edward Ayers, Francisco Eiras, Majd Hawasly, Iain Whiteside
- Abstract summary: Deep Neural Networks (DNNs) are finding important applications in safety-critical systems such as Autonomous Vehicles (AVs).
Raising unique challenges for assurance due to their black-box nature, DNNs pose a fundamental problem for regulatory acceptance of these types of systems.
We introduce a novel framework, PaRoT, developed on a popular training platform, which greatly reduces the barrier to entry.
- Score: 1.9034855801255839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) are finding important applications in
safety-critical systems such as Autonomous Vehicles (AVs), where perceiving the
environment correctly and robustly is necessary for safe operation. Raising
unique challenges for assurance due to their black-box nature, DNNs pose a
fundamental problem for regulatory acceptance of these types of systems. Robust
training (training to minimize excessive sensitivity to small changes in
input) has emerged as one promising technique to address this challenge.
However, existing robust training tools are inconvenient to use or apply to
existing codebases and models: they typically only support a small subset of
model elements and require users to extensively rewrite the training code. In
this paper we introduce a novel framework, PaRoT, developed on the popular
TensorFlow platform, that greatly reduces the barrier to entry. Our framework
enables robust training to be performed on arbitrary DNNs without any rewrites
to the model. We demonstrate that our framework's performance is comparable to
prior art, and exemplify its ease of use on off-the-shelf, trained models and
its testing capabilities on a real-world industrial application: a traffic
light detection network.
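To make the robust-training idea concrete, below is a minimal sketch of interval bound propagation (IBP), one of the abstract-domain techniques that PaRoT-style frameworks build on. It is written against TensorFlow, the platform PaRoT targets, but it is not PaRoT's API: every function, variable, and hyperparameter name here is an illustrative assumption.

```python
# Minimal IBP sketch in TensorFlow (TF2). Illustrative only: NOT the
# PaRoT API; names and layer structure are assumptions for exposition.
import tensorflow as tf

def ibp_dense(lo, hi, w, b, activation):
    """Propagate the interval [lo, hi] through activation(x @ w + b)."""
    w_pos = tf.maximum(w, 0.0)
    w_neg = tf.minimum(w, 0.0)
    # A lower bound pairs positive weights with lo and negative with hi;
    # the upper bound is symmetric.
    new_lo = lo @ w_pos + hi @ w_neg + b
    new_hi = hi @ w_pos + lo @ w_neg + b
    # Monotonic activations (ReLU, identity) preserve the ordering.
    return activation(new_lo), activation(new_hi)

def robust_loss(x, y_true, layers, epsilon):
    """Cross-entropy on the worst-case logits over an L-infinity ball."""
    lo, hi = x - epsilon, x + epsilon
    for w, b, act in layers:
        lo, hi = ibp_dense(lo, hi, w, b, act)
    # Worst case: the true class sits at its lower bound, all other
    # classes at their upper bounds.
    y_onehot = tf.one_hot(y_true, depth=hi.shape[-1])
    worst_logits = y_onehot * lo + (1.0 - y_onehot) * hi
    return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=y_true, logits=worst_logits))

# Usage sketch: two dense layers; the last uses the identity so the
# propagated bounds are bounds on the logits themselves.
w1, b1 = tf.Variable(tf.random.normal([784, 128])), tf.Variable(tf.zeros([128]))
w2, b2 = tf.Variable(tf.random.normal([128, 10])), tf.Variable(tf.zeros([10]))
layers = [(w1, b1, tf.nn.relu), (w2, b2, tf.identity)]
x = tf.random.uniform([32, 784])
y = tf.random.uniform([32], maxval=10, dtype=tf.int32)
loss = robust_loss(x, y, layers, epsilon=0.01)
```

Minimizing this loss (typically mixed with the standard cross-entropy) penalizes excessive sensitivity inside the epsilon-ball. The point of a framework like PaRoT is that the user never writes bound-propagating code like the above: the robust-training graph is derived automatically from the existing model.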
Related papers
- NetFlowGen: Leveraging Generative Pre-training for Network Traffic Dynamics [72.95483148058378]
We propose to pre-train a general-purpose machine learning model to capture traffic dynamics with only traffic data from NetFlow records.
We address challenges such as unifying network feature representations, learning from a large volume of unlabeled traffic data, and testing on real downstream tasks in DDoS attack detection.
arXiv Detail & Related papers (2024-12-30T00:47:49Z) - Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z) - Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z) - Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on safety.
arXiv Detail & Related papers (2023-12-10T13:51:25Z) - Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z) - Provably Safe Model-Based Meta Reinforcement Learning: An Abstraction-Based Approach [3.569867801312134]
We consider the problem of training a provably safe Neural Network (NN) controller for uncertain nonlinear dynamical systems.
Our approach is to learn a set of NN controllers during the training phase.
When the task becomes available at runtime, our framework will carefully select a subset of these NN controllers and compose them to form the final NN controller.
arXiv Detail & Related papers (2021-09-03T00:38:05Z) - Active Learning for Deep Neural Networks on Edge Devices [0.0]
This paper formalizes a practical active learning problem for neural networks on edge devices.
We propose a general task-agnostic framework to tackle this problem, which reduces it to stream submodular maximization.
We evaluate our approach on both classification and object detection tasks in a practical setting to simulate a real-life scenario.
arXiv Detail & Related papers (2021-06-21T03:55:33Z) - Pruning and Slicing Neural Networks using Formal Verification [0.2538209532048866]
Deep neural networks (DNNs) play an increasingly important role in various computer systems.
In order to create these networks, engineers typically specify a desired topology, and then use an automated training algorithm to select the network's weights.
Here, we propose to address this challenge by harnessing recent advances in DNN verification.
arXiv Detail & Related papers (2021-05-28T07:53:50Z) - FAT: Training Neural Networks for Reliable Inference Under Hardware Faults [3.191587417198382]
We present a novel methodology called fault-aware training (FAT), which includes error modeling during neural network (NN) training to make quantized NNs (QNNs) resilient to specific fault models on the device.
FAT has been validated for numerous classification tasks including CIFAR10, GTSRB, SVHN and ImageNet.
arXiv Detail & Related papers (2020-11-11T16:09:39Z) - HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, simultaneously achieves compressed networks with state-of-the-art benign and robust accuracy (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
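As a concrete illustration of the HYDRA entry above, letting the (robust) training loss guide which connections survive pruning, here is a minimal, hypothetical TensorFlow sketch: each weight gets a learnable importance score, a hard top-k mask is derived from the scores, and only the scores are trained against whatever loss one chooses. This is not the authors' released code; all names and the layer structure are assumptions.

```python
# Hypothetical sketch of loss-guided pruning in the spirit of HYDRA.
# Not the authors' code; layer structure and names are assumptions.
import tensorflow as tf

class PrunedDense(tf.Module):
    """Dense layer whose weights are masked by learnable importance scores."""
    def __init__(self, w, b, keep_ratio=0.1):
        self.w = tf.Variable(w, trainable=False)  # pretrained weights, frozen
        self.b = tf.Variable(b, trainable=False)
        self.scores = tf.Variable(tf.abs(w))      # learnable importance scores
        self.k = max(1, int(keep_ratio * int(tf.size(w))))

    def __call__(self, x):
        @tf.custom_gradient
        def mask_from(scores):
            flat = tf.reshape(tf.abs(scores), [-1])
            threshold = tf.sort(flat, direction='DESCENDING')[self.k - 1]
            mask = tf.cast(tf.abs(scores) >= threshold, scores.dtype)
            # Straight-through estimator: the hard top-k mask is not
            # differentiable, so gradients pass through to the scores.
            return mask, lambda dy: dy
        return x @ (self.w * mask_from(self.scores)) + self.b
```

Training would then optimize only the `scores` variables, for example against an adversarial loss or an IBP-style robust loss such as the one sketched earlier, after which the mask is frozen and the surviving weights are fine-tuned.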
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.