Neural Network Virtual Sensors for Fuel Injection Quantities with
Provable Performance Specifications
- URL: http://arxiv.org/abs/2007.00147v1
- Date: Tue, 30 Jun 2020 23:33:17 GMT
- Title: Neural Network Virtual Sensors for Fuel Injection Quantities with
Provable Performance Specifications
- Authors: Eric Wong, Tim Schneider, Joerg Schmitt, Frank R. Schmidt, J. Zico
Kolter
- Abstract summary: We show how provable guarantees can be naturally applied to other real world settings.
We show how specific intervals of fuel injection quantities can be targeted to maximize robustness for certain ranges.
- Score: 71.1911136637719
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that it is possible to learn neural networks with
provable guarantees on the output of the model when subject to input
perturbations; however, these works have focused primarily on defending against
adversarial examples for image classifiers. In this paper, we study how these
provable guarantees can be naturally applied to other real-world settings,
namely obtaining performance specifications for robust virtual sensors measuring
fuel injection quantities within an engine. We first demonstrate that, in this
setting, even simple neural network models are highly susceptible to reasonable
levels of adversarial sensor noise, which are capable of increasing the mean
relative error of a standard neural network from 6.6% to 43.8%. We then
leverage methods for learning provably robust networks and verifying robustness
properties, resulting in a robust model which we can provably guarantee has at
most 16.5% mean relative error under any sensor noise. Additionally, we show
how specific intervals of fuel injection quantities can be targeted to maximize
robustness for certain ranges, allowing us to train a virtual sensor for fuel
injection which is provably guaranteed to have at most 10.69% relative error
under noise while maintaining 3% relative error on non-adversarial data within
normalized fuel injection ranges of 0.6 to 1.0.
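As a rough illustration of the kind of evaluation behind the 6.6% to 43.8% susceptibility result, the sketch below estimates mean relative error under an L-infinity-bounded projected-gradient perturbation of the sensor inputs. This is a minimal PyTorch sketch, not the authors' code: the network architecture, noise budget `eps`, step count, and data are illustrative placeholders.

```python
import torch
import torch.nn as nn

def adversarial_mre(model, x, y, eps=0.05, steps=20, lr=0.01):
    """Mean relative error of `model` under an L-inf bounded sensor-noise
    perturbation found by projected gradient ascent (placeholder settings)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        pred = model(x + delta)
        # Maximize the relative error of the predicted injection quantity.
        loss = ((pred - y).abs() / y.abs().clamp_min(1e-6)).mean()
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # ascent step on the noise
            delta.clamp_(-eps, eps)          # project back into the noise budget
        delta.grad.zero_()
    with torch.no_grad():
        pred = model(x + delta)
        return ((pred - y).abs() / y.abs().clamp_min(1e-6)).mean().item()

# Placeholder virtual sensor: 8 engine-sensor channels -> 1 injection quantity.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.rand(128, 8)               # dummy sensor readings
y = torch.rand(128, 1) * 0.4 + 0.6   # dummy normalized injection quantities
print(adversarial_mre(model, x, y))
```

On a trained model, comparing this number against the clean mean relative error gives the kind of gap the paper reports for an undefended network.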
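The certified bounds (e.g. at most 16.5% mean relative error under any bounded sensor noise) come from verification methods such as interval bound propagation. The numpy sketch below is a simplified illustration with a placeholder network rather than the paper's model: it propagates the sensor-noise interval through a small ReLU network and turns the output interval into a certified worst-case relative error for one prediction. The interval targeting mentioned in the abstract would correspond to emphasizing this certified error during training for samples whose normalized injection quantity falls in a chosen range such as 0.6 to 1.0.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def certified_relative_error(weights, biases, x, y_true, eps):
    """Upper bound on |f(x + d) - y_true| / |y_true| over all ||d||_inf <= eps,
    via interval bound propagation through a ReLU network."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(weights) - 1:                 # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    worst_abs_err = max(abs(lo[0] - y_true), abs(hi[0] - y_true))
    return worst_abs_err / abs(y_true)

# Toy setup: 8 sensor channels, two hidden layers, one injection-quantity output.
rng = np.random.default_rng(0)
Ws = [0.1 * rng.standard_normal((32, 8)),
      0.1 * rng.standard_normal((32, 32)),
      0.1 * rng.standard_normal((1, 32))]
bs = [np.zeros(32), np.zeros(32), np.zeros(1)]
x = rng.standard_normal(8)        # dummy (normalized) sensor reading
y = 0.8                           # dummy true normalized injection quantity
print(certified_relative_error(Ws, bs, x, y, eps=0.05))
```

Because interval bounds only loosen as they pass through the network, any relative error certified this way holds for every admissible noise pattern, which is what "provable performance specification" refers to here.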
Related papers
- FDINet: Protecting against DNN Model Extraction via Feature Distortion Index [25.69643512837956]
FDINET is a novel defense mechanism that leverages the feature distribution of deep neural network (DNN) models.
It exploits FDI similarity to identify colluding adversaries from distributed extraction attacks.
FDINET exhibits the capability to identify colluding adversaries with an accuracy exceeding 91%.
arXiv Detail & Related papers (2023-06-20T07:14:37Z)
- Combining Gradients and Probabilities for Heterogeneous Approximation of Neural Networks [2.5744053804694893]
We discuss the validity of additive Gaussian noise as a surrogate model for behavioral simulation of approximate multipliers.
The amount of noise injected into the accurate computations is learned during network training using backpropagation.
Our experiments show that combining heterogeneous approximation with neural network retraining reduces energy consumption for the resulting variants.
arXiv Detail & Related papers (2022-08-15T15:17:34Z)
- (Certified!!) Adversarial Robustness for Free! [116.6052628829344]
We certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within a 2-norm of 0.5.
We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine tuning or retraining of model parameters.
arXiv Detail & Related papers (2022-06-21T17:27:27Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Toward Compact Deep Neural Networks via Energy-Aware Pruning [2.578242050187029]
We propose a novel energy-aware pruning method that quantifies the importance of each filter in the network using the nuclear norm (NN).
We achieve competitive results, with FLOPs and parameter reductions of 40.4%/49.8% and 45.9%/52.9%, respectively, at 94.13%/94.61% Top-1 accuracy with ResNet-56/110 on CIFAR-10.
arXiv Detail & Related papers (2021-03-19T15:33:16Z)
- Real-time detection of uncalibrated sensors using Neural Networks [62.997667081978825]
An online machine-learning based uncalibration detector for temperature, humidity and pressure sensors was developed.
The solution integrates an Artificial Neural Network as its main component, which learns the behavior of the sensors under calibrated conditions.
The obtained results show that the proposed solution is able to detect uncalibrations for deviation values of 0.25 degrees, 1% RH and 1.5 Pa, respectively.
arXiv Detail & Related papers (2021-02-02T15:44:39Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network, and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- SNIFF: Reverse Engineering of Neural Networks with Fault Attacks [26.542434084399265]
We explore the possibility of reverse engineering neural networks using fault attacks.
SNIFF stands for sign bit flip fault, which enables the reverse engineering by changing the sign of intermediate values.
We develop the first exact extraction method on deep-layer feature extractor networks that provably allows the recovery of the model parameters.
arXiv Detail & Related papers (2020-02-23T05:39:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.