Verifying Inverse Model Neural Networks
- URL: http://arxiv.org/abs/2202.02429v1
- Date: Fri, 4 Feb 2022 23:13:22 GMT
- Title: Verifying Inverse Model Neural Networks
- Authors: Chelsea Sidrane, Sydney Katz, Anthony Corso, Mykel J. Kochenderfer
- Abstract summary: Inverse problems exist in a wide variety of physical domains from aerospace engineering to medical imaging.
We introduce a method for verifying the correctness of inverse model neural networks.
- Score: 39.4062479625023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse problems exist in a wide variety of physical domains from aerospace
engineering to medical imaging. The goal is to infer the underlying state from
a set of observations. When the forward model that produced the observations is
nonlinear and stochastic, solving the inverse problem is very challenging.
Neural networks are an appealing solution for solving inverse problems as they
can be trained from noisy data and once trained are computationally efficient
to run. However, inverse model neural networks have no built-in guarantees of
correctness, which makes them unreliable for use in safety- and
accuracy-critical contexts. In this work we introduce a method for verifying
the correctness of inverse model neural networks. Our approach is to
overapproximate a nonlinear, stochastic forward model with piecewise linear
constraints and encode both the overapproximate forward model and the neural
network inverse model as a mixed-integer program. We demonstrate this
verification procedure on a real-world airplane fuel gauge case study. The
ability to verify and consequently trust inverse model neural networks allows
their use in a wide variety of contexts, from aerospace to medicine.
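The abstract does not spell out the encoding, but the standard building block for such a mixed-integer program is the big-M encoding of a ReLU unit. Below is a minimal sketch using `scipy.optimize.milp`; the one-neuron "network" (`w`, `b`) and the input range are toy assumptions, not the paper's fuel gauge model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy "network": y = relu(w*x + b), analyzed over the input range [xlo, xhi].
w, b = 2.0, -1.0
xlo, xhi = 0.0, 2.0
L = min(w * xlo + b, w * xhi + b)   # pre-activation lower bound
U = max(w * xlo + b, w * xhi + b)   # pre-activation upper bound

# Decision variables: [x, z, y, delta], where z = w*x + b and delta is the
# binary phase indicator (delta = 1 means the ReLU is active).
c = np.array([0.0, 0.0, -1.0, 0.0])           # objective: maximize y
integrality = np.array([0, 0, 0, 1])          # delta is integer (binary)
bounds = Bounds([xlo, L, 0.0, 0], [xhi, U, max(U, 0.0), 1])

A = np.array([
    [-w, 1.0, 0.0, 0.0],    # z - w*x         == b   (affine layer)
    [0.0, -1.0, 1.0, 0.0],  # y - z           >= 0   (y >= z)
    [0.0, -1.0, 1.0, -L],   # y - z - L*delta <= -L  (y <= z when delta = 1)
    [0.0, 0.0, 1.0, -U],    # y - U*delta     <= 0   (y == 0 when delta = 0)
])
lb = np.array([b, 0.0, -np.inf, -np.inf])
ub = np.array([b, np.inf, -L, 0.0])

res = milp(c=c, integrality=integrality, bounds=bounds,
           constraints=LinearConstraint(A, lb, ub))
print("max ReLU output over [0, 2]:", -res.fun)   # expect 3.0
```

Verification then asks the solver whether any admissible input can violate a correctness property; an infeasible MIP constitutes a proof that none can. Per the abstract, the stochastic forward model is overapproximated with piecewise linear constraints that slot into the same program.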
Related papers
- The Unreasonable Effectiveness of Solving Inverse Problems with Neural Networks [24.766470360665647]
We show that neural networks trained to learn solutions to inverse problems can find better solutions than classical methods, even on their training set.
Our findings suggest an alternative use for neural networks: rather than generalizing to new data for fast inference, they can also be used to find better solutions on known data.
arXiv Detail & Related papers (2024-08-15T12:38:10Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
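As a point of reference for what these networks provably invert, here is the measurement model in question; the sizes below are toy values chosen to echo the "modest problem dimensions (up to 50)" mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3     # signal dimension, number of measurements, sparsity

x = np.zeros(n)                                   # k-sparse ground-truth vector
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)      # random Gaussian sensing matrix
y_linear = A @ x                                  # linear measurements
y_binary = np.sign(A @ x)                         # binarized (one-bit) measurements
# A verified network maps y_linear (or y_binary) back to x with a recovery
# guarantee that holds over a whole input set, not just test samples.
```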
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
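The noise model in the title is easy to state concretely; a minimal sketch, with the flip probability `p` an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
y_clean = rng.choice([-1, 1], size=1000)     # clean binary labels
p = 0.1                                      # flip probability (assumed)
flip = rng.random(1000) < p
y_noisy = np.where(flip, -y_clean, y_clean)  # each label flipped independently w.p. p
```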
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- On Modifying a Neural Network's Perception [3.42658286826597]
We propose a method that allows one to modify what an artificial neural network perceives regarding specific human-defined concepts.
We test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them.
arXiv Detail & Related papers (2023-03-05T12:09:37Z)
- Stable, accurate and efficient deep neural networks for inverse problems with analysis-sparse models [2.969705152497174]
We present a novel construction of an accurate, stable and efficient neural network for inverse problems with general analysis-sparse models.
To construct the network, we unroll NESTA, an accelerated first-order method for convex optimization.
A restart scheme is employed to enable exponential decay of the required network depth, yielding a shallower, and consequently more efficient, network.
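As a rough illustration of the unrolling-plus-restart idea, here is a sketch with generic FISTA iterations standing in for NESTA's smoothed Nesterov updates (which the summary does not detail); all parameter values are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_recovery(y, A, depth=20, restart_every=5, lam=0.1):
    """Unrolled accelerated iterations for min_x ||A x - y||^2 / 2 + lam * ||x||_1.
    Each loop iteration is one network 'layer'; momentum is reset every
    `restart_every` layers, the restart trick that keeps the required depth low."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for k in range(depth):
        if k > 0 and k % restart_every == 0:   # restart: drop accumulated momentum
            z, t = x, 1.0
        x_new = soft_threshold(z - step * A.T @ (A @ z - y), step * lam)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

In a learned unrolled version, quantities like `step` and `lam` would become trainable per-layer parameters.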
arXiv Detail & Related papers (2022-03-02T00:44:25Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
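The core trick, stripped of the spiking details, is differentiating through the equilibrium via the implicit function theorem instead of backpropagating through the forward iterations. A scalar toy sketch (dynamics and parameter values are assumptions):

```python
import numpy as np

def f(z, w, u, x):
    return np.tanh(w * z + u * x)       # feedback dynamics (scalar toy case)

w, u, x = 0.5, 1.0, 0.8                 # |w| < 1 keeps the map contractive

# 1) Run the dynamics to the equilibrium z* = f(z*, w, u, x).
z = 0.0
for _ in range(100):
    z = f(z, w, u, x)

# 2) Implicit function theorem at z*: dz*/dw = (df/dw) / (1 - df/dz),
#    so no backprop through the 100 forward steps is needed.
s = 1.0 - np.tanh(w * z + u * x) ** 2   # derivative of tanh at equilibrium
dz_dw = (z * s) / (1.0 - w * s)

# Sanity check against finite differences.
eps = 1e-6
z2 = 0.0
for _ in range(100):
    z2 = f(z2, w + eps, u, x)
print(dz_dw, (z2 - z) / eps)            # the two should closely agree
```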
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary [0.0]
Deep neural networks (DNNs) have an impressive ability to invert very complex models, i.e., to learn the generative parameters from a model's output.
We propose an approach for solving general inverse problems which combines the efficiency of DNN and the interpretability of traditional analytical methods.
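A minimal sketch of the dictionary idea; the decaying-exponential atoms below are an assumed example of a "physics-based" dictionary, not the paper's actual model.

```python
import numpy as np

t = np.linspace(0, 1, 100)
rates = np.linspace(1.0, 20.0, 30)       # candidate physical decay rates (assumed)
D = np.exp(-np.outer(t, rates))          # dictionary: one decaying-exponential
                                         # atom per candidate rate
alpha = np.zeros(30)
alpha[[4, 17]] = [1.0, 0.5]              # sparse code: two active atoms
signal = D @ alpha                       # observed signal = sparse mix of atoms

# A DNN in this setting is trained to map `signal` back to `alpha`; keeping the
# dictionary explicit makes the recovered parameters physically interpretable.
```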
arXiv Detail & Related papers (2021-07-16T09:32:45Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
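A minimal sketch of what "conditional" means here: the eigenvalue lambda is an extra network input, so one model covers a whole problem family. The toy problem u'' + lambda*u = 0 with zero boundary values and the amplitude-pinning term are illustrative assumptions.

```python
import torch

# Network maps (x, lambda) -> u(x; lambda): one model for a family of
# eigenvalue problems u'' + lambda * u = 0 with u(0) = u(pi) = 0.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))

def pinn_loss(lam):
    x = (torch.rand(64, 1) * torch.pi).requires_grad_(True)  # collocation points
    u = net(torch.cat([x, torch.full_like(x, lam)], dim=1))
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde = ((d2u + lam * u) ** 2).mean()                 # PDE residual
    xb = torch.tensor([[0.0, lam], [torch.pi, lam]])
    bc = (net(xb) ** 2).mean()                          # u(0) = u(pi) = 0
    amp = (net(torch.tensor([[torch.pi / 2, lam]])) - 1.0).pow(2).mean()
    return pde + bc + amp                               # amp term rules out u == 0
```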
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
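The correction step itself is a simple interpolation; detecting the unjustifiably overconfident regions (the paper's main machinery) is omitted here, and the example numbers are assumptions.

```python
import numpy as np

def raise_entropy(probs, prior, alpha):
    """Pull a predicted distribution toward the label prior. Because the prior
    has higher entropy than an overconfident prediction, the mixture's entropy
    can only move up toward it (entropy is concave). `alpha` in [0, 1] would be
    chosen based on how unjustified the confidence is, e.g. distance from the
    training data."""
    return (1 - alpha) * probs + alpha * prior

probs = np.array([0.97, 0.02, 0.01])    # overconfident prediction (assumed)
prior = np.array([0.5, 0.3, 0.2])       # empirical label prior (assumed)
print(raise_entropy(probs, prior, 0.8)) # much closer to the prior
```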
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.