Uncertainty Quantification and Resource-Demanding Computer Vision
Applications of Deep Learning
- URL: http://arxiv.org/abs/2205.14917v1
- Date: Mon, 30 May 2022 08:31:03 GMT
- Title: Uncertainty Quantification and Resource-Demanding Computer Vision
Applications of Deep Learning
- Authors: Julian Burghoff, Robin Chan, Hanno Gottschalk, Annika Muetze, Tobias
Riedlinger, Matthias Rottmann, and Marius Schubert
- Abstract summary: Bringing deep neural networks (DNNs) into safety critical applications requires a thorough treatment of the model's uncertainties.
In this article, we survey methods that we developed to teach DNNs to be uncertain when they encounter new object classes.
We also present training methods to learn from only a few labels with help of uncertainty quantification.
- Score: 5.130440339897478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bringing deep neural networks (DNNs) into safety-critical applications such
as automated driving, medical imaging, and finance requires a thorough
treatment of the model's uncertainties. Training deep neural networks is
already resource-demanding, and so is their uncertainty quantification. In
this overview article, we survey methods that we developed to teach DNNs to be
uncertain when they encounter new object classes. Additionally, we present
training methods to learn from only a few labels with the help of uncertainty
quantification. Note that this typically comes with a massive computational
overhead of an order of magnitude or more compared to ordinary network
training. Finally, we survey our work on neural architecture search, which is
also an order of magnitude more resource-demanding than ordinary network
training.
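The article surveys uncertainty quantification (UQ) methods rather than prescribing a single one; as a point of reference, the sketch below shows Monte Carlo dropout, a common UQ baseline for flagging unfamiliar inputs. This is a minimal PyTorch illustration under assumed architecture, dropout rate, and sample count, not the authors' method.

```python
import torch
import torch.nn as nn

# Minimal sketch: Monte Carlo dropout as a generic uncertainty estimate.
# The architecture, dropout rate, and sample count are illustrative
# assumptions, not the method from the surveyed article.

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=32, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Dropout(p=0.5),  # kept active at inference time below
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=20):
    """Average the softmax over stochastic forward passes; the entropy of
    the averaged prediction serves as a per-input uncertainty score."""
    model.train()  # keeps dropout sampling active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy  # high entropy -> treat the input as unfamiliar

x = torch.randn(4, 32)
mean, entropy = mc_dropout_predict(SmallClassifier(), x)
print(entropy)  # one uncertainty score per input
```

Note that the multiple stochastic forward passes trade extra inference compute for an uncertainty signal, in line with the order-of-magnitude overhead the abstract mentions.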
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- DeepCSHAP: Utilizing Shapley Values to Explain Deep Complex-Valued Neural Networks [7.4841568561701095]
Deep Neural Networks are widely used in academia as well as in corporate and public applications.
The ability to explain their output is critical for safety reasons as well as for acceptance among applicants.
We present four gradient based explanation methods suitable for use in complex-valued neural networks.
arXiv Detail & Related papers (2024-03-13T11:26:43Z)
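DeepCSHAP itself targets complex-valued networks; as background, here is a hedged sketch of plain gradient-times-input attribution for a real-valued network, the simplest member of the gradient-based explanation family. The model and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy gradient x input saliency for a real-valued network. This is a
# generic baseline for illustration, not the paper's complex-valued method.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))

def gradient_x_input(model, x, target_class):
    """Attribute the target-class score to each input feature."""
    x = x.clone().requires_grad_(True)
    score = model(x)[:, target_class].sum()
    score.backward()
    return x.grad * x  # one relevance value per input feature

x = torch.randn(2, 16)
attr = gradient_x_input(model, x, target_class=1)
print(attr.shape)  # (2, 16)
```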
- Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
In safety-critical environments, the inputs often contain noisy sensor data.
We employ an end-to-end set-based training procedure that trains robust neural networks for formal verification.
arXiv Detail & Related papers (2024-01-26T15:52:41Z)
- Deep Internal Learning: Deep Learning from a Single Input [88.59966585422914]
In many cases there is value in training a network just from the input at hand.
This is particularly relevant in many signal and image processing problems where training data is scarce and diversity is large.
This survey paper covers deep internal-learning techniques that have been proposed in the past few years for such data-scarce settings.
arXiv Detail & Related papers (2023-12-12T16:48:53Z)
- Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications [1.9671123873378717]
We show that the uncertainty of deep neural networks is not only important in the sense of interpretability and transparency, but also crucial in further advancing their performance.
We generalize the definition of the uncertainty of deep neural networks to any number or vector associated with an input or an input-label pair, and catalog existing methods for "mining" such uncertainty from a deep model.
arXiv Detail & Related papers (2023-02-02T22:02:33Z)
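To make the generalized definition concrete, the toy snippet below computes two classic scalar "uncertainties" from a softmax output: one minus the maximum probability, and one minus the top-2 margin. The logits are made up for illustration.

```python
import torch

# Any scalar attached to an input can act as an "uncertainty" under the
# paper's generalized definition; two standard examples on softmax outputs.
logits = torch.tensor([[2.0, 0.5, 0.1], [1.0, 0.9, 0.8]])
probs = torch.softmax(logits, dim=-1)

msp_uncertainty = 1.0 - probs.max(dim=-1).values      # 1 - max softmax prob
top2 = probs.topk(2, dim=-1).values
margin_uncertainty = 1.0 - (top2[:, 0] - top2[:, 1])  # small margin = uncertain

print(msp_uncertainty, margin_uncertainty)
```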
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
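Quantization aside, the core of interval bound propagation can be shown in a few lines: propagate an axis-aligned box through an affine layer and a monotone activation. A minimal sketch with assumed weights and perturbation radius; QA-IBP's quantization handling is omitted.

```python
import torch

# Minimal interval bound propagation (IBP) through x -> relu(W x + b).
# Weights and the input box are illustrative assumptions.

def ibp_linear(lower, upper, W, b):
    """Propagate an axis-aligned box through an affine layer."""
    center, radius = (upper + lower) / 2, (upper - lower) / 2
    new_center = center @ W.T + b
    new_radius = radius @ W.abs().T
    return new_center - new_radius, new_center + new_radius

W, b = torch.randn(5, 3), torch.randn(5)
x = torch.zeros(1, 3)
eps = 0.1  # assumed perturbation radius
lower, upper = x - eps, x + eps

lower, upper = ibp_linear(lower, upper, W, b)
lower, upper = torch.relu(lower), torch.relu(upper)  # sound: ReLU is monotone
print(lower, upper)  # bounds on the layer outputs over the whole input box
```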
- Deep Binary Reinforcement Learning for Scalable Verification [44.44006029119672]
We present an RL algorithm tailored specifically for binarized neural networks (BNNs).
After training BNNs for the Atari environments, we verify robustness properties.
arXiv Detail & Related papers (2022-03-11T01:20:23Z)
- Pruning and Slicing Neural Networks using Formal Verification [0.2538209532048866]
Deep neural networks (DNNs) play an increasingly important role in various computer systems.
In order to create these networks, engineers typically specify a desired topology, and then use an automated training algorithm to select the network's weights.
Here, we propose to address this challenge by harnessing recent advances in DNN verification.
arXiv Detail & Related papers (2021-05-28T07:53:50Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
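One lightweight way to exploit a geometric symmetry, short of a fully equivariant architecture, is to average predictions over transformed copies of the input. A toy sketch over 90-degree rotations with an assumed model:

```python
import torch

# Approximate rotation invariance by averaging logits over the four
# 90-degree rotations of each image. The model is an assumption.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

def rotation_averaged_logits(model, images):
    """images: (N, 1, 28, 28); average logits over the 4 rotations."""
    outs = [model(torch.rot90(images, k, dims=(-2, -1))) for k in range(4)]
    return torch.stack(outs).mean(dim=0)

imgs = torch.randn(2, 1, 28, 28)
print(rotation_averaged_logits(model, imgs).shape)  # (2, 10)
```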
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.