Limitations of Deep Learning for Inverse Problems on Digital Hardware
- URL: http://arxiv.org/abs/2202.13490v4
- Date: Wed, 25 Oct 2023 14:29:27 GMT
- Title: Limitations of Deep Learning for Inverse Problems on Digital Hardware
- Authors: Holger Boche, Adalbert Fono and Gitta Kutyniok
- Abstract summary: We analyze what actually can be computed on current hardware platforms modeled as Turing machines.
We prove that finite-dimensional inverse problems are not Banach-Mazur computable for small relaxation parameters.
- Score: 65.26723285209853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have seen tremendous success over the last years. Since
the training is performed on digital hardware, in this paper, we analyze what
actually can be computed on current hardware platforms modeled as Turing
machines, which would lead to inherent restrictions of deep learning. For this,
we focus on the class of inverse problems, which, in particular, encompasses
any task to reconstruct data from measurements. We prove that
finite-dimensional inverse problems are not Banach-Mazur computable for small
relaxation parameters. Even more, our results introduce a lower bound on the
accuracy that can be obtained algorithmically.
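To make the setting concrete, a typical finite-dimensional inverse problem with a relaxation parameter is the basis-pursuit-type formulation below. This is a standard formulation in this line of work and is given here only as an illustration; the exact problem class treated in the paper may differ in detail:

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \lambda ,
```

where $A \in \mathbb{R}^{m \times n}$ models the measurement process, $y$ is the observed data, and $\lambda > 0$ is the relaxation parameter; the non-computability statement in the abstract concerns small values of $\lambda$.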
Related papers
- Higher-order topological kernels via quantum computation [68.8204255655161]
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data.
We propose a quantum approach to defining Betti kernels, which is based on constructing Betti curves with increasing order.
arXiv Detail & Related papers (2023-07-14T14:48:52Z)
- Biologically Plausible Learning on Neuromorphic Hardware Architectures [27.138481022472]
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa.
arXiv Detail & Related papers (2022-12-29T15:10:59Z)
- Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z)
- Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning [3.441021278275805]
We show that an iterative end-to-end network scheme enables reconstructions close to numerical precision.
We also demonstrate our state-of-the-art performance on the open-access real-world dataset LoDoPaB CT.
arXiv Detail & Related papers (2022-06-14T10:06:41Z)
- Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems [2.6651200086513107]
We study the problem of reconstructing solutions of inverse problems when only noisy measurements are available.
For the inverse operator, we demonstrate that there exists a neural network which is a robust-to-noise approximation of the operator.
arXiv Detail & Related papers (2022-06-02T08:51:46Z)
- A Survey of Quantization Methods for Efficient Neural Network Inference [75.55159744950859]
Quantization is the problem of mapping continuous real-valued numbers onto a fixed discrete set of values so as to minimize the number of bits required.
It has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas.
Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16.
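The quantization described above can be sketched with a minimal symmetric uniform scheme mapping floats to 4-bit integer levels. This is an illustrative example of the general idea, not the specific method of any surveyed paper; the function names and the single per-tensor scale are choices made here for clarity:

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    # Symmetric uniform quantization: map real values onto the
    # integer levels {-qmax, ..., qmax} with one per-tensor scale.
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 7 for 4 bits
    scale = np.max(np.abs(x)) / qmax        # largest value maps to qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original values.
    return q.astype(np.float32) * scale

x = np.array([0.9, -0.45, 0.1, 0.0], dtype=np.float32)
q, s = quantize_uniform(x)
x_hat = dequantize(q, s)   # each entry within half a quantization step of x
```

With 4 bits, every value is stored as one of only 15 integer levels, which is where the memory reduction relative to 32-bit floats comes from; the reconstruction error is bounded by half the step size `s`.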
arXiv Detail & Related papers (2021-03-25T06:57:11Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Graphs for deep learning representations [1.0152838128195467]
We introduce a graph formalism based on the recent advances in Graph Signal Processing (GSP)
Namely, we use graphs to represent the latent spaces of deep neural networks.
We showcase that this graph formalism allows us to answer various questions, including: ensuring robustness, reducing the amount of arbitrary choices in the design of the learning process, improving robustness to small perturbations added to the inputs, and reducing computational complexity.
arXiv Detail & Related papers (2020-12-14T11:51:23Z)
- Solving Inverse Problems With Deep Neural Networks -- Robustness Included? [3.867363075280544]
Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks.
In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts.
This article sheds new light on this concern, by conducting an extensive study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems.
arXiv Detail & Related papers (2020-11-09T09:33:07Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
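The "native" binarization the survey refers to can be sketched as sign binarization paired with a straight-through gradient estimator. This is a generic illustration of the technique, not the method of any one surveyed paper; the function names and the clipping threshold are assumptions made here:

```python
import numpy as np

def binarize(w):
    # Sign binarization: each real-valued weight becomes +1 or -1,
    # so a weight can be stored in a single bit.
    b = np.sign(w)
    b[b == 0] = 1.0          # convention: map exact zeros to +1
    return b

def ste_grad(upstream, w, clip=1.0):
    # Straight-through estimator: the sign function has zero gradient
    # almost everywhere, so training heuristically passes the upstream
    # gradient through wherever |w| <= clip and blocks it elsewhere.
    return upstream * (np.abs(w) <= clip)

w = np.array([0.3, -1.7, 0.0, 0.8])
b = binarize(w)                 # [1., -1., 1., 1.]
g = ste_grad(np.ones(4), w)     # gradient blocked for the -1.7 weight
```

The discontinuity mentioned in the summary is exactly why the estimator is needed: without it, gradients of the sign function vanish and the deep network cannot be optimized.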
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or the information presented, and is not responsible for any consequences of their use.