The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
- URL: http://arxiv.org/abs/2309.07072v1
- Date: Wed, 13 Sep 2023 16:33:27 GMT
- Title: The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning
- Authors: Alexander Bastounis, Alexander N. Gorban, Anders C. Hansen, Desmond J.
Higham, Danil Prokhorov, Oliver Sutton, Ivan Y. Tyukin, Qinghua Zhou
- Abstract summary: We consider the classical distribution-agnostic framework and algorithms minimising empirical risk.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
- Score: 73.5095051707364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we assess the theoretical limitations of determining guaranteed
stability and accuracy of neural networks in classification tasks. We consider
the classical distribution-agnostic framework and algorithms that minimise
empirical risk, potentially subject to weight regularisation. We show that
there is a large family of tasks for which computing and verifying ideal stable
and accurate neural networks in the above settings is extremely challenging, if
at all possible, even when such ideal solutions exist within the given class of
neural architectures.
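For concreteness, the training setting referred to above (empirical risk minimisation over a fixed architecture, optionally with weight regularisation) can be sketched as follows; the model, data, and hyperparameters are illustrative assumptions, not choices made in the paper.

```python
# Illustrative sketch only: empirical risk minimisation with L2 weight
# regularisation on a fixed classification architecture. The toy data,
# model and hyperparameters are assumptions, not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary classification sample (assumed for illustration).
X = torch.randn(256, 10)
y = (X[:, 0] > 0).long()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# weight_decay adds the L2 "weight regularisation" term to the objective.
optimiser = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

for _ in range(200):
    optimiser.zero_grad()
    empirical_risk = loss_fn(model(X), y)  # average loss over the sample
    empirical_risk.backward()
    optimiser.step()

print(f"final empirical risk: {empirical_risk.item():.4f}")
```

The paper's result concerns exactly this kind of pipeline: even when an ideal stable and accurate network exists within the chosen architecture class, certifying that the network returned by such a procedure is stable and accurate can be extremely challenging.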
Related papers
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
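As a rough illustration of what a quantized setting looks like (the paper's precise quantization scheme may differ), weights can be restricted to a finite, evenly spaced grid of values:

```python
# Illustrative uniform weight quantization to a finite grid of levels;
# the paper's precise quantization scheme may differ from this sketch.
import numpy as np

def quantize(w, num_bits=8, w_max=1.0):
    """Map real weights to the nearest of 2**num_bits evenly spaced levels in [-w_max, w_max]."""
    levels = 2 ** num_bits
    step = 2 * w_max / (levels - 1)
    return np.round(np.clip(w, -w_max, w_max) / step) * step

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))
print(quantize(w, num_bits=4))
```

Loosely speaking, once parameters range over a finite set, existence and verification questions become finite searches, which is one intuition for why computability restrictions relax in the quantized setting.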
arXiv Detail & Related papers (2024-08-12T15:02:26Z)
- Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications [1.9671123873378717]
We show that the uncertainty of deep neural networks is not only important for interpretability and transparency, but also crucial for further advancing their performance.
We generalize the definition of the uncertainty of a deep neural network to any number or vector associated with an input or input-label pair, and catalog existing methods for "mining" such uncertainty from a deep model.
arXiv Detail & Related papers (2023-02-02T22:02:33Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of the network's performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
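A toy way to probe these two quantities for a batch of features is sketched below; the definitions are generic stand-ins, not necessarily the exact measures used in the paper.

```python
# Toy feature-space diagnostics; the definitions below are generic stand-ins,
# not necessarily the exact measures used in the paper.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 64))  # assumed layer activations for a batch

# Quasi-orthogonality: mean absolute off-diagonal cosine similarity
# (closer to 0 means feature vectors are closer to mutually orthogonal).
unit = features / np.linalg.norm(features, axis=1, keepdims=True)
cosine = unit @ unit.T
off_diagonal = cosine[~np.eye(len(cosine), dtype=bool)]
quasi_orthogonality = np.abs(off_diagonal).mean()

# Intrinsic dimension: participation ratio of the feature covariance spectrum.
eigenvalues = np.linalg.eigvalsh(np.cov(features, rowvar=False))
intrinsic_dimension = eigenvalues.sum() ** 2 / (eigenvalues ** 2).sum()

print(f"quasi-orthogonality ~ {quasi_orthogonality:.3f}, "
      f"intrinsic dimension ~ {intrinsic_dimension:.1f}")
```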
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration [12.05471394131891]
We provide a new framework for sparse deep learning, which has the above issues addressed in a coherent way.
We lay down a theoretical foundation for sparse deep learning and propose prior annealing algorithms for learning sparse neural networks.
arXiv Detail & Related papers (2021-10-01T21:16:34Z)
- On the regularized risk of distributionally robust learning over deep neural networks [0.0]
We study the relation between distributionally robust learning and different forms of regularization to enforce robustness of deep neural networks.
We motivate a family of scalable algorithms for the training of robust neural networks.
arXiv Detail & Related papers (2021-09-13T20:10:39Z)
- The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any procedure that trains neural networks of a fixed architecture for classification will yield networks that are either inaccurate or unstable (if accurate).
The key is that stable and accurate neural networks must have variable dimensions depending on the input; in particular, variable dimensions are a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them.
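The instability in question can be probed empirically with a one-step gradient-sign perturbation; the sketch below is only a heuristic check, not the paper's formal notion of stability, and the model and perturbation budget are assumptions.

```python
# Heuristic (in)stability probe via a one-step gradient-sign (FGSM-style)
# perturbation. Purely illustrative; the paper's formal notion of stability
# is more precise, and the stand-in model below is untrained.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in for a trained classifier

x = torch.randn(1, 10, requires_grad=True)
label = torch.tensor([0])

loss = nn.CrossEntropyLoss()(model(x), label)
loss.backward()

eps = 0.05                           # small perturbation budget (assumed)
x_adv = x + eps * x.grad.sign()      # worst-case-ish direction for this loss

before = model(x).argmax(dim=1).item()
after = model(x_adv).argmax(dim=1).item()
print(f"prediction before: {before}, after perturbation: {after}")
```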
arXiv Detail & Related papers (2021-09-13T16:19:25Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Learning for Integer-Constrained Optimization through Neural Networks with Limited Training [28.588195947764188]
We introduce a symmetric and decomposed neural network structure, which is fully interpretable in terms of the functionality of its constituent components.
By taking advantage of the underlying pattern of the integer constraint, the introduced neural network offers superior generalization performance with limited training.
We show that the introduced decomposed approach can be further extended to semi-decomposed frameworks.
arXiv Detail & Related papers (2020-11-10T21:17:07Z)
- A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on the postulate that the robustness of a classifier should be considered a property independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.