Mathematical Algorithm Design for Deep Learning under Societal and
Judicial Constraints: The Algorithmic Transparency Requirement
- URL: http://arxiv.org/abs/2401.10310v1
- Date: Thu, 18 Jan 2024 15:32:38 GMT
- Title: Mathematical Algorithm Design for Deep Learning under Societal and
Judicial Constraints: The Algorithmic Transparency Requirement
- Authors: Holger Boche, Adalbert Fono, Gitta Kutyniok
- Abstract summary: We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale machines have the potential to establish trustworthy solvers for inverse problems.
- Score: 65.26723285209853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning still has drawbacks in terms of trustworthiness, i.e.,
the property of being comprehensible, fair, safe, and reliable. To mitigate the
potential risks of AI, clear obligations associated with trustworthiness have
been proposed via regulatory guidelines, e.g., in the European AI Act. A
central question is therefore to what extent trustworthy deep learning can be
realized.
Establishing the described properties constituting trustworthiness requires
that the factors influencing an algorithmic computation can be retraced, i.e.,
the algorithmic implementation is transparent. Motivated by the observation
that the current evolution of deep learning models necessitates a change in
computing technology, we derive a mathematical framework which enables us to
analyze whether a transparent implementation in a computing model is feasible.
As an example, we apply our trustworthiness framework to analyze deep learning
approaches for inverse problems in digital and analog computing models,
represented by Turing and Blum-Shub-Smale machines, respectively. Based on
previous results, we find that Blum-Shub-Smale machines have the potential to
establish trustworthy solvers for inverse problems under fairly general
conditions, whereas Turing machines cannot guarantee trustworthiness to the
same degree.
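To make the digital/analog contrast concrete, here is a minimal Python sketch (our illustration, not from the paper): a Blum-Shub-Smale machine is idealized as computing exactly over the reals, while a Turing-style digital implementation works with finite-precision approximations, and the two can diverge on an ill-conditioned linear inverse problem y = Ax.

```python
# Illustration only: exact rational arithmetic stands in for the BSS
# idealization of real-number computation; floating point stands in
# for digital (Turing-style) finite-precision computation.
from fractions import Fraction

def solve_2x2(a, b, c, d, y1, y2):
    """Solve [[a, b], [c, d]] @ (x1, x2) = (y1, y2) via Cramer's rule."""
    det = a * d - b * c
    return (y1 * d - b * y2) / det, (a * y2 - y1 * c) / det

# Nearly singular forward operator A (an ill-conditioned inverse problem).
eps = 1e-10
x_float = solve_2x2(1.0, 1.0, 1.0, 1.0 + eps, 2.0, 2.0 + eps)

e = Fraction(1, 10**10)
x_exact = solve_2x2(Fraction(1), Fraction(1), Fraction(1), 1 + e,
                    Fraction(2), 2 + e)

print(x_float)                     # inexact due to rounding and cancellation
print(tuple(map(float, x_exact)))  # exactly (1.0, 1.0)
```

The exact computation recovers x = (1, 1), while the floating-point path loses most of its accuracy to cancellation; this precision gap between the two computing models is a toy version of what the paper's analysis formalizes.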
Related papers
- Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
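For intuition, a hedged sketch of what quantization can mean here (our assumptions, not the paper's construction): restricting weights to a finite grid yields networks whose evaluation involves only finitely many representable values, sidestepping real-number computability obstacles at the cost of a small, controllable deviation.

```python
# Toy sketch: quantize weights to a fixed grid and compare outputs.
import numpy as np

def quantize(w, step=1 / 256):
    """Round each weight to the nearest multiple of `step`."""
    return np.round(w / step) * step

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))             # real-valued weights
Wq = quantize(W)                        # finitely representable weights
x = rng.normal(size=3)
print(np.max(np.abs(W @ x - Wq @ x)))   # small, bounded output deviation
```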
arXiv Detail & Related papers (2024-08-12T15:02:26Z) - Mechanistic Interpretability for AI Safety -- A Review [28.427951836334188]
This review explores mechanistic interpretability.
Mechanistic interpretability could help prevent catastrophic outcomes as AI systems become more powerful and inscrutable.
arXiv Detail & Related papers (2024-04-22T11:01:51Z) - Tensor Networks for Explainable Machine Learning in Cybersecurity [0.0]
We develop an unsupervised clustering algorithm based on Matrix Product States (MPS).
Our investigation shows that MPS rivals traditional deep learning models such as autoencoders and GANs in terms of performance.
Our approach naturally facilitates the extraction of feature-wise probabilities, Von Neumann Entropy, and mutual information.
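As a hedged illustration of that last point (our code, not the paper's): in an MPS, the Von Neumann entropy across a bipartition follows from the Schmidt (singular-value) spectrum, so it can be read off with a plain SVD.

```python
# Compute the Von Neumann entropy of a bipartition of a toy state.
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=(8, 8))
psi /= np.linalg.norm(psi)      # normalized joint state over two subsystems

s = np.linalg.svd(psi, compute_uv=False)
p = s**2                        # Schmidt spectrum: probabilities summing to 1
p = p[p > 1e-12]                # drop numerical zeros before taking logs
entropy = -np.sum(p * np.log(p))
print(entropy)
```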
arXiv Detail & Related papers (2023-12-29T22:35:45Z) - Towards Efficient and Trustworthy AI Through
Hardware-Algorithm-Communication Co-Design [32.815326729969904]
State-of-the-art AI models are largely incapable of providing trustworthy measures of their uncertainty.
This paper highlights research directions at the intersection of hardware and software design.
arXiv Detail & Related papers (2023-09-27T18:39:46Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
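A minimal predictive-coding sketch under our own assumptions (a one-layer linear generative model, not the review's formulation): latent activity is updated by gradient descent on prediction errors.

```python
import numpy as np

rng = np.random.default_rng(2)
W = 0.5 * rng.normal(size=(5, 3))  # generative weights: prediction = W @ z
x = rng.normal(size=5)             # observed data
z = np.zeros(3)                    # latent state

lr = 0.1
for _ in range(200):
    err = x - W @ z                # prediction error at the data layer
    z += lr * (W.T @ err - z)      # error-driven update with a pull toward 0

print(np.linalg.norm(x - W @ z))   # residual prediction error after inference
```

At the fixed point this recovers the ridge-regularized estimate z = (W^T W + I)^{-1} W^T x, i.e., inference by iteratively minimizing prediction error.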
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Information Theoretic Evaluation of Privacy-Leakage, Interpretability,
and Transferability for a Novel Trustworthy AI Framework [11.764605963190817]
Guidelines and principles of trustworthy AI should be adhered to in practice during the development of AI systems.
This work proposes a novel information-theoretic trustworthy AI framework, based on the hypothesis that information theory makes it possible to account for ethical AI principles.
arXiv Detail & Related papers (2021-06-06T09:47:06Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
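One simple, hedged way to realize this (our example, not the paper's method) is a bootstrap ensemble: the spread of member predictions serves as a communicable uncertainty estimate alongside the prediction itself.

```python
import numpy as np

def train_member(seed, x, y):
    """Toy 'model': a linear fit on a bootstrap resample of the data."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(x), len(x))
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    return lambda t: slope * t + intercept

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(scale=0.1, size=30)
ensemble = [train_member(s, x, y) for s in range(10)]

t = 1.5                             # query outside the training range
preds = np.array([m(t) for m in ensemble])
print(preds.mean(), preds.std())    # report prediction with its uncertainty
```

The standard deviation grows for out-of-distribution queries, which is the kind of signal one would surface to a decision-maker.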
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - Plausible Counterfactuals: Auditing Deep Learning Classifiers with
Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has raised unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to craft plausible attacks against the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
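To make the mechanism concrete, a toy sketch with our own stand-ins (not the paper's GAN or objectives): search a generator's latent space for a nearby sample that the audited classifier labels differently while staying on the generator's output manifold.

```python
import numpy as np

rng = np.random.default_rng(4)

def generator(z):            # toy stand-in for a trained GAN generator
    return np.tanh(z)        # keeps outputs in a plausible range

def classifier(x):           # toy stand-in for the audited black-box model
    return int(x.sum() > 0)

z = rng.normal(size=8)
target = 1 - classifier(generator(z))    # aim to flip the original label

counterfactual = None
for _ in range(1000):
    cand = z + 0.3 * rng.normal(size=8)  # stay close to the seed latent
    if classifier(generator(cand)) == target:
        counterfactual = generator(cand)
        break

print("plausible counterfactual found:", counterfactual is not None)
```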
arXiv Detail & Related papers (2020-03-25T11:08:56Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)