ClaudesLens: Uncertainty Quantification in Computer Vision Models
- URL: http://arxiv.org/abs/2406.13008v1
- Date: Tue, 18 Jun 2024 18:58:54 GMT
- Title: ClaudesLens: Uncertainty Quantification in Computer Vision Models
- Authors: Mohamad Al Shaar, Nils Ekström, Gustav Gille, Reza Rezvan, Ivan Wely,
- Abstract summary: We show a possible method to quantify and evaluate the uncertainty of the output of different computer vision models based on Shannon entropy.
We believe that Shannon entropy may eventually play a larger role in state-of-the-art (SOTA) methods for quantifying uncertainty in artificial intelligence.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a world where more decisions are made using artificial intelligence, it is of utmost importance to ensure that these decisions are well-grounded. Neural networks are the modern building blocks for artificial intelligence. Modern neural network-based computer vision models are often used for object classification tasks. Correctly classifying objects with certainty has become of great importance in recent times. However, quantifying the inherent uncertainty of the output of neural networks is a challenging task. Here we show a possible method to quantify and evaluate the uncertainty of the output of different computer vision models based on Shannon entropy. By adding perturbations of different magnitudes to different parts of the system, ranging from the input to the parameters of the network, one introduces entropy into the system. By quantifying and evaluating the perturbed models with the proposed PI and PSI metrics, we conclude that our theoretical framework can grant insight into the uncertainty of the predictions of computer vision models. We believe that this theoretical framework can be applied to different applications for neural networks. We believe that Shannon entropy may eventually play a larger role in state-of-the-art (SOTA) methods to quantify uncertainty in artificial intelligence. One day we might be able to apply Shannon entropy to our neural systems.
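The abstract does not define the PI and PSI metrics, so the sketch below illustrates only the underlying idea: measuring the Shannon entropy H(p) = -sum_i p_i log p_i of a classifier's softmax output while Gaussian noise of varying magnitude is injected into either the input or the parameters. The toy linear model and all names here (predict, mean_entropy_under_perturbation, sigma) are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i), in nats."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# A toy linear "vision model": 10 classes, 784-dim flattened input.
# Hypothetical stand-in; the paper perturbs real computer vision models.
W = rng.normal(size=(10, 784))
x = rng.normal(size=784)

def predict(weights, inputs):
    return softmax(weights @ inputs)

def mean_entropy_under_perturbation(sigma, target="input", n_trials=100):
    """Average output entropy with Gaussian noise of scale `sigma` added
    to the input or to the parameters (two of the perturbation sites the
    abstract mentions)."""
    entropies = []
    for _ in range(n_trials):
        if target == "input":
            p = predict(W, x + rng.normal(scale=sigma, size=x.shape))
        else:  # perturb the parameters instead of the input
            p = predict(W + rng.normal(scale=sigma, size=W.shape), x)
        entropies.append(shannon_entropy(p))
    return float(np.mean(entropies))

for sigma in (0.0, 0.1, 1.0):
    print(f"sigma={sigma}: "
          f"input-perturbed H={mean_entropy_under_perturbation(sigma, 'input'):.3f}, "
          f"param-perturbed H={mean_entropy_under_perturbation(sigma, 'params'):.3f}")
```

If the mean entropy rises with the perturbation scale, the model's predictions are being destabilized; the paper's PI and PSI metrics formalize this kind of comparison, and their exact definitions are given in the paper itself.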
Related papers
- Quantum-Cognitive Neural Networks: Assessing Confidence and Uncertainty with Human Decision-Making Simulations
We employ the recently proposed quantum-tunnelling neural networks (QT-NNs) to classify image datasets.
Our findings suggest that the QT-NN model provides compelling evidence of its potential to replicate human-like decision-making.
arXiv Detail & Related papers (2024-12-11T01:34:21Z)
- From Neurons to Neutrons: A Case Study in Interpretability
We argue that high-dimensional neural networks can learn low-dimensional representations of their training data that are useful beyond simply making good predictions.
This indicates that such approaches to interpretability can be useful for deriving a new understanding of a problem from models trained to solve it.
arXiv Detail & Related papers (2024-05-27T17:59:35Z)
- Bayes in the age of intelligent machines
We argue that Bayesian models of cognition and artificial neural networks lie at different levels of analysis and are complementary modeling approaches.
We also argue that the same perspective can be applied to intelligent machines, where a Bayesian approach may be uniquely valuable.
arXiv Detail & Related papers (2023-11-16T21:39:54Z)
- ShadowNet for Data-Centric Quantum System Learning
We propose a data-centric learning paradigm combining the strengths of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- A Survey on Brain-Inspired Deep Learning via Predictive Coding
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics
The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill.
This can be problematic given the increasing use of neural networks in high-stakes decision-making, such as in climate change applications.
We address both issues by successfully implementing a Bayesian Neural Network (BNN), where parameters are distributions rather than deterministic values, and by applying novel implementations of explainable AI (XAI) techniques.
arXiv Detail & Related papers (2022-04-30T08:35:57Z)
- EINNs: Epidemiologically-Informed Neural Networks
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks
We prove that any training procedure based on training neural networks for classification problems with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate).
The key is that stable and accurate neural networks must have variable dimensions depending on the input; in particular, variable dimensions are a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist; however, modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z)
- Quantum Superposition Inspired Spiking Neural Network
Despite advances in artificial intelligence models, neural networks still cannot achieve human performance.
We propose a quantum superposition spiking neural network (QS-SNN) inspired by quantum mechanisms and phenomena in the brain.
The QS-SNN incorporates quantum theory with brain-inspired spiking neural network models from a computational perspective, resulting in more robust performance compared with traditional ANN models.
arXiv Detail & Related papers (2020-10-23T07:11:53Z)
- Reservoir Memory Machines as Neural Computers
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)