Neurosymbolic Conformal Classification
- URL: http://arxiv.org/abs/2409.13585v1
- Date: Fri, 20 Sep 2024 15:38:34 GMT
- Title: Neurosymbolic Conformal Classification
- Authors: Arthur Ledaguenel, Céline Hudelot, Mostepha Khouadjia
- Abstract summary: The last decades have seen dramatic improvements in Machine Learning (ML), driven mainly by Deep Learning (DL).
Despite the resounding successes of ML in many domains, the inability to provide conformity guarantees and the fragility of ML systems have prevented the design of trustworthy AI systems.
Several research paths have been investigated to mitigate this fragility and provide some guarantees about the behavior of ML systems.
- Score: 6.775534755081169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The last decades have seen dramatic improvements in Machine Learning (ML), driven mainly by Deep Learning (DL). However, despite the resounding successes of ML in many domains, the inability to provide conformity guarantees and the fragility of ML systems (faced with distribution shifts, adversarial attacks, etc.) have prevented the design of trustworthy AI systems. Several research paths have been investigated to mitigate this fragility and provide some guarantees about the behavior of ML systems, among which are neurosymbolic AI and conformal prediction. Neurosymbolic artificial intelligence is a growing field of research aiming to combine the learning capabilities of neural networks with the reasoning abilities of symbolic systems. One objective of this hybridization can be to provide theoretical guarantees that the output of the system will comply with some prior knowledge. Conformal prediction is a set of techniques that account for the uncertainty of ML systems by transforming a single prediction into a set of predictions, called a confidence set. Interestingly, this comes with statistical guarantees regarding the presence of the true label inside the confidence set. Both approaches are distribution-free and model-agnostic. In this paper, we show how these two approaches can complement one another. We introduce several neurosymbolic conformal prediction techniques and explore their different characteristics (size of confidence sets, computational complexity, etc.).
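To make the two ingredients concrete, below is a minimal sketch of split conformal classification combined with a hypothetical post-hoc symbolic filtering step. The function names, the nonconformity score s(x, y) = 1 - p_y(x), and the `allowed_labels` encoding of prior knowledge are illustrative assumptions, not the specific techniques introduced in the paper.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal classification with the score s(x, y) = 1 - p_y(x).

    A threshold is calibrated on held-out data so that, marginally, the
    true label lands inside the returned set with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score of the true label on each calibration example.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected empirical quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, level, method="higher")
    # Every label whose score stays below the threshold enters the set.
    return [np.flatnonzero(1.0 - p <= q_hat) for p in test_probs]

def symbolic_filter(conf_sets, allowed_labels):
    """Hypothetical neurosymbolic step: intersect each confidence set with
    the labels that prior knowledge permits for the corresponding input."""
    return [np.intersect1d(s, ok) for s, ok in zip(conf_sets, allowed_labels)]
```

If the prior knowledge is sound (i.e., the true label always satisfies the constraint), the intersection can only shrink the confidence sets while preserving the 1 - alpha coverage guarantee; which guarantees survive under weaker assumptions, and at what computational cost, is the kind of trade-off the paper explores.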
Related papers
- On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms [9.071347361654931]
We assess the assurance of end-to-end fully differentiable neurosymbolic systems, an emerging approach to creating data-efficient models.
We find that end-to-end neurosymbolic methods present unique opportunities for assurance beyond their data efficiency.
arXiv Detail & Related papers (2025-02-13T03:29:42Z) - From Aleatoric to Epistemic: Exploring Uncertainty Quantification Techniques in Artificial Intelligence [19.369216778200034]
Uncertainty quantification (UQ) is a critical aspect of artificial intelligence (AI) systems.
This review explores the evolution of uncertainty quantification techniques in AI.
We examine the diverse applications of UQ across various fields, emphasizing its impact on decision-making, predictive accuracy, and system robustness.
arXiv Detail & Related papers (2025-01-05T23:14:47Z) - Machine Learning Robustness: A Primer [12.426425119438846]
The discussion begins with a detailed definition of robustness, portraying it as the ability of ML models to maintain stable performance across varied and unexpected environmental conditions.
The chapter delves into the factors that impede robustness, such as data bias, model complexity, and the pitfalls of underspecified ML pipelines.
The discussion progresses to explore amelioration strategies for bolstering robustness, starting with data-centric approaches like debiasing and augmentation.
arXiv Detail & Related papers (2024-04-01T03:49:42Z) - Evaluation of Predictive Reliability to Foster Trust in Artificial
Intelligence. A case study in Multiple Sclerosis [0.34473740271026115]
Spotting Machine Learning failures is of paramount importance when ML predictions are used to drive clinical decisions.
We propose a simple approach that can be used in the deployment phase of any ML model to suggest whether to trust predictions or not.
Our method holds the promise to provide effective support to clinicians by spotting potential ML failures during deployment.
arXiv Detail & Related papers (2024-02-27T14:48:07Z) - ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory
Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber
Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Enhancing Human-Machine Teaming for Medical Prognosis Through Neural
Ordinary Differential Equations (NODEs) [0.0]
A key barrier to the full realization of Machine Learning's potential in medical prognoses is technology acceptance.
Recent efforts to produce explainable AI (XAI) have made progress in improving the interpretability of some ML models.
We propose a novel ML architecture to enhance human understanding and encourage acceptability.
arXiv Detail & Related papers (2021-02-08T10:52:23Z) - Neuro-symbolic Neurodegenerative Disease Modeling as Probabilistic
Programmed Deep Kernels [93.58854458951431]
We present a probabilistic programmed deep kernel learning approach to personalized, predictive modeling of neurodegenerative diseases.
Our analysis considers a spectrum of neural and symbolic machine learning approaches.
We run evaluations on the problem of Alzheimer's disease prediction, yielding results that surpass deep-learning baselines.
arXiv Detail & Related papers (2020-09-16T15:16:03Z) - An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear
Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout, in the context of a human-in-the-loop system, increase the system's transparency and performance (a minimal sketch of the Monte-Carlo dropout idea follows this list).
A simulation study demonstrates that the uncertainty-based human-in-the-loop system improves performance across different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)