Speculations on Uncertainty and Humane Algorithms
- URL: http://arxiv.org/abs/2408.06736v1
- Date: Tue, 13 Aug 2024 08:54:34 GMT
- Title: Speculations on Uncertainty and Humane Algorithms
- Authors: Nicholas Gray
- Abstract summary: Provenance enables algorithms to know what they know, preventing possible harms.
It is essential to compute with what we know rather than make assumptions that may be unjustified or untenable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The appreciation and utilisation of risk and uncertainty can play a key role in helping to solve some of the many ethical issues that are posed by AI. Understanding the uncertainties can allow algorithms to make better decisions by providing interrogatable avenues to check the correctness of outputs. Allowing algorithms to deal with variability and ambiguity in their inputs means they do not need to force people into uncomfortable classifications. Provenance enables algorithms to know what they know, preventing possible harms. Additionally, uncertainty about provenance highlights the trustworthiness of algorithms. It is essential to compute with what we know rather than make assumptions that may be unjustified or untenable. This paper provides a perspective on the importance of risk and uncertainty in the development of ethical AI, especially in high-risk scenarios. It argues that the handling of uncertainty, especially epistemic uncertainty, is critical to ensuring that algorithms do not cause harm, are trustworthy, and make decisions that are humane.
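As an illustration of computing with what we know rather than with unjustified point assumptions, here is a minimal interval-arithmetic sketch (names and numbers are invented for illustration, not taken from the paper):

```python
# Minimal sketch: propagating epistemic uncertainty as an interval
# rather than assuming a single point value. Names and numbers are
# illustrative, not from the paper.

class Interval:
    """A closed interval [lo, hi] expressing epistemic uncertainty."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# All we know is that the exposure lies in [0.5, 1.5] and the
# sensitivity in [2.0, 4.0]; compute with that, not with a guess.
exposure = Interval(0.5, 1.5)
sensitivity = Interval(2.0, 4.0)
print(exposure * sensitivity)   # [1.0, 6.0] -- the honest answer set
```

The output is the full set of answers consistent with the evidence, so downstream decisions can see exactly how much is unknown instead of being handed a false point estimate.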
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
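One hedged reading of that idea in code (our own sketch, not the paper's actual construction): treat a family of bootstrapped fits as a stand-in for the feasible class of predictors, and flag inputs that every member scores nearly identically as candidates for human judgment.

```python
# Sketch (our reading, not the paper's construction): inputs that
# every predictor in a feasible class scores the same "look the same"
# to the algorithm; human judgment may still distinguish them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Stand-in for "any feasible predictive algorithm": bootstrap refits.
preds = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))
    model = LogisticRegression().fit(X[idx], y[idx])
    preds.append(model.predict_proba(X)[:, 1])
preds = np.array(preds)

# Inputs the whole class scores (almost) identically: candidates
# where complementary human information could add value.
spread = preds.max(axis=0) - preds.min(axis=0)
to_human = np.where(spread < 0.05)[0]
print(f"{to_human.size} of {len(X)} inputs look the same to the class")
```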
arXiv Detail & Related papers (2024-10-11T13:03:53Z)
- A Structured Review of Literature on Uncertainty in Machine Learning & Deep Learning [0.8667724053232616]
We focus on a critical concern for the adoption of Machine Learning in risk-sensitive applications, namely understanding and quantifying uncertainty.
Our paper approaches this topic in a structured way, providing a review of the literature across the various facets in which uncertainty is enveloped in the ML process.
Key contributions in this review are broadening the scope of uncertainty discussion, as well as an updated review of uncertainty quantification methods in Deep Learning.
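One standard deep-learning uncertainty-quantification method such reviews cover is Monte Carlo dropout; a minimal PyTorch sketch follows (architecture and sizes are arbitrary placeholders):

```python
# Minimal Monte Carlo dropout sketch (one standard UQ method covered
# in such reviews). Architecture and sizes are arbitrary placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Spread across stochastic passes approximates epistemic uncertainty.
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(5, 10)
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())
```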
arXiv Detail & Related papers (2024-06-01T07:17:38Z)
- Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
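A toy partial-identification sketch of the flavour of such bounds (our illustration, not the paper's audit procedure): when risk status is unobserved for some individuals, worst-case assignments of the unknowns still bound the treatment rate.

```python
# Toy partial-identification sketch (our illustration, not the
# paper's procedure): bound the treatment rate among high-risk
# individuals when risk status is unobserved for some of them.
treated_known_high   = 30   # known high-risk, treated
untreated_known_high = 50   # known high-risk, untreated
treated_unknown      = 10   # risk unobserved, treated
untreated_unknown    = 20   # risk unobserved, untreated

n_known = treated_known_high + untreated_known_high

# Worst case: every unobserved untreated person is actually high
# risk; best case: every unobserved treated person is.
lower = treated_known_high / (n_known + untreated_unknown)
upper = (treated_known_high + treated_unknown) / (n_known + treated_unknown)
print(f"high-risk treatment rate lies in [{lower:.2f}, {upper:.2f}]")
```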
arXiv Detail & Related papers (2024-03-18T21:09:06Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Building Safe and Reliable AI systems for Safety Critical Tasks with Vision-Language Processing [1.2183405753834557]
Current AI algorithms are unable to identify common causes of failure.
Additional techniques are required to quantify the quality of predictions.
This thesis will focus on vision-language data processing for tasks like classification, image captioning, and vision question answering.
arXiv Detail & Related papers (2023-08-06T18:05:59Z)
- Influence of the algorithm's reliability and transparency in the user's decision-making process [0.0]
We conduct an online empirical study with 61 participants to find out how changes in the transparency and reliability of an algorithm impact users' decision-making process.
The results indicate that people show at least moderate confidence in the algorithm's decisions even when its reliability is poor.
arXiv Detail & Related papers (2023-07-13T03:13:49Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types, including input, system, and output.
We discuss the challenges of uncertainty estimation in NLP and outline potential future directions.
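For the output category, a common quantification is the predictive entropy of the model's distribution; a minimal sketch with toy probabilities (our addition, not from the paper):

```python
# Minimal sketch of output uncertainty in NLP: predictive entropy of
# a classifier's (or next-token) distribution. Values are toy data.
import math

def predictive_entropy(probs):
    """Shannon entropy in nats; higher means more output uncertainty."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.95, 0.03, 0.02]   # e.g. sentiment class probabilities
uncertain = [0.40, 0.35, 0.25]
print(predictive_entropy(confident))  # ~0.23
print(predictive_entropy(uncertain))  # ~1.08
```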
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- Distributional Actor-Critic Ensemble for Uncertainty-Aware Continuous Control [13.767812547998735]
Uncertainty quantification is one of the central challenges for machine learning in real-world applications.
Disentangling and evaluating these uncertainties simultaneously can improve the agent's final performance.
We propose an uncertainty-aware reinforcement learning algorithm for continuous control tasks.
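A generic sketch of the disentangling idea (an ensemble decomposition of our own, not the paper's exact algorithm): disagreement between ensemble members estimates epistemic uncertainty, while the average spread within a member estimates aleatoric uncertainty.

```python
# Generic sketch (not the paper's exact algorithm): disentangling
# uncertainties with an ensemble of distributional value estimates.
import numpy as np

rng = np.random.default_rng(0)
# return_samples[i, j]: j-th sample from critic i's return distribution
return_samples = rng.normal(loc=rng.normal(5.0, 1.0, size=(4, 1)),
                            scale=2.0, size=(4, 256))

member_means = return_samples.mean(axis=1)
member_vars = return_samples.var(axis=1)

epistemic = member_means.var()   # disagreement between critics
aleatoric = member_vars.mean()   # average spread within a critic
print(f"epistemic: {epistemic:.2f}, aleatoric: {aleatoric:.2f}")
```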
arXiv Detail & Related papers (2022-07-27T18:11:04Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
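One concrete way communicated uncertainty can augment decision-making, as a minimal sketch of our own (threshold and names are illustrative): defer to a human when the uncertainty attached to a prediction exceeds a tolerance.

```python
# Minimal deferral sketch (our illustration): communicate uncertainty
# alongside each prediction and route high-uncertainty cases to a human.
def decide(prediction, uncertainty, tolerance=0.15):
    if uncertainty > tolerance:
        return ("defer to human", prediction, uncertainty)
    return ("act on model", prediction, uncertainty)

for pred, unc in [(0.91, 0.04), (0.55, 0.30)]:
    print(decide(pred, unc))
```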
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
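Whether a displayed confidence score can calibrate trust depends partly on whether the score itself is calibrated; a minimal expected-calibration-error (ECE) sketch on toy data (our addition, not the paper's analysis):

```python
# Minimal expected calibration error (ECE) sketch on toy data: a
# standard check of whether displayed confidence scores are honest.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # bin weight times accuracy gap
    return ece

conf = [0.9, 0.8, 0.75, 0.6, 0.95, 0.55]
hit  = [1,   1,   0,    1,   1,    0]
print(f"ECE: {expected_calibration_error(conf, hit):.3f}")
```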
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.