Trust Calibration as a Function of the Evolution of Uncertainty in
Knowledge Generation: A Survey
- URL: http://arxiv.org/abs/2209.04388v1
- Date: Fri, 9 Sep 2022 16:46:37 GMT
- Title: Trust Calibration as a Function of the Evolution of Uncertainty in
Knowledge Generation: A Survey
- Authors: Joshua Boley and Maoyuan Sun
- Abstract summary: We argue that accounting for the propagation of uncertainty from data sources all the way through extraction of information is necessary to understand how user trust in a visual analytics system evolves over its lifecycle.
We sample a broad cross-section of the literature from visual analytics, human cognitive theory, and uncertainty, and attempt to synthesize a useful perspective.
- Score: 1.462008690460147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User trust is a crucial consideration in designing robust visual analytics
systems that can guide users to reasonably sound conclusions despite inevitable
biases and other uncertainties introduced by the human, the machine, and the
data sources which paint the canvas upon which knowledge emerges. On closer
consideration, a multitude of factors emerges that introduces considerable
complexity and complicates our understanding of how trust relationships evolve
in visual analytics systems, much as they do in intelligent sociotechnical
systems. A visual analytics system, however, does not by its nature provoke
exactly the same phenomena as its simpler cousins, nor are the phenomena
necessarily of the same kind. Regardless, both application domains share the
same root causes from which the need for trustworthiness arises:
uncertainty and the assumption of risk. In addition, visual analytics systems,
even more than intelligent systems, which traditionally remain closed to
direct human input and direction during processing, are influenced by a
multitude of cognitive biases that further complicate any accounting of the
uncertainties that may erode the user's confidence in, and ultimately trust
in, the system.
In this article we argue that accounting for the propagation of uncertainty
from data sources all the way through extraction of information and hypothesis
testing is necessary to understand how user trust in a visual analytics system
evolves over its lifecycle, and that the analyst's selection of visualization
parameters affords us a simple means to capture the interactions between
uncertainty and cognitive bias as a function of the attributes of the search
tasks the analyst executes while evaluating explanations. We sample a broad
cross-section of the literature from visual analytics, human cognitive theory,
and uncertainty, and attempt to synthesize a useful perspective.
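To make the central argument concrete, here is a minimal, purely illustrative sketch (not from the paper) of what accounting for propagated uncertainty could look like: each pipeline stage, from data source through information extraction, visualization parameter choices, and hypothesis testing, carries an uncertainty estimate, and an end-to-end confidence is derived under the simplifying assumption that stages are independent. The `Stage` and `pipeline_confidence` names are hypothetical.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Stage:
    """One step of a visual analytics pipeline (hypothetical model).

    `uncertainty` is the chance this stage introduces error, e.g. noisy
    sources, lossy feature extraction, or an analyst-chosen
    visualization parameter that hides variance.
    """
    name: str
    uncertainty: float  # in [0, 1]


def pipeline_confidence(stages: List[Stage]) -> float:
    """Combine per-stage uncertainties into an end-to-end confidence.

    Assumes stages fail independently, so confidences multiply; a
    faithful treatment would also model correlations and the analyst's
    cognitive biases.
    """
    confidence = 1.0
    for stage in stages:
        confidence *= 1.0 - stage.uncertainty
    return confidence


if __name__ == "__main__":
    stages = [
        Stage("data source", 0.05),
        Stage("information extraction", 0.10),
        Stage("visualization parameters", 0.08),
        Stage("hypothesis testing", 0.12),
    ]
    # Surfacing this number to the analyst is one (very coarse) way a
    # system could help calibrate trust against actual uncertainty.
    print(f"end-to-end confidence: {pipeline_confidence(stages):.2f}")
```

The multiplicative, independence-based combination is chosen only for brevity; the point it illustrates is that some explicit accounting of propagated uncertainty is needed before trust calibration can be reasoned about.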
Related papers
- Unified Causality Analysis Based on the Degrees of Freedom [1.2289361708127877]
This paper presents a unified method capable of identifying fundamental causal relationships between pairs of systems.
By analyzing the degrees of freedom in the system, our approach provides a more comprehensive understanding of both causal influence and hidden confounders.
This unified framework is validated through theoretical models and simulations, demonstrating its robustness and potential for broader application.
arXiv Detail & Related papers (2024-10-25T10:57:35Z)
- A Factor Graph Model of Trust for a Collaborative Multi-Agent System [8.286807697708113]
Trust is the reliance and confidence an agent has in the information, behaviors, intentions, truthfulness, and capabilities of others within the system.
This paper introduces a new graphical approach that utilizes factor graphs to represent the interdependent behaviors and trustworthiness among agents.
Our method for evaluating trust is decentralized and considers key interdependent sub-factors such as proximity safety, consistency, and cooperation.
arXiv Detail & Related papers (2024-02-10T21:44:28Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Detection and Evaluation of bias-inducing Features in Machine learning [14.045499740240823]
In the context of machine learning (ML), one can use cause-to-effect analysis to understand the reason for the biased behavior of the system.
We propose an approach for systematically identifying all bias-inducing features of a model to help support the decision-making of domain experts.
arXiv Detail & Related papers (2023-10-19T15:01:16Z)
- Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities [2.9404725327650767]
Reviews progress in developing explainable models for clinical risk prediction.
Emphasizes the need for external validation and the combination of diverse interpretability methods.
An end-to-end approach to explainability in clinical risk prediction is essential for success.
arXiv Detail & Related papers (2023-08-16T14:51:51Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement (see the Monte-Carlo dropout sketch after this list).
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
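The industrial tool-wear entry above relies on Monte-Carlo dropout for its uncertainty measure. The sketch below shows the generic technique (keep dropout active at inference and aggregate several stochastic forward passes), not that paper's actual implementation; the model architecture and names are illustrative.

```python
import torch
import torch.nn as nn


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, passes: int = 30):
    """Monte-Carlo dropout: run several stochastic forward passes with
    dropout still active; the spread across passes approximates the
    model's predictive uncertainty."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()  # re-enable dropout only
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(passes)])
    return samples.mean(dim=0), samples.std(dim=0)


if __name__ == "__main__":
    # Hypothetical regressor standing in for a tool-wear model.
    model = nn.Sequential(
        nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 1),
    )
    x = torch.randn(16, 8)  # a batch of 16 feature vectors
    mean, std = mc_dropout_predict(model, x)
    print(mean.shape, std.shape)
```

Thresholding the per-sample `std` is one simple way an uncertainty-based human-in-the-loop policy could decide which predictions to route to a human expert.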
This list is automatically generated from the titles and abstracts of the papers on this site.