A Conceptual Framework for Establishing Trust in Real World Intelligent
Systems
- URL: http://arxiv.org/abs/2104.05432v1
- Date: Mon, 12 Apr 2021 12:58:47 GMT
- Title: A Conceptual Framework for Establishing Trust in Real World Intelligent
Systems
- Authors: Michael Guckert, Nils Gumpfer, Jennifer Hannig, Till Keller and Neil
Urquhart
- Abstract summary: Trust in algorithms can be established by letting users interact with the system.
Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns.
Close inspection can be used to decide whether a solution conforms to the expectations or whether it goes beyond the expected.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent information systems that contain emergent elements often
encounter trust problems because results do not get sufficiently explained and
the procedure itself cannot be fully retraced. This is caused by a control
flow depending either on stochastic elements or on the structure and relevance
of the input data. Trust in such algorithms can be established by letting users
interact with the system so that they can explore results and find patterns
that can be compared with their expected solution. Reflecting features and
patterns of human understanding of a domain against algorithmic results can
create awareness of such patterns and may increase the trust that a user has in
the solution. If expectations are not met, close inspection can be used to
decide whether a solution conforms to the expectations or whether it goes
beyond the expected. As the user accepts or rejects solutions, their set of
expectations evolves and a learning process is established. In this paper we
present a conceptual framework that reflects and
supports this process. The framework is the result of an analysis of two
exemplary case studies from two different disciplines with information systems
that assist experts in their complex tasks.
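The accept-or-reject cycle described above can be read as a simple feedback loop over the user's set of expectations. The following Python sketch is a minimal illustration of that loop under invented naming (the `User` class, its `review` and `inspect` methods, and the example patterns are all hypothetical, not code from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class User:
    """Holds a user's evolving set of expectations (hypothetical model)."""
    expectations: set = field(default_factory=set)

    def review(self, solution_patterns: set) -> bool:
        """Compare algorithmic results against expected patterns.

        If everything matches known expectations, the solution is accepted
        directly. Unexpected patterns trigger close inspection; the user
        then either rejects the solution or accepts it and extends their
        expectation set (the learning process the paper describes).
        """
        unexpected = solution_patterns - self.expectations
        if not unexpected:
            return True  # solution conforms to expectations
        if self.inspect(unexpected):
            self.expectations |= unexpected  # expectations evolve
            return True  # solution goes beyond the expected, but holds up
        return False  # solution rejected; expectations unchanged

    def inspect(self, patterns: set) -> bool:
        """Stand-in for interactive exploration of unexpected results."""
        return all(not p.startswith("implausible") for p in patterns)

user = User(expectations={"seasonal trend", "weekday peak"})
print(user.review({"seasonal trend", "holiday dip"}))  # True, learns "holiday dip"
print(user.review({"implausible spike"}))              # False, rejected
print(sorted(user.expectations))
```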
Related papers
- Discovering Decision Manifolds to Assure Trusted Autonomous Systems [0.0]
We propose an optimization-based search technique for capturing the range of correct and incorrect responses a system could exhibit.
This manifold provides a more detailed understanding of system reliability than traditional testing or Monte Carlo simulations.
In this proof-of-concept, we apply our method to a software-in-the-loop evaluation of an autonomous vehicle.
arXiv Detail & Related papers (2024-02-12T16:55:58Z)
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection [37.394874500480206]
We propose a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models.
This is achieved via a dual-system framework that integrates cognition and decision systems.
We present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework.
arXiv Detail & Related papers (2024-02-12T16:41:54Z)
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences (see the sketch after this entry).
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
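As a rough illustration of the entry above, and not the paper's actual model (which couples Gaussian mixtures with Bayesian networks), the sketch below fits a two-component mixture to invented sensory records so that each component plays the role of a learned affordance:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy sensory records: [object size, action force, observed displacement].
# Invented data standing in for an embodied agent's experiences: pushing
# small objects moves them far, pushing large objects barely moves them.
rng = np.random.default_rng(0)
small = np.column_stack([rng.normal(1.0, 0.1, 50),
                         rng.normal(5.0, 0.5, 50),
                         rng.normal(4.0, 0.4, 50)])
large = np.column_stack([rng.normal(4.0, 0.3, 50),
                         rng.normal(5.0, 0.5, 50),
                         rng.normal(0.5, 0.2, 50)])
X = np.vstack([small, large])

# Fit a two-component GMM; each component is a candidate affordance
# ("pushable far" vs "barely movable").
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# For a new object/action pair, the component responsibilities tell the
# agent which effect cluster (affordance) to expect.
query = np.array([[1.1, 5.2, 3.8]])
print(gmm.predict_proba(query).round(3))
```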
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- VISPUR: Visual Aids for Identifying and Interpreting Spurious Associations in Data-Driven Decisions [8.594140167290098]
Simpson's paradox is a phenomenon where aggregated and subgroup-level associations contradict each other (see the sketch after this entry).
Existing tools provide little insight to help humans locate, reason about, and prevent pitfalls of spurious associations in practice.
We propose VISPUR, a visual analytic system that provides a causal analysis framework and a human-centric workflow for tackling spurious associations.
arXiv Detail & Related papers (2023-07-26T18:40:07Z)
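To make the paradox concrete, the minimal Python sketch below reproduces it with the well-known kidney-stone treatment figures. It is an independent illustration, not part of VISPUR:

```python
# Simpson's paradox with the classic kidney-stone treatment figures:
# treatment A beats B inside every subgroup, yet B looks better overall.
data = {
    # subgroup: (successes_A, trials_A, successes_B, trials_B)
    "small stones": (81, 87, 234, 270),
    "large stones": (192, 263, 55, 80),
}

totals = [0, 0, 0, 0]
for group, counts in data.items():
    sa, na, sb, nb = counts
    print(f"{group}: A={sa / na:.2f}  B={sb / nb:.2f}")
    totals = [t + c for t, c in zip(totals, counts)]

sa, na, sb, nb = totals
# Aggregation reverses the direction of the association.
print(f"overall: A={sa / na:.2f}  B={sb / nb:.2f}")
```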
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Examining the Impact of Algorithm Awareness on Wikidata's Recommender System Recoin [12.167153941840958]
We conduct online experiments with 105 MTurk participants on the recommender system Recoin, a gadget for Wikidata.
Our findings include a positive correlation between comprehension of and trust in an algorithmic system in our interactive redesign.
Our results are not conclusive yet, and suggest that the measures of comprehension, fairness, accuracy and trust are not yet exhaustive for the empirical study of algorithm awareness.
arXiv Detail & Related papers (2020-09-18T20:06:53Z)
- Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability [0.5156484100374058]
We propose a wrapper that enriches the model's output prediction with a measure of uncertainty.
Based on the resulting uncertainty measure, we advocate for a rejection system that selects only the more confident predictions (a generic sketch follows this entry).
Results demonstrate the effectiveness of the uncertainty computed by the wrapper.
arXiv Detail & Related papers (2019-12-29T11:05:47Z)
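The rejection idea from this last entry follows a common pattern: score each prediction's uncertainty and defer the least confident ones to a human. The sketch below is a generic stand-in that uses predictive entropy rather than the paper's Dirichlet-based measure; all names, data, and the threshold are invented for illustration:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy of each row of class probabilities (higher = less certain)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def reject_uncertain(probs: np.ndarray, threshold: float):
    """Split predictions into confident (kept) and uncertain (rejected).

    Stands in for the wrapper's rejection system: only predictions whose
    uncertainty falls below the threshold are passed on; the rest are
    deferred, e.g. to a human expert.
    """
    uncertainty = predictive_entropy(probs)
    keep = uncertainty < threshold
    return probs.argmax(axis=1), keep

# Toy class-probability outputs from some black-box classifier.
probs = np.array([[0.97, 0.02, 0.01],   # confident
                  [0.40, 0.35, 0.25],   # uncertain -> deferred
                  [0.05, 0.90, 0.05]])  # confident
labels, keep = reject_uncertain(probs, threshold=0.5)
for lbl, ok in zip(labels, keep):
    print(f"prediction {lbl}: {'accept' if ok else 'defer to expert'}")
```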
This list is automatically generated from the titles and abstracts of the papers on this site.