Trustworthy AI: Deciding What to Decide
- URL: http://arxiv.org/abs/2311.12604v1
- Date: Tue, 21 Nov 2023 13:43:58 GMT
- Title: Trustworthy AI: Deciding What to Decide
- Authors: Caesar Wu, Yuan-Fang Li, Jian Li, Jingjing Xu, Pascal Bouvry
- Abstract summary: We propose a novel framework of Trustworthy AI (TAI) encompassing three crucial components of AI.
We aim to use this framework to conduct TAI experiments with quantitative and qualitative research methods.
We formulate an optimal prediction model to support strategic investment decisions on credit default swaps (CDS) in the technology sector.
- Score: 41.10597843436572
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: When engaging in strategic decision-making, we are frequently confronted with
overwhelming information and data. The situation can be further complicated
when certain pieces of evidence contradict each other or become paradoxical.
The primary challenge is how to determine which information can be trusted when
we adopt Artificial Intelligence (AI) systems for decision-making. This issue
is known as "deciding what to decide," or Trustworthy AI. However, the AI system
itself is often considered an opaque black box. We propose a new approach to
address this issue by introducing a novel framework of Trustworthy AI (TAI)
encompassing three crucial components of AI: representation space, loss
function, and optimizer. Each component is loosely coupled with four TAI
properties. Altogether, the framework consists of twelve TAI properties. We aim
to use this framework to conduct TAI experiments, combining quantitative and
qualitative research methods, to satisfy the TAI properties relevant to the
decision-making context. The framework allows us to formulate an optimal
prediction model, trained on a given dataset, to support strategic investment
decisions on credit default swaps (CDS) in the technology sector. Finally, we
offer our view of future directions for TAI research.
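As a concrete illustration of the framework's shape (not code from the paper), the sketch below encodes the three components, each loosely coupled with four TAI properties. The property names are hypothetical placeholders, since the abstract does not enumerate the twelve.

```python
# A minimal sketch of the TAI framework's structure: three AI components,
# each loosely coupled with four TAI properties (twelve in total). The
# property names are hypothetical placeholders; the abstract does not
# enumerate them.
TAI_FRAMEWORK = {
    "representation_space": ["explainability", "fairness", "privacy", "robustness"],
    "loss_function": ["accuracy", "calibration", "stability", "generalizability"],
    "optimizer": ["efficiency", "reproducibility", "reliability", "accountability"],
}

def check_structure(framework: dict[str, list[str]]) -> None:
    """Verify the 3 x 4 structure the abstract describes."""
    assert len(framework) == 3, "expected three AI components"
    assert all(len(p) == 4 for p in framework.values()), "expected four properties each"
    print(sum(len(p) for p in framework.values()), "TAI properties in total")

check_structure(TAI_FRAMEWORK)  # prints: 12 TAI properties in total
```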
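The abstract leaves the model family unspecified, so the following sketch shows only one plausible reading of "an optimal prediction model trained on a given dataset" for CDS investment decisions: model selection by cross-validated grid search over a simple classifier. The synthetic features, labels, and the logistic-regression choice are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dataset: rows are tech-sector CDS observations, columns are
# illustrative features (e.g., spread level, spread change, equity volatility);
# y = 1 means "enter the position". All of this is synthetic.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# "Optimal" is read here as selecting hyperparameters by cross-validated
# accuracy over a small grid -- one plausible interpretation, not the paper's.
model = GridSearchCV(
    make_pipeline(StandardScaler(), LogisticRegression()),
    param_grid={"logisticregression__C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
model.fit(X, y)
print("best params:", model.best_params_, "CV accuracy:", round(model.best_score_, 3))
```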
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
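To make the ERM framing in the entry above concrete: a minimal sketch in which each trustworthiness requirement is realized as a design choice for one ERM component (hypothesis class, loss, regularizer, optimizer). The pairings are illustrative assumptions, not the guide's actual recommendations.

```python
import torch
import torch.nn as nn

# Empirical risk minimization: min_f (1/n) * sum_i loss(f(x_i), y_i) + reg(f).
# Each trustworthiness requirement constrains one ERM component; the specific
# pairings below are illustrative assumptions only.
model = nn.Linear(10, 1)           # hypothesis class: simple, hence interpretable
loss_fn = nn.HuberLoss(delta=1.0)  # loss choice: robustness to outliers
optimizer = torch.optim.SGD(       # L2 regularization: stability/generalization
    model.parameters(), lr=0.1, weight_decay=1e-2
)

x, y = torch.randn(64, 10), torch.randn(64, 1)  # synthetic batch
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)    # empirical risk on the batch
    loss.backward()
    optimizer.step()
print(f"final empirical risk: {loss.item():.4f}")
```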
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Survey of Trustworthy AI: A Meta Decision of AI [0.41292255339309647]
Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI).
To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reliability, and sustainability.
arXiv Detail & Related papers (2023-06-01T06:25:01Z)
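As a hedged sketch of how the trust dimensions listed in the survey entry above might be turned into a measurable trust level: the per-dimension scores and the weighted-average aggregation below are hypothetical, not the survey's method.

```python
# Hypothetical per-dimension trust scores in [0, 1] for some AI system,
# using the dimensions named in the survey entry above. The scores and the
# weighted-average aggregation are illustrative assumptions only.
scores = {
    "explainability/transparency": 0.7,
    "fairness/diversity": 0.6,
    "generalizability": 0.8,
    "privacy": 0.9,
    "data governance": 0.5,
    "safety/robustness": 0.7,
    "accountability": 0.6,
    "reliability": 0.8,
    "sustainability": 0.4,
}

def trust_level(scores: dict[str, float], weights: dict[str, float] | None = None) -> float:
    """Aggregate per-dimension scores into one trust level (equal weights by default)."""
    weights = weights or {dim: 1.0 for dim in scores}
    return sum(weights[d] * s for d, s in scores.items()) / sum(weights.values())

print(f"overall trust level: {trust_level(scores):.2f}")
```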
- On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and provides a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision-making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z)
- Explainable AI and Adoption of Algorithmic Advisors: an Experimental Study [0.6875312133832077]
We develop an experimental methodology where participants play a web-based game, during which they receive advice from either a human or an algorithmic advisor.
We evaluate whether the different types of explanations affect the readiness to adopt, willingness to pay and trust a financial AI consultant.
We find that the types of explanations that promote adoption during first encounter differ from those that are most successful following failure or when cost is involved.
arXiv Detail & Related papers (2021-01-05T09:34:38Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)