Never trust, always verify: a roadmap for Trustworthy AI?
- URL: http://arxiv.org/abs/2206.11981v1
- Date: Thu, 23 Jun 2022 21:13:10 GMT
- Title: Never trust, always verify: a roadmap for Trustworthy AI?
- Authors: Lionel Nganyewou Tidjon and Foutse Khomh
- Abstract summary: We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI, together with a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
- Score: 12.031113181911627
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is becoming the cornerstone of many systems
used in our daily lives, such as autonomous vehicles, healthcare systems, and
unmanned aircraft systems. Machine Learning is a field of AI that enables
systems to learn from data and make decisions on new data, based on models, to
achieve a given goal. The stochastic nature of AI models makes verification and
validation tasks challenging. Moreover, there are intrinsic biases in AI
models, such as reproducibility bias, selection bias (e.g., race, gender,
color), and reporting bias (i.e., results that do not reflect reality).
Increasingly, particular attention is also being paid to the ethical, legal,
and societal impacts of AI. AI systems are difficult to audit and certify
because of their black-box nature. They are also vulnerable to threats: AI
systems can misbehave when fed untrusted data, making them insecure and
unsafe. Governments and national and international organizations have proposed
several principles to overcome these challenges, but their application in
practice remains limited, and diverging interpretations of the principles can
bias implementations. In this paper, we examine trust in the context
of AI-based systems to understand what it means for an AI system to be
trustworthy and identify actions that need to be undertaken to ensure that AI
systems are trustworthy. To achieve this goal, we first review existing
approaches proposed for ensuring the trustworthiness of AI systems, in order to
identify potential conceptual gaps in understanding what trustworthy AI is.
Then, we propose a trust (resp. zero-trust) model for AI and a set of
properties that should be satisfied to ensure the trustworthiness of AI
systems.
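As a rough illustration of the zero-trust stance advocated above ("never trust, always verify"), the sketch below shows a gate that re-verifies the model artifact and each input before serving a prediction. All names (TrustPolicy, guarded_predict) and checks (pinned hash, input envelope) are hypothetical assumptions for illustration; the paper proposes the model conceptually and does not prescribe this implementation.

```python
# Minimal zero-trust inference gate (illustrative sketch only; all names
# and checks are hypothetical, not the paper's API).
import hashlib
from dataclasses import dataclass

@dataclass
class TrustPolicy:
    expected_model_sha256: str  # pinned hash of the approved model artifact
    max_input_magnitude: float  # crude envelope for acceptable feature values

def verify_model(model_bytes: bytes, policy: TrustPolicy) -> bool:
    """Never trust the artifact: check its hash against the pinned value."""
    return hashlib.sha256(model_bytes).hexdigest() == policy.expected_model_sha256

def verify_input(features: list, policy: TrustPolicy) -> bool:
    """Always verify the data: reject inputs outside the expected envelope."""
    return all(abs(x) <= policy.max_input_magnitude for x in features)

def guarded_predict(model_bytes: bytes, predict, features: list, policy: TrustPolicy):
    """Run inference only after every trust check passes; fail closed otherwise."""
    if not verify_model(model_bytes, policy):
        raise PermissionError("model artifact failed integrity verification")
    if not verify_input(features, policy):
        raise ValueError("input rejected by zero-trust input policy")
    return predict(features)
```

The design choice worth noting is that the gate fails closed: a prediction is served only when every verification succeeds, mirroring the "never trust, always verify" principle.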
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for the trustworthiness of AI (a minimal illustrative sketch of this ERM framing appears after this list).
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss how trust in AI involves not only reliance on the system itself but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance through analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle (an illustrative risk-scanning sketch appears at the end of this page).
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- It is not "accuracy vs. explainability" -- we need both for trustworthy AI systems [0.0]
We are witnessing the emergence of an AI economy and society where AI technologies are increasingly impacting health care, business, transportation and many aspects of everyday life.
However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption.
These shortcomings and concerns have been documented in the scientific literature as well as in the general press: accidents with self-driving cars, biases against people of color in healthcare, hiring, and face recognition systems, and seemingly correct medical decisions later found to have been made for the wrong reasons.
arXiv Detail & Related papers (2022-12-16T23:33:10Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Knowledge-intensive Language Understanding for Explainable AI [9.541228711585886]
It is crucial to understand how AI-led decisions are made and which determining factors were included.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
arXiv Detail & Related papers (2021-08-02T21:12:30Z)
- Know Your Model (KYM): Increasing Trust in AI and Machine Learning [4.93786553432578]
We analyze each element of trustworthiness and provide a set of 20 guidelines that can be leveraged to ensure optimal AI functionality.
The guidelines help ensure that trustworthiness is provable and demonstrable; they are implementation-agnostic and can be applied to any AI system in any sector.
arXiv Detail & Related papers (2021-05-31T14:08:22Z)
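As referenced in the "Engineering Trustworthy AI" entry above, trustworthiness requirements can enter empirical risk minimization (ERM) through the choice of loss, regularizer, and perturbation model. The following is a minimal sketch of that framing under our own assumptions (a linear model and an FGSM-style perturbation for squared loss); none of the component names come from the paper.

```python
# Illustrative ERM objective where trustworthiness requirements appear as
# design choices (assumed example, not the paper's code).
import numpy as np

def squared_loss(w, X, y):
    """Base ERM term: mean squared error of a linear model."""
    return np.mean((X @ w - y) ** 2)

def l2_penalty(w):
    """Design choice for stability/generalization: ridge regularizer."""
    return np.sum(w ** 2)

def adversarial_term(w, X, y, eps=0.1):
    """Design choice for robustness: squared loss under FGSM-style
    sign-gradient input perturbations (for squared loss, the gradient
    w.r.t. an input x is proportional to (w @ x - y) * w)."""
    X_pert = X + eps * np.sign(X @ w - y)[:, None] * np.sign(w)[None, :]
    return np.mean((X_pert @ w - y) ** 2)

def trustworthy_erm_objective(w, X, y, lam=1e-2, rho=0.5):
    """Full objective: base risk plus stability and robustness terms."""
    return squared_loss(w, X, y) + lam * l2_penalty(w) + rho * adversarial_term(w, X, y)
```

Tuning lam and rho trades average accuracy against stability and robustness, which is the kind of requirement-to-component translation the entry describes.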
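As referenced in the "AI Maintenance" entry above, the risk-scanning step of a maintenance loop can be approximated by probing a model with small random perturbations and flagging inputs whose prediction flips. This is a rough sketch under our own assumptions (the robustness_scan helper, noise model, and trial count are hypothetical, not the paper's framework).

```python
# Illustrative robustness risk scan for a deployed classifier
# (assumed helper, not the paper's inspection framework).
import numpy as np

def robustness_scan(predict, X, eps=0.05, trials=20, seed=0):
    """Return the fraction of inputs whose predicted label changes
    under uniform noise of magnitude eps (a smoke test, not a proof)."""
    rng = np.random.default_rng(seed)
    base = predict(X)                       # reference predictions
    fragile = np.zeros(len(X), dtype=bool)  # flips observed so far
    for _ in range(trials):
        noisy = X + rng.uniform(-eps, eps, size=X.shape)
        fragile |= predict(noisy) != base   # mark any label flip
    return float(fragile.mean())
```

A fraction near zero suggests local stability around the probed inputs; a rising value between maintenance cycles would be a signal to investigate or harden the model.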