Trustworthy AI in the Age of Pervasive Computing and Big Data
- URL: http://arxiv.org/abs/2002.05657v1
- Date: Thu, 30 Jan 2020 08:09:31 GMT
- Title: Trustworthy AI in the Age of Pervasive Computing and Big Data
- Authors: Abhishek Kumar, Tristan Braud, Sasu Tarkoma, Pan Hui
- Abstract summary: We formalise the requirements of trustworthy AI systems through an ethics perspective.
After discussing the state of research and the remaining challenges, we show how a concrete use-case in smart cities can benefit from these methods.
- Score: 22.92621391190282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The era of pervasive computing has resulted in countless devices that
continuously monitor users and their environment, generating an abundance of
user behavioural data. Such data may support improving the quality of service,
but may also lead to adverse uses such as surveillance and advertising. In
parallel, Artificial Intelligence (AI) systems are being applied to sensitive
fields such as healthcare, justice, or human resources, raising multiple
concerns on the trustworthiness of such systems. Trust in AI systems is thus
intrinsically linked to ethics, including the ethics of algorithms, the ethics
of data, or the ethics of practice. In this paper, we formalise the
requirements of trustworthy AI systems through an ethics perspective. We
specifically focus on the aspects that can be integrated into the design and
development of AI systems. After discussing the state of research and the
remaining challenges, we show how a concrete use-case in smart cities can
benefit from these methods.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Towards Measuring Ethicality of an Intelligent Assistive System [1.2961180148172198]
The presence of autonomous entities poses ethical challenges concerning the stakeholders involved in using these systems.
There is a lack of research on how such intelligent assistive technologies (IAT) adhere to the ethical regulations provided.
We present a method to measure the ethicality of an assistive system.
arXiv Detail & Related papers (2023-02-28T14:59:17Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by drawing an analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Never trust, always verify : a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and suggest a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection, among other issues.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Know Your Model (KYM): Increasing Trust in AI and Machine Learning [4.93786553432578]
We analyze each element of trustworthiness and provide a set of 20 guidelines that can be leveraged to ensure optimal AI functionality.
The guidelines help ensure that trustworthiness is provable and demonstrable; they are implementation-agnostic and can be applied to any AI system in any sector.
arXiv Detail & Related papers (2021-05-31T14:08:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.