Towards Trustworthy Edge Intelligence: Insights from Voice-Activated
Services
- URL: http://arxiv.org/abs/2206.09523v1
- Date: Mon, 20 Jun 2022 00:56:21 GMT
- Title: Towards Trustworthy Edge Intelligence: Insights from Voice-Activated
Services
- Authors: W.T. Hutiri, A.Y. Ding
- Abstract summary: Edge Intelligence is a key enabling technology for smart services.
This paper examines requirements for trustworthy Edge Intelligence in a concrete application scenario of voice-activated services.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In an age of surveillance capitalism, anchoring the design of emerging smart
services in trustworthiness is urgent and important. Edge Intelligence, which
brings together the fields of AI and Edge computing, is a key enabling
technology for smart services. Trustworthy Edge Intelligence should thus be a
priority research concern. However, determining what makes Edge Intelligence
trustworthy is not straightforward. This paper examines requirements for
trustworthy Edge Intelligence in a concrete application scenario of
voice-activated services. We contribute to deepening the understanding of
trustworthiness in the emerging Edge Intelligence domain in three ways:
firstly, we propose a unified framing for trustworthy Edge Intelligence that
jointly considers trustworthiness attributes of AI and the IoT. Secondly, we
present research outputs of a tangible case study in voice-activated services
that demonstrates interdependencies between three important trustworthiness
attributes: privacy, security and fairness. Thirdly, based on the empirical and
analytical findings, we highlight challenges and open questions that present
important future research areas for trustworthy Edge Intelligence.
Related papers
- A Survey on Automatic Credibility Assessment of Textual Credibility Signals in the Era of Large Language Models [6.538395325419292]
Credibility assessment is fundamentally based on aggregating credibility signals.
Credibility signals provide more granular, more easily explainable, and more widely usable information.
The growing body of research on automatic credibility assessment and detection of credibility signals is highly fragmented and lacks mutual interconnections.
arXiv Detail & Related papers (2024-10-28T17:51:08Z)
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- A Survey on Trustworthy Edge Intelligence: From Security and Reliability to Transparency and Sustainability [32.959723590246384]
Edge Intelligence (EI) integrates Edge Computing (EC) and Artificial Intelligence (AI) to push the capabilities of AI to the network edge.
This survey comprehensively summarizes the characteristics, architecture, technologies, and solutions of trustworthy EI.
arXiv Detail & Related papers (2023-10-27T07:39:54Z)
- Bridging Trustworthiness and Open-World Learning: An Exploratory Neural Approach for Enhancing Interpretability, Generalization, and Robustness [20.250799593459053]
We explore a neural program to bridge trustworthiness and open-world learning, extending from single-modal to multi-modal scenarios.
We enhance various trustworthy properties by establishing design-level explainability, environmental well-being task-interfaces, and open-world recognition programs.
arXiv Detail & Related papers (2023-08-07T15:35:32Z)
- KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Given the associated risks and uncertainty, a crucial and urgent problem to be settled is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z)
- On Achieving Privacy-Preserving State-of-the-Art Edge Intelligence [0.0]
Inference in Edge Computing, often called Edge Intelligence, requires solutions that ensure sensitive data and intellectual property are not revealed in the process.
This paper provides an original assessment of the compatibility of existing techniques for privacy-preserving DNN Inference with the characteristics of an Edge Computing setup.
We then address the future role of model compression methods in the research towards secret sharing on DNNs with state-of-the-art performance.
arXiv Detail & Related papers (2023-02-10T15:34:42Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Uncertainty Quantification 360: A Holistic Toolkit for Quantifying and Communicating the Uncertainty of AI [49.64037266892634]
We describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
The goal of this toolkit is twofold: first, to provide a broad range of capabilities to streamline as well as foster the common practices of quantifying, evaluating, improving, and communicating uncertainty in the AI application development lifecycle; second, to encourage further exploration of UQ's connections to other pillars of trustworthy AI.
arXiv Detail & Related papers (2021-06-02T18:29:04Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Towards Self-learning Edge Intelligence in 6G [143.1821636135413]
Edge intelligence, also called edge-native artificial intelligence (AI), is an emerging technological framework focusing on seamless integration of AI, communication networks, and mobile edge computing.
In this article, we identify the key requirements and challenges of edge-native AI in 6G.
arXiv Detail & Related papers (2020-10-01T02:16:40Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.