A Survey on Trustworthy Edge Intelligence: From Security and Reliability to Transparency and Sustainability
- URL: http://arxiv.org/abs/2310.17944v2
- Date: Thu, 25 Jan 2024 15:52:51 GMT
- Title: A Survey on Trustworthy Edge Intelligence: From Security and Reliability to Transparency and Sustainability
- Authors: Xiaojie Wang, Beibei Wang, Yu Wu, Zhaolong Ning, Song Guo, and Fei Richard Yu
- Abstract summary: Edge Intelligence (EI) integrates Edge Computing (EC) and Artificial Intelligence (AI) to push the capabilities of AI to the network edge.
This survey comprehensively summarizes the characteristics, architecture, technologies, and solutions of trustworthy EI.
- Score: 32.959723590246384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edge Intelligence (EI) integrates Edge Computing (EC) and Artificial
Intelligence (AI) to push the capabilities of AI to the network edge for
real-time, efficient and secure intelligent decision-making and computation.
However, EI faces various challenges due to resource constraints, heterogeneous
network environments, and diverse service requirements of different
applications, which together affect the trustworthiness of EI in the eyes of
stakeholders. This survey comprehensively summarizes the characteristics,
architecture, technologies, and solutions of trustworthy EI. Specifically, we
first emphasize the need for trustworthy EI in the context of the trend toward
large models. We then provide an initial definition of trustworthy EI, explore
its key characteristics, and give a multi-layered architecture for it. Next, we
summarize several important issues that hinder the achievement of trustworthy
EI. Subsequently, we present enabling technologies for trustworthy
EI systems and provide an in-depth literature review of the state-of-the-art
solutions for realizing the trustworthiness of EI. Finally, we discuss the
corresponding research challenges and open issues.
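To make the resource-constraint trade-off described in the abstract concrete, the following minimal sketch (illustrative only and not taken from the survey; all class names, fields, and numbers are hypothetical) shows a toy edge-versus-cloud placement decision that weighs latency against a simple privacy constraint:

# Toy edge-vs-cloud placement policy for a single inference task.
# Everything here is a hypothetical illustration of the latency/privacy
# trade-offs at the network edge, not an algorithm from the surveyed paper.
from dataclasses import dataclass


@dataclass
class InferenceTask:
    model_flops: float       # compute the model needs (floating-point ops)
    deadline_ms: float       # application latency requirement
    privacy_sensitive: bool  # raw data must not leave the edge device


@dataclass
class EdgeNode:
    edge_flops_per_ms: float   # compute available on the edge node
    cloud_flops_per_ms: float  # compute available in the cloud
    cloud_rtt_ms: float        # network round trip to the cloud


def place_task(task: InferenceTask, node: EdgeNode) -> str:
    """Return 'edge' or 'cloud' for one task under a naive placement policy."""
    edge_latency = task.model_flops / node.edge_flops_per_ms
    cloud_latency = node.cloud_rtt_ms + task.model_flops / node.cloud_flops_per_ms

    # Security/privacy first: sensitive data stays on the edge device.
    if task.privacy_sensitive:
        return "edge"
    # Otherwise prefer the edge when it both meets the deadline and is faster.
    if edge_latency <= task.deadline_ms and edge_latency <= cloud_latency:
        return "edge"
    return "cloud"


if __name__ == "__main__":
    node = EdgeNode(edge_flops_per_ms=5e6, cloud_flops_per_ms=5e8, cloud_rtt_ms=80.0)
    task = InferenceTask(model_flops=2e9, deadline_ms=100.0, privacy_sensitive=False)
    print(place_task(task, node))  # -> 'cloud' (edge ~400 ms vs cloud ~84 ms)

Under these made-up numbers the task would take roughly 400 ms locally, so it is offloaded to the cloud (about 84 ms including the round trip); a privacy-sensitive task would stay on the edge regardless of latency.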
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (see the sketch at the end of this list).
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components of GS AI (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Socialized Learning: A Survey of the Paradigm Shift for Edge Intelligence in Networked Systems [62.252355444948904]
This paper presents the findings of a literature review on the integration of edge intelligence (EI) and socialized learning (SL).
SL is a learning paradigm predicated on social principles and behaviors, aimed at amplifying the collaborative capacity and collective intelligence of agents.
We elaborate on three integrated components: socialized architecture, socialized training, and socialized inference, analyzing their strengths and weaknesses.
arXiv Detail & Related papers (2024-04-20T11:07:29Z)
- The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions, from the viewpoint of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
- A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction [19.137907393497848]
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners.
Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication.
This paper presents a systematic review to identify current practices in building appropriate trust, different ways to measure it, types of tasks used, and potential challenges associated with it.
arXiv Detail & Related papers (2023-11-08T12:19:58Z)
- Trustworthy Federated Learning: A Survey [0.5089078998562185]
Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI).
We provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL.
We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy.
arXiv Detail & Related papers (2023-05-19T09:11:26Z)
- Towards Trustworthy Edge Intelligence: Insights from Voice-Activated Services [0.0]
Edge Intelligence is a key enabling technology for smart services.
This paper examines requirements for trustworthy Edge Intelligence in a concrete application scenario of voice-activated services.
arXiv Detail & Related papers (2022-06-20T00:56:21Z)
- Trust in AI and Implications for the AEC Research: A Literature Analysis [0.0]
The architecture, engineering, and construction (AEC) research community has been harnessing advanced solutions offered by artificial intelligence (AI) to improve project performance.
Despite the unique characteristics of work, workers, and workplaces in the AEC industry, the concept of trust in AI has received very little attention in the literature.
This paper presents a comprehensive analysis of the academic literature in two main areas, trust in AI and AI in the AEC, to explore the interplay between the unique aspects of AEC projects and the sociotechnical concepts that lead to trust in AI.
arXiv Detail & Related papers (2022-03-08T04:38:34Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
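As a companion to the Empirical Risk Minimization entry above, the following minimal sketch (hypothetical; the objective, penalties, and data are illustrative and not taken from that paper) shows how a trustworthiness requirement can surface as a design choice inside an ERM objective, with the regularizer as the pluggable component:

# Toy regularized empirical risk for logistic regression with labels in {-1, +1}.
# The penalty passed in stands for a "design choice" mapped from a requirement.
import numpy as np


def empirical_risk(w, X, y, regularizer, lam=0.1):
    """Average logistic loss on (X, y) plus a pluggable penalty on the weights w."""
    logits = X @ w
    # log(1 + exp(-y * logits)), computed stably with logaddexp.
    data_loss = np.mean(np.logaddexp(0.0, -y * logits))
    return data_loss + lam * regularizer(w)


# Two example design choices; each nudges the learned model toward a
# different (illustrative) trustworthiness-related property.
def l2_penalty(w):
    return np.sum(w ** 2)      # smoother weights, better generalization


def l1_penalty(w):
    return np.sum(np.abs(w))   # sparser, easier-to-inspect models


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=100))
    w = rng.normal(size=5)
    print(empirical_risk(w, X, y, l2_penalty))
    print(empirical_risk(w, X, y, l1_penalty))

Swapping l2_penalty for l1_penalty (or for a robustness- or fairness-oriented penalty) changes which property the minimized risk emphasizes, which is the kind of requirement-to-design-choice mapping that entry describes.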