A proof of contribution in blockchain using game theoretical deep learning model
- URL: http://arxiv.org/abs/2409.07460v1
- Date: Sun, 25 Aug 2024 12:40:19 GMT
- Title: A proof of contribution in blockchain using game theoretical deep learning model
- Authors: Jin Wang
- Abstract summary: We introduce a game-theoretic deep learning model to reach a consensus among service providers on task scheduling and resource provisioning.
Our model reduces latency by 584% compared to the state-of-the-art.
- Score: 4.53216122219986
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Building elastic and scalable edge resources is an inevitable prerequisite for providing platform-based smart city services. Smart city services are delivered through edge computing to provide low-latency applications. However, edge computing has always faced the challenge of limited resources. A single edge device cannot undertake the various intelligent computations in a smart city, and the large-scale deployment of edge devices from different service providers to build an edge resource platform has become a necessity. Selecting computing power from different service providers is a game-theoretic problem. To incentivize service providers to actively contribute their valuable resources and provide low-latency collaborative computing power, we introduce a game-theoretic deep learning model to reach a consensus among service providers on task scheduling and resource provisioning. Traditional centralized resource management approaches are inefficient and lack credibility, while the introduction of blockchain technology can enable decentralized resource trading and scheduling. We propose a contribution-based proof mechanism to provide the low-latency service of edge computing. The deep learning model consists of dual encoders and a single decoder, where the GNN (Graph Neural Network) encoder processes structured decision action data, and the RNN (Recurrent Neural Network) encoder handles time-series task scheduling data. Extensive experiments have demonstrated that our model reduces latency by 584% compared to the state-of-the-art.
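A minimal PyTorch sketch of the dual-encoder architecture the abstract describes: a GNN encoder over the structured decision-action data, an RNN encoder over the time-series task-scheduling data, and a single decoder that fuses both embeddings. The layer types, sizes, fusion by concatenation, and the action-scoring head are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class GNNEncoder(nn.Module):
    """Encodes the structured decision-action graph with one mean-aggregation
    message-passing layer; adj is a dense (num_nodes x num_nodes) adjacency
    matrix and x holds per-node features."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)   # node degrees
        h = torch.relu(self.lin(adj @ x / deg))              # aggregate, then transform
        return h.mean(dim=0)                                 # graph-level embedding

class RNNEncoder(nn.Module):
    """Encodes the time-series task-scheduling data with a GRU."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True)

    def forward(self, seq):
        _, h_n = self.gru(seq)        # seq: (batch, time, in_dim)
        return h_n[-1]                # last hidden state: (batch, hidden_dim)

class SchedulingModel(nn.Module):
    """Dual encoders feeding a single decoder that scores scheduling actions."""
    def __init__(self, node_dim, seq_dim, hidden_dim, num_actions):
        super().__init__()
        self.gnn = GNNEncoder(node_dim, hidden_dim)
        self.rnn = RNNEncoder(seq_dim, hidden_dim)
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions))

    def forward(self, node_feats, adj, task_seq):
        g = self.gnn(node_feats, adj).expand(task_seq.size(0), -1)
        s = self.rnn(task_seq)
        return self.decoder(torch.cat([g, s], dim=-1))

# Example usage with toy shapes: 10 graph nodes, sequences of length 6, 5 actions.
model = SchedulingModel(node_dim=8, seq_dim=4, hidden_dim=32, num_actions=5)
scores = model(torch.randn(10, 8), torch.ones(10, 10), torch.randn(2, 6, 4))
print(scores.shape)  # torch.Size([2, 5])
```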
Related papers
- Leveraging Federated Learning and Edge Computing for Recommendation Systems within Cloud Computing Networks [3.36271475827981]
A key technology for edge intelligence is the privacy-protecting machine learning paradigm known as Federated Learning (FL), which enables data owners to train models without transferring raw data to third-party servers.
To reduce node failures and device exits, a Hierarchical Federated Learning (HFL) framework is proposed, where a designated cluster leader supports the data owner through intermediate model aggregation.
In order to mitigate the impact of soft clicks on the quality of user experience (QoE), the authors model the user QoE as a comprehensive system cost.
arXiv Detail & Related papers (2024-03-05T17:58:26Z)
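A minimal sketch of the hierarchical aggregation idea summarized above, where each cluster leader averages its members' updates before the server averages the cluster models; the sample-count weighting and data layout are illustrative assumptions rather than the paper's exact HFL procedure.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of model parameter vectors (FedAvg-style)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

def hierarchical_fedavg(clusters):
    """Two-level aggregation: each cluster leader averages its members'
    updates, then the server averages the cluster models.
    `clusters` maps a cluster id to a list of (param_vector, num_samples)."""
    cluster_models, cluster_sizes = [], []
    for members in clusters.values():
        params = [p for p, _ in members]
        sizes = [n for _, n in members]
        cluster_models.append(fedavg(params, sizes))   # leader-side aggregation
        cluster_sizes.append(sum(sizes))
    return fedavg(cluster_models, cluster_sizes)       # server-side aggregation

# Toy example: two clusters of clients holding 3-parameter models.
clusters = {
    "cluster_a": [(np.array([1.0, 0.0, 2.0]), 100), (np.array([3.0, 2.0, 0.0]), 300)],
    "cluster_b": [(np.array([0.0, 1.0, 1.0]), 200)],
}
print(hierarchical_fedavg(clusters))
```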
- Generative AI-enabled Quantum Computing Networks and Intelligent Resource Allocation [80.78352800340032]
Quantum computing networks execute large-scale generative AI computation tasks and advanced quantum algorithms.
Efficient resource allocation in quantum computing networks is a critical challenge due to qubit variability and network complexity.
We introduce state-of-the-art reinforcement learning (RL) algorithms, from generative learning to quantum machine learning for optimal quantum resource allocation.
arXiv Detail & Related papers (2024-01-13T17:16:38Z)
- Double Deep Q-Learning-based Path Selection and Service Placement for Latency-Sensitive Beyond 5G Applications [11.864695986880347]
This paper studies the joint problem of communication and computing resource allocation, dubbed CCRA, to minimize total cost.
We formulate the problem as a non-linear programming model and propose two approaches, dubbed B&B-CCRA and WF-CCRA, based on the Branch & Bound and Water-Filling algorithms.
Numerical simulations show that B&B-CCRA optimally solves the problem, whereas WF-CCRA delivers near-optimal solutions in a substantially shorter time.
arXiv Detail & Related papers (2023-09-18T22:17:23Z)
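WF-CCRA above builds on the classic water-filling algorithm; the sketch below illustrates the generic water-filling principle (fill each channel up to a common water level within a total budget), with a bisection search standing in for whatever solver the paper actually uses.

```python
import numpy as np

def water_filling(gains, budget, iters=100):
    """Classic water-filling: allocate `budget` across channels so that
    allocation_i = max(0, level - 1/gain_i), with the common water level
    found by bisection."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, budget + inv.max()          # bracket for the water level
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        alloc = np.maximum(0.0, level - inv)
        if alloc.sum() > budget:
            hi = level
        else:
            lo = level
    return np.maximum(0.0, lo - inv)

# Toy example: three channels with different gains, unit budget.
print(water_filling([2.0, 1.0, 0.5], budget=1.0))  # ~[0.75, 0.25, 0.0]
```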
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
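A minimal sketch of the multi-exit idea behind ME-FEEL as summarized above: auxiliary classifiers after intermediate blocks let a resource-limited device stop computation early once a prediction is confident enough. The block depths, confidence threshold, and exit criterion are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiExitNet(nn.Module):
    """A small backbone with an auxiliary classifier (early exit) after each block."""
    def __init__(self, in_dim=16, hidden=32, num_classes=4, num_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim if i == 0 else hidden, hidden), nn.ReLU())
             for i in range(num_blocks)])
        self.exits = nn.ModuleList([nn.Linear(hidden, num_classes) for _ in range(num_blocks)])

    def forward(self, x, confidence=0.9):
        """Run block by block and return at the first exit whose top softmax
        probability exceeds `confidence` (single-sample inference)."""
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(x)
            if logits.softmax(dim=-1).max() >= confidence or i == len(self.blocks) - 1:
                return logits, i          # prediction and the exit index used

net = MultiExitNet()
logits, exit_used = net(torch.randn(1, 16), confidence=0.5)
print(exit_used, logits.shape)
```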
- Auto-Split: A General Framework of Collaborative Edge-Cloud AI [49.750972428032355]
This paper describes the techniques and engineering practice behind Auto-Split, an edge-cloud collaborative prototype of Huawei Cloud.
To the best of our knowledge, there is no existing industry product that provides the capability of Deep Neural Network (DNN) splitting.
arXiv Detail & Related papers (2021-08-30T08:03:29Z)
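A minimal sketch of DNN splitting in the spirit of Auto-Split as summarized above: the first layers run on the edge device, the remaining layers run in the cloud, and only the intermediate activation crosses the network. The toy model, split point, and transport step are illustrative assumptions, not the product's implementation.

```python
import torch
import torch.nn as nn

# A toy model expressed as a Sequential so it can be cut at any layer index.
layers = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

def split_model(model, split_at):
    """Cut a Sequential model into an edge part and a cloud part."""
    edge = nn.Sequential(*list(model.children())[:split_at])
    cloud = nn.Sequential(*list(model.children())[split_at:])
    return edge, cloud

edge_part, cloud_part = split_model(layers, split_at=2)

x = torch.randn(1, 32)
activation = edge_part(x)          # computed on the edge device
# ... the activation would be serialized and sent over the network here ...
output = cloud_part(activation)    # computed in the cloud
print(output.shape)                # torch.Size([1, 10])
```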
- When Deep Reinforcement Learning Meets Federated Learning: Intelligent Multi-Timescale Resource Management for Multi-access Edge Computing in 5G Ultra Dense Network [31.274279003934268]
We first propose an intelligent ultra-dense edge computing (I-UDEC) framework, which integrates blockchain and AI into 5G edge computing networks.
In order to achieve real-time and low-overhead computation offloading decisions and resource allocation strategies, we design a novel two-timescale deep reinforcement learning (2Ts-DRL) approach.
Our proposed algorithm can reduce task execution time by up to 31.87%.
arXiv Detail & Related papers (2020-09-22T15:08:00Z)
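A hedged sketch of a two-timescale decision loop like the one 2Ts-DRL is named for: per-slot offloading decisions on the fast timescale, and resource-allocation updates only once every K slots on the slow timescale. The random placeholder policies stand in for the trained DRL agents, and K and the state encoding are illustrative assumptions.

```python
import random

K = 10  # slow-timescale period (slots per frame), an illustrative choice

def fast_policy(state):
    """Placeholder for the fast-timescale agent: per-slot offloading decision."""
    return random.choice(["local", "edge"])

def slow_policy(history):
    """Placeholder for the slow-timescale agent: per-frame resource allocation."""
    return {"cpu_share": random.random(), "bandwidth_share": random.random()}

allocation = slow_policy([])
history = []
for slot in range(30):
    if slot % K == 0:                         # slow timescale: revise allocation
        allocation = slow_policy(history)
    action = fast_policy((slot, allocation))  # fast timescale: offload or not
    history.append((slot, action, allocation))

print(history[-1])
```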
- Incentive Mechanism Design for Resource Sharing in Collaborative Edge Learning [106.51930957941433]
In 5G and Beyond networks, Artificial Intelligence applications are expected to be increasingly ubiquitous.
This necessitates a paradigm shift from the current cloud-centric model training approach to the Edge Computing based collaborative learning scheme known as edge learning.
arXiv Detail & Related papers (2020-05-31T12:45:06Z)
- Multi-agent Reinforcement Learning for Resource Allocation in IoT networks with Edge Computing [16.129649374251088]
Offloading computation is challenging for end users because of their massive demands on spectrum and resources.
In this paper, we investigate offloading mechanism with resource allocation in IoT edge computing networks by formulating it as a game.
arXiv Detail & Related papers (2020-04-05T20:59:20Z)
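A minimal sketch of the game-theoretic offloading formulation summarized above: each user repeatedly plays a best response (offload to the edge or compute locally) against the congestion created by the others until no one wants to deviate. The cost functions and parameters are illustrative assumptions, not the paper's model.

```python
def offload_cost(num_offloaders):
    """Edge cost grows with congestion: base latency plus a per-user penalty."""
    return 2.0 + 0.8 * num_offloaders

LOCAL_COST = 5.0   # fixed cost of computing locally (illustrative)
NUM_USERS = 8

def best_response_dynamics(num_users, rounds=50):
    offload = [False] * num_users
    for _ in range(rounds):
        changed = False
        for i in range(num_users):
            others = sum(offload) - offload[i]
            cost_off = offload_cost(others + 1)   # cost if user i joins the offloaders
            best = cost_off < LOCAL_COST
            if best != offload[i]:
                offload[i] = best
                changed = True
        if not changed:        # no user wants to deviate: a Nash equilibrium
            break
    return offload

print(sum(best_response_dynamics(NUM_USERS)), "of", NUM_USERS, "users offload at equilibrium")
```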
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)