Computation Rate Maximization for Wireless Powered Edge Computing With Multi-User Cooperation
- URL: http://arxiv.org/abs/2402.16866v1
- Date: Mon, 22 Jan 2024 05:22:19 GMT
- Title: Computation Rate Maximization for Wireless Powered Edge Computing With Multi-User Cooperation
- Authors: Yang Li, Xing Zhang, Bo Lei, Qianying Zhao, Min Wei, Zheyan Qu, Wenbo Wang
- Abstract summary: This study considers a wireless-powered mobile edge computing system that includes a hybrid access point equipped with a computing unit and multiple Internet of Things (IoT) devices.
We propose a novel multi-user cooperation scheme to improve computation performance, where collaborative clusters are dynamically formed.
Specifically, we aim to maximize the weighted sum computation rate (WSCR) of all the IoT devices in the network.
- Score: 10.268239987867453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The combination of mobile edge computing (MEC) and radio frequency-based wireless power transfer (WPT) presents a promising technique for providing sustainable energy supply and computing services at the network edge. This study considers a wireless-powered mobile edge computing system that includes a hybrid access point (HAP) equipped with a computing unit and multiple Internet of Things (IoT) devices. In particular, we propose a novel multi-user cooperation scheme to improve computation performance, where collaborative clusters are dynamically formed. Each collaborative cluster comprises a source device (SD) and an auxiliary device (AD), where the SD can partition the computation task into various segments for local processing, offloading to the HAP, and remote execution by the AD with the assistance of the HAP. Specifically, we aim to maximize the weighted sum computation rate (WSCR) of all the IoT devices in the network. This involves jointly optimizing collaboration, time, and data allocation among the multiple IoT devices and the HAP, while considering the energy causality property and the minimum data processing requirement of each device. Initially, an optimization algorithm based on the interior-point method is designed for time and data allocation. Subsequently, a priority-based iterative algorithm is developed to search for a near-optimal solution to the multi-user collaboration scheme. Finally, a deep learning-based approach is devised, building upon the initial two algorithms, to further accelerate their operation. Simulation results show that the proposed algorithms perform comparably to the exhaustive search method, and that the deep learning-based algorithm significantly reduces execution time.
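As a rough illustration of the time-allocation step, the hedged sketch below solves a simplified WSCR maximization with SciPy's interior-point-style `trust-constr` solver. The rate model, channel/power parameters, and problem size are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: maximize a weighted sum computation rate over the WPT duration t0
# and per-device offloading durations t_i within one frame of length T.
# The offloading-rate model below is a standard wireless-powered-MEC form
# (energy harvested during t0 drives the uplink SNR) and is an assumption.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)
N = 4                              # number of IoT devices (assumed)
w = rng.uniform(0.5, 1.5, N)       # device weights (assumed)
h = rng.uniform(0.1, 1.0, N)       # combined channel gains (assumed)
P, T, noise = 3.0, 1.0, 1e-2       # HAP power, frame length, noise power

def neg_wscr(t):
    t0, ti = t[0], t[1:]
    rates = ti * np.log2(1.0 + h * P * t0 / (np.maximum(ti, 1e-9) * noise))
    return -np.sum(w * rates)      # negate: SciPy minimizes

# Time allocations are nonnegative and must fit in one frame: t0 + sum(ti) <= T.
cons = LinearConstraint(np.ones((1, N + 1)), 0.0, T)
x0 = np.full(N + 1, T / (N + 2))
res = minimize(neg_wscr, x0, method="trust-constr",
               constraints=[cons], bounds=[(1e-6, T)] * (N + 1))
print("WSCR* ~", -res.fun, " t0* ~", res.x[0])
```

In the paper's pipeline, a continuous allocation subproblem of this kind would sit inside the outer priority-based search over collaborative cluster assignments.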
Related papers
- Multi-Resource Allocation for On-Device Distributed Federated Learning Systems [79.02994855744848]
This work proposes a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in an on-device distributed federated learning (FL) system.
Each mobile device in the system engages in model training within a specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
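A minimal sketch of the kind of per-device latency/energy trade-off such a scheme optimizes; the cycle/frequency computation model, Shannon-rate upload model, and all parameter values are generic FL-system assumptions, not this paper's exact formulation.

```python
# Weighted latency-energy cost for one FL device (illustrative assumption).
import numpy as np

def device_cost(f_cpu, p_tx, alpha, cycles=1e9, kappa=1e-27,
                bits=1e6, bandwidth=1e6, gain=1e-3, noise=1e-9):
    t_comp = cycles / f_cpu                     # local training latency [s]
    e_comp = kappa * cycles * f_cpu**2          # CPU energy [J]
    rate = bandwidth * np.log2(1 + p_tx * gain / noise)
    t_up = bits / rate                          # parameter upload latency [s]
    e_up = p_tx * t_up                          # transmission energy [J]
    # Weighted sum of latency and energy, the shape of the stated objective.
    return alpha * (t_comp + t_up) + (1 - alpha) * (e_comp + e_up)

print(device_cost(f_cpu=1e9, p_tx=0.1, alpha=0.5))
```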
arXiv Detail & Related papers (2022-11-01T14:16:05Z)
- Task-Oriented Sensing, Computation, and Communication Integration for Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI-model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
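A hedged sketch of discriminant gain as it is commonly defined in this line of edge-AI inference work: the symmetric KL divergence between two class-conditional Gaussians sharing a covariance, which reduces to a Mahalanobis distance between class means. Treat this as an assumed definition; consult the cited paper for its exact form.

```python
# Pairwise discriminant gain under a Gaussian-mixture feature model
# with shared covariance (assumed definition).
import numpy as np

def discriminant_gain(mu_i, mu_j, cov):
    diff = mu_i - mu_j
    return float(diff @ np.linalg.solve(cov, diff))

mu_a, mu_b = np.array([0.0, 0.0]), np.array([2.0, 1.0])
cov = np.array([[1.0, 0.2], [0.2, 1.0]])
print(discriminant_gain(mu_a, mu_b, cov))  # larger gain -> easier to classify
```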
arXiv Detail & Related papers (2022-07-03T06:57:07Z)
- Computation Offloading and Resource Allocation in F-RANs: A Federated Deep Reinforcement Learning Approach [67.06539298956854]
The fog radio access network (F-RAN) is a promising technology in which user mobile devices (MDs) can offload computation tasks to nearby fog access points (F-APs).
arXiv Detail & Related papers (2022-06-13T02:19:20Z)
- On Jointly Optimizing Partial Offloading and SFC Mapping: A Cooperative Dual-agent Deep Reinforcement Learning Approach [8.168647937560504]
This paper studies the partial offloading and service function chain (SFC) mapping joint optimization (POSMJO) problem in a computation-enabled MEC system.
The objective is to minimize the average cost in the long term which is a combination of execution delay, MD's energy consumption, and usage charge for edge computing.
We propose a cooperative dual-agent deep reinforcement learning (CDADRL) algorithm, where we design a framework enabling interaction between two agents.
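A minimal sketch of a per-step cost with the stated shape (execution delay plus device energy plus an edge-usage charge); the weights and pricing below are illustrative assumptions, and a DRL agent would minimize the long-term average of such costs.

```python
# Per-step cost combining delay, energy, and edge-usage charge (assumed form).
def step_cost(delay_s, energy_j, edge_cpu_cycles,
              w_delay=1.0, w_energy=0.5, price_per_gcycle=0.02):
    usage_charge = price_per_gcycle * edge_cpu_cycles / 1e9
    return w_delay * delay_s + w_energy * energy_j + usage_charge

print(step_cost(delay_s=0.12, energy_j=0.3, edge_cpu_cycles=5e8))
```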
arXiv Detail & Related papers (2022-05-20T02:00:53Z)
- Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
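Data locality lends itself to a short sketch: a minimal FedAvg-style round, offered here as one assumed, illustrative instance of such distributed algorithms, in which devices exchange only model parameters and never raw data.

```python
# Minimal FedAvg-style collaborative training on a least-squares toy problem.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])

def make_device_data():
    # Each device holds a private local dataset that never leaves it.
    X = rng.normal(size=(20, 2))
    return X, X @ w_true + 0.1 * rng.normal(size=20)

local_data = [make_device_data() for _ in range(5)]

def local_update(w, X, y, lr=0.1, steps=10):
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)  # local gradient steps only
    return w

w_global = np.zeros(2)
for _ in range(20):                              # communication rounds
    updates = [local_update(w_global, X, y) for X, y in local_data]
    w_global = np.mean(updates, axis=0)          # share parameters, not data
print(w_global)                                  # ~ w_true
```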
arXiv Detail & Related papers (2021-12-07T20:15:39Z)
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-Things, networked sensing, autonomous systems, and federated learning call for decentralized algorithms for finite-sum optimization.
We develop DEcentralized STochastic REcurSive gradient methodS (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
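A hedged sketch of the stochastic recursive gradient estimator (SARAH-style) at the core of methods in this family; the full DESTRESS algorithm additionally mixes estimates across agents via gossip averaging, which is omitted here, and the least-squares problem is a stand-in.

```python
# SARAH-style recursive gradient descent on a least-squares toy problem.
import numpy as np

rng = np.random.default_rng(2)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)

def grad_i(x, i):
    # Gradient of the i-th term 0.5 * (A[i] @ x - b[i])**2.
    return A[i] * (A[i] @ x - b[i])

x, lr = np.zeros(5), 0.05
for epoch in range(30):
    v = A.T @ (A @ x - b) / len(b)       # periodic full-gradient anchor
    for _ in range(20):
        x_new = x - lr * v
        i = rng.integers(len(b))
        # Recursive update: v_t = grad_i(x_t) - grad_i(x_{t-1}) + v_{t-1}
        v = grad_i(x_new, i) - grad_i(x, i) + v
        x = x_new
print(np.linalg.norm(A.T @ (A @ x - b) / len(b)))  # near-stationary point
```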
arXiv Detail & Related papers (2021-10-04T03:17:41Z)
- Data-Driven Random Access Optimization in Multi-Cell IoT Networks with NOMA [78.60275748518589]
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond.
In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks.
A novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity.
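The geometric-mean objective admits a compact sketch: maximizing the geometric mean is equivalent to maximizing the mean of logs. The slotted-ALOHA collision model and SNR values below are illustrative assumptions, not the paper's exact setup.

```python
# Tune per-device transmission probabilities to maximize the geometric mean
# of expected capacities (single-cell, collision-channel assumption).
import numpy as np
from scipy.optimize import minimize

snr = np.array([4.0, 8.0, 2.0, 6.0])            # assumed per-device SNRs

def neg_log_geo_mean(p):
    # Device i succeeds iff it transmits and no other device does.
    success = p * np.prod(1.0 - p) / (1.0 - p)  # p_i * prod_{j!=i}(1 - p_j)
    cap = success * np.log2(1.0 + snr)          # expected capacity
    return -np.mean(np.log(cap + 1e-12))        # log of geometric mean

res = minimize(neg_log_geo_mean, x0=np.full(4, 0.2),
               bounds=[(1e-3, 0.999)] * 4)
print(res.x)                                    # tuned access probabilities
```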
arXiv Detail & Related papers (2021-01-02T15:21:08Z)
- Edge Intelligence for Energy-efficient Computation Offloading and Resource Allocation in 5G Beyond [7.953533529450216]
5G beyond is an end-edge-cloud orchestrated network that can exploit heterogeneous capabilities of the end devices, edge servers, and the cloud.
In multi-user wireless networks, diverse application requirements and the possibility of various radio access modes for communication among devices make it challenging to design an optimal computation offloading scheme.
Deep reinforcement learning (DRL) is an emerging technique for addressing this issue under limited and imperfect network information.
arXiv Detail & Related papers (2020-11-17T05:51:03Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
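For reference, a hedged sketch of the standard tabular Q-learning update used as the baseline here; the states, transitions, and rewards below are placeholders, not this paper's environment.

```python
# Standard tabular Q-learning with epsilon-greedy exploration (baseline form).
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 8, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

s = 0
for step in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next = rng.integers(n_states)     # placeholder transition dynamics
    r = float(s_next == n_states - 1)   # placeholder reward signal
    # Temporal-difference update toward r + gamma * max_a' Q(s', a').
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
print(Q.round(2))
```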
arXiv Detail & Related papers (2020-07-20T13:46:42Z)