Secure Deep Reinforcement Learning for Dynamic Resource Allocation in
Wireless MEC Networks
- URL: http://arxiv.org/abs/2312.08016v1
- Date: Wed, 13 Dec 2023 09:39:32 GMT
- Title: Secure Deep Reinforcement Learning for Dynamic Resource Allocation in
Wireless MEC Networks
- Authors: Xin Hao, Phee Lep Yeoh, Changyang She, Branka Vucetic, and Yonghui Li
- Abstract summary: This paper proposes a blockchain-secured deep reinforcement learning (BC-DRL) optimization framework for data management and resource allocation in mobile edge computing networks.
We design a low-latency reputation-based proof-of-stake (RPoS) consensus protocol to select highly reliable blockchain-enabled base stations (BSs).
We provide extensive simulation results and analysis to validate that our BC-DRL framework achieves higher security, reliability, and resource utilization efficiency than benchmark blockchain consensus protocols and MEC resource allocation algorithms.
- Score: 46.689212344009015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a blockchain-secured deep reinforcement learning (BC-DRL)
optimization framework for data management and resource allocation in
decentralized wireless mobile edge computing (MEC) networks. In our
framework, we design a low-latency reputation-based proof-of-stake (RPoS)
consensus protocol to select highly reliable blockchain-enabled base stations (BSs)
to securely store MEC user requests and prevent data tampering attacks. We
formulate the MEC resource allocation optimization as a constrained Markov
decision process that balances minimum processing latency and denial-of-service
(DoS) probability. We use the MEC aggregated features as the DRL input to
significantly reduce the high-dimensionality input of the remaining service
processing time for individual MEC requests. Our designed constrained DRL
effectively attains the optimal resource allocations that are adapted to the
dynamic DoS requirements. We provide extensive simulation results and analysis
to validate that our BC-DRL framework achieves higher security, reliability,
and resource utilization efficiency than benchmark blockchain consensus
protocols and MEC resource allocation algorithms.
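The abstract describes a constrained Markov decision process that trades off processing latency against a denial-of-service (DoS) probability constraint, fed with low-dimensional aggregated features instead of per-request remaining service times. A minimal sketch of that idea, assuming a standard Lagrangian-relaxation treatment of the constraint (the function names, feature choices, and step sizes below are illustrative assumptions, not details from the paper):

```python
# Hypothetical sketch of the constrained-MDP pieces described in the abstract:
# a Lagrangian-penalized reward for the latency/DoS trade-off, a dual-ascent
# multiplier update, and feature aggregation to shrink the DRL state input.
# All names and constants are illustrative, not taken from the paper.

def aggregate_features(remaining_times):
    """Collapse per-request remaining service times into a small fixed-size
    feature tuple (count, mean, max), avoiding a high-dimensional state."""
    if not remaining_times:
        return (0, 0.0, 0.0)
    n = len(remaining_times)
    return (n, sum(remaining_times) / n, max(remaining_times))

def lagrangian_reward(latency, dos_indicator, dos_target, lam):
    """Reward = -latency minus a penalty on DoS-constraint violation.

    dos_indicator is 1 if the request was denied service, else 0;
    lam is the Lagrange multiplier for the DoS-probability constraint."""
    return -latency - lam * (dos_indicator - dos_target)

def update_multiplier(lam, empirical_dos_rate, dos_target, step=0.1):
    """Dual ascent: raise lam when the observed DoS rate exceeds the target,
    lower it otherwise, keeping it non-negative."""
    return max(0.0, lam + step * (empirical_dos_rate - dos_target))
```

Any off-the-shelf constrained-DRL agent could consume `aggregate_features(...)` as its state and `lagrangian_reward(...)` as its training signal, with `update_multiplier` run periodically on the empirical DoS rate.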
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT) assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z) - Efficient Zero-Knowledge Proofs for Set Membership in Blockchain-Based Sensor Networks: A Novel OR-Aggregation Approach [20.821562115822182]
This paper introduces a novel OR-aggregation approach for zero-knowledge set membership proofs.
We provide a comprehensive theoretical foundation, detailed protocol specification, and rigorous security analysis.
Results show significant improvements in proof size, generation time, and verification efficiency.
arXiv Detail & Related papers (2024-10-11T18:16:34Z) - Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks [60.54852710216738]
We introduce a novel digital twin-assisted optimization framework, called D-REC, to ensure reliable caching in nextG wireless networks.
By incorporating reliability modules into a constrained decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints.
arXiv Detail & Related papers (2024-06-29T02:40:28Z) - Joint Service Caching, Communication and Computing Resource Allocation in Collaborative MEC Systems: A DRL-based Two-timescale Approach [15.16859210403316]
Meeting the strict Quality of Service (QoS) requirements of terminals has imposed a challenge on Multi-access Edge Computing (MEC) systems.
We propose a collaborative framework that facilitates resource sharing between the edge servers.
We show that our proposed algorithm outperforms the baseline algorithms in terms of the average switching and cache cost.
arXiv Detail & Related papers (2023-07-19T00:27:49Z) - MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z) - Deep Reinforcement Learning Based Multidimensional Resource Management
for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in
Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z) - Deep Reinforcement Learning for QoS-Constrained Resource Allocation in
Multiservice Networks [0.3324986723090368]
This article focuses on a non-convex optimization problem whose main aim is to maximize the spectral efficiency subject to satisfaction guarantees in multiservice wireless systems.
We propose a solution based on a Reinforcement Learning (RL) framework, where each agent makes its decisions to find a policy by interacting with the local environment.
We show near-optimal performance of the proposed approach in terms of throughput and outage rate.
arXiv Detail & Related papers (2020-03-03T19:32:15Z) - Stacked Auto Encoder Based Deep Reinforcement Learning for Online
Resource Scheduling in Large-Scale MEC Networks [44.40722828581203]
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet of Things (IoT) users.
A deep reinforcement learning (DRL) based solution is proposed, which includes the following components.
A preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy.
arXiv Detail & Related papers (2020-01-24T23:01:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.