Secure Deep Reinforcement Learning for Dynamic Resource Allocation in
Wireless MEC Networks
- URL: http://arxiv.org/abs/2312.08016v1
- Date: Wed, 13 Dec 2023 09:39:32 GMT
- Title: Secure Deep Reinforcement Learning for Dynamic Resource Allocation in
Wireless MEC Networks
- Authors: Xin Hao, Phee Lep Yeoh, Changyang She, Branka Vucetic, and Yonghui Li
- Abstract summary: This paper proposes a blockchain-secured deep reinforcement learning (BC-DRL) optimization framework for data management and resource allocation in mobile edge computing networks.
We design a low-latency reputation-based proof-of-stake (RPoS) consensus protocol to select highly reliable blockchain-enabled base stations (BSs).
We provide extensive simulation results and analysis to validate that our BC-DRL framework achieves higher security, reliability, and resource utilization efficiency than benchmark blockchain consensus protocols and MEC resource allocation algorithms.
- Score: 46.689212344009015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a blockchain-secured deep reinforcement learning (BC-DRL)
optimization framework for data management and resource allocation in
decentralized wireless mobile edge computing (MEC) networks. In our
framework, we design a low-latency reputation-based proof-of-stake (RPoS)
consensus protocol to select highly reliable blockchain-enabled base stations
(BSs) to securely store MEC user requests and prevent data tampering attacks.
We formulate the MEC resource allocation optimization as a constrained Markov
decision process that balances minimum processing latency and
denial-of-service (DoS) probability. We use the MEC aggregated features as the
DRL input to significantly reduce the high-dimensionality input of the
remaining service processing time for individual MEC requests. Our designed
constrained DRL effectively attains the optimal resource allocations that are
adapted to the dynamic DoS requirements. We provide extensive simulation
results and analysis to validate that our BC-DRL framework achieves higher
security, reliability, and resource utilization efficiency than benchmark
blockchain consensus protocols and MEC resource allocation algorithms.
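The constrained Markov decision process described in the abstract can be handled with a Lagrangian relaxation, where a dual multiplier is raised whenever the DoS-probability constraint is violated. The sketch below illustrates this mechanism only; the toy environment model, constants, and gradients are illustrative assumptions, not the paper's actual system model or algorithm.

```python
# Minimal sketch of a Lagrangian-relaxed constrained resource-allocation
# update, assuming a hypothetical MEC model: cost = processing latency,
# subject to a denial-of-service (DoS) probability budget p_dos <= DOS_LIMIT.
# All dynamics below are illustrative, not taken from the paper.

DOS_LIMIT = 0.05   # assumed DoS probability budget
LR_POLICY = 0.1    # primal (allocation) step size
LR_LAMBDA = 0.5    # dual (multiplier) step size

def simulate(alloc):
    """Toy environment: allocating more resources lowers both latency
    and DoS probability (illustrative closed forms)."""
    latency = 1.0 / alloc
    p_dos = max(0.0, 0.2 - 0.15 * alloc)
    return latency, p_dos

def train(steps=200):
    alloc = 0.5        # fraction of MEC resources allocated
    lam = 0.0          # Lagrange multiplier for the DoS constraint
    for _ in range(steps):
        latency, p_dos = simulate(alloc)
        # Lagrangian: minimize latency + lam * (p_dos - DOS_LIMIT).
        # Gradients of the toy model w.r.t. alloc:
        #   d latency / d alloc = -1 / alloc**2
        #   d p_dos  / d alloc = -0.15 (while the max() is inactive)
        grad = -1.0 / alloc**2 + lam * (-0.15 if p_dos > 0 else 0.0)
        alloc = min(1.0, max(0.1, alloc - LR_POLICY * grad))
        # Dual ascent: raise lam while the DoS constraint is violated,
        # relax it back toward zero once the constraint holds.
        lam = max(0.0, lam + LR_LAMBDA * (p_dos - DOS_LIMIT))
    return alloc, simulate(alloc)[1]

alloc, p_dos = train()
```

In this toy setting the primal-dual iteration settles on an allocation whose DoS probability meets the budget, mirroring how the paper's constrained DRL adapts allocations to dynamic DoS requirements.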
Related papers
- RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent [11.22833419439317]
Empirical risk minimization (ERM) is a cornerstone of modern machine learning (ML).
This paper focuses on the man-in-the-middle (MITM) attack, which can cause models to deviate significantly from their intended ERM solutions.
We propose RESIST, an algorithm designed to be robust against adversarially compromised communication links.
arXiv Detail & Related papers (2025-02-11T21:48:10Z)
- An Offline Multi-Agent Reinforcement Learning Framework for Radio Resource Management [5.771885923067511]
Offline multi-agent reinforcement learning (MARL) addresses key limitations of online MARL.
We propose an offline MARL algorithm for radio resource management (RRM).
We evaluate three training paradigms: centralized, independent, and centralized training with decentralized execution (CTDE).
arXiv Detail & Related papers (2025-01-22T16:25:46Z)
- Secure Resource Allocation via Constrained Deep Reinforcement Learning [49.15061461220109]
We present SARMTO, a framework that balances resource allocation, task offloading, security, and performance.
SARMTO consistently outperforms five baseline approaches, achieving up to a 40% reduction in system costs.
These enhancements highlight SARMTO's potential to revolutionize resource management in intricate distributed computing environments.
arXiv Detail & Related papers (2025-01-20T15:52:43Z)
- Wireless Resource Allocation with Collaborative Distributed and Centralized DRL under Control Channel Attacks [9.981962772130025]
We consider a wireless resource allocation problem in a cyber-physical system (CPS) where the control channel is subjected to denial-of-service (DoS) attacks.
We propose a novel concept of collaborative distributed and centralized (CDC) resource allocation to effectively mitigate the impact of these attacks.
We develop a new CDC deep reinforcement learning (DRL) algorithm; existing DRL frameworks formulate only centralized or only distributed decision-making problems.
arXiv Detail & Related papers (2024-11-16T04:56:23Z)
- Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks [60.54852710216738]
We introduce a novel digital twin-assisted optimization framework, called D-REC, to ensure reliable caching in nextG wireless networks.
By incorporating reliability modules into a constrained decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints.
arXiv Detail & Related papers (2024-06-29T02:40:28Z)
- Joint Service Caching, Communication and Computing Resource Allocation in Collaborative MEC Systems: A DRL-based Two-timescale Approach [15.16859210403316]
Meeting the strict Quality of Service (QoS) requirements of terminals has imposed a challenge on Multi-access Edge Computing (MEC) systems.
We propose a collaborative framework that facilitates resource sharing between the edge servers.
We show that our proposed algorithm outperforms the baseline algorithms in terms of the average switching and cache cost.
arXiv Detail & Related papers (2023-07-19T00:27:49Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks [44.40722828581203]
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all the Internet of things (IoT) users.
A deep reinforcement learning (DRL) based solution is proposed, which includes the following components.
A preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL to train the policy network and find the optimal offloading policy.
arXiv Detail & Related papers (2020-01-24T23:01:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.