Differentially Private Deep Q-Learning for Pattern Privacy Preservation in MEC Offloading
- URL: http://arxiv.org/abs/2302.04608v1
- Date: Thu, 9 Feb 2023 12:50:18 GMT
- Title: Differentially Private Deep Q-Learning for Pattern Privacy Preservation in MEC Offloading
- Authors: Shuying Gan, Marie Siew, Chao Xu, Tony Q.S. Quek
- Abstract summary: Attackers may eavesdrop on the offloading decisions to infer the edge server's (ES's) queue information and users' usage patterns.
We propose an offloading strategy which jointly minimizes the latency, the ES's energy consumption, and the task dropping rate, while preserving pattern privacy (PP).
We develop a Differential Privacy Deep Q-learning based Offloading (DP-DQO) algorithm to solve this problem while addressing the PP issue by injecting noise into the generated offloading decisions.
- Score: 76.0572817182483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile edge computing (MEC) is a promising paradigm to meet the quality of
service (QoS) requirements of latency-sensitive IoT applications. However,
attackers may eavesdrop on the offloading decisions to infer the edge server's
(ES's) queue information and users' usage patterns, thereby incurring the
pattern privacy (PP) issue. Therefore, we propose an offloading strategy which
jointly minimizes the latency, ES's energy consumption, and task dropping rate,
while preserving PP. Firstly, we formulate the dynamic computation offloading
procedure as a Markov decision process (MDP). Next, we develop a Differential
Privacy Deep Q-learning based Offloading (DP-DQO) algorithm to solve this
problem while addressing the PP issue by injecting noise into the generated
offloading decisions. This is achieved by modifying the deep Q-network (DQN)
with a Function-output Gaussian process mechanism. We provide a theoretical
privacy guarantee and a utility guarantee (learning error bound) for the DP-DQO
algorithm and finally, conduct simulations to evaluate the performance of our
proposed algorithm by comparing it with greedy and DQN-based algorithms.
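The abstract's core mechanism, injecting noise into the offloading decisions produced by a DQN, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' Function-output Gaussian process mechanism: it assumes a small discrete set of offloading actions and simply perturbs each Q-value with i.i.d. Gaussian noise of scale `sigma` (a hypothetical parameter) before the action is chosen.

```python
import numpy as np

def select_offloading_action(q_values, sigma, rng=None):
    """Pick an offloading action from noise-perturbed Q-values.

    Adds Gaussian noise (scale ``sigma``) to each Q-value before taking
    the argmax, so an eavesdropper observing the chosen actions cannot
    reliably infer the underlying queue state. ``sigma`` trades privacy
    (larger noise) against utility (smaller noise); ``sigma = 0``
    recovers the plain greedy DQN policy.
    """
    if rng is None:
        rng = np.random.default_rng()
    q = np.asarray(q_values, dtype=float)
    noisy_q = q + rng.normal(0.0, sigma, size=q.shape)
    return int(np.argmax(noisy_q))

# Example: three candidate decisions (process locally, offload to ES, drop)
q = [1.2, 0.9, 0.1]
action = select_offloading_action(q, sigma=0.5, rng=np.random.default_rng(0))
```

With `sigma = 0` the greedy baseline compared against in the paper's simulations is recovered; the paper's theoretical analysis bounds how much utility is lost as the noise scale grows.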
Related papers
- Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach, also testing the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z)
- Dynamic Partial Computation Offloading for the Metaverse in In-Network Computing [1.1124588036301817]
We consider the partial computation offloading problem in the metaverse for multiple subtasks in a COIN environment.
We transform it into two subproblems: the task-splitting problem (TSP) on the user side and the task-offloading problem (TOP) on the COIN side.
Unlike the conventional DDQN algorithm, where intelligent agents sample offloading decisions randomly with a certain probability, the COIN agent explores the NE of the TSP and the deep neural network.
arXiv Detail & Related papers (2023-06-09T16:41:34Z)
- Optimal Privacy Preserving for Federated Learning in Mobile Edge Computing [35.57643489979182]
Federated Learning (FL) with quantization and deliberately added noise over wireless networks is a promising approach to preserve user differential privacy (DP).
This article aims to jointly optimize the quantization and Binomial mechanism parameters and communication resources to maximize the convergence rate under the constraints of the wireless network and DP requirement.
arXiv Detail & Related papers (2022-11-14T07:54:14Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) is proposed to maximize the cache hit rates of devices in the MEC networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
- Learning Augmented Index Policy for Optimal Service Placement at the Network Edge [8.136957953239254]
We consider the problem of service placement at the network edge, in which a decision maker has to choose between $N$ services to host at the edge.
Our goal is to design adaptive algorithms to minimize the average service delivery latency for customers.
arXiv Detail & Related papers (2021-01-10T23:54:59Z)
- RIS Enhanced Massive Non-orthogonal Multiple Access Networks: Deployment and Passive Beamforming Design [116.88396201197533]
A novel framework is proposed for the deployment and passive beamforming design of a reconfigurable intelligent surface (RIS).
The problem of joint deployment, phase shift design, as well as power allocation is formulated for maximizing the energy efficiency.
A novel long short-term memory (LSTM) based echo state network (ESN) algorithm is proposed to predict users' tele-traffic demand by leveraging a real dataset.
A decaying double deep Q-network (D3QN) based position-acquisition and phase-control algorithm is proposed to solve the joint problem of deployment and design of the RIS.
arXiv Detail & Related papers (2020-01-28T14:37:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.