Deep Reinforcement Learning-based Rebalancing Policies for Profit
Maximization of Relay Nodes in Payment Channel Networks
- URL: http://arxiv.org/abs/2210.07302v1
- Date: Thu, 13 Oct 2022 19:11:10 GMT
- Title: Deep Reinforcement Learning-based Rebalancing Policies for Profit
Maximization of Relay Nodes in Payment Channel Networks
- Authors: Nikolaos Papadis, Leandros Tassiulas
- Abstract summary: We study how a relay node can maximize its profits from fees by using the rebalancing method of submarine swaps.
We formulate the problem of maximizing the node's fortune over time across all rebalancing policies, and approximate the optimal solution by designing a Deep Reinforcement Learning-based rebalancing policy.
- Score: 7.168126766674749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Payment channel networks (PCNs) are a layer-2 blockchain scalability
solution whose main entity, the payment channel, enables transactions
between pairs of nodes "off-chain," thus reducing the burden on the layer-1
network. Nodes with multiple channels can serve as relays for multihop payments
over a path of channels: they relay the payments of others by providing the
liquidity of their channels, in exchange for part of the amount withheld as a
fee. After a while, relay nodes may end up with one or more unbalanced
channels and thus need to trigger a rebalancing operation. In this paper, we
study how a relay node can maximize its profits from fees by using the
rebalancing method of submarine swaps. We introduce a stochastic model to
capture the dynamics of a relay node observing random transaction arrivals and
performing occasional rebalancing operations, and express the system evolution
as a Markov Decision Process. We formulate the problem of maximizing the
node's fortune over time across all rebalancing policies, and approximate the
optimal solution by designing a Deep Reinforcement Learning (DRL)-based
rebalancing policy. We build a discrete event simulator of the system and use
it to demonstrate the DRL policy's superior performance under most conditions
by conducting a comparative study of different policies and parameterizations.
In all, our approach aims to be the first to introduce DRL for network
optimization in the complex world of PCNs.
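The system the abstract describes (random transaction arrivals, a fee withheld per relayed payment, and occasional rebalancing at a cost) can be sketched as a tiny discrete-event simulation. The threshold policy below is only a hand-crafted baseline standing in for the paper's DRL policy, and every parameter name and value is an illustrative assumption, not a detail from the paper:

```python
import random

def simulate_relay(steps=10_000, capacity=100.0, fee_rate=0.01,
                   swap_cost=0.1, low_frac=0.2, seed=0):
    """Minimal sketch of a relay node with one forwarding channel.

    Each step, a payment of random size arrives and is relayed,
    draining the channel's outbound balance and earning a fee.
    A simple threshold rule (a stand-in for a learned DRL policy)
    triggers a rebalancing swap that refills the channel at a cost.
    """
    rng = random.Random(seed)
    balance = capacity / 2        # outbound liquidity on the channel
    profit = 0.0
    for _ in range(steps):
        amount = rng.uniform(0.5, 2.0)
        if amount <= balance:     # enough liquidity to relay the payment
            balance -= amount
            profit += fee_rate * amount   # fee kept from the relayed amount
        # threshold rebalancing: pay to move funds back into the channel
        if balance < low_frac * capacity:
            balance = capacity / 2
            profit -= swap_cost
    return profit

print(simulate_relay() > 0)
```

A DRL agent would replace the fixed threshold with a learned mapping from the observed channel state to swap decisions, trading the cost of each swap against the fee revenue lost when liquidity is depleted.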
Related papers
- Channel Balance Interpolation in the Lightning Network via Machine Learning [6.391448436169024]
The Bitcoin Lightning Network is a Layer 2 payment protocol that addresses Bitcoin's scalability.
This research explores the feasibility of using machine learning models to interpolate channel balances within the network.
arXiv Detail & Related papers (2024-05-20T14:57:16Z)
- Relay Mining: Incentivizing Full Non-Validating Nodes Servicing All RPC Types [0.0]
Relay Mining estimates and proves the volume of Remote Procedure Calls (RPCs) made from a client to a server.
We leverage digital signatures, commit-and-reveal schemes, and Sparse Merkle Sum Tries (SMSTs) to prove the amount of work done.
A native cryptocurrency on a distributed ledger is used to rate-limit applications and disincentivize over-usage.
arXiv Detail & Related papers (2023-05-18T03:23:41Z)
- Entangled Pair Resource Allocation under Uncertain Fidelity Requirements [59.83361663430336]
In quantum networks, effective entanglement routing facilitates communication between quantum source and quantum destination nodes.
We propose a resource allocation model for entangled pairs and an entanglement routing model with a fidelity guarantee.
Our proposed model can reduce the total cost by at least 20% compared to the baseline model.
arXiv Detail & Related papers (2023-04-10T07:16:51Z)
- Fast and reliable entanglement distribution with quantum repeaters: principles for improving protocols using reinforcement learning [0.6249768559720122]
Future quantum technologies will rely on networks of shared entanglement between spatially separated nodes.
We provide improved protocols/policies for entanglement distribution along a linear chain of nodes.
arXiv Detail & Related papers (2023-03-01T19:05:32Z)
- Learning Resilient Radio Resource Management Policies with Graph Neural Networks [124.89036526192268]
We formulate a resilient radio resource management problem with per-user minimum-capacity constraints.
We show that we can parameterize the user selection and power control policies using a finite set of parameters.
Thanks to such adaptation, our proposed method achieves a superior tradeoff between the average rate and the 5th-percentile rate.
arXiv Detail & Related papers (2022-03-07T19:40:39Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
- Simultaneous Decision Making for Stochastic Multi-echelon Inventory Optimization with Deep Neural Networks as Decision Makers [0.7614628596146599]
We propose a framework that uses deep neural networks (DNNs) to optimize inventory decisions in complex multi-echelon supply chains.
Our method is suitable for a wide variety of supply chain networks, including general topologies that may contain both assembly and distribution nodes.
arXiv Detail & Related papers (2020-06-10T02:02:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.