Channel Balance Interpolation in the Lightning Network via Machine Learning
- URL: http://arxiv.org/abs/2405.12087v1
- Date: Mon, 20 May 2024 14:57:16 GMT
- Title: Channel Balance Interpolation in the Lightning Network via Machine Learning
- Authors: Vincent, Emanuele Rossi, Vikash Singh
- Abstract summary: The Bitcoin Lightning Network is a Layer 2 payment protocol that addresses Bitcoin's scalability limitations.
This research explores the feasibility of using machine learning models to interpolate channel balances within the network.
- Score: 6.391448436169024
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The Bitcoin Lightning Network is a Layer 2 payment protocol that addresses Bitcoin's scalability limitations by facilitating quick and cost-effective transactions through payment channels. This research explores the feasibility of using machine learning models to interpolate channel balances within the network, which can be used for optimizing the network's pathfinding algorithms. While there has been much exploration in balance probing and multipath payment protocols, predicting channel balances using solely node and channel features remains an uncharted area. This paper evaluates the performance of several machine learning models against two heuristic baselines and investigates the predictive capabilities of various features. Our model performs favorably in experimental evaluation, outperforming an equal-split baseline, in which each edge is assigned half of the channel capacity, by 10%.
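For concreteness, the equal-split heuristic referenced above can be written in a few lines. The sketch below is not the authors' implementation; the mean-absolute-error metric and the random-forest regressor standing in for the evaluated models are assumptions made purely for illustration.

```python
# A minimal sketch, assuming a scikit-learn style workflow; not the authors' code.
# The equal-split baseline assigns each edge half of its channel's capacity, as
# described in the abstract. MAE and RandomForestRegressor are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error


def equal_split_baseline(capacities):
    """Predict each edge's balance as half of the channel capacity."""
    return np.asarray(capacities, dtype=float) / 2.0


def compare_to_baseline(X_train, y_train, X_test, y_test, capacities_test):
    """Fit a regressor on node/channel features and report its improvement
    over the equal-split heuristic (hypothetical evaluation setup)."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    mae_model = mean_absolute_error(y_test, model.predict(X_test))
    mae_baseline = mean_absolute_error(y_test, equal_split_baseline(capacities_test))
    improvement = 100.0 * (mae_baseline - mae_model) / mae_baseline
    return mae_model, mae_baseline, improvement
```

The baseline here needs only channel capacities, while the learned model consumes whatever node and channel features are available, mirroring the comparison described in the abstract.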
Related papers
- An Exposition of Pathfinding Strategies Within Lightning Network Clients [4.926283917321645]
The Lightning Network is a peer-to-peer network designed to address Bitcoin's scalability challenges.
This paper explores differences within pathfinding strategies used by prominent Lightning Network node implementations.
We evaluate the efficacy of different pathfinding strategies across metrics such as success rate, fees, path length, and timelock.
arXiv Detail & Related papers (2024-10-17T17:21:45Z)
- Deep Reinforcement Learning-based Rebalancing Policies for Profit Maximization of Relay Nodes in Payment Channel Networks [7.168126766674749]
We study how a relay node can maximize its profits from fees by using the rebalancing method of submarine swaps.
We formulate the problem of maximizing the node's fortune over time over all rebalancing policies, and approximate the optimal solution by designing a Deep Reinforcement Learning-based rebalancing policy.
arXiv Detail & Related papers (2022-10-13T19:11:10Z)
- Semi-supervised Impedance Inversion by Bayesian Neural Network Based on 2-d CNN Pre-training [0.966840768820136]
We improve semi-supervised learning in two respects.
First, replacing the 1-d convolutional neural network layers in the deep learning structure with 2-d CNN layers and 2-d max-pooling layers improves prediction accuracy.
Second, prediction uncertainty can also be estimated by embedding the network into a Bayesian inference framework.
arXiv Detail & Related papers (2021-11-20T14:12:05Z)
- Deep Diffusion Models for Robust Channel Estimation [1.7259824817932292]
We introduce a novel approach for multiple-input multiple-output (MIMO) channel estimation using deep diffusion models.
Our method uses a deep neural network that is trained to estimate the gradient of the log-likelihood of wireless channels at any point in high-dimensional space.
arXiv Detail & Related papers (2021-11-16T01:32:11Z)
- CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization [61.71504948770445]
We propose a novel channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the computational burden and accelerate the model inference.
We show that CATRO achieves higher accuracy at similar cost, or similar accuracy at lower cost, compared with other state-of-the-art channel pruning algorithms.
Because of its class-aware property, CATRO is suitable for adaptively pruning efficient networks for various classification subtasks, facilitating convenient deployment and usage of deep networks in real-world applications.
arXiv Detail & Related papers (2021-10-21T06:26:31Z)
- BWCP: Probabilistic Learning-to-Prune Channels for ConvNets via Batch Whitening [63.081808698068365]
This work presents a probabilistic channel pruning method to accelerate Convolutional Neural Networks (CNNs).
Previous pruning methods often zero out unimportant channels in training in a deterministic manner, which reduces CNN's learning capacity and results in suboptimal performance.
We develop a probability-based pruning algorithm, called batch whitening channel pruning (BWCP), which can stochastically discard unimportant channels by modeling the probability of a channel being activated.
arXiv Detail & Related papers (2021-05-13T17:00:05Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
- Decentralized Learning for Channel Allocation in IoT Networks over Unlicensed Bandwidth as a Contextual Multi-player Multi-armed Bandit Game [134.88020946767404]
We study a decentralized channel allocation problem in an ad-hoc Internet of Things network underlaying on the spectrum licensed to a primary cellular network.
Our study maps this problem into a contextual multi-player, multi-armed bandit game, and proposes a purely decentralized, three-stage policy learning algorithm through trial-and-error.
arXiv Detail & Related papers (2020-03-30T10:05:35Z)
- ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-07T02:12:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.