A Holistic Approach for Bitcoin Confirmation Times & Optimal Fee Selection
- URL: http://arxiv.org/abs/2402.17474v1
- Date: Tue, 27 Feb 2024 12:55:36 GMT
- Title: A Holistic Approach for Bitcoin Confirmation Times & Optimal Fee Selection
- Authors: Rowel Gündlach, Ivo V. Stoepker, Stella Kapodistria, Jacques A. C. Resing
- Abstract summary: Bitcoin is subject to a significant pay-for-speed trade-off.
Users can reduce their transaction confirmation times by increasing their transaction fee.
We propose a model-based approach that can be used to determine the optimal fee.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bitcoin is currently subject to a significant pay-for-speed trade-off, caused by lengthy and highly variable transaction confirmation times, especially during periods of congestion. Users can reduce their confirmation times by increasing their transaction fee. In this paper, based on the inner workings of Bitcoin, we propose a model-based approach (built on the Cramér-Lundberg model) that can be used to determine the optimal fee, via, for example, the mean or quantiles, and that accurately models the confirmation time distribution for a given fee. The proposed model is highly suitable as it arises as the limiting model for the mempool process (which tracks the unconfirmed transactions); we rigorously show this via a fluid limit and extend it to the diffusion limit (an approximation of the Cramér-Lundberg model for fast computations in highly congested instances). We also propose methods (incorporating real-time data) to estimate the model parameters, thereby combining model-based and data-driven approaches. The model-based approach is validated on real-world data, and the resulting transaction fees outperform, in most instances, the data-driven ones.
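As a rough Monte-Carlo illustration in the spirit of this model (the paper's actual construction, fluid and diffusion limits, and estimators are more involved), the sketch below treats the weight of mempool transactions that outbid a given fee as a Cramér-Lundberg-type process: it grows at a roughly constant rate between blocks and drops by up to one block's capacity at Poisson block times, and the confirmation time is approximated by the first-passage time to zero, from which a mean or quantile can be read off. All parameter names and numbers are hypothetical placeholders, not values from the paper.

```python
import random

def confirmation_time(backlog_mb, inflow_mb_per_min, block_rate_per_min,
                      block_capacity_mb=1.0, horizon_min=1e6):
    """One Monte-Carlo draw of a confirmation time for a Cramer-Lundberg-type
    process: the mempool weight that outbids our fee grows linearly between
    blocks and drops by one block capacity at Poisson block times; the
    confirmation time is the first-passage time of this process to zero."""
    t, backlog = 0.0, backlog_mb
    while t < horizon_min:
        gap = random.expovariate(block_rate_per_min)   # time until the next block
        t += gap
        backlog += inflow_mb_per_min * gap             # competing higher-fee arrivals
        backlog -= block_capacity_mb                   # block clears higher-fee weight
        if backlog <= 0.0:
            return t                                   # our transaction is included
    return float("inf")                                # not confirmed within the horizon

# Hypothetical numbers: 2 MB of higher-fee backlog, 0.05 MB/min of competing
# arrivals, and blocks every ~10 minutes clearing up to 1 MB of that backlog.
samples = sorted(confirmation_time(2.0, 0.05, 0.1) for _ in range(10_000))
print("mean confirmation time (min):", sum(samples) / len(samples))
print("90% quantile (min):", samples[int(0.9 * len(samples))])
```

A higher fee shrinks both the initial backlog and the effective inflow of competing transactions, which is the pay-for-speed trade-off the paper quantifies; its proposed estimators infer these rates from real-time mempool data rather than fixing them by hand as above.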
Related papers
- Transaction Fee Estimation in the Bitcoin System [11.065598886291735]
In the Bitcoin system, transaction fees serve as an incentive for blockchain confirmations.
In this work, we focus on estimating the transaction fee for a new transaction to help with its confirmation within a given expected time.
We propose a framework FENN, which aims to integrate the knowledge from a wide range of sources, including the transaction itself, into a neural network model in order to estimate a proper transaction fee.
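FENN's architecture is not described here; purely as a hypothetical illustration of a learned fee estimator, the sketch below regresses a fee rate from a handful of made-up transaction and mempool features. The feature list, layer sizes, and training loop are assumptions, not the paper's design.

```python
import torch
from torch import nn

# Hypothetical features for a transaction: [size in vB, current mempool weight,
# three recent fee-rate percentiles, target confirmation time in blocks].
N_FEATURES = 6

fee_estimator = nn.Sequential(            # toy stand-in, not FENN itself
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                     # predicted fee rate (sat/vB)
)

def train_step(model, features, fee_rates, optimiser):
    """One gradient step on (features, observed fee rate) pairs."""
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(features).squeeze(-1), fee_rates)
    loss.backward()
    optimiser.step()
    return loss.item()

# Synthetic stand-in data, just to show the shapes involved.
opt = torch.optim.Adam(fee_estimator.parameters(), lr=1e-3)
x, y = torch.randn(128, N_FEATURES), torch.rand(128) * 50.0
print(train_step(fee_estimator, x, y, opt))
```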
arXiv Detail & Related papers (2024-05-24T07:27:00Z) - Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution [62.71425232332837]
We show that training amortized models with noisy labels is inexpensive and surprisingly effective.
This approach significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches.
arXiv Detail & Related papers (2024-01-29T03:42:37Z) - TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models [52.454274602380124]
Diffusion models heavily depend on the time-step $t$ to achieve satisfactory multi-round denoising.
We propose a Temporal Feature Maintenance Quantization (TFMQ) framework building upon a Temporal Information Block.
Building on this block design, we devise temporal information aware reconstruction (TIAR) and finite set calibration (FSC) to align with the full-precision temporal features.
arXiv Detail & Related papers (2023-11-27T12:59:52Z) - Exploring Sparse Expert Models and Beyond [51.90860155810848]
Mixture-of-Experts (MoE) models can achieve promising results with an outrageously large number of parameters but constant computation cost.
We propose a simple method called expert prototyping that splits experts into different prototypes and applies $k$ top-$1$ routing.
This strategy improves model quality while maintaining constant computational cost, and our further exploration of extremely large-scale models shows that it is more effective for training larger models.
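As a loose sketch of this routing idea (dimensions, gating, and the final averaging are placeholder choices, not the paper's implementation): the experts are split into k groups ("prototypes"), and within each group a standard top-1 gate picks a single expert per token, so k experts fire per token while per-token compute stays roughly constant.

```python
import torch
from torch import nn

class PrototypedMoE(nn.Module):
    """Expert prototyping with k top-1 routing: the experts are split into k
    prototypes, and each prototype routes every token to exactly one of its
    own experts, so k experts are active per token."""
    def __init__(self, d_model=64, n_experts=8, k=2, d_hidden=256):
        super().__init__()
        assert n_experts % k == 0
        self.k, self.per_proto = k, n_experts // k
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)])
        self.gates = nn.ModuleList([nn.Linear(d_model, self.per_proto)
                                    for _ in range(k)])   # one gate per prototype

    def forward(self, x):                       # x: (batch, d_model)
        out = torch.zeros_like(x)
        for p, gate in enumerate(self.gates):
            scores = gate(x).softmax(dim=-1)    # (batch, per_proto)
            weight, idx = scores.max(dim=-1)    # top-1 expert within this prototype
            for e in range(self.per_proto):
                chosen = idx == e
                if chosen.any():
                    expert = self.experts[p * self.per_proto + e]
                    out[chosen] += weight[chosen, None] * expert(x[chosen])
        return out / self.k                     # average the k prototype outputs

moe = PrototypedMoE()
print(moe(torch.randn(4, 64)).shape)            # torch.Size([4, 64])
```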
arXiv Detail & Related papers (2021-05-31T16:12:44Z) - Time-Series Imputation with Wasserstein Interpolation for Optimal Look-Ahead-Bias and Variance Tradeoff [66.59869239999459]
In finance, imputation of missing returns may be applied prior to training a portfolio optimization model.
There is an inherent trade-off between the look-ahead-bias of using the full data set for imputation and the larger variance in the imputation from using only the training data.
We propose a Bayesian posterior consensus distribution which optimally controls the variance and look-ahead-bias trade-off in the imputation.
arXiv Detail & Related papers (2021-02-25T09:05:35Z) - Analysis of Models for Decentralized and Collaborative AI on Blockchain [0.0]
We evaluate the use of several models and configurations in order to propose best practices when using the Self-Assessment incentive mechanism.
We compare several factors for each dataset when models are hosted in smart contracts on a public blockchain.
arXiv Detail & Related papers (2020-09-14T21:38:55Z) - Bitcoin Transaction Forecasting with Deep Network Representation Learning [16.715475608359046]
This paper presents a novel approach to developing a Bitcoin transaction forecast model, DLForecast, by leveraging deep neural networks for learning Bitcoin transaction network representations.
We construct a time-decaying reachability graph and a time-decaying transaction pattern graph, aiming at capturing different types of spatial-temporal Bitcoin transaction patterns.
We show that our spatial-temporal forecasting model is efficient, with fast runtime, and effective, achieving forecasting accuracy over 60% and improving prediction performance by 50% compared to a forecasting model built on the static graph baseline.
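The graph construction is only named above; as a hypothetical illustration, one way to realise a time-decaying transaction graph is to weight every edge by an exponentially decaying function of transaction age, so that older activity contributes less to the learned representation. The half-life and edge format below are assumptions.

```python
import math
from collections import defaultdict

def time_decayed_edges(transactions, now, half_life=3600.0):
    """Build time-decaying edge weights for a transaction graph.

    transactions: iterable of (sender, receiver, timestamp) tuples.
    Each edge weight is the sum over its transactions of 0.5 ** (age / half_life),
    so a transaction half_life seconds old counts half as much as a fresh one."""
    weights = defaultdict(float)
    for sender, receiver, ts in transactions:
        age = max(0.0, now - ts)
        weights[(sender, receiver)] += 0.5 ** (age / half_life)
    return dict(weights)

# Toy example: address "a" paid "b" twice, one hour apart.
edges = time_decayed_edges([("a", "b", 0.0), ("a", "b", 3600.0), ("b", "c", 3600.0)],
                           now=3600.0)
print(edges)   # ('a', 'b') gets weight 1.5, ('b', 'c') gets 1.0
```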
arXiv Detail & Related papers (2020-07-15T21:11:32Z) - A Time Series Analysis-Based Stock Price Prediction Using Machine Learning and Deep Learning Models [0.0]
We present a robust and accurate framework for stock price prediction that consists of an agglomeration of statistical, machine learning, and deep learning models.
We use the daily stock price data, collected at five-minute intervals, of a very well-known company listed on the National Stock Exchange (NSE) of India.
We contend that the agglomerative approach to model building, which combines statistical, machine learning, and deep learning approaches, can learn very effectively from the volatile and random movement patterns in stock price data.
arXiv Detail & Related papers (2020-04-17T19:41:22Z) - The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit" from neural network calculations for simple instances.
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
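As a hedged sketch of the general early-exit idea (not the paper's exact calibration or architecture): a small classifier is attached after each layer, and inference stops at the first exit whose confidence clears a threshold, so simple instances use fewer layers. Layer sizes, the softmax-confidence rule, and the threshold are placeholder choices.

```python
import torch
from torch import nn

class EarlyExitClassifier(nn.Module):
    """A stack of layers with a small classifier ("exit head") after each one.
    At inference, we stop at the first exit whose softmax confidence exceeds
    a threshold, spending less compute on easy inputs."""
    def __init__(self, d_model=64, n_layers=4, n_classes=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())
             for _ in range(n_layers)])
        self.exits = nn.ModuleList(
            [nn.Linear(d_model, n_classes) for _ in range(n_layers)])

    @torch.no_grad()
    def predict(self, x, threshold=0.9):
        # x: a single example of shape (1, d_model)
        for depth, (layer, exit_head) in enumerate(zip(self.layers, self.exits), 1):
            x = layer(x)
            probs = exit_head(x).softmax(dim=-1)
            confidence, label = probs.max(dim=-1)
            if confidence.item() >= threshold or depth == len(self.layers):
                return label.item(), depth   # prediction and layers actually used

model = EarlyExitClassifier()
print(model.predict(torch.randn(1, 64)))     # e.g. (2, 1) if the first exit is confident
```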
arXiv Detail & Related papers (2020-04-16T04:28:08Z) - Nonparametric Estimation in the Dynamic Bradley-Terry Model [69.70604365861121]
We develop a novel estimator that relies on kernel smoothing to pre-process the pairwise comparisons over time.
We derive time-varying oracle bounds for both the estimation error and the excess risk in the model-agnostic setting.
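To make the pre-processing step concrete in a purely illustrative form (the paper's estimator and oracle bounds are more general): pairwise win counts are kernel-smoothed around a query time t and then fed to a standard Bradley-Terry fixed-point fit. The Gaussian kernel, bandwidth, and toy data below are assumptions.

```python
import math
from collections import defaultdict

def smoothed_bradley_terry(comparisons, t, bandwidth=1.0, n_items=3, iters=200):
    """comparisons: list of (winner, loser, time) with items labelled 0..n_items-1.
    Kernel-smooth the win counts around query time t (Gaussian kernel), then fit
    Bradley-Terry scores w by the classic fixed-point iteration:
        w_i <- (sum_j n_ij) / (sum_j (n_ij + n_ji) / (w_i + w_j))."""
    wins = defaultdict(float)                        # smoothed count: i beats j
    for winner, loser, s in comparisons:
        wins[(winner, loser)] += math.exp(-0.5 * ((s - t) / bandwidth) ** 2)

    w = [1.0] * n_items
    for _ in range(iters):
        new_w = []
        for i in range(n_items):
            num = sum(wins[(i, j)] for j in range(n_items) if j != i)
            den = sum((wins[(i, j)] + wins[(j, i)]) / (w[i] + w[j])
                      for j in range(n_items)
                      if j != i and wins[(i, j)] + wins[(j, i)] > 0)
            new_w.append(num / den if den > 0 else w[i])
        total = sum(new_w)
        w = [v / total for v in new_w]               # normalise for identifiability
    return w

# Toy data: item 0 beats 1 early on; item 2 beats 0 and 1 close to the query time.
data = [(0, 1, 0.0), (0, 1, 0.5), (2, 0, 4.0), (2, 1, 4.5)]
print(smoothed_bradley_terry(data, t=4.0))           # item 2 should score highest
```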
arXiv Detail & Related papers (2020-02-28T21:52:49Z) - A posteriori Trading-inspired Model-free Time Series Segmentation [0.0]
The proposed method is compared to a popular model-based bottom-up approach that fits piecewise affine models and to a state-of-the-art model-based top-down approach that fits Gaussian models.
Performance is demonstrated on synthetic and real-world data, including a large-scale dataset.
arXiv Detail & Related papers (2019-12-16T06:14:03Z)