A Model-Based Machine Learning Approach for Assessing the Performance of
Blockchain Applications
- URL: http://arxiv.org/abs/2309.11205v1
- Date: Wed, 20 Sep 2023 10:39:21 GMT
- Title: A Model-Based Machine Learning Approach for Assessing the Performance of
Blockchain Applications
- Authors: Adel Albshri, Ali Alzubaidi, Ellis Solaiman
- Abstract summary: We use machine learning (ML) model-based methods to predict blockchain performance.
We also employ the salp swarm optimization (SO) ML model, which enables the investigation of optimal blockchain configurations.
The $k$NN model outperforms SVM by 5%, and ISO demonstrates a 4% reduction in inaccuracy deviation compared to regular SO.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent advancement of Blockchain technology consolidates its status as a
viable alternative for various domains. However, evaluating the performance of
blockchain applications can be challenging due to the underlying
infrastructure's complexity and distributed nature. Therefore, a reliable
modelling approach is needed to boost Blockchain-based applications'
development and evaluation. While simulation-based solutions have been
researched, machine learning (ML) model-based techniques are rarely discussed
in conjunction with evaluating blockchain application performance. Our novel
research makes use of two ML model-based methods. Firstly, we train a $k$
nearest neighbour ($k$NN) and support vector machine (SVM) to predict
blockchain performance using predetermined configuration parameters. Secondly,
we employ the salp swarm optimization (SO) ML model which enables the
investigation of optimal blockchain configurations for achieving the required
performance level. We use rough set theory to enhance SO, hereafter called ISO,
which we demonstrate to prove achieving an accurate recommendation of optimal
parameter configurations; despite uncertainty. Finally, statistical comparisons
indicate that our models have a competitive edge. The $k$NN model outperforms
SVM by 5\% and the ISO also demonstrates a reduction of 4\% inaccuracy
deviation compared to regular SO.
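As an illustrative sketch of the two model-based methods, the toy Python code below pairs a hand-rolled $k$NN regressor (predicting a throughput-like metric from configuration parameters) with a simple salp-swarm-style search that uses the regressor as its fitness function. The parameter names, training data, bounds, and coefficient schedule are all hypothetical assumptions for the sketch, not values from the paper, and the feature scaling a real predictor would need is omitted.

```python
import math
import random

# Hypothetical training data: (block_size_kb, tx_rate, node_count) -> throughput.
# Values are illustrative only, not taken from the paper.
TRAIN = [
    ((1.0, 100, 4), 85.0),
    ((2.0, 100, 4), 140.0),
    ((2.0, 200, 8), 155.0),
    ((4.0, 200, 8), 210.0),
    ((4.0, 400, 16), 190.0),
    ((8.0, 400, 16), 230.0),
]

def knn_predict(x, k=3):
    """Plain kNN regression: average the targets of the k nearest configs."""
    dists = sorted((math.dist(x, cfg), y) for cfg, y in TRAIN)
    nearest = dists[:k]
    return sum(y for _, y in nearest) / len(nearest)

def salp_search(target_tps, iters=50, swarm=10, seed=0):
    """Toy salp-swarm-style search: the leader explores around the best-known
    config while each follower drifts toward the salp ahead of it; fitness is
    |predicted throughput - target|."""
    rng = random.Random(seed)
    lo, hi = (1.0, 50, 4), (8.0, 400, 16)   # per-dimension bounds (assumed)
    fit = lambda p: abs(knn_predict(p) - target_tps)
    pop = [tuple(rng.uniform(l, h) for l, h in zip(lo, hi)) for _ in range(swarm)]
    best = min(pop, key=fit)
    for t in range(iters):
        c1 = 2 * math.exp(-(4 * t / iters) ** 2)  # shrinking exploration radius
        new = []
        for i, p in enumerate(pop):
            if i == 0:  # leader: random step around the best solution, clamped
                q = tuple(
                    min(h, max(l, b + c1 * ((h - l) * rng.random() - (h - l) / 2)))
                    for b, l, h in zip(best, lo, hi)
                )
            else:       # follower: midpoint with the salp in front of it
                q = tuple((a + b) / 2 for a, b in zip(p, new[i - 1]))
            new.append(q)
        pop = new
        best = min(pop + [best], key=fit)
    return best

best_cfg = salp_search(target_tps=200.0)
print(best_cfg, knn_predict(best_cfg))
```

The follower chain is what distinguishes the salp swarm heuristic from generic particle swarms: only the leader samples freely, and improvements propagate down the chain over iterations.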
Related papers
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs)
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
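The iteratively-refined parallel decoding idea can be illustrated with a deliberately tiny stand-in for the learned conditional (a bigram count table): every masked position is re-predicted in parallel at each step from its current neighbors. The corpus, masking scheme, and step count below are invented for the sketch and are unrelated to T5 or the paper's models.

```python
from collections import Counter, defaultdict

# Toy corpus for a bigram table; stands in for a learned conditional model.
corpus = "the cat sat on the mat the cat ate the rat".split()
bigram = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigram[a][b] += 1

def predict(prev):
    """Most likely next token given the left neighbor (toy conditional)."""
    return bigram[prev].most_common(1)[0][0] if bigram[prev] else "the"

def mask_predict(tokens, steps=3):
    """Fill every <mask> in parallel each step, reading only the pre-step
    sequence, so positions are refined jointly rather than left-to-right."""
    tokens = list(tokens)
    for _ in range(steps):
        filled = list(tokens)
        for i, tok in enumerate(tokens):
            if tok == "<mask>":
                left = tokens[i - 1] if i > 0 else "the"
                filled[i] = predict(left) if left != "<mask>" else "<mask>"
        tokens = filled
    return tokens

out = mask_predict(["the", "<mask>", "<mask>", "on", "the", "<mask>"])
print(out)  # masks resolve over successive parallel passes
```

A position whose left neighbor is still masked resolves on a later pass, which is the essential difference from single-pass autoregressive decoding.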
arXiv Detail & Related papers (2024-07-22T18:00:00Z) - Proof of Quality: A Costless Paradigm for Trustless Generative AI Model Inference on Blockchains [24.934767209724335]
Generative AI models have demonstrated powerful and disruptive capabilities in natural language and image tasks.
deploying these models in decentralized environments remains challenging.
We present a new inference paradigm called proof of quality (PoQ) to enable the deployment of arbitrarily large generative models on blockchain architecture.
arXiv Detail & Related papers (2024-05-28T08:00:54Z) - Leveraging Reinforcement Learning and Large Language Models for Code
Optimization [14.602997316032706]
This paper introduces a new framework to decrease the complexity of code optimization.
The proposed framework builds on large language models (LLMs) and reinforcement learning (RL)
We run several experiments on the PIE dataset using a CodeT5 language model and RRHF, a new reinforcement learning algorithm.
arXiv Detail & Related papers (2023-12-09T19:50:23Z) - Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z) - The Implications of Decentralization in Blockchained Federated Learning: Evaluating the Impact of Model Staleness and Inconsistencies [2.6391879803618115]
We study the practical implications of outsourcing the orchestration of federated learning to a democratic setting such as in a blockchain.
Using simulation, we evaluate the blockchained FL operation by applying two different ML models on the well-known MNIST and CIFAR-10 datasets.
Our results show the high impact of model inconsistencies on the accuracy of the models (up to a 35% decrease in prediction accuracy)
arXiv Detail & Related papers (2023-10-11T13:18:23Z) - Exploiting Temporal Structures of Cyclostationary Signals for
Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS)
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z) - Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution
Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z) - Latency Optimization for Blockchain-Empowered Federated Learning in
Multi-Server Edge Computing [24.505675843652448]
In this paper, we study a new latency optimization problem for blockchain-based federated learning (BFL) in multi-server edge computing.
In this system model, distributed mobile devices (MDs) communicate with a set of edge servers (ESs) to handle both machine learning (ML) model training and block mining simultaneously.
arXiv Detail & Related papers (2022-03-18T00:38:29Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic
Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
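The round described above can be caricatured in a few lines of Python: a "model" is just a weight vector, local training is a toy update toward an assumed optimum, the mined block simply collects all broadcast models (the mining competition is abstracted away), and lazy clients re-broadcast a stale copy. None of the names, rates, or dynamics come from the BLADE-FL paper itself.

```python
import random

def local_train(model, lr=0.1):
    """Stand-in for local SGD: nudge each weight toward a toy optimum of 1.0."""
    return [w - lr * (w - 1.0) for w in model]

def blade_fl_round(models, lazy_fraction=0.25, seed=0):
    """One toy BLADE-FL round: every client broadcasts a model, the mined
    block gathers the broadcasts, and each client aggregates the block's
    models before its next round of local training. Lazy clients skip
    training and re-broadcast their stale model."""
    rng = random.Random(seed)
    n = len(models)
    lazy = set(rng.sample(range(n), int(lazy_fraction * n)))
    broadcast = [m if i in lazy else local_train(m) for i, m in enumerate(models)]
    block = list(broadcast)                      # mining competition abstracted away
    aggregated = [sum(ws) / n for ws in zip(*block)]
    return [list(aggregated) for _ in range(n)]

clients = [[0.0, 0.0] for _ in range(8)]
for _ in range(20):
    clients = blade_fl_round(clients)
print(clients[0])  # convergence toward the optimum is slowed by lazy clients
```

Raising `lazy_fraction` slows the contraction toward the optimum each round, which is the qualitative effect the paper quantifies.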
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Proof of Learning (PoLe): Empowering Machine Learning with Consensus
Building on Blockchains [7.854034211489588]
We propose a new consensus mechanism, Proof of Learning (PoLe), which directs the computational effort spent on consensus toward the optimization of neural networks (NNs).
In our mechanism, the training/testing data are released to the entire blockchain network (BCN) and the consensus nodes train NN models on the data.
We show that PoLe can achieve a more stable block generation rate, which leads to more efficient transaction processing.
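The mechanism can be sketched with a toy "training task": the network releases data, each consensus node trains a model (random search for a threshold classifier standing in for NN training), and the node with the best held-out accuracy wins the right to publish the block. The node names, task, and winner rule below are illustrative assumptions, not the paper's protocol.

```python
import random

def pole_round(task_data, node_seeds):
    """Toy Proof-of-Learning round: every node fits a threshold classifier on
    the released training data; the node whose model scores best on the
    held-out test split wins the block."""
    train, test = task_data

    def fit(seed):
        # Random search stands in for the consensus node's NN training effort.
        rng = random.Random(seed)
        best_t, best_acc = 0.0, 0.0
        for _ in range(200):
            t = rng.uniform(0, 10)
            acc = sum((x > t) == y for x, y in train) / len(train)
            if acc > best_acc:
                best_t, best_acc = t, acc
        return best_t

    def test_acc(t):
        return sum((x > t) == y for x, y in test) / len(test)

    models = {node: fit(seed) for node, seed in node_seeds.items()}
    winner = max(models, key=lambda n: test_acc(models[n]))
    return winner, models[winner], test_acc(models[winner])

# Hypothetical released task: classify whether x exceeds an unknown threshold.
data = [(x * 0.5, x * 0.5 > 5) for x in range(21)]
train, test = data[::2], data[1::2]
winner, model_t, acc = pole_round((train, test), {"node_a": 1, "node_b": 2, "node_c": 3})
print(winner, round(model_t, 2), acc)
```

The key design point is that the "work" discarded in proof-of-work is replaced by model fitting whose quality is cheaply verifiable on the shared test split.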
arXiv Detail & Related papers (2020-07-29T22:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.