Proof of Learning (PoLe): Empowering Machine Learning with Consensus Building on Blockchains
- URL: http://arxiv.org/abs/2007.15145v1
- Date: Wed, 29 Jul 2020 22:53:43 GMT
- Title: Proof of Learning (PoLe): Empowering Machine Learning with Consensus Building on Blockchains
- Authors: Yixiao Lan, Yuan Liu, Boyang Li
- Abstract summary: We propose a new consensus mechanism, Proof of Learning (PoLe), which directs the computation spent for consensus toward the optimization of neural networks (NN).
In our mechanism, the training/testing data are released to the entire blockchain network (BCN) and the consensus nodes train NN models on the data.
We show that PoLe can achieve a more stable block generation rate, which leads to more efficient transaction processing.
- Score: 7.854034211489588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The progress of deep learning (DL), especially the recent development of
automatic design of networks, has brought unprecedented performance gains at
heavy computational cost. On the other hand, blockchain systems routinely
perform a huge amount of computation that serves no practical purpose other
than to build Proof-of-Work (PoW) consensus among decentralized
participants. In this paper, we propose a new consensus mechanism, Proof of
Learning (PoLe), which directs the computation spent for consensus toward
optimization of neural networks (NN). In our mechanism, the training/testing
data are released to the entire blockchain network (BCN) and the consensus
nodes train NN models on the data, which serves as the proof of learning. When
the consensus on the BCN considers an NN model to be valid, a new block is
appended to the blockchain. We experimentally compare the PoLe protocol with
Proof of Work (PoW) and show that PoLe can achieve a more stable block
generation rate, which leads to more efficient transaction processing. We also
introduce a novel cheating prevention mechanism, Secure Mapping Layer (SML),
which can be straightforwardly implemented as a linear NN layer. Empirical
evaluation shows that SML can detect cheating nodes at a small cost to
predictive performance.
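
The abstract describes SML only as a linear NN layer; the following is a minimal sketch of one plausible realization, assuming the secret mapping is a fixed random matrix derived from a seed held by the data owner. All names, sizes, and the seed are illustrative, not the paper's implementation:

```python
import torch
import torch.nn as nn

class SecureMappingLayer(nn.Module):
    """Sketch of a Secure Mapping Layer: a fixed, non-trainable linear
    transform applied to the inputs. The seed (and hence the mapping) is
    assumed to be a secret of the data owner; everything here is illustrative."""

    def __init__(self, in_features: int, seed: int):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)
        # register_buffer keeps the mapping fixed and out of the optimizer's reach
        self.register_buffer(
            "weight", torch.randn(in_features, in_features, generator=gen)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight.T

# A consensus node would train its model behind the secret mapping:
model = nn.Sequential(
    SecureMappingLayer(in_features=784, seed=12345),
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
```

Because the mapping is fixed and secret, a node cannot simply substitute a model trained on the raw, unmapped data, which is the kind of cheating such a layer could plausibly deter.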
Related papers
- Proof-of-Collaborative-Learning: A Multi-winner Federated Learning Consensus Algorithm [2.5203968759841158]
We propose Proof-of-Collaborative-Learning (PoCL), a multi-winner consensus mechanism validated through federated learning.
PoCL redirects the power of blockchains to train federated learning models.
We present a novel evaluation mechanism to ensure the efficiency of the locally trained models of miners.
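
The abstract does not spell out how the multiple winners are chosen; a minimal sketch of one plausible top-k selection rule, where the scoring metric, the miner ids, and k are all assumptions, could be:

```python
from typing import Dict, List

def select_winners(miner_scores: Dict[str, float], k: int = 3) -> List[str]:
    """Pick the k miners whose locally trained models score best on a shared
    evaluation task. The metric and k are assumptions; the abstract only
    states that multiple winners are selected per round."""
    ranked = sorted(miner_scores.items(), key=lambda item: item[1], reverse=True)
    return [miner_id for miner_id, _ in ranked[:k]]

# Example round: accuracies reported by four miners (illustrative values).
scores = {"miner_a": 0.91, "miner_b": 0.88, "miner_c": 0.95, "miner_d": 0.80}
print(select_winners(scores, k=2))  # ['miner_c', 'miner_a']
```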
arXiv Detail & Related papers (2024-07-17T21:14:05Z)
- Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z)
- Robust softmax aggregation on blockchain based federated learning with convergence guarantee [11.955062839855334]
We propose a softmax aggregation, blockchain-based federated learning framework.
First, we propose a new blockchain-based federated learning architecture that utilizes the well-tested proof-of-stake consensus mechanism.
Second, to ensure the robustness of the aggregation process, we design a novel softmax aggregation method.
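
The exact scoring rule is not given in this summary; a common way to realize softmax-weighted robust aggregation, sketched here with an assumed per-client score (higher is better), is:

```python
import torch

def softmax_aggregate(updates: list, scores: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """Weight each client update by a softmax over per-client scores, so
    outlier (potentially malicious) updates receive small weight. How the
    scores are computed is an assumption."""
    weights = torch.softmax(scores / temperature, dim=0)
    stacked = torch.stack(updates)                      # (n_clients, ...)
    shape = [len(updates)] + [1] * (stacked.dim() - 1)  # broadcast weights
    return (weights.view(shape) * stacked).sum(dim=0)

# Example: three flattened client updates, the last one an outlier.
updates = [torch.ones(4), torch.ones(4) * 0.9, torch.ones(4) * 10.0]
scores = torch.tensor([0.92, 0.90, 0.10])  # e.g., validation accuracies
print(softmax_aggregate(updates, scores))
```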
arXiv Detail & Related papers (2023-11-13T02:25:52Z)
- A Model-Based Machine Learning Approach for Assessing the Performance of Blockchain Applications [0.0]
We use machine learning (ML) model-based methods to predict blockchain performance.
We employ the salp swarm optimization (SO) ML model, which enables the investigation of optimal blockchain configurations.
The $k$NN model outperforms SVM by 5%, and the improved SO (ISO) demonstrates a 4% reduction in accuracy deviation compared to regular SO.
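
As a sketch of the model-based idea, the following fits a k-nearest-neighbour regressor to predict a performance metric from configuration features; the features and the synthetic data are stand-ins, not the paper's dataset:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical configurations: (block size, tx arrival rate, node count).
X = rng.uniform([1, 10, 4], [8, 500, 64], size=(200, 3))
# Synthetic latency target with noise; stands in for measured performance.
y = 0.5 * X[:, 0] + 0.01 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.1, 200)

model = KNeighborsRegressor(n_neighbors=5).fit(X, y)
print(model.predict([[4.0, 250.0, 32.0]]))  # predicted latency for a new config
```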
arXiv Detail & Related papers (2023-09-20T10:39:21Z)
- On-Device Learning with Binary Neural Networks [2.7040098749051635]
We propose a continual learning (CL) solution that embraces recent advancements in the CL field and the efficiency of Binary Neural Networks (BNNs).
The choice of a binary network as the backbone is essential to meet the constraints of low-power devices.
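
The standard building block behind such BNNs is sign binarization with a straight-through estimator (STE); a minimal sketch, with placeholder layer sizes:

```python
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator: the forward pass
    quantizes to {-1, +1}; the backward pass lets gradients flow where |x| <= 1."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        # Binarized weights in the forward pass; full-precision master copy
        # is kept and updated by the optimizer.
        w_bin = BinarizeSTE.apply(self.weight)
        return nn.functional.linear(x, w_bin, self.bias)

layer = BinaryLinear(64, 10)
out = layer(torch.randn(8, 64))
```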
arXiv Detail & Related papers (2023-08-29T13:48:35Z)
- Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by the previously revealed phenomenon that binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require generating additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
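
A minimal sketch of the block-local training idea: each block has its own label head and loss, and inputs are detached so no gradient crosses block boundaries. The architecture and hyperparameters are assumptions, not the paper's:

```python
import torch
import torch.nn as nn

# Each cascaded block has its own feature extractor and its own label head,
# trained with a purely local loss.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU())
                        for _ in range(3)])
heads = nn.ModuleList([nn.Linear(32, 10) for _ in range(3)])
optimizers = [torch.optim.SGD(list(b.parameters()) + list(h.parameters()), lr=0.1)
              for b, h in zip(blocks, heads)]
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
for block, head, opt in zip(blocks, heads, optimizers):
    x = x.detach()                   # stop gradients reaching earlier blocks
    feats = block(x)
    loss = loss_fn(head(feats), y)   # each block predicts the label distribution
    opt.zero_grad()
    loss.backward()
    opt.step()
    x = feats                        # feed features forward to the next block
```

Because each block's loss depends only on its own parameters, the blocks can be trained independently and mapped onto parallel hardware, as the summary notes.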
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
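
The interval-bound-propagation core of such methods is standard; here is a sketch of the bound propagation step for a single linear layer, leaving aside the quantization-aware details that the summary does not spell out:

```python
import torch

def ibp_linear(lower: torch.Tensor, upper: torch.Tensor,
               weight: torch.Tensor, bias: torch.Tensor):
    """Propagate an input interval [lower, upper] through y = x W^T + b,
    using the standard center/radius form: the radius is transported
    through |W|, which yields sound output bounds."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ weight.T + bias
    out_radius = radius @ weight.abs().T
    return out_center - out_radius, out_center + out_radius

# Example: an L-infinity ball of radius 0.1 around a random input.
x = torch.randn(1, 8)
W, b = torch.randn(4, 8), torch.randn(4)
lo, hi = ibp_linear(x - 0.1, x + 0.1, W, b)
assert torch.all(lo <= hi)
```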
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Blockchain Framework for Artificial Intelligence Computation [1.8148198154149393]
We design the block verification and consensus mechanism as a deep reinforcement-learning process.
Our method can be used to design the next generation of public blockchain networks.
arXiv Detail & Related papers (2022-02-23T01:44:27Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks by calibrating the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
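
At its core, guided calibration toward a real-valued teacher's prediction distribution can be written as a temperature-scaled KL distillation loss; a minimal sketch, where the temperature and shapes are placeholders rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
                 temperature: float = 4.0) -> torch.Tensor:
    """KL divergence between the binary student's and the real-valued
    teacher's softened prediction distributions; no ground-truth labels
    are required."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * temperature ** 2

# Example with random logits standing in for student/teacher outputs.
loss = distill_loss(torch.randn(16, 10), torch.randn(16, 10))
```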
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework that integrates blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating.
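
The paper's own countermeasure is not detailed in this summary; one simple, illustrative way to flag such plagiarism is to look for suspiciously similar update pairs, since copy-plus-small-noise leaves a near-duplicate. The threshold and similarity metric below are assumptions:

```python
import torch

def flag_lazy_clients(updates: list, threshold: float = 0.999) -> set:
    """Flag pairs of client updates whose cosine similarity is suspiciously
    high, which is exactly what copying another client's model and adding
    small noise produces."""
    flagged = set()
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            sim = torch.nn.functional.cosine_similarity(
                updates[i].flatten(), updates[j].flatten(), dim=0)
            if sim > threshold:
                flagged.update({i, j})
    return flagged

# Example: client 2 plagiarizes client 0 and adds tiny noise to hide it.
honest = [torch.randn(100), torch.randn(100)]
lazy = honest[0] + 0.001 * torch.randn(100)
print(flag_lazy_clients(honest + [lazy]))  # {0, 2}
```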
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.