opML: Optimistic Machine Learning on Blockchain
- URL: http://arxiv.org/abs/2401.17555v2
- Date: Mon, 5 Feb 2024 05:23:59 GMT
- Title: opML: Optimistic Machine Learning on Blockchain
- Authors: KD Conway, Cathie So, Xiaohang Yu, Kartin Wong
- Abstract summary: We introduce opML (Optimistic Machine Learning on chain), an innovative approach that empowers blockchain systems to conduct AI model inference.
At the core of opML lies an interactive fraud proof protocol, reminiscent of optimistic rollup systems.
opML offers cost-efficient and highly efficient ML services, with minimal participation requirements.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of machine learning with blockchain technology has witnessed increasing interest, driven by the vision of decentralized, secure, and transparent AI services. In this context, we introduce opML (Optimistic Machine Learning on chain), an innovative approach that empowers blockchain systems to conduct AI model inference. At the core of opML lies an interactive fraud proof protocol, reminiscent of optimistic rollup systems. This mechanism ensures decentralized and verifiable consensus for ML services, enhancing trust and transparency. Unlike zkML (Zero-Knowledge Machine Learning), opML offers cost-efficient and highly efficient ML services, with minimal participation requirements. Remarkably, opML enables the execution of large language models, such as 7B-LLaMA, on standard PCs without GPUs, significantly expanding accessibility. By combining the capabilities of blockchain and AI through opML, we embark on a transformative journey toward accessible, secure, and efficient on-chain machine learning.
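The abstract names the dispute mechanism but does not spell it out. As a rough, hedged illustration of how optimistic-rollup-style fraud proofs typically work, the Python sketch below plays out a bisection ("verification game") over two execution traces: the parties binary-search for a step where they agree on the pre-state but disagree on the post-state, and an arbiter re-executes only that single step on-chain. All names (`bisect_dispute`, `arbitrate`, the toy VM) are hypothetical and not taken from the opML codebase.

```python
# Hedged sketch of an optimistic-rollup-style bisection game, the general
# pattern the abstract alludes to. Names and structure are hypothetical,
# not taken from the opML implementation.
from typing import Callable, List

State = int  # stand-in for a VM state commitment (e.g., a Merkle root)

def bisect_dispute(
    claimed: List[State],      # submitter's trace of state commitments
    challenger: List[State],   # challenger's locally recomputed trace
) -> int:
    """Binary-search for a step where the parties agree on the pre-state
    but disagree on the post-state; only O(log n) comparisons happen
    "on-chain"."""
    assert claimed[0] == challenger[0] and claimed[-1] != challenger[-1]
    lo, hi = 0, len(claimed) - 1   # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if claimed[mid] == challenger[mid]:
            lo = mid
        else:
            hi = mid
    return hi  # the disputed step; only this one step is re-executed

def arbitrate(step: Callable[[State], State], claimed: List[State],
              challenger: List[State]) -> str:
    """One-step on-chain re-execution decides the whole dispute."""
    i = bisect_dispute(claimed, challenger)
    truth = step(claimed[i - 1])   # both parties agree on state i-1
    return "submitter wins" if truth == claimed[i] else "challenger wins"

# Toy demo: the "VM" doubles the state each step; the submitter cheats at step 3.
honest = [1, 2, 4, 8, 16, 32]
cheating = [1, 2, 4, 9, 18, 36]
print(arbitrate(lambda s: s * 2, cheating, honest))  # -> "challenger wins"
```

The point of the pattern is cost: only a logarithmic number of commitments and a single step of re-execution ever touch the chain, which is what makes optimistic verification cheap relative to zkML-style proving.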
Related papers
- Large Language Models for Base Station Siting: Intelligent Deployment based on Prompt or Agent [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering.
This approach entails the strategic use of well-crafted prompts to infuse human experience and knowledge into these sophisticated LLMs.
This integration represents the future paradigm of artificial intelligence (AI) as a service, making AI easier to use.
arXiv Detail & Related papers (2024-08-07T08:43:32Z) - TDML -- A Trustworthy Distributed Machine Learning Framework [7.302091381583343]
The rapid advancement of large models (LM) has intensified the demand for computing resources.
This demand is exacerbated by limited availability due to supply chain delays and monopolistic acquisition by major tech firms.
We propose a trustworthy distributed machine learning (TDML) framework that leverages blockchain to coordinate remote trainers and validate workloads.
arXiv Detail & Related papers (2024-07-10T03:22:28Z) - Verbalized Machine Learning: Revisiting Machine Learning with Language Models [63.10391314749408]
We introduce the framework of verbalized machine learning (VML).
VML constrains the parameter space to be human-interpretable natural language.
We empirically verify the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability.
arXiv Detail & Related papers (2024-06-06T17:59:56Z) - Federated TrustChain: Blockchain-Enhanced LLM Training and Unlearning [22.33179965773829]
We propose a novel blockchain-based federated learning framework for Large Language Models (LLMs).
Our framework leverages blockchain technology to create a tamper-proof record of each model's contributions and introduces an innovative unlearning function that seamlessly integrates with the federated learning mechanism.
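The blurb above says the framework keeps a tamper-proof record of each model's contributions but does not describe its structure; one common way to realize such a record is a hash chain over per-round contribution digests. The sketch below assumes that design, and every identifier in it is hypothetical rather than taken from the Federated TrustChain paper.

```python
# Minimal hash-chain sketch of a tamper-evident contribution log, one
# plausible realization of the "tamper-proof record" the abstract mentions.
# Identifiers are hypothetical, not from the Federated TrustChain paper.
import hashlib, json

def chain_entry(prev_hash: str, client_id: str, update_digest: str) -> dict:
    body = {"prev": prev_hash, "client": client_id, "update": update_digest}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body

def verify_chain(log: list) -> bool:
    """Recompute every link; any retroactive edit breaks a hash."""
    prev = "genesis"
    for entry in log:
        expect = dict(entry); expect.pop("hash")
        digest = hashlib.sha256(
            json.dumps(expect, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log, prev = [], "genesis"
for cid, upd in [("clientA", "d1"), ("clientB", "d2")]:
    entry = chain_entry(prev, cid, upd)
    log.append(entry); prev = entry["hash"]
assert verify_chain(log)          # intact
log[0]["client"] = "mallory"      # tamper with history...
assert not verify_chain(log)      # ...and verification fails
```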
arXiv Detail & Related papers (2024-06-06T13:44:44Z) - Position: A Call to Action for a Human-Centered AutoML Paradigm [83.78883610871867]
Automated machine learning (AutoML) was formed around the fundamental objectives of automatically and efficiently configuring machine learning (ML) workflows.
We argue that a key to unlocking AutoML's full potential lies in addressing the currently underexplored aspect of user interaction with AutoML systems.
arXiv Detail & Related papers (2024-06-05T15:05:24Z) - Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - BasedAI: A decentralized P2P network for Zero Knowledge Large Language Models (ZK-LLMs) [0.0]
BasedAI is a distributed network of machines capable of integrating Fully Homomorphic Encryption (FHE) with any large language model (LLM) connected to its network.
The proposed framework embeds a default mechanism, called "Cerberus Squeezing", into the mining process.
arXiv Detail & Related papers (2024-03-01T22:10:15Z) - opp/ai: Optimistic Privacy-Preserving AI on Blockchain [0.0]
The Optimistic Privacy-Preserving AI (opp/ai) framework is introduced as a pioneering solution to the privacy and efficiency challenges of on-chain machine learning.
The framework integrates Zero-Knowledge Machine Learning (zkML) for privacy with Optimistic Machine Learning (opML) for efficiency.
This study presents the opp/ai framework, delves into the privacy features of zkML, and assesses the framework's performance and adaptability across different scenarios.
arXiv Detail & Related papers (2024-02-22T22:54:41Z) - Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning [67.0609518552321]
We propose to conduct Machine Vision Therapy, which aims to rectify the noisy predictions from vision models.
By fine-tuning with the denoised labels, the learning model performance can be boosted in an unsupervised manner.
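The two sentences above describe the loop but not the code; below is a generic pseudo-labeling sketch of that denoise-then-finetune cycle in PyTorch, with the multimodal LLM denoiser stubbed out as a hypothetical callable. It illustrates the pattern only, not the paper's actual pipeline.

```python
# Generic denoise-then-finetune loop in the spirit of the abstract; the
# MLLM denoiser is stubbed as a hypothetical callable, and the training
# step is ordinary PyTorch. Not the paper's actual pipeline.
import torch
import torch.nn.functional as F

def therapy_round(vision_model, denoiser, loader, optimizer):
    """One unsupervised round: relabel with the denoiser, then fine-tune."""
    vision_model.train()
    for images, _ in loader:                    # ground-truth labels unused
        with torch.no_grad():
            noisy = vision_model(images).argmax(dim=1)
        labels = denoiser(images, noisy)        # MLLM rectifies predictions
        loss = F.cross_entropy(vision_model(images), labels)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```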
arXiv Detail & Related papers (2023-12-05T07:29:14Z) - BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models [6.867309936992639]
Large language models (LLMs) serve people in the form of AI-generated content (AIGC).
It is difficult to guarantee the authenticity and reliability of AIGC learning data.
There are also hidden dangers of privacy disclosure in distributed AI training.
arXiv Detail & Related papers (2023-10-10T03:18:26Z) - Resource Management for Blockchain-enabled Federated Learning: A Deep Reinforcement Learning Approach [54.29213445674221]
Blockchain-enabled Federated Learning (BFL) enables mobile devices to collaboratively train neural network models required by a Machine Learning Model Owner (MLMO).
The issue of BFL is that the mobile devices have energy and CPU constraints that may reduce the system lifetime and training efficiency.
We propose to use Deep Reinforcement Learning (DRL) to derive the optimal decisions for the MLMO.
arXiv Detail & Related papers (2020-04-08T16:29:19Z)
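The abstract for the last entry names DRL as the decision-maker for the MLMO without fixing an algorithm; as a minimal illustration, the toy tabular Q-learning sketch below learns a training-load policy over a discretized (energy, CPU) device state. The MDP here is invented for illustration and is not the paper's formulation.

```python
# Toy tabular Q-learning sketch of resource-allocation decisions of the
# kind the abstract describes (how much training load a device commits).
# States, actions, and rewards are illustrative, not the paper's MDP.
import random
from collections import defaultdict

ACTIONS = [0, 1, 2]              # e.g., low/medium/high training load
Q = defaultdict(lambda: [0.0] * len(ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    """Hypothetical environment: reward trades training progress vs. energy drain."""
    energy, cpu = state
    reward = action * cpu - 0.5 * action * energy   # toy trade-off
    next_state = (max(energy - action, 0), cpu)
    return next_state, reward

state = (5, 2)                   # (remaining energy, CPU tier)
for _ in range(1000):
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(ACTIONS, key=lambda x: Q[state][x]))
    nxt, r = step(state, a)
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt if nxt[0] > 0 else (5, 2)   # reset when energy depleted
```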