Model Context Contracts - MCP-Enabled Framework to Integrate LLMs With Blockchain Smart Contracts
- URL: http://arxiv.org/abs/2510.19856v1
- Date: Tue, 21 Oct 2025 19:28:20 GMT
- Title: Model Context Contracts - MCP-Enabled Framework to Integrate LLMs With Blockchain Smart Contracts
- Authors: Eranga Bandara, Sachin Shetty, Ravi Mukkamala, Ross Gore, Peter Foytik, Safdar H. Bouk, Abdul Rahman, Xueping Liang, Ng Wee Keong, Kasun De Zoysa, Aruna Withanage, Nilaan Loganathan,
- Abstract summary: "MCC: Model Context Contracts" is a novel framework that enables AI agents to interact directly with blockchain smart contracts. Within this proposed architecture, blockchain smart contracts can function as intelligent agents capable of recognizing user input in natural language and executing the corresponding transactions. This research represents the first approach to using the concept of Model Context Protocol to integrate LLMs with blockchain.
- Score: 3.8304325298697464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, blockchain has experienced widespread adoption across various industries, becoming integral to numerous enterprise applications. Concurrently, the rise of generative AI and LLMs has transformed human-computer interactions, offering advanced capabilities in understanding and generating human-like text. The introduction of the Model Context Protocol (MCP) has further enhanced AI integration by standardizing communication between AI systems and external data sources. Despite these advancements, there is still no standardized method for seamlessly integrating LLM applications and blockchain. To address this concern, we propose "MCC: Model Context Contracts," a novel framework that enables LLMs to interact directly with blockchain smart contracts through an MCP-like protocol. This integration allows AI agents to invoke blockchain smart contracts, facilitating more dynamic and context-aware interactions between users and blockchain networks. Essentially, it empowers users to interact with blockchain systems and perform transactions using queries in natural language. Within this proposed architecture, blockchain smart contracts can function as intelligent agents capable of recognizing user input in natural language and executing the corresponding transactions. To ensure that the LLM accurately interprets natural language inputs and maps them to the appropriate MCP functions, the LLM was fine-tuned on a custom dataset comprising user inputs paired with their corresponding MCP server functions. This fine-tuning process significantly improved the platform's performance and accuracy. To validate the effectiveness of MCC, we have developed an end-to-end prototype implemented on the Rahasak blockchain with the fine-tuned Llama-4 LLM. To the best of our knowledge, this research represents the first approach to using the concept of Model Context Protocol to integrate LLMs with blockchain.
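The mechanism described in the abstract, an MCP-style server exposing smart-contract functions as tools, with a fine-tuned LLM translating natural-language requests into structured tool calls, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual API: the class names, the `transfer_asset` function, and the hard-coded LLM output are all hypothetical assumptions standing in for the fine-tuned Llama-4 and the Rahasak chaincode layer.

```python
# Hypothetical sketch of the MCC idea: an MCP-like server registers
# smart-contract functions as tools, and a fine-tuned LLM maps a user's
# natural-language request to one structured tool call. All names here
# are illustrative, not the paper's API.
import json
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ContractTool:
    name: str
    description: str
    handler: Callable[..., dict]


class MCPContractServer:
    """Registers smart-contract functions and dispatches tool calls to them."""

    def __init__(self) -> None:
        self.tools: Dict[str, ContractTool] = {}

    def register(self, name: str, description: str):
        def decorator(fn):
            self.tools[name] = ContractTool(name, description, fn)
            return fn
        return decorator

    def invoke(self, call: dict) -> dict:
        # `call` mimics the structured output the fine-tuned LLM would emit,
        # e.g. {"function": "transfer_asset", "arguments": {...}}.
        tool = self.tools[call["function"]]
        return tool.handler(**call["arguments"])


server = MCPContractServer()


@server.register("transfer_asset", "Transfer an asset between two on-chain accounts")
def transfer_asset(sender: str, receiver: str, amount: int) -> dict:
    # In a real deployment this would submit a transaction to the blockchain
    # (e.g. invoke a Rahasak smart contract); here we just echo a receipt.
    return {"status": "submitted",
            "tx": {"from": sender, "to": receiver, "amount": amount}}


# The fine-tuned LLM's role is to turn a query like "send 10 tokens from
# alice to bob" into this structured call; we hard-code the mapping here
# to keep the sketch self-contained.
llm_output = json.dumps({"function": "transfer_asset",
                         "arguments": {"sender": "alice",
                                       "receiver": "bob",
                                       "amount": 10}})
receipt = server.invoke(json.loads(llm_output))
print(receipt["status"])  # -> submitted
```

The custom fine-tuning dataset mentioned in the abstract would, under this sketch, consist of pairs of natural-language queries and the corresponding structured `{"function": ..., "arguments": ...}` calls.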
Related papers
- Token Communication in the Era of Large Models: An Information Bottleneck-Based Approach [55.861432910722186]
UniToCom is a unified token communication paradigm that treats tokens as the fundamental units for both processing and wireless transmission. We propose a generative information bottleneck (GenIB) principle, which facilitates the learning of tokens that preserve essential information. We employ a causal Transformer-based multimodal large language model (MLLM) at the receiver to unify the processing of both discrete and continuous tokens.
arXiv Detail & Related papers (2025-07-02T14:03:01Z)
- Federated Learning-Enabled Hybrid Language Models for Communication-Efficient Token Transmission [87.68447072141402]
Hybrid Language Models (HLMs) combine the low-latency efficiency of Small Language Models (SLMs) on edge devices with the high accuracy of Large Language Models (LLMs) on centralized servers. We propose FedHLM, a communication-efficient HLM framework that integrates uncertainty-aware inference with Federated Learning (FL).
arXiv Detail & Related papers (2025-06-30T02:56:11Z)
- Weaving the Cosmos: WASM-Powered Interchain Communication for AI Enabled Smart Contracts [4.780973517287942]
This paper introduces an innovative framework that integrates blockchain technology, particularly the Cosmos SDK, to facilitate on-chain AI inferences. This system, built on WebAssembly (WASM), enables interchain communication and deployment of WASM modules executing AI inferences across multiple blockchain nodes.
arXiv Detail & Related papers (2025-02-24T19:44:28Z)
- Connecting Large Language Models with Blockchain: Advancing the Evolution of Smart Contracts from Automation to Intelligence [2.2727580420156857]
This paper proposes and implements a universal framework for integrating Large Language Models with blockchain data, sysname. By combining semantic relatedness with truth discovery methods, we introduce an innovative data aggregation approach, funcname. Experimental results demonstrate that, even with 40% malicious nodes, the proposed solution improves data accuracy by an average of 17.74% compared to the optimal baseline.
arXiv Detail & Related papers (2024-12-03T08:35:51Z)
- Semantic Interoperability on Blockchain by Generating Smart Contracts Based on Knowledge Graphs [0.820828081284034]
In a distributed setting, transmitted data will be structured using standards for semantic interoperability.
We propose the encoding of smart contract logic using a high-level semantic Knowledge Graph.
We show that it is feasible to automatically generate smart contract code based on a semantic KG.
arXiv Detail & Related papers (2024-09-11T13:46:24Z)
- Large Language Model as a Catalyst: A Paradigm Shift in Base Station Siting Optimization [62.16747639440893]
Large language models (LLMs) and their associated technologies continue to advance, particularly in the realms of prompt engineering and agent engineering. Our proposed framework incorporates retrieval-augmented generation (RAG) to enhance the system's ability to acquire domain-specific knowledge and generate solutions.
arXiv Detail & Related papers (2024-08-07T08:43:32Z)
- opML: Optimistic Machine Learning on Blockchain [0.0]
We introduce opML (Optimistic Machine Learning on chain), an innovative approach that empowers blockchain systems to conduct AI model inference.
At the core of opML lies an interactive fraud proof protocol, reminiscent of optimistic rollup systems.
opML offers cost-efficient and highly efficient ML services, with minimal participation requirements.
arXiv Detail & Related papers (2024-01-31T02:43:38Z)
- Generative AI-enabled Blockchain Networks: Fundamentals, Applications, and Case Study [73.87110604150315]
Generative Artificial Intelligence (GAI) has emerged as a promising solution to address the challenges of blockchain technology.
In this paper, we first introduce GAI techniques, outline their applications, and discuss existing solutions for integrating GAI into blockchains.
arXiv Detail & Related papers (2024-01-28T10:46:17Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z) - BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large
Language Models [6.867309936992639]
Large language models (LLMs) serve people in the form of AI-generated content (AIGC).
It is difficult to guarantee the authenticity and reliability of AIGC learning data.
There are also hidden dangers of privacy disclosure in distributed AI training.
arXiv Detail & Related papers (2023-10-10T03:18:26Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.