MCIP: Protecting MCP Safety via Model Contextual Integrity Protocol
- URL: http://arxiv.org/abs/2505.14590v4
- Date: Wed, 04 Jun 2025 06:21:46 GMT
- Title: MCIP: Protecting MCP Safety via Model Contextual Integrity Protocol
- Authors: Huihao Jing, Haoran Li, Wenbin Hu, Qi Hu, Heli Xu, Tianshu Chu, Peizhao Hu, Yangqiu Song,
- Abstract summary: This paper proposes a novel framework to enhance Model Context Protocol safety.<n>Based on the MAESTRO framework, we first analyze the missing safety mechanisms in MCP.<n>Next, we develop a fine-grained taxonomy that captures a diverse range of unsafe behaviors observed in MCP scenarios.
- Score: 40.43415601554268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Model Context Protocol (MCP) introduces an easy-to-use ecosystem for users and developers, it also brings underexplored safety risks. Its decentralized architecture, which separates clients and servers, poses unique challenges for systematic safety analysis. This paper proposes a novel framework to enhance MCP safety. Guided by the MAESTRO framework, we first analyze the missing safety mechanisms in MCP, and based on this analysis, we propose the Model Contextual Integrity Protocol (MCIP), a refined version of MCP that addresses these gaps. Next, we develop a fine-grained taxonomy that captures a diverse range of unsafe behaviors observed in MCP scenarios. Building on this taxonomy, we develop benchmark and training data that support the evaluation and improvement of LLMs' capabilities in identifying safety risks within MCP interactions. Leveraging the proposed benchmark and training data, we conduct extensive experiments on state-of-the-art LLMs. The results highlight LLMs' vulnerabilities in MCP interactions and demonstrate that our approach substantially improves their safety performance.
Related papers
- We Should Identify and Mitigate Third-Party Safety Risks in MCP-Powered Agent Systems [48.345884334050965]
We advocate the research community in LLM safety to pay close attention to the new safety risks issues introduced by MCP.<n>We conduct a series of pilot experiments to demonstrate the safety risks in MCP-powered agent systems is a real threat and its defense is not trivial.
arXiv Detail & Related papers (2025-06-16T16:24:31Z) - Incorporating Failure of Machine Learning in Dynamic Probabilistic Safety Assurance [0.1398098625978622]
We introduce a safety assurance framework that integrates SafeML with Bayesian Networks (BNs) to model ML failures as part of a broader causal safety analysis.<n>We demonstrate the approach on a simulated automotive platooning system with traffic sign recognition.
arXiv Detail & Related papers (2025-06-07T17:16:05Z) - A Survey On Secure Machine Learning [0.0]
Recent advances in secure multiparty computation (MPC) have significantly improved its applicability in the realm of machine learning (ML)<n>This review explores key contributions that leverage MPC to enable multiple parties to engage in ML tasks without compromising the privacy of their data.<n> Innovations such as specialized software frameworks and domain-specific languages streamline the adoption of MPC in ML.
arXiv Detail & Related papers (2025-05-21T05:19:45Z) - MCP Guardian: A Security-First Layer for Safeguarding MCP-Based AI System [0.0]
We present MCP Guardian, a framework that strengthens MCP-based communication with authentication, rate-limiting, logging, tracing, and Web Application Firewall (WAF) scanning.<n>Our approach fosters secure, scalable data access for AI assistants, underscoring the importance of a defense-in-depth approach.
arXiv Detail & Related papers (2025-04-17T08:49:10Z) - Enterprise-Grade Security for the Model Context Protocol (MCP): Frameworks and Mitigation Strategies [0.0]
The Model Context Protocol (MCP) provides a standardized framework for artificial intelligence (AI) systems to interact with external data sources and tools in real-time.<n>This paper builds upon foundational research into MCP architecture and preliminary security assessments to deliver enterprise-grade mitigation frameworks.
arXiv Detail & Related papers (2025-04-11T15:25:58Z) - A Survey of Safety on Large Vision-Language Models: Attacks, Defenses and Evaluations [127.52707312573791]
This survey provides a comprehensive analysis of LVLM safety, covering key aspects such as attacks, defenses, and evaluation methods.<n>We introduce a unified framework that integrates these interrelated components, offering a holistic perspective on the vulnerabilities of LVLMs.<n>We conduct a set of safety evaluations on the latest LVLM, Deepseek Janus-Pro, and provide a theoretical analysis of the results.
arXiv Detail & Related papers (2025-02-14T08:42:43Z) - SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z) - SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose toolns, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z) - Developing Safe and Responsible Large Language Model : Can We Balance Bias Reduction and Language Understanding in Large Language Models? [2.089112028396727]
This study explores whether Large Language Models can produce safe, unbiased outputs without sacrificing knowledge or comprehension.<n>We introduce the Safe and Responsible Large Language Model (textbfSR$_textLLM$)<n>Experiments on our specialized dataset and out-of-distribution test sets reveal that textbfSR$_textLLM$ effectively reduces biases while preserving knowledge integrity.
arXiv Detail & Related papers (2024-04-01T18:10:05Z) - The Art of Defending: A Systematic Evaluation and Analysis of LLM
Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.