AgentCrypt: Advancing Privacy and (Secure) Computation in AI Agent Collaboration
- URL: http://arxiv.org/abs/2512.08104v1
- Date: Mon, 08 Dec 2025 23:20:20 GMT
- Title: AgentCrypt: Advancing Privacy and (Secure) Computation in AI Agent Collaboration
- Authors: Harish Karthikeyan, Yue Guo, Leo de Castro, Antigoni Polychroniadou, Leo Ardon, Udari Madhushani Sehwag, Sumitra Ganesh, Manuela Veloso
- Abstract summary: AgentCrypt is a framework for fine-grained, encrypted agent communication. It ensures privacy across diverse interactions and enables computation on otherwise inaccessible data. We implemented and tested it with LangGraph and Google ADK, demonstrating versatility across platforms.
- Score: 23.22266919684932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI agents increasingly operate in real-world, multi-agent environments, ensuring reliable and context-aware privacy in agent communication is critical, especially to comply with evolving regulatory requirements. Traditional access controls are insufficient, as privacy risks often arise after access is granted; agents may use information in ways that compromise privacy, such as messaging humans, sharing context with other agents, making tool calls, persisting data, or generating derived private information. Existing approaches often treat privacy as a binary constraint (data is either shareable or not), overlooking nuanced, role-specific, and computation-dependent privacy needs essential for regulatory compliance. Agents, including those based on large language models, are inherently probabilistic and heuristic. There is no formal guarantee of how an agent will behave for any query, making them ill-suited for operations critical to security. To address this, we introduce AgentCrypt, a four-tiered framework for fine-grained, encrypted agent communication that adds a protection layer atop any AI agent platform. AgentCrypt spans unrestricted data exchange (Level 1) to fully encrypted computation using techniques such as homomorphic encryption (Level 4). Crucially, it guarantees that the privacy of tagged data is always maintained, prioritizing privacy above correctness. AgentCrypt ensures privacy across diverse interactions and enables computation on otherwise inaccessible data, overcoming barriers such as data silos. We implemented and tested it with LangGraph and Google ADK, demonstrating versatility across platforms. We also introduce a benchmark dataset simulating privacy-critical tasks at all privacy levels, enabling systematic evaluation and fostering the development of regulatable machine learning systems for secure agent communication and computation.
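The abstract names only Level 1 (unrestricted exchange) and Level 4 (fully encrypted computation) explicitly. The sketch below is a minimal, hypothetical per-field gate illustrating the general idea of tiered, tagged-data protection; the intermediate-level semantics, the function names, and the toy additive masking (a stand-in for real encryption) are all assumptions of this illustration, not the paper's API:

```python
import secrets

MOD = 2**32  # toy modulus for the illustrative masking scheme

def toy_enc(value: int, key: int) -> int:
    """Toy additive masking; NOT real encryption, stands in for HE here."""
    return (value + key) % MOD

def toy_dec(cipher: int, key: int) -> int:
    return (cipher - key) % MOD

def gate_message(fields: dict, tagged: set, level: int):
    """Gate a message before an agent-to-agent send.

    Level 1 passes everything through; higher levels never emit a tagged
    field in the clear, even at the cost of downstream correctness
    ("privacy above correctness").
    """
    out, keys = {}, {}
    for name, value in fields.items():
        if name not in tagged or level == 1:
            out[name] = value              # unrestricted exchange
        elif level >= 3:
            k = secrets.randbelow(MOD)     # per-field one-time mask
            keys[name] = k
            out[name] = toy_enc(value, k)  # tagged data leaves encrypted
        else:
            out[name] = None               # withhold: privacy over correctness
    return out, keys
```

For example, at level 3 a tagged `balance` field leaves the agent masked and only the key holder can recover it, while at level 2 the field is simply withheld rather than ever sent in the clear.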
Related papers
- Contextualized Privacy Defense for LLM Agents [84.30907378390512]
LLM agents increasingly act on users' personal information, yet existing privacy defenses remain limited in both design and adaptability. We propose Contextualized Defense Instructing (CDI), a new privacy defense paradigm. We show that CDI consistently achieves a better balance between privacy preservation (94.2%) and helpfulness (80.6%) than baselines.
arXiv Detail & Related papers (2026-03-03T13:35:33Z)
- OMNI-LEAK: Orchestrator Multi-Agent Network Induced Data Leakage [59.3826294523924]
We investigate the security vulnerabilities of a popular multi-agent pattern known as the orchestrator setup. We report the susceptibility of frontier models to different categories of attacks, finding that both reasoning and non-reasoning models are vulnerable.
arXiv Detail & Related papers (2026-02-13T21:32:32Z)
- Trusted AI Agents in the Cloud [4.2366483628004366]
Omega is a system that enables trusted AI agents by enforcing end-to-end isolation. It provides efficient multi-agent orchestration with cross-principal trust establishment. It achieves high performance while enabling high-density, policy-compliant multi-agent deployments at cloud scale.
arXiv Detail & Related papers (2025-12-05T18:48:53Z)
- AudAgent: Automated Auditing of Privacy Policy Compliance in AI Agents [3.802907024025868]
AudAgent is a visual framework that monitors AI agents' data practices in real time. It effectively identifies potential privacy policy violations as they occur.
arXiv Detail & Related papers (2025-11-03T17:32:08Z)
- MAGPIE: A dataset for Multi-AGent contextual PrIvacy Evaluation [54.410825977390274]
Existing benchmarks for evaluating contextual privacy in LLM agents primarily assess single-turn, low-complexity tasks. We first present MAGPIE, a benchmark comprising 158 real-life, high-stakes scenarios across 15 domains. We then evaluate current state-of-the-art LLMs on their understanding of contextually private data and their ability to collaborate without violating user privacy.
arXiv Detail & Related papers (2025-06-25T18:04:25Z)
- Can One Safety Loop Guard Them All? Agentic Guard Rails for Federated Computing [0.0]
We propose Guardian-FC, a novel framework for privacy-preserving federated computing. It unifies safety enforcement across diverse privacy-preserving mechanisms. We present qualitative scenarios illustrating backend-agnostic safety and a formal model foundation for verification.
arXiv Detail & Related papers (2025-06-24T20:39:49Z)
- Encrypted Federated Search Using Homomorphic Encryption [0.0]
This paper introduces a privacy-preserving federated search system that allows law enforcement agencies to conduct queries on encrypted criminal databases. The key innovation is the ability to execute encrypted queries across distributed databases without decrypting the data.
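The core idea of executing queries on ciphertexts without decryption can be illustrated with a toy additively masked scheme. This is a stand-in for a real homomorphic scheme, not a secure construction; `enc`, `dec`, and the modulus are inventions of this sketch:

```python
import secrets

MOD = 2**61 - 1  # toy modulus

def enc(m: int, k: int) -> int:
    return (m + k) % MOD  # additive mask standing in for HE

def dec(c: int, k: int) -> int:
    return (c - k) % MOD

# Client side: encrypt records; the keys never leave the client.
records = [120, 45, 300]
keys = [secrets.randbelow(MOD) for _ in records]
ciphers = [enc(m, k) for m, k in zip(records, keys)]

# Server side: aggregates ciphertexts only, never seeing a plaintext.
cipher_sum = sum(ciphers) % MOD

# Client side: subtracting the aggregate mask recovers the total.
total = dec(cipher_sum, sum(keys) % MOD)
assert total == sum(records)  # 465
```

The additive homomorphism `enc(a) + enc(b) → a + b` is exactly the property that real schemes such as Paillier or BFV provide with cryptographic security, which the toy masking lacks.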
arXiv Detail & Related papers (2025-05-05T07:03:30Z)
- AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents [66.29263282311258]
We introduce AgentDAM, a new benchmark that measures whether AI web-navigation agents follow the privacy principle of "data minimization". Our benchmark simulates realistic web interaction scenarios end-to-end and is adaptable to all existing web navigation agents.
arXiv Detail & Related papers (2025-03-12T19:30:31Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
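A minimal instance of the mechanism behind such "almost indistinguishable" guarantees is the Laplace mechanism applied to a count query; the helper names below are illustrative, not from the paper:

```python
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) sampled as the difference of two Exp(1) draws.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def dp_count(values, epsilon: float) -> float:
    """epsilon-DP count: a count query has sensitivity 1, so scale = 1/epsilon."""
    return len(values) + laplace_noise(1.0 / epsilon)
```

Note the distinction the paper draws: record-level DP calibrates noise to one record's influence, whereas user-level DP must account for all records contributed by a single user, which raises the sensitivity and hence the noise scale.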
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Air Gap: Protecting Privacy-Conscious Conversational Agents [44.04662124191715]
We introduce a novel threat model where adversarial third-party apps manipulate the context of interaction to trick LLM-based agents into revealing private information not relevant to the task at hand.
We introduce AirGapAgent, a privacy-conscious agent designed to prevent unintended data leakage by restricting the agent's access to only the data necessary for a specific task.
arXiv Detail & Related papers (2024-05-08T16:12:45Z)
- Mitigating Leakage from Data Dependent Communications in Decentralized Computing using Differential Privacy [1.911678487931003]
We propose a general execution model to control the data-dependence of communications in user-side decentralized computations.
Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling.
arXiv Detail & Related papers (2021-12-23T08:30:17Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.