Ransomware Negotiation: Dynamics and Privacy-Preserving Mechanism Design
- URL: http://arxiv.org/abs/2508.15844v1
- Date: Tue, 19 Aug 2025 20:29:12 GMT
- Title: Ransomware Negotiation: Dynamics and Privacy-Preserving Mechanism Design
- Authors: Haohui Zhang, Sirui Shen, Xinyu Hu, Chenglu Jin
- Abstract summary: This paper presents a formal analysis of attacker-victim interactions in modern ransomware incidents using a finite-horizon alternating-offers bargaining game model. Our analysis demonstrates how bargaining alters the optimal strategies of both parties. To the best of our knowledge, this is the first automated, privacy-preserving negotiation mechanism grounded in a formal analysis of ransomware negotiation dynamics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ransomware attacks have become a pervasive and costly form of cybercrime, causing tens of millions of dollars in losses as organizations increasingly pay ransoms to mitigate operational disruptions and financial risks. While prior research has largely focused on proactive defenses, the post-infection negotiation dynamics between attackers and victims remain underexplored. This paper presents a formal analysis of attacker-victim interactions in modern ransomware incidents using a finite-horizon alternating-offers bargaining game model. Our analysis demonstrates how bargaining alters the optimal strategies of both parties. In practice, incomplete information (attackers lacking knowledge of victims' data valuations, and victims lacking knowledge of attackers' reservation ransoms) can prolong negotiations and increase victims' business interruption costs. To address this, we design a Bayesian incentive-compatible mechanism that facilitates rapid agreement on a fair ransom without requiring either party to disclose private valuations. We further implement this mechanism using secure two-party computation based on garbled circuits, thereby eliminating the need for trusted intermediaries and preserving the privacy of both parties throughout the negotiation. To the best of our knowledge, this is the first automated, privacy-preserving negotiation mechanism grounded in a formal analysis of ransomware negotiation dynamics.
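The finite-horizon alternating-offers model the abstract describes can be illustrated with a standard backward-induction computation over a unit surplus. This is a minimal textbook sketch, not the paper's actual model (which additionally accounts for business-interruption costs and incomplete information); the function name and the assumption that the attacker proposes first are illustrative.

```python
def alternating_offers_spe(T, d_a, d_v):
    """Subgame-perfect equilibrium shares in a T-round alternating-offers
    game over a unit surplus (a stand-in for the negotiable ransom range).
    d_a, d_v: per-round discount factors for attacker and victim.
    Assumption: the attacker proposes in round 1 and roles alternate.
    Returns the attacker's (round-1 proposer's) equilibrium share."""
    share = 1.0  # in the final round T, the proposer keeps everything
    # Walk backwards: each earlier proposer must offer the responder
    # exactly the discounted value of what that responder would earn
    # as the next round's proposer, and keeps the rest.
    for t in range(T - 1, 0, -1):
        # odd round t: attacker proposes, victim responds (and vice versa)
        responder_discount = d_v if t % 2 == 1 else d_a
        share = 1.0 - responder_discount * share
    return share
```

With one round the proposer captures the whole surplus; as the horizon grows, the shares approach the familiar Rubinstein split, which is how a longer bargaining horizon reshapes both parties' optimal offers.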
Related papers
- zkRansomware: Proof-of-Data Recoverability and Multi-round Game Theoretic Modeling of Ransomware Decisions [10.732091797893903]
We introduce and analyze zkRansomware. This new ransomware model integrates zero-knowledge proofs to enable verifiable data recovery. We show that zkRansomware is technically feasible using existing cryptographic and blockchain tools.
arXiv Detail & Related papers (2026-01-10T20:00:31Z) - Real AI Agents with Fake Memories: Fatal Context Manipulation Attacks on Web3 Agents [36.49717045080722]
This paper investigates the vulnerabilities of AI agents within blockchain-based financial ecosystems when exposed to adversarial threats in real-world scenarios. We introduce the concept of context manipulation, a comprehensive attack vector that exploits unprotected context surfaces. Using ElizaOS, we showcase that malicious injections into prompts or historical records can trigger unauthorized asset transfers and protocol violations.
arXiv Detail & Related papers (2025-03-20T15:44:31Z) - Assessing and Prioritizing Ransomware Risk Based on Historical Victim Data [0.0]
Ransomware poses a formidable cybersecurity threat characterized by profit-driven motives, a complex underlying economy supporting criminal syndicates, and the overt nature of its attacks. We present an approach to identifying which ransomware adversaries are most likely to target specific entities.
arXiv Detail & Related papers (2025-02-06T15:57:56Z) - Ransomware IR Model: Proactive Threat Intelligence-Based Incident Response Strategy [0.0]
No clear, proven incident response strategy has been published that addresses differing business priorities and objectives under ransomware attack in detail. In this paper, we recount one of our representative front-line ransomware incident response experiences, for Company X.
arXiv Detail & Related papers (2025-02-03T10:25:26Z) - Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks. We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance. Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z) - Taming the Ransomware Threats: Leveraging Prospect Theory for Rational Payment Decisions [0.0]
This paper adopts a novel approach, leveraging Prospect Theory, to elucidate the tactics employed by cyber attackers to entice organizations into paying the ransom.
It introduces an algorithm based on Prospect Theory and an Attack Recovery Plan, enabling organizations to make informed decisions on whether to consent to the ransom demands or resist.
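As an illustration of why Prospect Theory can rationalize ransom payment: the standard Tversky-Kahneman value function weights losses more heavily than gains, so a certain ransom payment can dominate a gamble on recovery. The sketch below uses their published 1992 parameter estimates and is not the paper's algorithm; the variable names and dollar figures are hypothetical.

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman prospect-theory value function, with their
    1992 median parameter estimates (illustrative, not from this paper).
    Gains are evaluated concavely, losses convexly and amplified by the
    loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Hypothetical framing: a certain loss (pay the ransom) versus a 50%
# chance of a larger loss (hold out and risk failed recovery).
# Probability weighting is ignored here for simplicity.
pay = pt_value(-100_000)             # certain ransom payment
hold_out = 0.5 * pt_value(-400_000)  # gamble on recovery
```

Under these parameters the certain payment carries the higher (less negative) subjective value, which is the kind of framing an attacker can exploit when pitching a ransom as "avoiding a larger loss".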
arXiv Detail & Related papers (2024-09-15T14:20:03Z) - From Mean to Extreme: Formal Differential Privacy Bounds on the Success of Real-World Data Reconstruction Attacks [54.25638567385662]
Differential Privacy in machine learning is often interpreted as guarantees against membership inference. Translating DP budgets into quantitative protection against the more damaging threat of data reconstruction remains a challenging open problem. This paper bridges this critical gap by deriving the first formal privacy bounds tailored to the mechanics of demonstrated "from-scratch" attacks.
arXiv Detail & Related papers (2024-02-20T09:52:30Z) - Locally Differentially Private Embedding Models in Distributed Fraud Prevention Systems [2.001149416674759]
We present a collaborative deep learning framework for fraud prevention, designed from a privacy standpoint, and awarded at the recent PETs Prize Challenges.
We leverage latent embedded representations of varied-length transaction sequences, along with local differential privacy, in order to construct a data release mechanism which can securely inform externally hosted fraud and anomaly detection models.
We assess our contribution on two distributed data sets donated by large payment networks, and demonstrate robustness to popular inference-time attacks, along with utility-privacy trade-offs analogous to published work in alternative application domains.
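A local differential privacy release of the kind this entry describes can be sketched with per-coordinate clipping and Laplace noise. This is a generic construction under an assumed clipping bound, not the awarded framework's actual mechanism, which the abstract does not specify.

```python
import random

def ldp_release(vec, epsilon, clip=1.0):
    """Release an embedding vector under epsilon-local DP: clip each
    coordinate to [-clip, clip], then add Laplace noise calibrated to
    the vector's L1 sensitivity, 2 * clip * len(vec). Generic sketch
    only; clip=1.0 is an assumed bound."""
    scale = 2.0 * clip * len(vec) / epsilon
    out = []
    for x in vec:
        x = max(-clip, min(clip, x))
        # the difference of two Exp(1) variates is a standard Laplace draw
        noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
        out.append(x + noise)
    return out
```

Because the noise is added locally, before any transmission, an externally hosted fraud model never sees the raw embedding, which matches the abstract's goal of securely informing external detectors.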
arXiv Detail & Related papers (2024-01-03T14:04:18Z) - Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction [100.9772316028191]
In this paper, we experiment with a variety of adversarial attack configurations to fool three stock prediction victim models.
Our results show that the proposed attack method can achieve consistent success rates and cause significant monetary loss in trading simulation.
arXiv Detail & Related papers (2022-05-01T05:12:22Z) - Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm yielding a provably better competitive ratio for $k=2$ over the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z) - Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.