Differential Privacy for Regulatory Compliance in Cyberattack Detection on Critical Infrastructure Systems
- URL: http://arxiv.org/abs/2508.08190v1
- Date: Mon, 11 Aug 2025 17:10:49 GMT
- Title: Differential Privacy for Regulatory Compliance in Cyberattack Detection on Critical Infrastructure Systems
- Authors: Paritosh Ramanan, H. M. Mohaimanul Islam, Abhiram Reddy Alugula,
- Abstract summary: This paper presents a cyberattack detection framework geared towards enhancing regulatory confidence while alleviating privacy concerns of CIN stakeholders. We show that our method induces a misclassification error rate comparable to the non-DP cases while delivering robust privacy guarantees.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industrial control systems are a fundamental component of critical infrastructure networks (CIN) such as gas, water and power. With the growing risk of cyberattacks, regulatory compliance requirements are also increasing for large scale critical infrastructure systems comprising multiple utility stakeholders. The primary goal of regulators is to ensure overall system stability with recourse to trustworthy stakeholder attack detection. However, adhering to compliance requirements requires stakeholders to also disclose sensor and control data to regulators raising privacy concerns. In this paper, we present a cyberattack detection framework that utilizes differentially private (DP) hypothesis tests geared towards enhancing regulatory confidence while alleviating privacy concerns of CIN stakeholders. The hallmark of our approach is a two phase privacy scheme that protects the privacy of covariance, as well as the associated sensor driven test statistics computed as a means to generate alarms. Theoretically, we show that our method induces a misclassification error rate comparable to the non-DP cases while delivering robust privacy guarantees. With the help of real-world datasets, we show the reliability of our DP-detection outcomes for a wide variety of attack scenarios for interdependent stakeholders.
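The abstract describes a two-phase privacy scheme: Gaussian noise protects first the sensor covariance shared with the regulator, then the detection statistic computed from it. A minimal sketch of that idea, using the standard Gaussian mechanism and a chi-squared alarm threshold (the clipping bounds, sensitivities, and privacy parameters here are illustrative assumptions, not the paper's calibrations):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    # Standard Gaussian mechanism: sigma calibrated so the release is (epsilon, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Phase 1: perturb the (clipped) sensor covariance before disclosure.
X = rng.normal(size=(500, 4))      # hypothetical sensor readings, 4 channels
X = np.clip(X, -3.0, 3.0)          # bound each record's influence on the covariance
cov = X.T @ X / len(X)
cov_dp = gaussian_mechanism(cov, sensitivity=2 * 3.0**2 / len(X),
                            epsilon=1.0, delta=1e-5, rng=rng)

# Phase 2: perturb the covariance-driven test statistic before raising an alarm.
residual = X[-1]
stat = residual @ np.linalg.inv(cov_dp + 1e-3 * np.eye(4)) @ residual
stat_dp = gaussian_mechanism(stat, sensitivity=1.0, epsilon=1.0, delta=1e-5, rng=rng)

alarm = stat_dp > 9.49   # chi-squared 95th percentile with 4 degrees of freedom
print(alarm)
```

Because both the covariance and the statistic are released through a DP mechanism, the regulator can audit alarm behavior without ever seeing raw sensor or control data.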
Related papers
- PrivFly: A Privacy-Preserving Self-Supervised Framework for Rare Attack Detection in IoFT [2.217288163160845]
Internet of Flying Things (IoFT) plays a vital role in modern applications such as aerial surveillance and smart mobility.
arXiv Detail & Related papers (2026-01-19T12:30:20Z) - Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose $(\varepsilon_i,\delta_i,\overline{\varepsilon})$-iDP, a privacy contract that uses $f$-divergences to provide users with a hard upper bound on their excess vulnerability.
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - zkSTAR: A zero knowledge system for time series attack detection enforcing regulatory compliance in critical infrastructure networks [0.9558392439655014]
Industrial control systems (ICS) form the operational backbone of critical infrastructure networks. Regulators are imposing stricter compliance requirements to ensure system-wide security and reliability. A central challenge is enabling regulators to verify the effectiveness of detection mechanisms without requiring utilities to disclose sensitive operational data. We introduce zkSTAR, a cyberattack detection framework that leverages zk-SNARKs to reconcile these requirements and enable provable detection guarantees.
arXiv Detail & Related papers (2025-10-27T06:45:11Z) - On the MIA Vulnerability Gap Between Private GANs and Diffusion Models [51.53790101362898]
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. We present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models.
arXiv Detail & Related papers (2025-09-03T14:18:22Z) - Blockchain Powered Edge Intelligence for U-Healthcare in Privacy Critical and Time Sensitive Environment [0.559239450391449]
We propose an autonomous computing model for privacy-critical and time-sensitive health applications. The system supports continuous monitoring, real-time alert notifications, disease detection, and robust data processing and aggregation. A secure access scheme is defined to manage both off-chain and on-chain data sharing and storage.
arXiv Detail & Related papers (2025-05-31T06:58:52Z) - Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG). FedE4RAG facilitates collaborative training of client-side RAG retrieval models. We apply homomorphic encryption within federated learning to safeguard model parameters.
arXiv Detail & Related papers (2025-04-27T04:26:02Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM). CEPAM achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
We propose a game-theoretic framework modeling privacy protection as a Bayesian game between defender and attacker. To address strategic complexity, we represent the defender's mixed strategy as a neural network generator mapping private datasets to public representations. Our approach significantly outperforms state-of-the-art methods by generating stronger attacks and achieving better privacy-utility tradeoffs.
arXiv Detail & Related papers (2024-10-09T20:29:04Z) - Decentralized Federated Anomaly Detection in Smart Grids: A P2P Gossip Approach [0.44328715570014865]
This paper introduces a novel decentralized federated anomaly detection scheme based on two main gossip protocols, namely Random Walk and Epidemic. Our approach yields a notable 35% improvement in training time compared to conventional Federated Learning.
arXiv Detail & Related papers (2024-07-20T10:45:06Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
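The estimate-verify-release loop described above can be sketched in a few lines. This is a simplified illustration under assumed parameters: the estimator here is a closed-form Gaussian-mechanism bound and the verifier is a plain comparison, whereas the actual EVR paper uses a randomized, formally sound verification step.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_epsilon(sigma, sensitivity, delta=1e-5):
    # Estimate step: closed-form epsilon for the Gaussian mechanism at noise level sigma.
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / sigma

def verify(eps_estimate, eps_budget):
    # Verify step: check the estimate against the stated privacy budget.
    return eps_estimate <= eps_budget

def release(query_value, sigma, rng):
    # Release step: output the noisy query only if verification passed.
    return query_value + rng.normal(0.0, sigma)

sigma, sensitivity, eps_budget = 4.0, 1.0, 2.0
eps_hat = estimate_epsilon(sigma, sensitivity)
out = release(42.0, sigma, rng) if verify(eps_hat, eps_budget) else None
print(eps_hat, out)
```

If verification fails, nothing is released, which is what makes the estimated privacy parameter safe to use as an accounting bound.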
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - FedDiSC: A Computation-efficient Federated Learning Framework for Power Systems Disturbance and Cyber Attack Discrimination [1.0621485365427565]
This paper proposes a novel Federated Learning-based privacy-preserving and communication-efficient attack detection framework, known as FedDiSC.
We put forward a representation learning-based Deep Auto-Encoder network to accurately detect power system and cybersecurity anomalies.
To adapt our proposed framework to the timeliness of real-world cyberattack detection in SGs, we leverage the use of a gradient privacy-preserving quantization scheme known as DP-SIGNSGD.
arXiv Detail & Related papers (2023-04-07T13:43:57Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
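The two ingredients named above, per-example gradient clipping and noise addition, are the core of a DP-SGD update. A minimal sketch of one such step (the clip norm, noise multiplier, and gradients are hypothetical, and real implementations batch this per-example work efficiently):

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    # Clip each example's gradient so its L2 norm is at most clip_norm.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    # Sum the clipped gradients, add Gaussian noise scaled to the clip bound,
    # then average and take a gradient step.
    noisy_sum = (np.sum(clipped, axis=0)
                 + rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape))
    return params - lr * noisy_sum / len(per_example_grads)

params = np.zeros(3)
grads = [rng.normal(size=3) for _ in range(8)]   # hypothetical per-example gradients
params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1,
                     params=params, rng=rng)
print(params)
```

The robustness question the paper raises is precisely about these two operations: clipping biases the gradient direction and the added noise perturbs it, and both can interact with adversarial robustness.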
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.