Onto-DP: Constructing Neighborhoods for Differential Privacy on Ontological Databases
- URL: http://arxiv.org/abs/2602.15614v1
- Date: Tue, 17 Feb 2026 14:42:17 GMT
- Title: Onto-DP: Constructing Neighborhoods for Differential Privacy on Ontological Databases
- Authors: Yasmine Hayder, Adrien Boiret, Cédric Eichler, Benjamin Nguyen
- Abstract summary: We show how attackers can discover sensitive information embedded within databases by exploiting inference rules. We introduce a novel extension of differential privacy paradigms, built on top of any classical DP model, by enriching it with semantic awareness.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate how attackers can discover sensitive information embedded within databases by exploiting inference rules. We demonstrate that existing state-of-the-art differential privacy (DP) models, when applied naively, are inadequate to safeguard against such attacks. We introduce ontology-aware differential privacy (Onto-DP), a novel extension of differential privacy paradigms built on top of any classical DP model by enriching it with semantic awareness. We show that this extension is a sufficient condition to adequately protect against attackers aware of inference rules.
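The inference-rule attack the abstract describes can be illustrated with a toy sketch. The rule, the facts, and the `semantic_closure` helper below are hypothetical illustrations, not the paper's formal construction: the point is that a record which never states a sensitive fact directly can still entail it once the ontology's rules are applied, so neighborhoods built on raw records alone miss the leak.

```python
# Hypothetical ontology rule: taking chemotherapy implies having cancer.
RULES = {("takes", "chemotherapy"): ("has", "cancer")}

def semantic_closure(facts, rules):
    """Saturate a set of facts under the inference rules (forward chaining)."""
    closed = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules.items():
            if premise in closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

# A record that only mentions an innocuous-looking fact...
record = {("takes", "chemotherapy")}
# ...still entails the sensitive one under the ontology.
assert ("has", "cancer") in semantic_closure(record, RULES)
```

In this spirit, a semantically aware neighborhood would compare databases only after closing each record under the inference rules, so that queries over derivable facts are protected as well as queries over stated ones.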
Related papers
- On the MIA Vulnerability Gap Between Private GANs and Diffusion Models [51.53790101362898]
Generative Adversarial Networks (GANs) and diffusion models have emerged as leading approaches for high-quality image synthesis. We present the first unified theoretical and empirical analysis of the privacy risks faced by differentially private generative models.
arXiv Detail & Related papers (2025-09-03T14:18:22Z) - Beyond the Worst Case: Extending Differential Privacy Guarantees to Realistic Adversaries [17.780319275883127]
Differential Privacy is a family of definitions that bound the worst-case privacy leakage of a mechanism. This work sheds light on what the worst-case guarantee of DP implies about the success of attackers that are more representative of real-world privacy risks.
arXiv Detail & Related papers (2025-07-10T20:36:31Z) - Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP). Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features. We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
arXiv Detail & Related papers (2025-06-24T17:53:28Z) - Differential Privacy in Machine Learning: From Symbolic AI to LLMs [49.1574468325115]
Differential privacy provides a formal framework to mitigate privacy risks. It ensures that the inclusion or exclusion of any single data point does not significantly alter the output of an algorithm.
arXiv Detail & Related papers (2025-06-13T11:30:35Z) - Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
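The single-data-point guarantee described above is most commonly realized with the Laplace mechanism. Below is a minimal sketch of that standard textbook construction (not code from any of the surveyed papers): a counting query has sensitivity 1, so adding Laplace noise of scale 1/epsilon yields an epsilon-DP release.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(db, predicate, epsilon=1.0):
    """epsilon-DP count query: adding or removing one row changes the
    true count by at most 1, so scale 1/epsilon noise suffices."""
    true_count = sum(1 for row in db if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)
```

Because the true count moves by at most 1 between neighboring databases, the output distributions on any two neighbors differ by a multiplicative factor of at most e^epsilon, which is exactly the guarantee the definition asks for.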
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation [26.617708498454743]
Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data.
We propose a novel adaptive obfuscation mechanism, coined FedAdOb, to protect private data without sacrificing the original model's performance.
arXiv Detail & Related papers (2024-06-03T08:12:09Z) - ATTAXONOMY: Unpacking Differential Privacy Guarantees Against Practical Adversaries [11.550822252074733]
We offer a detailed taxonomy of attacks, showing the various dimensions of attacks and highlighting that many real-world settings have been understudied.
We operationalize our taxonomy by using it to analyze a real-world case study, the Israeli Ministry of Health's recent release of a birth dataset using Differential Privacy.
arXiv Detail & Related papers (2024-05-02T20:23:23Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - Node Injection Link Stealing Attack [0.649970685896541]
We present a stealthy and effective attack that exposes privacy vulnerabilities in Graph Neural Networks (GNNs) by inferring private links within graph-structured data.
Our work highlights the privacy vulnerabilities inherent in GNNs, underscoring the importance of developing robust privacy-preserving mechanisms for their application.
arXiv Detail & Related papers (2023-07-25T14:51:01Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Bounding Training Data Reconstruction in Private (Deep) Learning [40.86813581191581]
Differential privacy is widely accepted as the de facto method for preventing data leakage in ML.
Existing semantic guarantees for DP focus on membership inference.
We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
arXiv Detail & Related papers (2022-01-28T19:24:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or their summaries and is not responsible for any consequences arising from their use.