Fiducial Matching: Differentially Private Inference for Categorical Data
- URL: http://arxiv.org/abs/2507.11762v1
- Date: Tue, 15 Jul 2025 21:56:15 GMT
- Title: Fiducial Matching: Differentially Private Inference for Categorical Data
- Authors: Ogonnaya Michael Romanus, Younes Boulaguiem, Roberto Molinari,
- Abstract summary: Statistical inference is still an open area of investigation in a differentially private (DP) setting. We propose a simulation-based matching approach, solved through tools from the fiducial framework. We focus on the analysis of categorical (nominal) data that is common in national surveys.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of statistical inference, which includes the building of confidence intervals and tests for parameters and effects of interest to a researcher, is still an open area of investigation in a differentially private (DP) setting. Indeed, in addition to the randomness due to data sampling, DP delivers another source of randomness consisting of the noise added to protect an individual's data from being disclosed to a potential attacker. As a result of this convolution of noises, in many cases it is too complicated to determine the stochastic behavior of the statistics and parameters resulting from a DP procedure. In this work, we contribute to this line of investigation by employing a simulation-based matching approach, solved through tools from the fiducial framework, which aims to replicate the data generation pipeline (including the DP step) and retrieve an approximate distribution of the estimates resulting from this pipeline. For this purpose, we focus on the analysis of categorical (nominal) data that is common in national surveys, for which sensitivity is naturally defined, and on additive privacy mechanisms. We prove the validity of the proposed approach in terms of coverage and highlight its good computational and statistical performance for different inferential tasks in simulated and applied data settings.
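The matching idea in the abstract (replicate the sampling step and the DP noise step, then keep parameter draws whose simulated release resembles the observed one) can be sketched in a few lines. The snippet below is a minimal accept-nearest approximation in the spirit of ABC, not the paper's exact fiducial solver; the function names, the uniform Dirichlet proposal, and the sensitivity-2 Laplace scale for a histogram release are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_release(counts, epsilon):
    # Additive Laplace mechanism: a histogram has L1 sensitivity 2
    # (one respondent changing category moves two cells by 1 each).
    return counts + rng.laplace(scale=2.0 / epsilon, size=len(counts))

def fiducial_match(z_obs, n, epsilon, n_draws=20000, keep_frac=0.05):
    """Draw candidate category probabilities, push each through the same
    data-generation + privacy pipeline, and keep the draws whose noisy
    counts best match the observed DP release."""
    k = len(z_obs)
    p_cand = rng.dirichlet(np.ones(k), size=n_draws)           # candidate parameters
    x_sim = np.array([rng.multinomial(n, p) for p in p_cand])  # replicate the sampling step
    z_sim = x_sim + rng.laplace(scale=2.0 / epsilon, size=x_sim.shape)  # replicate the DP step
    dist = np.abs(z_sim - z_obs).sum(axis=1)                   # L1 matching criterion
    keep = np.argsort(dist)[: int(keep_frac * n_draws)]
    return p_cand[keep]                                        # approximate matching distribution

# Toy run: a 3-category survey question with n = 1000 respondents.
true_p, n, eps = np.array([0.5, 0.3, 0.2]), 1000, 1.0
z_obs = laplace_release(rng.multinomial(n, true_p), eps)
fid = fiducial_match(z_obs, n, eps)
ci = np.quantile(fid[:, 0], [0.025, 0.975])  # interval for the first cell probability
```

Quantiles of the retained draws then serve as approximate confidence intervals that account for both the sampling noise and the privacy noise in one pass.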
Related papers
- Benchmarking Fraud Detectors on Private Graph Data [70.4654745317714]
Currently, many types of fraud are managed in part by automated detection algorithms that operate over graphs. We consider the scenario where a data holder wishes to outsource development of fraud detectors to third parties. Third parties submit their fraud detectors to the data holder, who evaluates these algorithms on a private dataset and then publicly communicates the results. We propose a realistic privacy attack on this system that allows an adversary to de-anonymize individuals' data based only on the evaluation results.
arXiv Detail & Related papers (2025-07-30T03:20:15Z) - Statistical Inference for Differentially Private Stochastic Gradient Descent [14.360996967498002]
This paper bridges the gap between existing statistical methods and Differentially Private Stochastic Gradient Descent (DP-SGD). For the output of DP-SGD, we show that the variance decomposes into statistical, sampling, and privacy-induced components. Two methods are proposed for constructing valid confidence intervals: the plug-in method and the random scaling method.
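The decomposition claim above can be checked empirically in a toy setting. The sketch below uses the simplest possible DP estimator (a Gaussian-noised sample mean, a stand-in for DP-SGD's output rather than the paper's actual estimator) and verifies that the variance over repeated runs matches the sum of a sampling term and a privacy-induced term; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# n observations with standard deviation sigma; the mechanism adds
# Gaussian noise with standard deviation noise_sd to the released mean.
n, sigma, noise_sd, reps = 200, 1.0, 0.05, 5000
estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, sigma, size=n)                   # fresh sample: sampling randomness
    estimates[r] = x.mean() + rng.normal(0.0, noise_sd)  # DP step: privacy randomness

empirical_var = estimates.var()
predicted_var = sigma**2 / n + noise_sd**2  # sampling component + privacy component
```

The two quantities agree up to Monte Carlo error, which is the intuition behind widening a confidence interval by the mechanism's noise variance rather than ignoring it.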
arXiv Detail & Related papers (2025-07-28T06:45:15Z) - Verifying Differentially Private Median Estimation [4.083860866484599]
We propose the first verifiable differentially private median estimation scheme based on zk-SNARKs. Our scheme combines the exponential mechanism and a utility function for median estimation into an arithmetic circuit.
arXiv Detail & Related papers (2025-05-22T05:31:22Z) - Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation [62.2436697657307]
Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data. We propose a method called Stratified Prediction-Powered Inference (StratPPI). We show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies.
arXiv Detail & Related papers (2024-06-06T17:37:39Z) - Inference With Combining Rules From Multiple Differentially Private Synthetic Datasets [0.0]
We study the applicability of procedures based on combining rules to the analysis of DIPS datasets.
Our empirical experiments show that the proposed combining rules may offer accurate inference in certain contexts, but not in all cases.
arXiv Detail & Related papers (2024-05-08T02:33:35Z) - Simulation-based Bayesian Inference from Privacy Protected Data [0.0]
We propose simulation-based inference methods from privacy-protected datasets. We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
arXiv Detail & Related papers (2023-10-19T14:34:17Z) - Evaluating the Impact of Local Differential Privacy on Utility Loss via
Influence Functions [11.504012974208466]
We demonstrate the ability of influence functions to offer insight into how a specific privacy parameter value will affect a model's test loss.
Our proposed method allows a data curator to select the privacy parameter best aligned with their allowed privacy-utility trade-off.
arXiv Detail & Related papers (2023-09-15T18:08:24Z) - On the Universal Adversarial Perturbations for Efficient Data-free
Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z) - Differentially Private Confidence Intervals for Proportions under Stratified Random Sampling [14.066813980992132]
With the increase of data privacy awareness, developing a private version of confidence intervals has gained growing attention.
Recent work has explored differentially private confidence intervals, yet rigorous methodologies for constructing them under complex sampling designs have not been studied.
We propose three differentially private algorithms for constructing confidence intervals for proportions under stratified random sampling.
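For intuition on what such an algorithm must account for, here is a naive single-stratum sketch: privatize a count with Laplace noise (sensitivity 1), then form a Wald-style interval widened by the mechanism's variance. This is an illustrative simplification, not one of the paper's three algorithms, and the function name and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_proportion_ci(count, n, epsilon):
    """95% Wald-style DP confidence interval for a proportion."""
    noisy = count + rng.laplace(scale=1.0 / epsilon)  # sensitivity-1 count query
    p_hat = min(max(noisy / n, 0.0), 1.0)             # clamp estimate to [0, 1]
    var_sampling = p_hat * (1.0 - p_hat) / n          # binomial sampling variance
    var_privacy = 2.0 / (epsilon * n) ** 2            # Laplace(1/eps) variance / n^2
    half = 1.96 * np.sqrt(var_sampling + var_privacy) # combine both noise sources
    return max(p_hat - half, 0.0), min(p_hat + half, 1.0)

lo, hi = dp_proportion_ci(count=420, n=1000, epsilon=1.0)
```

A stratified version would apply this per stratum and aggregate the noisy stratum proportions with the known stratum weights, which is where the careful variance accounting in the paper becomes necessary.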
arXiv Detail & Related papers (2023-01-19T21:25:41Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into finite clusters under the orchestration of a server.
We propose a novel FedC algorithm based on differential privacy, referred to as DP-Fed, in which partial client participation is also considered.
Various properties of the proposed DP-Fed are obtained through theoretical analyses of privacy protection and convergence, especially for the case of non-identically and independently distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - Non-parametric Differentially Private Confidence Intervals for the
Median [3.205141100055992]
This paper proposes and evaluates several strategies to compute valid differentially private confidence intervals for the median.
We also illustrate that addressing both sources of uncertainty (the error from sampling and the error from protecting the output) should be preferred over simpler approaches that incorporate the uncertainty in a sequential fashion.
arXiv Detail & Related papers (2021-06-18T19:45:37Z) - Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - RDP-GAN: A R\'enyi-Differential Privacy based Generative Adversarial
Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied on sensitive or private training examples, such as medical or financial records, it is still probable to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.