Consistent Estimation of Numerical Distributions under Local Differential Privacy by Wavelet Expansion
- URL: http://arxiv.org/abs/2509.19661v1
- Date: Wed, 24 Sep 2025 00:37:22 GMT
- Title: Consistent Estimation of Numerical Distributions under Local Differential Privacy by Wavelet Expansion
- Authors: Puning Zhao, Zhikun Zhang, Bo Sun, Li Shen, Liang Zhang, Shaowei Wang, Zhe Liu
- Abstract summary: We propose a new approach that expresses the sample distribution using wavelet expansions. Our method prioritizes the estimation of low-order coefficients, in order to ensure accurate estimation at the macroscopic level. Experiments show that our wavelet expansion method significantly outperforms existing solutions under Wasserstein and KS distances.
- Score: 33.04753728429468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distribution estimation under local differential privacy (LDP) is a fundamental and challenging task. Significant progress has been made on categorical data. However, due to different evaluation metrics, these methods do not work well when transferred to numerical data. In particular, we need to prevent the probability mass from being misplaced far away. In this paper, we propose a new approach that expresses the sample distribution using wavelet expansions. The coefficients of the wavelet series are estimated under LDP. Our method prioritizes the estimation of low-order coefficients, in order to ensure accurate estimation at the macroscopic level. Therefore, the probability mass is prevented from being misplaced too far away from its ground truth. We establish theoretical guarantees for our methods. Experiments show that our wavelet expansion method significantly outperforms existing solutions under Wasserstein and KS distances.
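The abstract's core idea, estimating wavelet coefficients under LDP while giving more privacy budget to low-order (coarse) coefficients, can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the Haar basis, the Laplace randomizer, the geometric budget split, and the function `ldp_wavelet_density` are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar(j, k, x):
    """Haar wavelet psi_{j,k}(x) = 2^{j/2} * psi(2^j x - k) on [0, 1]."""
    t = (2.0 ** j) * x - k
    return (2.0 ** (j / 2)) * (((0.0 <= t) & (t < 0.5)).astype(float)
                               - ((0.5 <= t) & (t < 1.0)).astype(float))

def ldp_wavelet_density(samples, eps=4.0, max_level=3, grid=None):
    """Estimate a density on [0,1] from LDP reports of Haar coefficients.

    Each user adds Laplace noise to their basis evaluations; coarse
    (low-order) levels get a larger share of the privacy budget, so the
    macroscopic shape of the distribution is estimated more accurately.
    """
    if grid is None:
        grid = np.linspace(0.0, 1.0, 256, endpoint=False)
    # Assumed geometric budget split: level j gets a share proportional to 2^{-j}.
    weights = np.array([2.0 ** (-j) for j in range(max_level + 1)])
    eps_per_level = eps * weights / weights.sum()
    # Scaling coefficient: phi = 1 on [0,1], so E[phi(X)] = 1 exactly.
    density = np.ones_like(grid)
    for j in range(max_level + 1):
        # |psi_{j,k}| <= 2^{j/2} and level-j supports are disjoint, so
        # changing one user's value moves the level-j report vector by at
        # most 2 * 2^{j/2} in L1 norm; calibrate Laplace noise to that.
        scale = 2.0 * (2.0 ** (j / 2)) / eps_per_level[j]
        for k in range(2 ** j):
            vals = haar(j, k, samples)
            noisy = vals + rng.laplace(0.0, scale, size=vals.shape)
            d_jk = noisy.mean()
            density += d_jk * haar(j, k, grid)
    # Project onto valid densities: clip negatives, renormalise.
    density = np.clip(density, 0.0, None)
    density /= density.mean()
    return grid, density
```

Because the coarse coefficients carry most of the privacy budget, the reconstructed density places mass in roughly the right region even when fine-scale detail is noisy, which is the mechanism behind the Wasserstein/KS improvements the abstract claims.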
Related papers
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference. We show how such maps can be learned via standard DBM training using a novel noise schedule. The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs. We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes this arbitrary constraint. We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Combining Statistical Depth and Fermat Distance for Uncertainty Quantification [3.3975558777609915]
We measure the out-of-domain uncertainty in the predictions of neural networks using a statistical notion called "Lens Depth" (LD) combined with Fermat Distance.
The proposed method gives excellent qualitative results on toy datasets and can give competitive or better uncertainty estimates on standard deep learning datasets.
arXiv Detail & Related papers (2024-04-12T13:54:21Z) - Transformer-based Parameter Estimation in Statistics [0.0]
We propose a transformer-based approach to parameter estimation.
It does not even require knowing the probability density function, which is needed by numerical methods.
It is shown that our approach achieves similar or better accuracy, as measured by mean squared error.
arXiv Detail & Related papers (2024-02-28T04:30:41Z) - Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z) - One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Learning to Stop with Surprisingly Few Samples [17.46537996825982]
We consider a discounted infinite horizon optimal stopping problem.
If the underlying distribution is known a priori, the solution of this problem is obtained via dynamic programming.
When information on this distribution is lacking, a natural (though naive) approach is "explore-then-exploit".
arXiv Detail & Related papers (2021-02-19T16:51:07Z) - Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
arXiv Detail & Related papers (2021-02-02T21:01:13Z) - On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z) - Robust Density Estimation under Besov IPM Losses [10.079698681921672]
We study minimax convergence rates of nonparametric density estimation in the Huber contamination model.
We show that a re-scaled thresholding wavelet series estimator achieves minimax optimal convergence rates under a wide variety of losses.
arXiv Detail & Related papers (2020-04-18T11:30:35Z) - Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.