Private Selection with Heterogeneous Sensitivities
- URL: http://arxiv.org/abs/2501.05309v1
- Date: Thu, 09 Jan 2025 15:25:07 GMT
- Title: Private Selection with Heterogeneous Sensitivities
- Authors: Daniela Antonova, Allegra Laro, Audra McMillan, Lorenz Wolf
- Abstract summary: Differentially private (DP) selection involves choosing a high-scoring candidate from a finite candidate pool.
This problem arises naturally in a variety of contexts including model selection, hypothesis testing, and within many DP algorithms.
To address this, algorithms like the Generalised Exponential Mechanism (GEM) leverage variability in candidate sensitivities.
- Score: 2.0936724675062406
- Abstract: Differentially private (DP) selection involves choosing a high-scoring candidate from a finite candidate pool, where each score depends on a sensitive dataset. This problem arises naturally in a variety of contexts, including model selection, hypothesis testing, and within many DP algorithms. Classical methods, such as Report Noisy Max (RNM), assume all candidates' scores are equally sensitive to changes in a single individual's data, but this is often not the case. To address this, algorithms like the Generalised Exponential Mechanism (GEM) leverage variability in candidate sensitivities. However, we observe that while these algorithms can outperform RNM in some situations, they may underperform in others; they can even perform worse than random selection. In this work, we explore how the distribution of scores and sensitivities impacts DP selection mechanisms. In all settings we study, we find that there exists a mechanism that utilises heterogeneity in the candidate sensitivities and outperforms standard mechanisms like RNM. However, no single mechanism uniformly outperforms RNM. We propose using the correlation between the scores and sensitivities as the basis for deciding which DP selection mechanism to use. Further, we design a slight variant of GEM, modified GEM, that generally performs well whenever GEM performs poorly. Relying on the correlation heuristic, we propose combined GEM, which adaptively chooses between GEM and modified GEM and outperforms both in polarised settings.
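The paper ships no code with this abstract, so the following is only a minimal numpy sketch of the ideas it contrasts, not the authors' mechanisms: `report_noisy_max` noises every score with one worst-case sensitivity, `sensitivity_scaled_selection` illustrates the GEM-style idea of scaling noise per candidate (the real GEM uses a more careful construction; this naive version is not differentially private on its own), and `choose_mechanism` mimics the proposed correlation heuristic in spirit only. All function names, constants, and the zero-correlation threshold are illustrative assumptions.

```python
import numpy as np

def report_noisy_max(scores, global_sensitivity, epsilon, rng):
    """Classic RNM: every candidate gets Laplace noise scaled to the
    single worst-case sensitivity (2*Delta/eps is a conservative scale)."""
    noise = rng.laplace(scale=2 * global_sensitivity / epsilon, size=len(scores))
    return int(np.argmax(np.asarray(scores) + noise))

def sensitivity_scaled_selection(scores, sensitivities, epsilon, rng):
    """GEM-style idea, simplified: perturb each candidate in proportion
    to its OWN sensitivity, so stable candidates get less noise.
    NOTE: this naive scaling is not private by itself; GEM achieves
    privacy with a more careful construction."""
    noise = rng.laplace(scale=2 * np.asarray(sensitivities) / epsilon)
    return int(np.argmax(np.asarray(scores) + noise))

def choose_mechanism(scores, sensitivities):
    """Correlation heuristic, in spirit: use the empirical correlation
    between scores and sensitivities to dispatch between mechanisms."""
    rho = np.corrcoef(scores, sensitivities)[0, 1]
    return "gem_like" if rho <= 0 else "modified_gem_like"

rng = np.random.default_rng(0)
scores = np.array([10.0, 9.5, 7.0, 3.0])
sens = np.array([0.1, 0.2, 1.0, 2.0])        # heterogeneous sensitivities
print(report_noisy_max(scores, sens.max(), epsilon=1.0, rng=rng))
print(sensitivity_scaled_selection(scores, sens, epsilon=1.0, rng=rng))
print(choose_mechanism(scores, sens))
```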
Related papers
- Robust Gaussian Processes via Relevance Pursuit [17.39376866275623]
We propose and study a GP model that achieves robustness against sparse outliers by inferring data-point-specific noise levels.
We show, surprisingly, that the model can be parameterized such that the associated log marginal likelihood is strongly concave in the data-point-specific noise variances.
arXiv Detail & Related papers (2024-10-31T17:59:56Z)
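The quantity behind that summary, a GP log marginal likelihood with a separate noise variance per data point, can be written down directly. The sketch below only demonstrates why a large individual variance can absorb an outlier; the kernel, data, and variance values are illustrative assumptions, not the paper's relevance-pursuit procedure.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_log_marginal_likelihood(X, y, noise_vars):
    """log N(y | 0, K + diag(noise_vars)): a per-point noise variance
    replaces the usual single one, which is what lets a sparse outlier
    be 'explained away' by its own large variance."""
    n = len(y)
    C = rbf_kernel(X) + np.diag(noise_vars)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
y[3] += 5.0                                    # one sparse outlier
uniform = np.full(20, 0.01)                    # homogeneous noise model
robust = uniform.copy()
robust[3] = 25.0                               # outlier gets its own variance
print(gp_log_marginal_likelihood(X, y, uniform))
print(gp_log_marginal_likelihood(X, y, robust))  # noticeably higher
```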
- Optimal Kernel Choice for Score Function-based Causal Discovery [92.65034439889872]
We propose a kernel selection method within the generalized score function that automatically selects the optimal kernel that best fits the data.
We conduct experiments on both synthetic data and real-world benchmarks, and the results demonstrate that our method outperforms existing kernel selection methods.
arXiv Detail & Related papers (2024-07-14T09:32:20Z)
- Causality and Independence Enhancement for Biased Node Classification [56.38828085943763]
We propose a novel Causality and Independence Enhancement (CIE) framework that is applicable to various graph neural networks (GNNs).
Our approach estimates causal and spurious features at the node representation level and mitigates the influence of spurious correlations.
Our approach, CIE, not only significantly enhances the performance of GNNs but also outperforms state-of-the-art debiased node classification methods.
arXiv Detail & Related papers (2023-10-14T13:56:24Z)
- Q(D)O-ES: Population-based Quality (Diversity) Optimisation for Post Hoc Ensemble Selection in AutoML [5.089078998562186]
We introduce two novel population-based ensemble selection methods, QO-ES and QDO-ES, and compare them to greedy ensemble selection (GES).
QO-ES optimises solely for predictive performance, while QDO-ES also considers the diversity of ensembles within the population, maintaining a diverse set of well-performing ensembles during optimisation based on ideas of quality diversity optimisation.
Our results suggest that diversity can be beneficial for post hoc ensembling but also increases the risk of overfitting.
arXiv Detail & Related papers (2023-07-17T10:02:01Z)
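For context on what QO-ES and QDO-ES are compared against, here is a hedged sketch of greedy ensemble selection in the style of Caruana et al.: grow a bag of models with replacement, each round adding the model that most reduces validation loss. The regression data and MSE loss are toy placeholders.

```python
import numpy as np

def greedy_ensemble_selection(preds, y_val, n_rounds=10):
    """preds: (n_models, n_samples) validation predictions.
    Greedily grows a bag of model indices (with replacement); each
    round adds the model that minimises the ensemble's validation MSE."""
    bag, best_losses = [], []
    for _ in range(n_rounds):
        losses = []
        for m in range(preds.shape[0]):
            ens = preds[bag + [m]].mean(axis=0)   # average over the bag
            losses.append(np.mean((ens - y_val) ** 2))
        best = int(np.argmin(losses))
        bag.append(best)
        best_losses.append(losses[best])
    return bag, best_losses

rng = np.random.default_rng(1)
y = rng.standard_normal(100)
models = y + 0.5 * rng.standard_normal((5, 100))  # 5 noisy 'models'
bag, losses = greedy_ensemble_selection(models, y)
print(bag, losses[-1])
```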
- A GA-like Dynamic Probability Method With Mutual Information for Feature Selection [1.290382979353427]
We propose a GA-like dynamic probability (GADP) method with mutual information.
Because each gene's probability is independent, the chromosome variety in GADP is greater than in a traditional GA.
To verify the method's superiority, we evaluate it under multiple conditions on 15 datasets.
arXiv Detail & Related papers (2022-10-21T13:30:01Z)
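The summary's central idea, an independent inclusion probability per gene (here seeded from mutual information with the label), can be illustrated with a toy sampler, assuming scikit-learn is available for the MI estimate. This is a loose sketch of the concept, not the GADP update rule from the paper; the probability rescaling is an arbitrary assumption.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def sample_feature_mask(probs, rng):
    """Each feature is included independently with its own probability,
    so sampled 'chromosomes' vary more than crossover of a small
    population would allow."""
    return rng.random(len(probs)) < probs

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)     # only features 0 and 3 matter

# Seed per-gene probabilities from mutual information with the label.
mi = mutual_info_classif(X, y, random_state=0)
probs = 0.1 + 0.8 * mi / mi.max()           # arbitrary rescale into (0, 1)

for _ in range(5):
    print(sample_feature_mask(probs, rng).astype(int))
```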
- Coefficient Mutation in the Gene-pool Optimal Mixing Evolutionary Algorithm for Symbolic Regression [0.0]
GP-GOMEA is a top-performing algorithm for symbolic regression.
We show how simple approaches for optimizing coefficients can be integrated into GP-GOMEA.
We find that coefficient mutation substantially helps re-discover the underlying equation.
arXiv Detail & Related papers (2022-04-26T08:58:47Z)
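Coefficient mutation as described, perturbing only the numeric constants of an expression while leaving its structure alone, is easy to sketch. The fixed expression, mutation scale, and hill-climbing loop below are illustrative stand-ins, not GP-GOMEA's tree encoding or variation operators.

```python
import numpy as np

def mutate_coefficients(coeffs, temperature, rng):
    """Gaussian mutation of an expression's constants, scaled to each
    coefficient's magnitude (plus a floor so zeros can still move);
    the expression structure is untouched."""
    scale = temperature * (np.abs(coeffs) + 1e-2)
    return coeffs + scale * rng.standard_normal(len(coeffs))

def expression(x, c):
    """Toy fixed structure: c0 * sin(c1 * x) + c2."""
    return c[0] * np.sin(c[1] * x) + c[2]

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
target = 2.0 * np.sin(1.5 * x) + 0.3        # the 'underlying equation'

c = np.array([1.0, 1.0, 0.0])               # right structure, wrong constants
for _ in range(2000):                        # simple hill climber on constants
    cand = mutate_coefficients(c, temperature=0.1, rng=rng)
    if (np.mean((expression(x, cand) - target) ** 2)
            < np.mean((expression(x, c) - target) ** 2)):
        c = cand
print(c)   # typically close to [2.0, 1.5, 0.3]
```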
- Effective Mutation Rate Adaptation through Group Elite Selection [50.88204196504888]
This paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm.
GESMR co-evolves a population of solutions and a population of MRs, such that each MR is assigned to a group of solutions.
With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches.
arXiv Detail & Related papers (2022-04-11T01:08:26Z)
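A toy rendering of the GESMR scheme in that summary: partition the population into groups, give each group its own mutation rate (MR), and re-derive the MRs each generation from the group whose best one-step improvement was largest. The objective, constants, and exact selection/perturbation rules below are assumptions for illustration.

```python
import numpy as np

def sphere(x):                                # toy minimisation objective
    return float(np.sum(x * x))

rng = np.random.default_rng(0)
n_groups, group_size, dim = 4, 8, 10
pop = rng.standard_normal((n_groups, group_size, dim))
rates = 10.0 ** rng.uniform(-3, 0, size=n_groups)   # one MR per group

for _ in range(200):
    improvements = np.zeros(n_groups)
    for g in range(n_groups):
        for i in range(group_size):
            parent = pop[g, i]
            child = parent + rates[g] * rng.standard_normal(dim)
            delta = sphere(parent) - sphere(child)
            if delta > 0:                     # greedy survivor selection
                pop[g, i] = child
                improvements[g] = max(improvements[g], delta)
    # 'Group elite selection': all groups inherit the MR of the group
    # with the largest best improvement, then jitter it multiplicatively.
    elite = rates[int(np.argmax(improvements))]
    rates = elite * 10.0 ** rng.uniform(-0.3, 0.3, size=n_groups)
    rates[0] = elite                          # keep the elite MR unperturbed

print(min(sphere(x) for grp in pop for x in grp))
```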
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it with a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing an open question posed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
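GSEMO itself is simple enough to sketch: keep an archive of mutually non-dominated bitstrings, mutate a random member by independent bit flips, and insert the child unless some archive member dominates it. The bi-objective toy below (subset diversity vs. subset size) is only an illustrative stand-in for the paper's result-diversification formulation.

```python
import numpy as np

def gsemo(objs, n, iters, rng):
    """Global SEMO for maximising a tuple of objectives over {0,1}^n."""
    def dominates(a, b):
        return all(u >= v for u, v in zip(a, b)) and a != b
    x0 = [0] * n
    archive = [(tuple(x0), objs(x0))]
    for _ in range(iters):
        x, _ = archive[rng.integers(len(archive))]
        child = [b ^ int(rng.random() < 1 / n) for b in x]   # bit flips, prob 1/n
        fc = objs(child)
        if not any(dominates(fx, fc) for _, fx in archive):
            archive = [(y, fy) for y, fy in archive if not dominates(fc, fy)]
            archive.append((tuple(child), fc))
    return archive

# Toy diversification instance: pick at most k of n items to maximise the
# sum of pairwise distances; the second objective discourages large sets.
rng = np.random.default_rng(0)
n, k = 8, 3
pts = rng.standard_normal((n, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def objs(x):
    idx = [i for i, b in enumerate(x) if b]
    div = sum(D[i, j] for i in idx for j in idx if i < j)
    return (div if len(idx) <= k else -1.0, -len(idx))

archive = gsemo(objs, n, iters=5000, rng=rng)
print(max((fx[0], x) for x, fx in archive))   # best feasible diversity found
```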
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
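The inductive bias that summary credits, permutation invariance over agents, can be shown with a DeepSets-style critic that mean-pools per-agent encodings. The two-layer network below is an illustrative assumption; it demonstrates only the invariance property, not MF-PPO training.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_hid, n_agents = 4, 16, 5
W1 = rng.standard_normal((d_obs, d_hid)) / np.sqrt(d_obs)
W2 = rng.standard_normal((d_hid, 1)) / np.sqrt(d_hid)

def critic(observations):
    """An encoder is applied per agent, then MEAN-pooled: the output
    cannot depend on agent ordering, which is the permutation-invariant
    inductive bias the paper's architecture builds in."""
    h = np.tanh(observations @ W1)      # (n_agents, d_hid), per-agent encoder
    pooled = h.mean(axis=0)             # permutation-invariant aggregation
    return float(pooled @ W2)

obs = rng.standard_normal((n_agents, d_obs))
perm = rng.permutation(n_agents)
print(critic(obs), critic(obs[perm]))   # identical values
```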
- Gaussian Process Latent Class Choice Models [7.992550355579791]
We present a non-parametric class of probabilistic machine learning models within discrete choice models (DCMs).
The proposed model assigns individuals probabilistically to behaviorally homogeneous clusters (latent classes) using GPs.
The model is tested on two different mode choice applications and compared against different LCCM benchmarks.
arXiv Detail & Related papers (2021-01-28T19:56:42Z)
- A Unified Joint Maximum Mean Discrepancy for Domain Adaptation [73.44809425486767]
This paper theoretically derives a unified form of JMMD that is easy to optimize.
From the revealed unified JMMD, we show that JMMD degrades the feature-label dependence that benefits classification.
We propose a novel MMD matrix to promote the dependence, and devise a novel label kernel that is robust to label distribution shift.
arXiv Detail & Related papers (2021-01-25T09:46:14Z)
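For reference, a joint MMD between feature-label distributions is commonly estimated with a product kernel k(x,x')·k(y,y'), which is what lets feature-label dependence contribute to the discrepancy. The biased estimator below uses plain Gaussian kernels and synthetic data; it does not include the paper's novel MMD matrix or robust label kernel.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Gaussian (RBF) Gram matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def joint_mmd2(Xs, Ys, Xt, Yt):
    """Biased estimate of MMD^2 between the joint source and target
    distributions, using the product kernel k(x,x') * k(y,y')."""
    Kss = gaussian_gram(Xs, Xs) * gaussian_gram(Ys, Ys)
    Ktt = gaussian_gram(Xt, Xt) * gaussian_gram(Yt, Yt)
    Kst = gaussian_gram(Xs, Xt) * gaussian_gram(Ys, Yt)
    return Kss.mean() + Ktt.mean() - 2 * Kst.mean()

rng = np.random.default_rng(0)
Xs = rng.standard_normal((100, 3))
Ys = rng.standard_normal((100, 1))
Xt = rng.standard_normal((100, 3)) + 0.5     # shifted target features
Yt = rng.standard_normal((100, 1))
print(joint_mmd2(Xs, Ys, Xt, Yt))            # > 0 under distribution shift
```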
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.