Transfer Learning In Differential Privacy's Hybrid-Model
- URL: http://arxiv.org/abs/2201.12018v1
- Date: Fri, 28 Jan 2022 09:54:54 GMT
- Title: Transfer Learning In Differential Privacy's Hybrid-Model
- Authors: Refael Kohen and Or Sheffet
- Abstract summary: We study the problem of machine learning in the hybrid-model where the n individuals in the curator's dataset are drawn from a different distribution than that of the general population (the local-agents).
We give a general scheme -- Subsample-Test-Reweigh -- for this transfer learning problem.
- Score: 10.584333748643774
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The hybrid-model (Avent et al., 2017) in Differential Privacy is an
augmentation of the local-model where in addition to N local-agents we are
assisted by one special agent who is in fact a curator holding the sensitive
details of n additional individuals. Here we study the problem of machine
learning in the hybrid-model where the n individuals in the curator's dataset
are drawn from a different distribution than the one of the general population
(the local-agents). We give a general scheme -- Subsample-Test-Reweigh -- for
this transfer learning problem, which reduces any curator-model DP-learner to a
hybrid-model learner in this setting using iterative subsampling and reweighing
of the n examples held by the curator based on a smooth variation of the
Multiplicative-Weights algorithm (introduced by Bun et al., 2020). Our scheme
has a sample complexity which relies on the chi-squared divergence between the
two distributions. We give worst-case analysis bounds on the sample complexity
required for our private reduction. Aiming to reduce said sample complexity, we
give two specific instances in which our sample complexity can be drastically
reduced (one instance is analyzed mathematically, the other empirically) and
pose several directions for follow-up work.
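As a rough illustration of the Subsample-Test-Reweigh idea described in the abstract, the sketch below runs a toy hybrid-model loop: weights over the curator's n examples drive a subsample, a toy curator-model DP learner is run on that subsample, the resulting hypothesis is tested against the local agents under local DP, and on failure the curator's examples are multiplicatively reweighed. Every component choice here (the exponential-mechanism threshold learner, the randomized-response test, the update rule and its parameters) is an illustrative assumption standing in for the paper's private subroutines; this is not the authors' actual algorithm nor its chi-squared-divergence analysis.

```python
# Hypothetical sketch of a Subsample-Test-Reweigh style loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def dp_learner(x, y, eps):
    """Toy curator-model DP learner: pick a 1-D threshold with the exponential
    mechanism over a fixed grid of candidate thresholds."""
    thresholds = np.linspace(x.min(), x.max(), 50)
    utility = np.array([np.sum((x >= t).astype(int) == y) for t in thresholds])
    probs = np.exp(eps * (utility - utility.max()) / 2.0)
    probs /= probs.sum()
    return rng.choice(thresholds, p=probs)

def local_test(h, x_loc, y_loc, eps, target_err):
    """Estimate the hypothesis' error on the local agents with randomized
    response (local DP) and accept if the debiased estimate is small enough."""
    errs = ((x_loc >= h).astype(int) != y_loc).astype(int)
    p = np.exp(eps) / (np.exp(eps) + 1.0)              # prob. of truthful report
    noisy = np.where(rng.random(len(errs)) < p, errs, 1 - errs)
    est_err = (noisy.mean() - (1 - p)) / (2 * p - 1)   # debias the noisy mean
    return est_err <= target_err

def subsample_test_reweigh(x_cur, y_cur, x_loc, y_loc,
                           rounds=20, m=100, eps=1.0, target_err=0.1, eta=0.3):
    n = len(x_cur)
    w = np.ones(n) / n                                  # weights over curator examples
    for _ in range(rounds):
        idx = rng.choice(n, size=m, p=w)                # Subsample by current weights
        h = dp_learner(x_cur[idx], y_cur[idx], eps)     # learn privately on the subsample
        if local_test(h, x_loc, y_loc, eps, target_err):  # Test against local agents
            return h
        # Reweigh: a simple multiplicative-weights style update that up-weights
        # curator examples the current hypothesis gets wrong.
        mistakes = ((x_cur >= h).astype(int) != y_cur).astype(float)
        w = w * np.exp(eta * mistakes)
        w /= w.sum()
    return h

# Curator data drawn from a different distribution than the local population.
x_cur = rng.normal(0.0, 1.0, 2000); y_cur = (x_cur >= 0.3).astype(int)
x_loc = rng.normal(1.0, 1.0, 5000); y_loc = (x_loc >= 0.3).astype(int)
print(subsample_test_reweigh(x_cur, y_cur, x_loc, y_loc))
```

In the paper the reweighing step is a smooth variation of Multiplicative-Weights; the plain exponential update above is only a stand-in for that component.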
Related papers
- Training Implicit Generative Models via an Invariant Statistical Loss [3.139474253994318]
Implicit generative models have the capability to learn arbitrary complex data distributions.
On the downside, training requires telling apart real data from artificially-generated ones using adversarial discriminators.
We develop a discriminator-free method for training one-dimensional (1D) generative implicit models.
arXiv Detail & Related papers (2024-02-26T09:32:28Z) - Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - Sample Complexity of Opinion Formation on Networks with Linear Regression Models [36.75032460874647]
We study the sample complexity of opinion convergence in networks.
Our framework is built on the recognized opinion formation game.
Empirical results on both synthetic and real-world networks strongly support our theoretical findings.
arXiv Detail & Related papers (2023-11-04T08:28:33Z) - On-Demand Sampling: Learning Optimally from Multiple Distributions [63.20009081099896]
Social and real-world considerations have given rise to multi-distribution learning paradigms.
We establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity.
Our algorithm design and analysis are enabled by our extensions of online learning techniques for solving zero-sum games.
arXiv Detail & Related papers (2022-10-22T19:07:26Z) - Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z) - Multitask Learning and Bandits via Robust Statistics [3.103098467546532]
Decision-makers often simultaneously face many related but heterogeneous learning problems.
We propose a novel two-stage multitask learning estimator that exploits this structure in a sample-efficient way.
Our estimator yields improved sample complexity bounds in the feature dimension $d$ relative to commonly-employed estimators.
arXiv Detail & Related papers (2021-12-28T17:37:08Z) - CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator [60.799183326613395]
We propose an unbiased estimator for categorical random variables based on multiple mutually negatively correlated (jointly antithetic) samples.
CARMS combines REINFORCE with copula based sampling to avoid duplicate samples and reduce its variance, while keeping the estimator unbiased using importance sampling.
We evaluate CARMS on several benchmark datasets on a generative modeling task, as well as a structured output prediction task, and find it to outperform competing methods including a strong self-control baseline.
arXiv Detail & Related papers (2021-10-26T20:14:30Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - The Bures Metric for Generative Adversarial Networks [10.69910379275607]
Generative Adversarial Networks (GANs) are performant generative methods yielding high-quality samples.
We propose to match the real batch diversity to the fake batch diversity.
We observe that diversity matching reduces mode collapse substantially and has a positive effect on the sample quality.
arXiv Detail & Related papers (2020-06-16T12:04:41Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore the balance in imbalanced images by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.