PILLAR: How to make semi-private learning more effective
- URL: http://arxiv.org/abs/2306.03962v1
- Date: Tue, 6 Jun 2023 18:45:05 GMT
- Title: PILLAR: How to make semi-private learning more effective
- Authors: Francesco Pinto, Yaxi Hu, Fanny Yang, Amartya Sanyal
- Abstract summary: In Semi-Supervised Semi-Private (SP) learning, the learner has access to both public unlabelled and private labelled data.
We propose a computationally efficient algorithm that achieves significantly lower private labelled sample complexity and can be efficiently run on real-world datasets.
- Score: 12.292092677396347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Semi-Supervised Semi-Private (SP) learning, the learner has access to both
public unlabelled and private labelled data. We propose a computationally
efficient algorithm that, under mild assumptions on the data, provably achieves
significantly lower private labelled sample complexity and can be efficiently
run on real-world datasets. For this purpose, we leverage the features
extracted by networks pre-trained on public (labelled or unlabelled) data,
whose distribution can significantly differ from the one on which SP learning
is performed. To validate its empirical effectiveness, we conduct a wide
variety of experiments under tight privacy constraints (\(\epsilon=0.1\)) and
with a focus on low-data regimes. In all of these settings, our algorithm
exhibits significantly improved performance over available baselines that use
similar amounts of public data.
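The recipe the abstract outlines can be sketched concretely: take features from a publicly pre-trained network, fit a projection on public unlabelled features at no privacy cost, then train a linear classifier on the private labels under differential privacy. The minimal Python sketch below assumes this decomposition; the PCA projection, clipping constants, and noise calibration are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for features produced by a network pre-trained on public data.
d, k = 512, 32                        # raw feature dim, projected dim
X_pub = rng.normal(size=(5000, d))    # public unlabelled features
X_prv = rng.normal(size=(200, d))     # private labelled features
y_prv = rng.integers(0, 2, size=200) * 2 - 1   # labels in {-1, +1}

# 1) Fit a PCA projection on *public* features only, at no privacy cost.
Xc = X_pub - X_pub.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k].T                          # d x k projection

# 2) Project private features and normalise rows to bound sensitivity.
Z = X_prv @ P
Z /= np.maximum(1.0, np.linalg.norm(Z, axis=1, keepdims=True))

# 3) Noisy-gradient logistic regression (Gaussian mechanism per step;
#    the sqrt(T) composition below is crude -- a real accountant is tighter).
eps, delta, T, lr, clip = 0.1, 1e-5, 50, 0.5, 1.0
sigma = clip * np.sqrt(2 * T * np.log(1.25 / delta)) / eps
w = np.zeros(k)
for _ in range(T):
    g = -(y_prv / (1 + np.exp(y_prv * (Z @ w))))[:, None] * Z
    g /= np.maximum(1.0, np.linalg.norm(g, axis=1, keepdims=True) / clip)
    w -= lr * (g.sum(axis=0) + rng.normal(scale=sigma, size=k)) / len(Z)

print("private train accuracy:", ((Z @ w) * y_prv > 0).mean())
```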
Related papers
- Locally Differentially Private Gradient Tracking for Distributed Online Learning over Directed Graphs [2.1271873498506038]
We propose a locally differentially private gradient tracking based distributed online learning algorithm.
We prove that the proposed algorithm converges in mean square to the exact optimal solution while ensuring rigorous local differential privacy.
arXiv Detail & Related papers (2023-10-24T18:15:25Z)
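A toy sketch of the gradient-tracking idea above, under assumptions made for illustration (a directed ring, scalar quadratic losses, and a fixed Laplace noise scale on transmitted messages): with fixed noise the iterates only reach a neighbourhood of the optimum, whereas the paper proves exact mean-square convergence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
a = rng.normal(size=n)                 # each node's private parameter
grad = lambda x: x - a                 # gradients of 0.5*(x_i - a_i)^2

# Row-stochastic weights for a directed ring: each node listens to itself
# and to one predecessor.
W = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=1))

lr, T, scale = 0.05, 300, 0.05         # Laplace scale stands in for the budget
x = np.zeros(n)
s = grad(x)                            # gradient-tracking variable
for _ in range(T):
    x_msg = x + rng.laplace(scale=scale, size=n)   # perturb transmitted state
    s_msg = s + rng.laplace(scale=scale, size=n)   # perturb transmitted tracker
    x_new = W @ x_msg - lr * s
    s = W @ s_msg + grad(x_new) - grad(x)
    x = x_new

print("final states:", x.round(3), "| optimum:", a.mean().round(3))
```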
- Exploring the Boundaries of Semi-Supervised Facial Expression Recognition: Learning from In-Distribution, Out-of-Distribution, and Unconstrained Data [19.442685015494316]
We present a study of 11 recent semi-supervised methods in the context of facial expression recognition (FER).
Our investigation covers semi-supervised learning from in-distribution, out-of-distribution, unconstrained, and very small unlabelled data.
Our results demonstrate that FixMatch consistently achieves better performance on in-distribution unlabelled data, while ReMixMatch stands out among all methods for out-of-distribution, unconstrained, and scarce unlabelled data scenarios.
arXiv Detail & Related papers (2023-06-02T01:40:08Z)
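Since the headline finding concerns FixMatch, a minimal sketch of its unlabelled-data rule may be useful: pseudo-label a weakly augmented view and train on the strongly augmented view only where the model is confident. The linear model, feature sizes, and 7-class head below are placeholders.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabelled_loss(model, x_weak, x_strong, threshold=0.95):
    """Pseudo-label the weak view; train the strong view on confident ones."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf.ge(threshold).float()          # 1 only where confident
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (loss * mask).mean()

# Toy usage: random tensors stand in for two augmentations of one batch.
# (An untrained model rarely clears the threshold, so the loss is often 0.)
model = torch.nn.Linear(32, 7)                     # e.g. 7 expression classes
x_w, x_s = torch.randn(16, 32), torch.randn(16, 32)
print(fixmatch_unlabelled_loss(model, x_w, x_s))
```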
- Decentralized Learning with Multi-Headed Distillation [12.90857834791378]
Decentralized learning with private data is a central problem in machine learning.
We propose a novel distillation-based decentralized learning technique that allows multiple agents with private non-iid data to learn from each other.
arXiv Detail & Related papers (2022-11-28T21:01:43Z)
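A heavily simplified sketch of the distillation idea: agents exchange only soft predictions on a shared reference batch and each distills toward its peers' average. The multi-headed architecture of the paper is omitted, reducing this to plain mutual distillation.

```python
import torch
import torch.nn.functional as F

agents = [torch.nn.Linear(16, 4) for _ in range(3)]   # 3 agents, 4 classes
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in agents]
x_ref = torch.randn(32, 16)        # shared reference batch, no labels needed
T = 2.0                            # distillation temperature

with torch.no_grad():              # snapshot everyone's current beliefs
    soft = [F.softmax(m(x_ref) / T, dim=1) for m in agents]

for i, (m, opt) in enumerate(zip(agents, opts)):
    peers = torch.stack([p for j, p in enumerate(soft) if j != i]).mean(0)
    loss = F.kl_div(F.log_softmax(m(x_ref) / T, dim=1), peers,
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()                     # private data never leaves any agent
```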
- Prompt-driven efficient Open-set Semi-supervised Learning [52.30303262499391]
Open-set semi-supervised learning (OSSL) investigates a more practical scenario in which out-of-distribution (OOD) samples are contained only in the unlabeled data.
We propose a prompt-driven efficient OSSL framework, called OpenPrompt, which can propagate class information from labeled to unlabeled data with only a small number of trainable parameters.
arXiv Detail & Related papers (2022-09-28T16:25:08Z)
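One way to picture the "small number of trainable parameters" claim, purely as an assumption-laden illustration rather than the OpenPrompt design itself: a frozen feature extractor plus a few learnable class-prompt vectors, with low maximum similarity treated as an OOD signal.

```python
import torch
import torch.nn.functional as F

backbone = torch.nn.Linear(64, 128)     # stand-in feature extractor, frozen
for p in backbone.parameters():
    p.requires_grad_(False)

n_classes = 10
prompts = torch.nn.Parameter(torch.randn(n_classes, 128))  # only trainables

def classify(x, tau=0.5):
    z = F.normalize(backbone(x), dim=1)
    sims = z @ F.normalize(prompts, dim=1).t()   # cosine similarity to prompts
    is_ood = sims.max(dim=1).values < tau        # low similarity -> likely OOD
    return sims, is_ood

sims, ood = classify(torch.randn(8, 64))
print("flagged as OOD:", ood.tolist())
```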
- How unfair is private learning? [13.815080318918833]
We show that, when the data has a long-tailed structure, it is not possible to build accurate learning algorithms that are both private and fair.
We show that relaxing overall accuracy can lead to good fairness even with strict privacy requirements.
arXiv Detail & Related papers (2022-06-08T16:03:44Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
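The coordination of local and private centralized learning can be illustrated on toy mean estimation: each client blends its own local estimate with a differentially private global average, and the mixing weight alpha traces out the accuracy-privacy tradeoff. All constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clients, n_local = 50, 20
client_means = rng.normal(0.0, 1.0, size=n_clients)        # heterogeneity
data = client_means[:, None] + rng.normal(size=(n_clients, n_local))

local = data.mean(axis=1)                                  # per-client estimate
eps, clip = 1.0, 3.0                                       # DP for the average
contrib = np.clip(local, -clip, clip)
global_dp = contrib.mean() + rng.laplace(scale=2 * clip / (eps * n_clients))

for alpha in (0.0, 0.5, 1.0):                              # 0 = pure global
    personalized = alpha * local + (1 - alpha) * global_dp
    print(f"alpha={alpha:.1f}  mse={np.mean((personalized - client_means)**2):.3f}")
```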
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
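The textbook baseline for label differential privacy is k-ary randomized response, sketched below. The paper's training algorithm is more sophisticated, so treat this only as the standard starting point for label-DP mechanisms.

```python
import numpy as np

def randomize_labels(y, k, eps, rng):
    """k-ary randomized response: report the true label with calibrated
    probability, otherwise a uniform *other* label (exact eps-label-DP)."""
    keep_p = np.exp(eps) / (np.exp(eps) + k - 1)
    keep = rng.random(len(y)) < keep_p
    shift = rng.integers(1, k, size=len(y))     # uniform over the other labels
    return np.where(keep, y, (y + shift) % k)

rng = np.random.default_rng(3)
y = rng.integers(0, 10, size=10_000)            # true sensitive labels
y_priv = randomize_labels(y, k=10, eps=1.0, rng=rng)
print("fraction of labels preserved:", (y == y_priv).mean())
```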
- Learning to Count in the Crowd from Limited Labeled Data [109.2954525909007]
We focus on reducing annotation effort by learning to count in the crowd from a limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that involves estimation of pseudo-ground truth for the unlabeled data.
arXiv Detail & Related papers (2020-07-07T04:17:01Z)
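A simplified stand-in for the Gaussian Process pseudo-ground-truth step: fit a GP on the few labelled pairs, pseudo-label the unlabelled pool, and keep only low-uncertainty predictions. Real crowd counting regresses density maps; the 1-D features here are purely illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
truth = lambda x: 50 * np.abs(np.sin(x)).ravel()   # "true" crowd count
X_lab = rng.uniform(0, 3, size=(15, 1))            # few labelled samples
y_lab = truth(X_lab) + rng.normal(0, 1, size=15)
X_unl = rng.uniform(0, 3, size=(200, 1))           # plentiful unlabelled pool

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1.0)
gp.fit(X_lab, y_lab)
pseudo, std = gp.predict(X_unl, return_std=True)   # pseudo-GT + uncertainty
keep = std < np.quantile(std, 0.5)                 # trust the certain half
mae = np.abs(pseudo[keep] - truth(X_unl[keep])).mean()
print(f"kept {keep.sum()} pseudo-labels, MAE = {mae:.2f}")
```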
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
Since training on the enlarged dataset is costly, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
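The "exploit reliable samples" step can be sketched as confidence-based selection by a primitive model (the dataset-distillation compression stage is omitted, as it is considerably more involved). The model, features, and threshold below are placeholders.

```python
import torch
import torch.nn.functional as F

primitive = torch.nn.Linear(48, 7)      # stand-in "primitive" FER model
pool = torch.randn(1000, 48)            # features of unlabelled face images

with torch.no_grad():
    probs = F.softmax(primitive(pool), dim=1)
    conf, labels = probs.max(dim=1)

# Illustrative threshold: a trained model clears it far more often than
# this untrained stand-in does.
reliable = conf > 0.5
new_x, new_y = pool[reliable], labels[reliable]
print(f"selected {reliable.sum().item()} of {len(pool)} unlabelled samples")
```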
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.