Exploring the Benefits of Visual Prompting in Differential Privacy
- URL: http://arxiv.org/abs/2303.12247v2
- Date: Wed, 30 Aug 2023 14:09:13 GMT
- Title: Exploring the Benefits of Visual Prompting in Differential Privacy
- Authors: Yizhe Li, Yu-Lin Tsai, Xuebin Ren, Chia-Mu Yu, Pin-Yu Chen
- Abstract summary: Visual Prompting (VP) is an emerging and powerful technique that allows sample-efficient adaptation to downstream tasks by engineering a well-trained frozen source model.
We explore and integrate VP into canonical DP training methods and demonstrate its simplicity and efficiency.
- Score: 54.56619360046841
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual Prompting (VP) is an emerging and powerful technique that allows
sample-efficient adaptation to downstream tasks by engineering a well-trained
frozen source model. In this work, we explore the benefits of VP in
constructing compelling neural network classifiers with differential privacy
(DP). We explore and integrate VP into canonical DP training methods and
demonstrate its simplicity and efficiency. In particular, we discover that VP
in tandem with PATE, a state-of-the-art DP training method that leverages the
knowledge transfer from an ensemble of teachers, achieves the state-of-the-art
privacy-utility trade-off with minimum expenditure of privacy budget. Moreover,
we conduct additional experiments on cross-domain image classification with a
sufficient domain gap to further unveil the advantage of VP in DP. Lastly, we
also conduct extensive ablation studies to validate the effectiveness and
contribution of VP under DP consideration. Our code is available at
(https://github.com/EzzzLi/Prompt-PATE).
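The two ingredients the abstract combines can be sketched in a few lines: a visual prompt is a learnable perturbation (here confined to an image border) added to inputs before they are fed to a frozen classifier, and PATE aggregates an ensemble of teachers' votes through a noisy argmax. The sketch below is a minimal illustration under assumed shapes and random stand-in "teacher" weights, not the paper's Prompt-PATE implementation; all names (`prompted_predict`, `noisy_argmax`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Visual prompting: learnable border perturbation, frozen model ---
H, W, pad, n_classes = 32, 32, 4, 10
mask = np.zeros((H, W, 3))          # 1 on the border, 0 in the interior
mask[:pad] = mask[-pad:] = 1.0
mask[:, :pad] = mask[:, -pad:] = 1.0
prompt = rng.normal(scale=0.1, size=(H, W, 3))  # the only trainable tensor

# Stand-in frozen teachers: each a fixed linear map over flattened pixels.
teachers = [rng.normal(size=(n_classes, H * W * 3)) for _ in range(25)]

def prompted_predict(weights, x):
    """Add the prompt on the border, then query the frozen model."""
    return int(np.argmax(weights @ (x + mask * prompt).reshape(-1)))

# --- PATE-style aggregation: noisy argmax over teacher votes ---
def noisy_argmax(votes, epsilon):
    """Add Laplace(2/epsilon) noise to each vote count, return the winner."""
    counts = np.bincount(votes, minlength=n_classes).astype(float)
    counts += rng.laplace(scale=2.0 / epsilon, size=counts.shape)
    return int(np.argmax(counts))

x = rng.normal(size=(H, W, 3))
votes = np.array([prompted_predict(w, x) for w in teachers])
label = noisy_argmax(votes, epsilon=1.0)
```

Only the prompt is updated during training, which is what makes the adaptation sample-efficient; the privacy cost is paid once per aggregated query rather than per gradient step.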
Related papers
- Differentially Private Policy Gradient [48.748194765816955]
We show that it is possible to find the right trade-off between privacy noise and trust-region size to obtain a performant differentially private policy gradient algorithm.
Our results and the complexity of the tasks addressed represent a significant improvement over existing DP algorithms in online RL.
arXiv Detail & Related papers (2025-01-31T12:11:13Z) - LLM-based Privacy Data Augmentation Guided by Knowledge Distillation
with a Distribution Tutor for Medical Text Classification [67.92145284679623]
We propose a DP-based tutor that models the noised private distribution and controls samples' generation with a low privacy cost.
We theoretically analyze our model's privacy protection and empirically verify our model.
arXiv Detail & Related papers (2024-02-26T11:52:55Z) - DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated
Learning as a Service [15.94482624965024]
Federated learning (FL) has emerged as a prevalent distributed machine learning scheme.
We propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes both efficiency and fairness.
We show that DPBalance achieves an average efficiency improvement of $1.44\times \sim 3.49\times$ and an average fairness improvement of $1.37\times \sim 24.32\times$.
arXiv Detail & Related papers (2024-02-15T05:19:53Z) - DistilVPR: Cross-Modal Knowledge Distillation for Visual Place
Recognition [27.742693995915808]
DistilVPR is a novel distillation pipeline for visual place recognition.
We propose leveraging feature relationships from multiple agents, including self-agents and cross-agents for teacher and student neural networks.
The experiments demonstrate that our proposed pipeline achieves state-of-the-art performance compared to other distillation baselines.
arXiv Detail & Related papers (2023-12-17T05:59:06Z) - Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced
Transfer Learning [66.20311762506702]
Dataset pruning (DP) has emerged as an effective way to improve data efficiency.
We propose two new DP methods, label mapping and feature mapping, for supervised and self-supervised pretraining settings.
We show that source data classes can be pruned by up to 40%-80% without sacrificing downstream performance.
arXiv Detail & Related papers (2023-10-13T00:07:49Z) - Sequential Information Design: Markov Persuasion Process and Its
Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely the Markov persuasion processes (MPPs)
Planning in MPPs faces the unique challenge of finding a signaling policy that is simultaneously persuasive to the myopic receivers and induces the optimal long-term cumulative utilities of the sender.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
arXiv Detail & Related papers (2022-02-22T05:41:43Z) - Differentially Private Federated Bayesian Optimization with Distributed
Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z) - Learning Meta Pattern for Face Anti-Spoofing [26.82129880310214]
Face Anti-Spoofing (FAS) is essential to secure face recognition systems.
Recent hybrid methods have been explored to extract task-aware handcrafted features.
We propose a learnable network to extract Meta Pattern (MP) in our learning-to-learn framework.
arXiv Detail & Related papers (2021-10-13T14:34:20Z) - User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
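The UDP idea summarized above (perturbing shared models before upload so a curious server cannot recover private information) reduces, in its simplest form, to clipping the update to bound per-user sensitivity and adding calibrated Gaussian noise. The sketch below uses the standard Gaussian mechanism calibration and is an assumption-laden illustration, not the UDP paper's exact algorithm; the function name is hypothetical.

```python
import numpy as np

def user_level_dp_update(update, clip_norm, epsilon, delta, rng):
    """Clip a model update to bound per-user sensitivity, then add Gaussian
    noise with sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon (the
    standard Gaussian-mechanism calibration for (epsilon, delta)-DP)."""
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)  # norm(clipped) <= clip_norm
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

rng = np.random.default_rng(0)
update = rng.normal(size=128)                      # a user's local model update
noisy = user_level_dp_update(update, clip_norm=1.0,
                             epsilon=2.0, delta=1e-5, rng=rng)
```

The clipping step is what makes the noise scale meaningful: without a bound on each user's contribution, no finite sigma yields a privacy guarantee.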
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.