Opacus: User-Friendly Differential Privacy Library in PyTorch
- URL: http://arxiv.org/abs/2109.12298v1
- Date: Sat, 25 Sep 2021 07:10:54 GMT
- Title: Opacus: User-Friendly Differential Privacy Library in PyTorch
- Authors: Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide
Testuggine, Karthik Prasad, Mani Malek, John Nguyen, Sayan Ghosh, Akash
Bharadwaj, Jessica Zhao, Graham Cormode, Ilya Mironov
- Abstract summary: We introduce Opacus, a free, open-source PyTorch library for training deep learning models with differential privacy.
It provides a simple and user-friendly API, and enables machine learning practitioners to make a training pipeline private by adding as little as two lines to their code.
- Score: 54.8720687562153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Opacus, a free, open-source PyTorch library for training deep
learning models with differential privacy (hosted at opacus.ai). Opacus is
designed for simplicity, flexibility, and speed. It provides a simple and
user-friendly API, and enables machine learning practitioners to make a
training pipeline private by adding as little as two lines to their code. It
supports a wide variety of layers, including multi-head attention, convolution,
LSTM, and embedding, right out of the box, and it also provides the means for
supporting other user-defined layers. Opacus computes batched per-sample
gradients, providing better efficiency compared to the traditional "micro
batch" approach. In this paper we present Opacus, detail the principles that
drove its implementation and unique features, and compare its performance
against other frameworks for differential privacy in ML.
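As an illustration of the "two lines of code" claim, the sketch below wraps an ordinary PyTorch training setup with Opacus's PrivacyEngine. The toy model, optimizer, and data loader are placeholders, and the make_private argument names follow the interface documented at opacus.ai; details may differ between Opacus versions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Ordinary (non-private) PyTorch pipeline: toy model, optimizer, data.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
dataset = TensorDataset(torch.randn(256, 20), torch.randint(0, 2, (256,)))
data_loader = DataLoader(dataset, batch_size=32)

# The "two lines": create the engine and make the pipeline private.
privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,  # scale of Gaussian noise added to clipped gradients
    max_grad_norm=1.0,     # per-sample gradient clipping threshold
)

criterion = nn.CrossEntropyLoss()
for x, y in data_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()   # per-sample gradients are computed in a batched fashion
    optimizer.step()  # clipping and noise addition happen inside the DP optimizer
```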
Related papers
- Loop Improvement: An Efficient Approach for Extracting Shared Features from Heterogeneous Data without Central Server [16.249442761713322]
"Loop Improvement" (LI) is a novel method enhancing this separation and feature extraction without necessitating a central server or data interchange among participants.
In personalized federated learning environments, LI consistently outperforms the advanced FedALA algorithm in accuracy across diverse scenarios.
LI's adaptability extends to multi-task learning, streamlining the extraction of common features across tasks and obviating the need for simultaneous training.
arXiv Detail & Related papers (2024-03-21T12:59:24Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF).
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
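The following is a generic sketch of the distribution-based feature replay idea as described in this summary, not LP-DiF's actual implementation: the per-class Gaussian statistics, the simplified additive prompt, and the linear classifier are all illustrative assumptions.

```python
import torch

# Hypothetical per-class feature statistics saved from earlier sessions
# (class id -> (mean, std) of a diagonal Gaussian over the feature space).
old_class_stats = {0: (torch.zeros(64), torch.ones(64)),
                   1: (torch.ones(64) * 0.5, torch.ones(64))}

def sample_pseudo_features(stats, n_per_class):
    """Draw pseudo-features for old classes from their stored Gaussians."""
    feats, labels = [], []
    for cls, (mu, sigma) in stats.items():
        feats.append(mu + sigma * torch.randn(n_per_class, mu.shape[0]))
        labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

# Learnable prompt (simplified here to a vector added to features) and classifier.
prompt = torch.zeros(64, requires_grad=True)
classifier = torch.nn.Linear(64, 10)
optimizer = torch.optim.SGD([prompt, *classifier.parameters()], lr=0.01)

# Features extracted from the current session's training images (placeholder).
new_feats = torch.randn(32, 64)
new_labels = torch.randint(2, 10, (32,))

# Combine current-session features with replayed pseudo-features of old classes.
old_feats, old_labels = sample_pseudo_features(old_class_stats, n_per_class=16)
feats = torch.cat([new_feats, old_feats])
labels = torch.cat([new_labels, old_labels])

loss = torch.nn.functional.cross_entropy(classifier(feats + prompt), labels)
loss.backward()
optimizer.step()
```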
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
- Scalable Federated Learning for Clients with Different Input Image Sizes and Numbers of Output Categories [34.22635158366194]
Federated learning is a privacy-preserving training method that trains a model across a plurality of clients without sharing their confidential data.
We propose an effective federated learning method named ScalableFL, where the depths and widths of the local models are adjusted according to each client's input image size and number of output categories.
arXiv Detail & Related papers (2023-11-15T05:43:14Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
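For context, the sketch below shows the cut-layer exchange that this summary attributes to split learning, simulated on a single machine; it is not the paper's contrastive-distillation method, and the network split and shapes are arbitrary assumptions.

```python
import torch
from torch import nn

# Client holds the front of the network; server holds the rest.
client_net = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
server_net = nn.Sequential(nn.Linear(32, 10))
client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)

x = torch.randn(16, 20)
y = torch.randint(0, 10, (16,))

# Client forward pass up to the cut layer; these activations ("smashed data")
# are what would be sent over the network to the server.
smashed = client_net(x)
smashed_remote = smashed.detach().requires_grad_(True)

# Server completes the forward/backward pass and returns the gradient
# of the loss with respect to the cut-layer activations.
loss = nn.functional.cross_entropy(server_net(smashed_remote), y)
server_opt.zero_grad()
loss.backward()
server_opt.step()

# Client receives the cut-layer gradient and finishes backpropagation locally.
client_opt.zero_grad()
smashed.backward(smashed_remote.grad)
client_opt.step()
```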
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- CECILIA: Comprehensive Secure Machine Learning Framework [2.949446809950691]
We propose a secure 3-party framework, CECILIA, offering privacy-preserving (PP) building blocks to enable complex operations privately.
CECILIA also provides two novel methods: the exact exponential of a public base raised to the power of a secret value, and the inverse square root of a secret Gram matrix.
The framework shows great promise for making other ML algorithms, as well as further computations, privately computable.
arXiv Detail & Related papers (2022-02-07T09:27:34Z)
- Routing with Self-Attention for Multimodal Capsule Networks [108.85007719132618]
We present a new multimodal capsule network that allows us to leverage the strength of capsules in the context of a multimodal learning framework.
To adapt the capsules to large-scale input data, we propose a novel routing by self-attention mechanism that selects relevant capsules.
This allows not only for robust training with noisy video data, but also for scaling up the size of the capsule network compared to traditional routing methods.
arXiv Detail & Related papers (2021-12-01T19:01:26Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
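For reference, the sketch below shows the plain FedAvg aggregation step that this equivalence refers to: a weighted average of client model parameters. The client models and sample counts are placeholder assumptions, not the paper's EM formulation.

```python
import torch
from torch import nn

def fedavg_aggregate(client_models, client_sizes):
    """Weighted average of client parameters, as in FedAvg."""
    total = sum(client_sizes)
    global_state = {}
    for name in client_models[0].state_dict():
        global_state[name] = sum(
            (n / total) * m.state_dict()[name]
            for m, n in zip(client_models, client_sizes)
        )
    return global_state

# Placeholder clients: same architecture, assumed locally trained weights.
clients = [nn.Linear(10, 2) for _ in range(3)]
sizes = [100, 50, 150]  # number of local examples per client

global_model = nn.Linear(10, 2)
global_model.load_state_dict(fedavg_aggregate(clients, sizes))
```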
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Fedlearn-Algo: A flexible open-source privacy-preserving machine learning platform [15.198116661595487]
We present Fedlearn-Algo, an open-source privacy-preserving machine learning platform.
We use this platform to demonstrate our research and development results on privacy preserving machine learning algorithms.
arXiv Detail & Related papers (2021-07-08T21:59:56Z)
- Captum: A unified and generic model interpretability library for PyTorch [49.72749684393332]
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
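As a usage illustration, the sketch below runs one of Captum's gradient-based attribution algorithms (IntegratedGradients) on a toy classifier; the model and the target class index are illustrative assumptions.

```python
import torch
from torch import nn
from captum.attr import IntegratedGradients

# Toy classifier; any differentiable PyTorch model can be used the same way.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

inputs = torch.randn(4, 8, requires_grad=True)

# Gradient-based attribution: how much each input feature contributed
# to the score of class 0 for each example.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)
print(attributions.shape)  # same shape as inputs: (4, 8)
```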
arXiv Detail & Related papers (2020-09-16T18:57:57Z)
- GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model [8.87104231451079]
This paper presents the first gradient-free federated learning framework called GRAFFL.
It uses implicit information derived from each participating institution to learn posterior distributions of parameters.
We propose the GRAFFL-based Bayesian mixture model to serve as a proof-of-concept of the framework.
arXiv Detail & Related papers (2020-08-29T07:19:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.