Bayesian Inference Forgetting
- URL: http://arxiv.org/abs/2101.06417v2
- Date: Thu, 18 Feb 2021 09:05:14 GMT
- Title: Bayesian Inference Forgetting
- Authors: Shaopeng Fu, Fengxiang He, Yue Xu, Dacheng Tao
- Abstract summary: The right to be forgotten has been legislated in many countries but the enforcement in machine learning would cause unbearable costs.
This paper proposes a Bayesian inference forgetting (BIF) framework to realize the right to be forgotten in Bayesian inference.
- Score: 82.6681466124663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The right to be forgotten has been legislated in many countries but the
enforcement in machine learning would cause unbearable costs: companies may
need to delete whole models learned from massive resources due to single
individual requests. Existing works propose to remove the knowledge learned
from the requested data via its influence function, which is no longer
naturally well-defined in Bayesian inference. This paper proposes a Bayesian
inference forgetting (BIF) framework to realize the right to be forgotten in
Bayesian inference. In the BIF framework, we develop forgetting algorithms for
variational inference and Markov chain Monte Carlo. We show that our algorithms
can provably remove the influence of individual data points on the learned models.
Theoretical analysis demonstrates that our algorithms have guaranteed
generalizability. Experiments of Gaussian mixture models on the synthetic
dataset and Bayesian neural networks on the real-world data verify the
feasibility of our methods. The source code package is available at
https://github.com/fshp971/BIF.
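The influence-function idea the abstract refers to can be illustrated on a simple model. The sketch below is an assumption-laden toy, not the paper's BIF algorithm: it removes one training point from a ridge-regression fit with a single Newton step on the leave-one-out objective, which for a quadratic loss recovers exact retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit(X, y):
    # Ridge solution: theta = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

theta = fit(X, y)

i = 0                                    # index of the point to "forget"
# Hessian of the regularized loss with point i's contribution removed
H_minus = X.T @ X + lam * np.eye(d) - np.outer(X[i], X[i])
# Gradient of the removed point's squared loss, evaluated at theta
g = X[i] * (X[i] @ theta - y[i])
# One Newton step: theta' = theta + H_minus^{-1} * g
theta_forgot = theta + np.linalg.solve(H_minus, g)

theta_retrain = fit(np.delete(X, i, axis=0), np.delete(y, i))
print(np.allclose(theta_forgot, theta_retrain))  # True: exact for quadratic loss
```

For non-quadratic losses this one-step update is only an approximation, which is part of why extending such removal guarantees to Bayesian inference (where the influence function is not naturally well-defined) requires the separate treatment the paper develops.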
Related papers
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- Partially Oblivious Neural Network Inference [4.843820624525483]
We show that for neural network models, like CNNs, some information leakage can be acceptable.
We experimentally demonstrate that in a CIFAR-10 network we can leak up to 80% of the model's weights with practically no security impact.
arXiv Detail & Related papers (2022-10-27T05:39:36Z)
- Knowledge Removal in Sampling-based Bayesian Inference [86.14397783398711]
When a single data-deletion request arrives, companies may need to delete entire models learned with massive resources.
Existing works propose methods to remove knowledge learned from data for explicitly parameterized models.
In this paper, we propose the first machine unlearning algorithm for MCMC.
arXiv Detail & Related papers (2022-03-24T10:03:01Z)
- ReLU Regression with Massart Noise [52.10842036932169]
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data.
We focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model.
We develop an efficient algorithm that achieves exact parameter recovery in this model.
arXiv Detail & Related papers (2021-09-10T02:13:22Z)
- A Bayesian Framework for Information-Theoretic Probing [51.98576673620385]
We argue that probing should be seen as approximating a mutual information.
That framing led to the rather unintuitive conclusion that representations encode exactly the same information about a target task as the original sentences.
This paper proposes a new framework to measure what we term Bayesian mutual information.
arXiv Detail & Related papers (2021-09-08T18:08:36Z)
- Extending the statistical software package Engine for Likelihood-Free Inference [0.0]
This dissertation focuses on the implementation of the Robust optimisation Monte Carlo (ROMC) method in the software package Engine for Likelihood-Free Inference (ELFI).
Our implementation provides a robust and efficient solution to a practitioner who wants to perform inference on a simulator-based model.
arXiv Detail & Related papers (2020-11-08T13:22:37Z)
- Variational Bayesian Unlearning [54.26984662139516]
We study the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased.
We show that it is equivalent to minimizing an evidence upper bound that trades off fully unlearning from the erased data against not entirely forgetting the posterior belief.
In model training with VI, only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging.
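In conjugate models, by contrast, the erased points' likelihood can be divided out of the posterior exactly, which gives a useful reference point for such approximate unlearning. The sketch below is a hypothetical toy (a Gaussian mean model with known noise variance, not the paper's variational method): it "unlearns" erased points by subtracting their contribution in natural parameters (precision and precision-weighted mean).

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)
erase = data[:10]                       # points to forget
sigma2, mu0, tau2 = 1.0, 0.0, 10.0      # known noise variance, prior N(mu0, tau2)

def posterior(xs):
    # Conjugate Gaussian posterior over the mean, as (mean, precision)
    prec = 1.0 / tau2 + len(xs) / sigma2
    mean = (mu0 / tau2 + xs.sum() / sigma2) / prec
    return mean, prec

m_full, p_full = posterior(data)
# Unlearn: subtract the erased points' contribution in natural parameters.
p_un = p_full - len(erase) / sigma2
m_un = (p_full * m_full - erase.sum() / sigma2) / p_un

m_ret, p_ret = posterior(data[10:])
print(np.allclose([m_un, p_un], [m_ret, p_ret]))  # True: exact in the conjugate case
```

With variational inference only an approximate posterior is available, so this exact division is no longer possible; that gap is what makes the unlearning problem in the paper above genuinely harder.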
arXiv Detail & Related papers (2020-10-24T11:53:00Z)
- GRAFFL: Gradient-free Federated Learning of a Bayesian Generative Model [8.87104231451079]
This paper presents the first gradient-free federated learning framework called GRAFFL.
It uses implicit information derived from each participating institution to learn posterior distributions of parameters.
We propose the GRAFFL-based Bayesian mixture model to serve as a proof-of-concept of the framework.
arXiv Detail & Related papers (2020-08-29T07:19:44Z)
- Towards Deep Learning Models Resistant to Large Perturbations [0.0]
Adversarial robustness has proven to be a required property of machine learning algorithms.
We show that the well-established algorithm called "adversarial training" fails to train a deep neural network given a large, but reasonable, perturbation magnitude.
arXiv Detail & Related papers (2020-03-30T12:03:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.