Reconstructing Training Data from Multiclass Neural Networks
- URL: http://arxiv.org/abs/2305.03350v1
- Date: Fri, 5 May 2023 08:11:00 GMT
- Title: Reconstructing Training Data from Multiclass Neural Networks
- Authors: Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi and Michal Irani
- Abstract summary: Reconstructing samples from the training set of trained neural networks is a major privacy concern.
We show that training-data reconstruction is possible in the multi-class setting and that the reconstruction quality is even higher than in the case of binary classification.
- Score: 20.736732081151363
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing samples from the training set of trained neural networks is a
major privacy concern. Haim et al. (2022) recently showed that it is possible
to reconstruct training samples from neural network binary classifiers, based
on theoretical results about the implicit bias of gradient methods. In this
work, we present several improvements and new insights over this previous work.
As our main improvement, we show that training-data reconstruction is possible
in the multi-class setting and that the reconstruction quality is even higher
than in the case of binary classification. Moreover, we show that using
weight-decay during training increases the vulnerability to sample
reconstruction. Finally, while in the previous work the training set was of
size at most $1000$ from $10$ classes, we show preliminary evidence of the
ability to reconstruct from a model trained on $5000$ samples from $100$
classes.
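
For intuition, the sketch below shows the kind of KKT-based reconstruction objective this line of work builds on: candidate inputs and non-negative multipliers are optimized so that the trained weights match a weighted sum of margin gradients. It is a minimal illustration only; the model, the margin definition, and every name and hyperparameter (SimpleMLP, reconstruct, n_candidates, ...) are assumptions, not the paper's exact scheme.

```python
# Minimal sketch (PyTorch) of KKT-based reconstruction in the multiclass
# setting. Illustrative assumptions throughout; not the paper's algorithm.
import torch
import torch.nn as nn

class SimpleMLP(nn.Module):
    def __init__(self, d_in=64, d_hidden=256, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden, bias=False), nn.ReLU(),
            nn.Linear(d_hidden, n_classes, bias=False),
        )

    def forward(self, x):
        return self.net(x)

def reconstruct(model, n_candidates=100, n_classes=10, d_in=64, steps=1000):
    theta = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
    # Candidate samples, assigned labels, and non-negative KKT multipliers.
    x = torch.randn(n_candidates, d_in, requires_grad=True)
    y = torch.randint(0, n_classes, (n_candidates,))
    lam = torch.ones(n_candidates, requires_grad=True)
    opt = torch.optim.Adam([x, lam], lr=0.01)
    rows = torch.arange(n_candidates)
    for _ in range(steps):
        opt.zero_grad()
        out = model(x)
        # Multiclass margin: assigned-class score minus the best other
        # score (a single multiplier per sample, for brevity).
        others = out.clone()
        others[rows, y] = float("-inf")
        margins = out[rows, y] - others.max(dim=1).values
        weighted = (lam.relu() * margins).sum()
        grads = torch.autograd.grad(weighted, model.parameters(), create_graph=True)
        g = torch.cat([gi.reshape(-1) for gi in grads])
        # Stationarity residual: the trained weights should be a
        # non-negative combination of margin gradients at training points.
        loss = (theta - g).pow(2).sum()
        loss.backward()
        opt.step()
    return x.detach()

x_rec = reconstruct(SimpleMLP())  # candidates to compare against real data
```

In practice the optimized candidates are matched against the true training set to measure reconstruction quality; the paper's scheme adds further ingredients (initialization, scaling, output priors) omitted here.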
Related papers
- Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses [28.203535970330343]
Haim et al. (2022) proposed a scheme to reconstruct training samples from multilayer perceptron binary classifiers.
We extend their findings in several directions, including reconstruction from multiclass and convolutional neural networks.
We study the various factors that contribute to networks' susceptibility to such reconstruction schemes.
arXiv Detail & Related papers (2023-07-04T17:09:49Z)
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite-width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.
arXiv Detail & Related papers (2023-02-02T21:41:59Z)
- Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, our findings suggest potential severe threats to privacy.
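
The summary above does not spell out the provable procedure, so purely as an illustration of the attack surface, here is a generic gradient-matching sketch in the spirit of this family of attacks: dummy inputs and soft labels are optimized until their loss gradient matches the one observed gradient. All shapes, names, and hyperparameters are assumptions.

```python
# Generic gradient-matching sketch (PyTorch); illustrative only, not the
# paper's provable algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(32, 10)                    # toy model at random parameters
x_true = torch.randn(4, 32)                  # the "private" training batch
y_true = torch.randint(0, 10, (4,))

# The attacker observes a single loss gradient at the current parameters.
g_true = torch.autograd.grad(F.cross_entropy(model(x_true), y_true),
                             model.parameters())

# Recover the batch by matching gradients from dummy data and soft labels.
x_hat = torch.randn(4, 32, requires_grad=True)
y_hat = torch.randn(4, 10, requires_grad=True)
opt = torch.optim.Adam([x_hat, y_hat], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss_hat = (-F.log_softmax(model(x_hat), 1) * F.softmax(y_hat, 1)).sum(1).mean()
    g_hat = torch.autograd.grad(loss_hat, model.parameters(), create_graph=True)
    match = sum((a - b).pow(2).sum() for a, b in zip(g_hat, g_true))
    match.backward()
    opt.step()
# x_hat now approximates x_true (recoverable exactly in simple settings).
```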
arXiv Detail & Related papers (2022-12-07T15:32:22Z)
- Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show that, in some cases, a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z)
- Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations [51.552870594221865]
We show that last layer retraining can match or outperform state-of-the-art approaches on spurious correlation benchmarks.
We also show that last layer retraining on large ImageNet-trained models can significantly reduce reliance on background and texture information.
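
Since the recipe here is simple, a short sketch may help: freeze the pretrained backbone and refit only the final linear layer. The random tensors stand in for the paper's group-balanced reweighting set, and the class count is an arbitrary assumption (the weights API assumes torchvision >= 0.13).

```python
# Last-layer retraining sketch (PyTorch/torchvision); data are synthetic
# stand-ins for a group-balanced held-out set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad_(False)                     # freeze the feature extractor
model.fc = nn.Linear(model.fc.in_features, 10)  # fresh head; 10 classes assumed
model.eval()                                    # keep batch-norm statistics fixed

xs = torch.randn(64, 3, 224, 224)               # stand-in reweighting data
ys = torch.randint(0, 10, (64,))
loader = DataLoader(TensorDataset(xs, ys), batch_size=16)

opt = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
for x, y in loader:
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```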
arXiv Detail & Related papers (2022-04-06T16:55:41Z)
- Few-shot Transfer Learning for Holographic Image Reconstruction using a Recurrent Neural Network [0.30586855806896046]
We present a few-shot transfer learning method that helps a holographic image reconstruction deep neural network rapidly generalize to new types of samples using small datasets.
We validated the effectiveness of this approach by successfully generalizing to new types of samples using small holographic datasets for training.
arXiv Detail & Related papers (2022-01-27T05:51:36Z)
- Logarithmic Continual Learning [11.367079056418957]
We introduce a neural network architecture that logarithmically reduces the number of self-rehearsal steps in the generative rehearsal of continually learned models.
In continual learning (CL), training samples come in subsequent tasks, and the trained model can access only a single task at a time.
arXiv Detail & Related papers (2022-01-17T17:29:16Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, since performance prediction requires training the model.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width [116.69845849754186]
Taylorized training involves training the $k$-th order Taylor expansion of the neural network.
We show that Taylorized training agrees with full neural network training increasingly better as we increase $k$.
We complement our experiments with theoretical results showing that the approximation error of $k$-th order Taylorized models decays exponentially in $k$ for wide neural networks.
arXiv Detail & Related papers (2020-02-10T18:37:04Z)
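
To make the $k = 1$ case of the entry above concrete, here is a minimal sketch that trains the first-order Taylor expansion of a small network around its initialization. The architecture, data, and the double-backward Jacobian-vector-product trick are illustrative assumptions, not the paper's setup.

```python
# First-order (k = 1) Taylorized training sketch (PyTorch): train the
# linearization f(w0; x) + J_w f(w0; x) @ (w - w0) of a small network.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))
params = list(net.parameters())                # held fixed at w0 throughout
delta = [torch.zeros_like(p, requires_grad=True) for p in params]  # w - w0

def f_lin(x):
    out = net(x)
    # Jacobian-vector product J_w f(w0; x) @ delta via the standard
    # double-backward trick (two reverse-mode passes).
    v = torch.zeros_like(out, requires_grad=True)
    vjp = torch.autograd.grad(out, params, grad_outputs=v, create_graph=True)
    s = sum((g * d).sum() for g, d in zip(vjp, delta))
    jvp = torch.autograd.grad(s, v, create_graph=True)[0]
    return out.detach() + jvp

x, y = torch.randn(128, 16), torch.randn(128, 1)
opt = torch.optim.SGD(delta, lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((f_lin(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```

Higher orders ($k > 1$) extend this by adding the corresponding Taylor terms; only the displacement `delta` is trained, while the base network stays at its initialization.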
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.