Deconstructing Data Reconstruction: Multiclass, Weight Decay and General
Losses
- URL: http://arxiv.org/abs/2307.01827v2
- Date: Thu, 2 Nov 2023 20:05:44 GMT
- Title: Deconstructing Data Reconstruction: Multiclass, Weight Decay and General
Losses
- Authors: Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv
Nikankin and Michal Irani
- Abstract summary: Haim et al. (2022) proposed a scheme to reconstruct training samples from multilayer perceptron binary classifiers.
We extend their findings in several directions, including reconstruction from multiclass and convolutional neural networks.
We study the various factors that contribute to networks' susceptibility to such reconstruction schemes.
- Score: 28.203535970330343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Memorization of training data is an active research area, yet our
understanding of the inner workings of neural networks is still in its infancy.
Recently, Haim et al. (2022) proposed a scheme to reconstruct training samples
from multilayer perceptron binary classifiers, effectively demonstrating that a
large portion of training samples are encoded in the parameters of such
networks. In this work, we extend their findings in several directions,
including reconstruction from multiclass and convolutional neural networks. We
derive a more general reconstruction scheme which is applicable to a wider
range of loss functions such as regression losses. Moreover, we study the
various factors that contribute to networks' susceptibility to such
reconstruction schemes. Intriguingly, we observe that using weight decay during
training increases reconstructability both in terms of quantity and quality.
Additionally, we examine the influence of the number of neurons relative to the
number of training samples on the reconstructability. Code:
https://github.com/gonbuzaglo/decoreco
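Below is a minimal illustrative sketch of the kind of reconstruction objective the abstract describes: the KKT-based formulation of Haim et al. (2022) for the binary case, where candidate samples and multipliers are optimized so that the trained parameters match the stationarity condition of the max-margin problem. This is a toy sketch, not the authors' decoreco implementation; the network size, labels, multipliers, and hyperparameters are all assumptions.

```python
# Illustrative sketch of a KKT-based reconstruction objective in the spirit of
# Haim et al. (2022): optimize candidate samples x_i and multipliers lambda_i so that
# theta ~= sum_i lambda_i * y_i * grad_theta f(theta; x_i) (max-margin stationarity).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a trained binary MLP classifier (randomly initialized here for brevity).
model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 1))
params = tuple(model.parameters())
theta = torch.cat([p.detach().flatten() for p in params])

m = 20                                                   # number of candidate samples (assumed)
x = torch.randn(m, 10, requires_grad=True)               # candidate training inputs
lam = torch.rand(m, requires_grad=True)                  # candidate KKT multipliers
y = torch.tensor([1.0] * (m // 2) + [-1.0] * (m // 2))   # assumed +/-1 labels

opt = torch.optim.Adam([x, lam], lr=0.01)

def grad_wrt_theta(xi):
    """Gradient of the scalar network output f(theta; xi) with respect to theta."""
    out = model(xi.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, params, create_graph=True)
    return torch.cat([g.flatten() for g in grads])

for step in range(200):
    opt.zero_grad()
    recon = torch.stack(
        [lam[i].clamp(min=0) * y[i] * grad_wrt_theta(x[i]) for i in range(m)]
    ).sum(dim=0)
    loss = (theta - recon).pow(2).sum()   # stationarity residual
    loss.backward()
    opt.step()

print(f"final stationarity residual: {loss.item():.4f}")
```

In the actual scheme the network is genuinely trained to convergence (with weight decay, whose effect the paper studies) and the objective is extended to multiclass outputs and more general losses; the sketch only illustrates the stationarity-matching idea.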
Related papers
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z) - Reconstructing Training Data from Multiclass Neural Networks [20.736732081151363]
Reconstructing samples from the training set of trained neural networks is a major privacy concern.
We show that training-data reconstruction is possible in the multi-class setting and that the reconstruction quality is even higher than in the case of binary classification.
arXiv Detail & Related papers (2023-05-05T08:11:00Z) - Understanding Reconstruction Attacks with the Neural Tangent Kernel and
Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite-width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation: we can retrain on reconstructed images and obtain high predictive accuracy. (A minimal kernel-regression sketch of the infinite-width view appears after this list.)
arXiv Detail & Related papers (2023-02-02T21:41:59Z) - Global quantitative robustness of regression feed-forward neural
networks [0.0]
We adapt the notion of the regression breakdown point to regression neural networks.
We compare the performance, measured by the out-of-sample loss, with a proxy for the breakdown rate.
The results motivate the use of robust loss functions for neural network training.
arXiv Detail & Related papers (2022-11-18T09:57:53Z) - Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show that, in some cases, a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z) - The learning phases in NN: From Fitting the Majority to Fitting a Few [2.5991265608180396]
We analyze a layer's ability to reconstruct the input and its prediction performance based on the evolution of parameters during training.
We also assess the behavior using common datasets and architectures from computer vision such as ResNet and VGG.
arXiv Detail & Related papers (2022-02-16T19:11:42Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Compressive Sensing and Neural Networks from a Statistical Learning
Perspective [4.561032960211816]
We present a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements.
Under realistic conditions, the generalization error scales only logarithmically in the number of layers, and at most linearly in the number of measurements.
arXiv Detail & Related papers (2020-10-29T15:05:43Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural
Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
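As referenced in the NTK entry above, here is a minimal illustrative sketch of the empirical-NTK kernel-regression view that the infinite-width analysis relies on. The toy data and network sizes are assumptions, and this is not the paper's attack code.

```python
# Illustrative sketch: empirical NTK of a small network and the kernel-regression
# predictor that the infinite-width analysis builds on (toy data, not the attack itself).
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 1))
params = tuple(net.parameters())

def param_jacobian(xi):
    """Flattened gradient of the scalar output with respect to all parameters."""
    out = net(xi.unsqueeze(0)).squeeze()
    return torch.cat([g.flatten() for g in torch.autograd.grad(out, params)])

X_train, y_train = torch.randn(8, 5), torch.randn(8)   # assumed toy training data
X_test = torch.randn(2, 5)

J_train = torch.stack([param_jacobian(xi) for xi in X_train])   # shape (n, P)
J_test = torch.stack([param_jacobian(xi) for xi in X_test])     # shape (m, P)

K = J_train @ J_train.T       # empirical NTK Gram matrix on the training set
k_star = J_test @ J_train.T   # kernel values between test and training points

# Kernel ridge regression with the empirical NTK; a small ridge term is added
# for numerical stability.
alpha = torch.linalg.solve(K + 1e-3 * torch.eye(len(X_train)), y_train)
print(k_star @ alpha)
```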