SmoothNets: Optimizing CNN architecture design for differentially
private deep learning
- URL: http://arxiv.org/abs/2205.04095v1
- Date: Mon, 9 May 2022 07:51:54 GMT
- Title: SmoothNets: Optimizing CNN architecture design for differentially
private deep learning
- Authors: Nicolas W. Remerscheid, Alexander Ziller, Daniel Rueckert, Georgios
Kaissis
- Abstract summary: DP-SGD requires clipping and noising of per-sample gradients.
This introduces a reduction in model utility compared to non-private training.
We distilled a new model architecture termed SmoothNet, which is characterised by increased robustness to the challenges of DP-SGD training.
- Score: 69.10072367807095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The arguably most widely employed algorithm to train deep neural networks
with Differential Privacy is DP-SGD, which requires clipping and noising of
per-sample gradients. This introduces a reduction in model utility compared to
non-private training. Empirically, it can be observed that this accuracy
degradation is strongly dependent on the model architecture. We investigated
this phenomenon and, by combining components which exhibit good individual
performance, distilled a new model architecture termed SmoothNet, which is
characterised by increased robustness to the challenges of DP-SGD training.
Experimentally, we benchmark SmoothNet against standard architectures on two
benchmark datasets and observe that our architecture outperforms others,
reaching an accuracy of 73.5% on CIFAR-10 at $\varepsilon=7.0$ and 69.2% at
$\varepsilon=7.0$ on ImageNette, a state-of-the-art result compared to prior
architectural modifications for DP.
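To make the clipping-and-noising step concrete, here is a minimal NumPy sketch of one DP-SGD gradient computation (illustrative only, not the authors' code; the clip norm, noise multiplier, and batch of per-sample gradients are assumed inputs):

```python
import numpy as np

def dp_sgd_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each per-sample gradient to an L2 norm of `clip_norm`, sum the clipped
    gradients, add Gaussian noise with standard deviation
    `noise_multiplier * clip_norm`, and average over the batch."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    return noisy_sum / len(per_sample_grads)

# Toy batch of four per-sample gradients for a three-parameter model.
grads = [np.array([0.5, -2.0, 1.5]), np.array([0.1, 0.2, -0.3]),
         np.array([3.0, 0.0, 0.0]), np.array([-0.4, 0.4, 0.4])]
print(dp_sgd_gradient(grads))
```

The noise scale is tied to the clip norm rather than to the actual gradient magnitudes, which is one reason the resulting utility loss varies with the model architecture, the effect SmoothNet is designed to mitigate.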
Related papers
- Multi-conditioned Graph Diffusion for Neural Architecture Search [8.290336491323796]
We present a graph diffusion-based NAS approach that uses discrete conditional graph diffusion processes to generate high-performing neural network architectures.
We show promising results on six standard benchmarks, yielding novel and unique architectures at a fast speed.
arXiv Detail & Related papers (2024-03-09T21:45:31Z)
- Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models [7.49320945341034]
We show that small and efficient architecture design can outperform current state-of-the-art models with substantially lower computational requirements.
Our results are a step towards efficient model architectures that make optimal use of their parameters.
arXiv Detail & Related papers (2023-01-30T17:43:47Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures (see the sketch after this entry).
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
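For reference, the layer types named in the entry above can be written with standard PyTorch modules; this is a brief illustrative sketch only (SSC itself is not reproduced here, and the channel counts are arbitrary):

```python
import torch
import torch.nn as nn

in_ch, out_ch = 32, 64

# Depthwise: one 3x3 filter per input channel (groups == in_channels).
depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
# Groupwise: channels split into 4 groups, each group convolved independently.
groupwise = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=4)
# Pointwise: 1x1 convolution that mixes information across channels.
pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

x = torch.randn(2, in_ch, 16, 16)
print(depthwise(x).shape, groupwise(x).shape, pointwise(x).shape)
```

A depthwise convolution followed by a pointwise one is the familiar depthwise-separable block used in efficient architectures such as MobileNets.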
- Kernel Normalized Convolutional Networks for Privacy-Preserving Machine Learning [7.384030323608299]
We compare layer normalization (LayerNorm), group normalization (GroupNorm), and the recently proposed kernel normalization (KernelNorm) in FL and DP settings.
LayerNorm and GroupNorm provide no performance gain compared to the baseline (i.e. no normalization) for shallow models, but they considerably enhance the performance of deeper models.
KernelNorm, on the other hand, significantly outperforms its competitors in terms of accuracy and convergence rate (or communication efficiency) for both shallow and deeper models (see the sketch after this entry).
arXiv Detail & Related papers (2022-09-30T19:33:53Z)
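For context on the layers compared in the entry above, here is a minimal NumPy sketch of GroupNorm and LayerNorm over a convolutional feature map, without the learnable affine parameters (KernelNorm is omitted because its definition is specific to that paper; shapes and epsilon are illustrative assumptions):

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize an (N, C, H, W) feature map within groups of C // num_groups channels."""
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

def layer_norm(x, eps=1e-5):
    """LayerNorm over the whole (C, H, W) feature map equals GroupNorm with one group."""
    return group_norm(x, num_groups=1, eps=eps)

x = np.random.randn(2, 8, 4, 4)
print(group_norm(x, num_groups=4).shape, layer_norm(x).shape)
```

Neither layer relies on cross-sample batch statistics, which is why such normalizations remain compatible with the per-sample gradient computation that DP-SGD requires.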
- Efficient Deep Learning Methods for Identification of Defective Casting Products [0.0]
In this paper, we have compared and contrasted various pre-trained and custom-built AI architectures.
Our results show that custom architectures are more efficient than pre-trained mobile architectures.
Augmentation experiments have also been carried out on the custom architectures to make the models more robust and generalizable.
arXiv Detail & Related papers (2022-05-14T19:35:05Z)
- ZARTS: On Zero-order Optimization for Neural Architecture Search [94.41017048659664]
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency.
This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, to search without enforcing the gradient approximation on which DARTS relies.
In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue.
arXiv Detail & Related papers (2021-10-10T09:35:15Z)
- NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization [23.859795806659395]
We propose robust Neural Architecture Search for OoD generalization (NAS-OoD).
NAS-OoD achieves superior performance on various OoD generalization benchmarks with deep models that have far fewer parameters.
On a real industry dataset, the proposed NAS-OoD method reduces the error rate by more than 70% compared with the state-of-the-art method.
arXiv Detail & Related papers (2021-09-05T10:23:29Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)