SecDD: Efficient and Secure Method for Remotely Training Neural Networks
- URL: http://arxiv.org/abs/2009.09155v1
- Date: Sat, 19 Sep 2020 03:37:44 GMT
- Title: SecDD: Efficient and Secure Method for Remotely Training Neural Networks
- Authors: Ilia Sucholutsky, Matthias Schonlau
- Abstract summary: We leverage what are typically considered the worst qualities of deep learning algorithms.
We create a method for the secure and efficient training of remotely deployed neural networks over unsecured channels.
- Score: 13.70633147306388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We leverage what are typically considered the worst qualities of deep
learning algorithms - high computational cost, requirement for large data, no
explainability, high dependence on hyper-parameter choice, overfitting, and
vulnerability to adversarial perturbations - in order to create a method for
the secure and efficient training of remotely deployed neural networks over
unsecured channels.
Related papers
- HiPreNets: High-Precision Neural Networks through Progressive Training [1.5429976366871665]
We present a framework for training and tuning high-precision neural networks (HiPreNets).
Our approach refines a previously explored staged training technique for neural networks.
We discuss how to take advantage of the structure of the residuals to guide the choice of loss function and the number of parameters to use.
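Staged training on residuals can be sketched as fitting one small network, then fitting the next network to the error left by the previous stages. The sketch below is a loose illustration of that idea only; the stage count, architectures, and toy regression task are assumptions, not the HiPreNets setup.

```python
# Hypothetical sketch of staged training on residuals: each stage fits a small
# network to the error left by the previous stages. Stage count, architecture,
# and the toy task are illustrative assumptions.
import torch
import torch.nn.functional as F

x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(3 * x)                                 # toy regression target

stages, prediction = [], torch.zeros_like(y)
for _ in range(3):                                   # progressively refine precision
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    residual = (y - prediction).detach()             # what earlier stages missed
    for _ in range(500):
        opt.zero_grad()
        F.mse_loss(net(x), residual).backward()
        opt.step()
    prediction = prediction + net(x).detach()
    stages.append(net)

print(float(F.mse_loss(prediction, y)))              # remaining error after all stages
```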
arXiv Detail & Related papers (2025-06-18T02:12:24Z) - Training Safe Neural Networks with Global SDP Bounds [0.0]
We present a novel approach to training neural networks with formal safety guarantees using semidefinite programming (SDP) for verification.
Our method focuses on verifying safety over large, high-dimensional input regions, addressing limitations of existing techniques that focus on adversarial bounds.
arXiv Detail & Related papers (2024-09-15T10:50:22Z) - Scalable and Efficient Methods for Uncertainty Estimation and Reduction
in Deep Learning [0.0]
This paper explores scalable and efficient methods for uncertainty estimation and reduction in deep learning.
We tackle the inherent uncertainties arising from out-of-distribution inputs and hardware non-idealities.
Our approach encompasses problem-aware training algorithms, novel NN topologies, and hardware co-design solutions.
arXiv Detail & Related papers (2024-01-13T19:30:34Z) - Efficient Uncertainty Quantification and Reduction for
Over-Parameterized Neural Networks [23.7125322065694]
Uncertainty quantification (UQ) is important for reliability assessment and enhancement of machine learning models.
We create statistically guaranteed schemes to principally characterize, and remove, the uncertainty of over-parameterized neural networks.
In particular, our approach, based on what we call a procedural-noise-correcting (PNC) predictor, removes the procedural uncertainty by using only one auxiliary network that is trained on a suitably labeled dataset.
arXiv Detail & Related papers (2023-06-09T05:15:53Z) - Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
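One plausible reading of such a data selection strategy, sketched below under assumptions, is to adversarially perturb only the highest-loss samples in each mini-batch (FGSM here) and train on the mixed batch; the ranking criterion, fraction, and perturbation method are illustrative, not necessarily the paper's exact strategy.

```python
# Illustrative sketch: in each mini-batch, adversarially perturb only the
# highest-loss samples (FGSM step) and train on the mixed batch. The selection
# criterion and fraction are assumptions, not the cited paper's method.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
eps, frac = 0.1, 0.5                                 # perturbation size, selected fraction

x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))

x_adv = x.clone().requires_grad_(True)
loss_per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
loss_per_sample.sum().backward()

k = int(frac * len(x))
idx = loss_per_sample.detach().topk(k).indices       # pick the hardest samples
x_mixed = x.clone()
x_mixed[idx] = (x[idx] + eps * x_adv.grad[idx].sign()).detach()  # FGSM step

opt.zero_grad()
F.cross_entropy(model(x_mixed), y).backward()         # train on mixed clean/adversarial batch
opt.step()
```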
arXiv Detail & Related papers (2023-01-07T12:09:50Z) - Hierarchical fuzzy neural networks with privacy preservation for
heterogeneous big data [29.65840169552303]
Heterogeneous big data poses many challenges in machine learning.
We propose a privacy-preserving hierarchical fuzzy neural network (PP-HFNN) to address these technical challenges while also alleviating privacy concerns.
The entire training procedure is scalable, fast and does not suffer from gradient vanishing problems like the methods based on back-propagation.
arXiv Detail & Related papers (2022-09-18T03:53:02Z) - Provable Regret Bounds for Deep Online Learning and Control [77.77295247296041]
We show that any loss function can be adapted to optimize the parameters of a neural network such that it competes with the best net in hindsight.
As an application of these results in the online setting, we obtain provable regret bounds for online control.
arXiv Detail & Related papers (2021-10-15T02:13:48Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
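Coverage-based monitoring typically records which internal activations fall outside ranges observed on trusted data and flags inputs that push many neurons out of range. The layer choice, range statistic, and threshold below are assumptions for illustration, not the monitoring architecture from the cited paper.

```python
# Hypothetical coverage-style monitor: record per-neuron activation ranges on
# trusted data, then flag inputs whose activations leave those ranges.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))                     # one hidden layer, for illustration

def activations(x: np.ndarray) -> np.ndarray:
    return np.maximum(0.0, x @ W)                    # ReLU activations to monitor

trusted = rng.standard_normal((1000, 16))
acts = activations(trusted)
lo, hi = acts.min(axis=0), acts.max(axis=0)          # per-neuron observed range

def out_of_range_fraction(x: np.ndarray) -> float:
    a = activations(x)
    return float(np.mean((a < lo) | (a > hi)))

def is_suspicious(x: np.ndarray, tol: float = 0.2) -> bool:
    return out_of_range_fraction(x) > tol            # many out-of-range neurons => flag

print(is_suspicious(rng.standard_normal(16)))         # in-distribution-like input
print(is_suspicious(10.0 * rng.standard_normal(16)))  # far out-of-distribution input
```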
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via
Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
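A "slow start, fast decay" schedule can be approximated with a brief warm-up followed by exponential decay; the warm-up length and decay rate below are illustrative assumptions rather than the paper's values, expressed with PyTorch's LambdaLR.

```python
# Hypothetical sketch of a "slow start, fast decay" learning-rate schedule for
# adversarial fine-tuning. Constants are assumptions, not the paper's values.
import torch

model = torch.nn.Linear(10, 2)                       # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

warmup_epochs, total_epochs, decay = 5, 50, 0.85

def lr_factor(epoch: int) -> float:
    if epoch < warmup_epochs:                        # slow start: linear warm-up
        return (epoch + 1) / warmup_epochs
    return decay ** (epoch - warmup_epochs)          # fast decay: exponential drop

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(total_epochs):
    # ... adversarial fine-tuning pass over the data would go here ...
    optimizer.step()                                 # dummy step so the schedule advances
    scheduler.step()
```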
arXiv Detail & Related papers (2020-12-25T20:50:15Z) - Protecting the integrity of the training procedure of neural networks [0.0]
Neural networks are used for an ever-increasing number of applications.
One of the most striking IT security problems aggravated by the opacity of neural networks is the possibility of poisoning attacks during the training phase.
We propose an approach to this problem which allows provably verifying the integrity of the training procedure by making use of standard cryptographic mechanisms.
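The cited paper's exact protocol is not reproduced here; as a loose illustration of "standard cryptographic mechanisms", the sketch below chains SHA-256 hashes over serialized training checkpoints so that later tampering with an intermediate state is detectable. The checkpoint fields and genesis value are assumptions.

```python
# Minimal sketch: hash-chain over training checkpoints so the sequence of
# states can be verified later. An illustration only, not the paper's protocol.
import hashlib, json

def chain_hash(prev_digest: str, checkpoint: dict) -> str:
    payload = json.dumps(checkpoint, sort_keys=True).encode()
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

digest = "0" * 64                                    # genesis value
log = []
for epoch, loss in enumerate([0.9, 0.6, 0.4]):       # stand-in training history
    ckpt = {"epoch": epoch, "loss": loss}            # real use: weight hashes, seeds, data hashes
    digest = chain_hash(digest, ckpt)
    log.append((ckpt, digest))

# A verifier replays the same chain and compares the final digest.
replay = "0" * 64
for ckpt, _ in log:
    replay = chain_hash(replay, ckpt)
assert replay == digest
```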
arXiv Detail & Related papers (2020-05-14T12:57:23Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by the Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
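Alternating optimization of network weights and injected-noise parameters could look like the sketch below; the noise model (a learnable per-feature scale added after the first layer) and the alternation period are assumptions, not the Learn2Perturb formulation.

```python
# Rough sketch of alternating updates between network weights and learnable
# noise-scale parameters. Noise placement and schedule are assumptions.
import torch
import torch.nn.functional as F

feat = torch.nn.Linear(20, 32)
head = torch.nn.Linear(32, 2)
noise_scale = torch.nn.Parameter(0.1 * torch.ones(32))   # learnable perturbation scale

opt_net = torch.optim.SGD(list(feat.parameters()) + list(head.parameters()), lr=0.05)
opt_noise = torch.optim.SGD([noise_scale], lr=0.01)

def forward(x):
    h = F.relu(feat(x))
    return head(h + noise_scale * torch.randn_like(h))   # inject learnable noise

x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))

for step in range(100):
    loss = F.cross_entropy(forward(x), y)
    if step % 2 == 0:                                    # update network weights
        opt_net.zero_grad(); loss.backward(); opt_net.step()
    else:                                                # update noise parameters
        opt_noise.zero_grad(); loss.backward(); opt_noise.step()
```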
arXiv Detail & Related papers (2020-03-02T18:27:35Z) - HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
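One way to read "letting the robust training objective guide which connections to prune" is to learn per-weight importance scores under a loss on adversarially perturbed inputs and then keep only the top-scoring weights; the score parametrization and the FGSM-style inner step below are assumptions, not HYDRA's exact formulation.

```python
# Illustrative sketch: learn per-weight importance scores under an adversarial
# loss, then keep only the top-k connections. Not HYDRA's exact formulation.
import torch
import torch.nn.functional as F

W = torch.randn(2, 20)                               # fixed pretrained weights (toy layer)
scores = torch.nn.Parameter(torch.rand_like(W))      # learnable pruning scores
opt = torch.optim.Adam([scores], lr=0.01)
eps, keep = 0.1, 0.5

x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))

for _ in range(50):
    x_adv = x.clone().requires_grad_(True)
    loss_in = F.cross_entropy(x_adv @ (W * scores).t(), y)
    grad = torch.autograd.grad(loss_in, x_adv)[0]
    x_pert = (x + eps * grad.sign()).detach()        # FGSM-style inner perturbation

    opt.zero_grad()
    F.cross_entropy(x_pert @ (W * scores).t(), y).backward()
    opt.step()                                       # robust loss guides the scores

k = int(keep * W.numel())
mask = torch.zeros_like(W).view(-1)
mask[scores.detach().view(-1).topk(k).indices] = 1.0  # keep top-scoring connections
W_pruned = W * mask.view_as(W)
```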
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.