On Hiding Neural Networks Inside Neural Networks
- URL: http://arxiv.org/abs/2002.10078v3
- Date: Sat, 22 May 2021 00:27:12 GMT
- Title: On Hiding Neural Networks Inside Neural Networks
- Authors: Chuan Guo, Ruihan Wu, Kilian Q. Weinberger
- Abstract summary: We show that this excess capacity provides an opportunity for embedding secret machine learning models within a trained neural network.
Our novel framework hides the existence of a secret neural network with arbitrary desired functionality within a carrier network.
- Score: 30.20287788363144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern neural networks often contain significantly more parameters than the
size of their training data. We show that this excess capacity provides an
opportunity for embedding secret machine learning models within a trained
neural network. Our novel framework hides the existence of a secret neural
network with arbitrary desired functionality within a carrier network. We prove
theoretically that the secret network's detection is computationally infeasible
and demonstrate empirically that the carrier network does not compromise the
secret network's disguise. Our paper introduces a previously unknown
steganographic technique that can be exploited by adversaries if left
unchecked.
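The abstract does not detail the embedding mechanism, but the core idea (hiding a small model's parameters inside the excess capacity of a larger carrier model, recoverable only with a secret key) can be illustrated with a toy NumPy sketch. The function names, the keyed-index scheme, and the choice of which carrier positions to overwrite are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def embed_secret(carrier_weights, secret_weights, key):
    """Overwrite a keyed pseudorandom subset of carrier parameters
    with the secret model's parameters (toy illustration only)."""
    rng = np.random.default_rng(key)
    flat = carrier_weights.copy().ravel()
    # Without the key, the chosen positions look like any other weights.
    idx = rng.choice(flat.size, size=secret_weights.size, replace=False)
    flat[idx] = secret_weights.ravel()
    return flat.reshape(carrier_weights.shape)

def extract_secret(stego_weights, secret_shape, key):
    """Recover the secret parameters using the same key; the same
    seeded generator reproduces the same index set."""
    rng = np.random.default_rng(key)
    flat = stego_weights.ravel()
    idx = rng.choice(flat.size, size=int(np.prod(secret_shape)), replace=False)
    return flat[idx].reshape(secret_shape)

# Example: hide a 2x3 "secret network" inside an 8x8 carrier tensor.
carrier = np.random.default_rng(0).normal(size=(8, 8))
secret = np.arange(6.0).reshape(2, 3)
stego = embed_secret(carrier, secret, key=42)
recovered = extract_secret(stego, (2, 3), key=42)
```

In the actual paper the carrier network is additionally trained so that its task performance masks the modification; this sketch only shows the keyed embed/extract symmetry.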
Related papers
- Residual Random Neural Networks [0.0]
The single-layer feedforward neural network with random weights is a recurring motif in the neural networks literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples.
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
In safety-critical environments, the inputs often contain noisy sensor data.
We employ an end-to-end set-based training procedure that trains robust neural networks for formal verification.
arXiv Detail & Related papers (2024-01-26T15:52:41Z)
- Neural Bayesian Network Understudy [13.28673601999793]
We show that a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian Network.
We propose two training strategies that allow encoding the independence relations inferred from a given causal structure into the neural network.
arXiv Detail & Related papers (2022-11-15T15:56:51Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key factor responsible for a neural network's erroneous predictions.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Provable Regret Bounds for Deep Online Learning and Control [77.77295247296041]
We show that any loss function can be adapted to optimize the parameters of a neural network such that it competes with the best net in hindsight.
As an application of these results in the online setting, we obtain provable regret bounds for online controllers.
arXiv Detail & Related papers (2021-10-15T02:13:48Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks that are compact, easy to train, reliable, and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z)
- Finding trainable sparse networks through Neural Tangent Transfer [16.092248433189816]
In deep learning, trainable sparse networks that perform well on a specific task are usually constructed using label-dependent pruning criteria.
In this article, we introduce Neural Tangent Transfer, a method that instead finds trainable sparse networks in a label-free manner.
arXiv Detail & Related papers (2020-06-15T08:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.