Protecting the integrity of the training procedure of neural networks
- URL: http://arxiv.org/abs/2005.06928v1
- Date: Thu, 14 May 2020 12:57:23 GMT
- Title: Protecting the integrity of the training procedure of neural networks
- Authors: Christian Berghoff
- Abstract summary: neural networks are used for an ever-increasing number of applications.
One of the most striking IT security problems aggravated by the opacity of neural networks is the possibility of poisoning attacks during the training phase.
We propose an approach to this problem which allows provably verifying the integrity of the training procedure by making use of standard cryptographic mechanisms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to significant improvements in performance in recent years, neural
networks are currently used for an ever-increasing number of applications.
However, neural networks have the drawback that their decisions are not readily
interpretable and traceable for a human. This creates several problems, for
instance in terms of safety and IT security for high-risk applications, where
assuring these properties is crucial. One of the most striking IT security
problems aggravated by the opacity of neural networks is the possibility of
so-called poisoning attacks during the training phase, where an attacker
inserts specially crafted data to manipulate the resulting model. We propose an
approach to this problem which allows provably verifying the integrity of the
training procedure by making use of standard cryptographic mechanisms.
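The abstract does not spell out the mechanism, but the standard cryptographic building blocks it mentions suggest a natural shape. A minimal illustrative sketch, assuming a hash-chained training transcript rather than the authors' exact protocol: each training batch and checkpoint extends a SHA-256 chain, and the final digest is authenticated, so any later substitution of poisoned data breaks verification. The batch bytes, checkpoint serialization, and key below are hypothetical placeholders.

```python
import hashlib
import hmac

SECRET_KEY = b"trainer-secret"  # hypothetical key held by the training party

def chain_hash(prev_digest: bytes, payload: bytes) -> bytes:
    """Extend the hash chain: SHA-256(prev || payload)."""
    return hashlib.sha256(prev_digest + payload).digest()

# Build a tamper-evident log over (placeholder) training batches.
log = hashlib.sha256(b"genesis").digest()
for batch in [b"batch-0-bytes", b"batch-1-bytes", b"batch-2-bytes"]:
    log = chain_hash(log, batch)

# Bind the resulting checkpoint to the data seen so far.
checkpoint = b"serialized-model-weights"  # placeholder serialization
log = chain_hash(log, checkpoint)

# Authenticate the final digest; a digital signature would be used
# instead where third-party verifiability is required.
tag = hmac.new(SECRET_KEY, log, hashlib.sha256).hexdigest()
print("training transcript tag:", tag)
```

A verifier replaying the same batches and checkpoint must reproduce the same tag; inserting, removing, or altering any batch changes every subsequent digest in the chain.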
Related papers
- Adaptive Intrusion Detection System Leveraging Dynamic Neural Models with Adversarial Learning for 5G/6G Networks [2.062593640149623]
This paper presents an advanced IDS framework that leverages adversarial training and dynamic neural networks in 5G/6G networks.
Unlike conventional models, which require costly retraining to update knowledge, the proposed framework integrates incremental learning algorithms, reducing the need for frequent retraining.
arXiv Detail & Related papers (2025-12-11T13:40:37Z)
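The incremental-learning claim above can be illustrated with any standard online learner; the sketch below uses scikit-learn's SGDClassifier.partial_fit on synthetic data as a stand-in for streaming 5G/6G traffic features. The features, labels, and two-class setup are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # hypothetically: benign vs. malicious traffic

# Update the model on successive traffic windows without full retraining.
for _ in range(5):
    X = rng.normal(size=(128, 10))           # placeholder flow features
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder labels
    clf.partial_fit(X, y, classes=classes)

X_test = rng.normal(size=(64, 10))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("held-out accuracy:", clf.score(X_test, y_test))
```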
- Defending against Stegomalware in Deep Neural Networks with Permutation Symmetry [3.9341402479278216]
State-of-the-art neural network stegomalware can be efficiently and effectively neutralized by shuffling the column order of the weight and bias matrices.
We show that this effectively corrupts payloads that have been embedded by state-of-the-art methods in neural network steganography at no cost to network accuracy.
arXiv Detail & Related papers (2025-09-23T17:15:38Z)
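The defense above rests on a well-known symmetry: permuting hidden units (rows of one weight matrix and the matching columns of the next) leaves an MLP's function unchanged while scrambling any payload hidden in the weight ordering. A minimal numpy sketch of that equivalence, with random weights standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)  # hidden layer
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)   # output layer

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden activations
    return W2 @ h + b2

# Shuffle the hidden units: permute rows of W1/b1, columns of W2.
perm = rng.permutation(16)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=8)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
print("outputs identical; an ordering-based payload is corrupted")
```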
- Training Safe Neural Networks with Global SDP Bounds [0.0]
We present a novel approach to training neural networks with formal safety guarantees using semidefinite programming (SDP) for verification.
Our method focuses on verifying safety over large, high-dimensional input regions, addressing limitations of existing techniques that focus on adversarial bounds.
arXiv Detail & Related papers (2024-09-15T10:50:22Z)
- Training Verifiably Robust Agents Using Set-Based Reinforcement Learning [8.217552831952]
We train neural networks utilizing entire sets of perturbed inputs and maximize the worst-case reward.
The obtained agents are verifiably more robust than agents obtained by related work, making them more applicable in safety-critical environments.
arXiv Detail & Related papers (2024-08-17T06:26:17Z)
- Constraint-based Adversarial Example Synthesis [1.2548803788632799]
This study focuses on enhancing Concolic Testing, a specialized technique for testing Python programs implementing neural networks.
The extended tool, PyCT, now accommodates a broader range of neural network operations, including floating-point and activation function computations.
arXiv Detail & Related papers (2024-06-03T11:35:26Z)
- Advancing Security in AI Systems: A Novel Approach to Detecting Backdoors in Deep Neural Networks [3.489779105594534]
Backdoors can be exploited by malicious actors in deep neural networks (DNNs) and cloud services for data processing.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
arXiv Detail & Related papers (2024-03-13T03:10:11Z)
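The paper's detector builds on tensor decompositions of pre-trained weights; as a loose, hypothetical stand-in for that pipeline, the sketch below inspects the singular-value spectrum of each layer's weight matrix, the kind of low-level statistic such detectors aggregate. The random weights and the flagging threshold are illustrative only, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder "pre-trained" layers; a real detector would load a model.
layers = {"fc1": rng.normal(size=(256, 128)),
          "fc2": rng.normal(size=(10, 256))}

for name, W in layers.items():
    s = np.linalg.svd(W, compute_uv=False)
    # An unusually dominant singular direction can be one weak hint
    # of implanted structure in the weights.
    ratio = s[0] / np.median(s)
    flag = "  <- inspect" if ratio > 5.0 else ""
    print(f"{name}: top/median singular value = {ratio:.2f}{flag}")
```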
- Set-Based Training for Neural Network Verification [8.97708612393722]
Small input perturbations can significantly affect the outputs of a neural network.
In safety-critical environments, the inputs often contain noisy sensor data.
We employ an end-to-end set-based training procedure that trains robust neural networks for formal verification.
arXiv Detail & Related papers (2024-01-26T15:52:41Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution for defending the network against such malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
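One plausible reading of informed data selection is to spend the adversarial budget on the samples the model currently finds hardest. The numpy sketch below ranks a mini-batch by per-sample cross-entropy and keeps the top half for (hypothetical) adversarial perturbation; the model outputs and the 50% fraction are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 10))      # placeholder model outputs
labels = rng.integers(0, 10, size=32)   # placeholder ground truth

# Numerically stable per-sample cross-entropy loss.
z = logits - logits.max(axis=1, keepdims=True)
log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(32), labels]

# Select the hardest 50% of the batch for adversarial example generation.
hard = np.argsort(loss)[-16:]
print("indices chosen for perturbation:", hard)
```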
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
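Interval bound propagation pushes an axis-aligned input box through the network; QA-IBP does so while accounting for quantization. Below is a bare-bones sketch of the interval step through a single linear layer, with a crude uniform weight quantizer added to mimic the quantized setting. Both the quantizer and the dimensions are illustrative, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)

# Crude uniform quantization, standing in for the QNN's real quantizer.
scale = 0.1
Wq = np.round(W / scale) * scale

def linear_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> Wx + b exactly."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    center = W @ mid + b
    radius = np.abs(W) @ rad
    return center - radius, center + radius

x = rng.normal(size=8)
lo, hi = linear_interval(x - 0.01, x + 0.01, Wq, b)
print("output bounds per logit:\n", np.stack([lo, hi]))
```

Certification then amounts to checking that the bound for the correct class dominates all others over the entire input box.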
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
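A Toeplitz matrix is constant along each diagonal, so an n-by-n layer needs only 2n-1 free parameters instead of n^2. A small sketch using scipy.linalg.toeplitz, with arbitrary sizes and random values:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 6
col, row = rng.normal(size=n), rng.normal(size=n)
row[0] = col[0]          # the shared diagonal entry must agree
W = toeplitz(col, row)   # n x n weight matrix from 2n - 1 parameters

print(W)
print(f"free parameters: {2 * n - 1} instead of {n * n}")
```

Beyond compactness, Toeplitz structure also allows matrix-vector products in O(n log n) time via the FFT, which is part of what makes such layers attractive.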
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
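A coverage-style monitor in this spirit can be approximated by profiling, on trusted data, the activation range each neuron exhibits, then flagging inputs that drive neurons outside those ranges at inference time. A toy numpy sketch, with a random one-layer "network" and an ad-hoc margin standing in for a real model and calibrated thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 16))
act = lambda X: np.maximum(X @ W.T, 0.0)  # toy ReLU layer as the "network"

# Profile per-neuron activation ranges on (placeholder) trusted data.
train = rng.normal(size=(1000, 16))
A = act(train)
lo, hi = A.min(axis=0), A.max(axis=0)

def is_covered(x, margin=1.05):
    """Return False for inputs whose activations leave the profiled ranges."""
    a = act(x[None, :])[0]
    return bool(np.all(a <= hi * margin) and np.all(a >= lo))

print(is_covered(rng.normal(size=16)))         # likely in range
print(is_covered(10.0 * rng.normal(size=16)))  # likely flagged
```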
- Experimental Review of Neural-based approaches for Network Intrusion Management [8.727349339883094]
We provide an experimental-based review of neural-based methods applied to intrusion detection issues.
We offer a complete view of the most prominent neural-based techniques relevant to intrusion detection, including deep learning approaches and weightless neural networks.
Our evaluation quantifies the value of neural networks, particularly when state-of-the-art datasets are used to train the models.
arXiv Detail & Related papers (2020-09-18T18:32:24Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
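The central component of such feature-perturbation schemes is a noise-injection layer whose scale is itself trainable. A minimal PyTorch sketch of that idea is below; the module name, its placement after a convolution, and the initial scale are assumptions, and the paper's alternating update schedule is only indicated in the comments.

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds zero-mean Gaussian noise with a learnable per-channel scale."""
    def __init__(self, channels: int):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.full((channels,), -2.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # perturb features only during training
        sigma = self.log_sigma.exp().view(1, -1, 1, 1)
        return x + sigma * torch.randn_like(x)

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), NoiseInjection(8), nn.ReLU())
net.train()
print(net(torch.randn(2, 3, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
# An alternating schedule would update the conv weights and log_sigma on
# separate optimization steps, echoing the EM-inspired training loop.
```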
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.