Estimating Neural Network Robustness via Lipschitz Constant and Architecture Sensitivity
- URL: http://arxiv.org/abs/2410.23382v1
- Date: Wed, 30 Oct 2024 18:38:42 GMT
- Title: Estimating Neural Network Robustness via Lipschitz Constant and Architecture Sensitivity
- Authors: Abulikemu Abuduweili, Changliu Liu
- Abstract summary: This paper investigates the robustness of neural networks in perception systems.
We identify the Lipschitz constant as a key metric for quantifying and enhancing network robustness.
- Score: 6.468625143772815
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Ensuring neural network robustness is essential for the safe and reliable operation of robotic learning systems, especially in perception and decision-making tasks within real-world environments. This paper investigates the robustness of neural networks in perception systems, specifically examining their sensitivity to targeted, small-scale perturbations. We identify the Lipschitz constant as a key metric for quantifying and enhancing network robustness. We derive an analytical expression to compute the Lipschitz constant based on neural network architecture, providing a theoretical basis for estimating and improving robustness. Several experiments reveal the relationship between network design, the Lipschitz constant, and robustness, offering practical insights for developing safer, more robust robot learning systems.
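As a rough illustration of an architecture-based estimate (a generic norm-product bound, not the paper's exact analytical expression): for a feed-forward network with 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant. A minimal NumPy sketch:

```python
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of per-layer spectral norms: a classical upper bound on the
    Lipschitz constant of a feed-forward network with 1-Lipschitz
    activations such as ReLU."""
    bound = 1.0
    for W in weight_matrices:
        # ord=2 gives the largest singular value, i.e. the spectral norm.
        bound *= np.linalg.norm(W, ord=2)
    return bound

# Toy 3-layer network with random weights (shapes are illustrative only).
rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 32)),   # layer 1: R^32 -> R^64
          rng.normal(size=(32, 64)),   # layer 2: R^64 -> R^32
          rng.normal(size=(10, 32))]   # layer 3: R^32 -> R^10
print(lipschitz_upper_bound(layers))
```

Tightening such a bound, for example by constraining each layer's spectral norm during training, is one common route to more robust networks.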
Related papers
- Towards Trustworthy Wi-Fi Sensing: Systematic Evaluation of Deep Learning Model Robustness to Adversarial Attacks [4.5835414225547195]
We evaluate the robustness of CSI deep learning models under diverse threat models and varying degrees of attack realism.
Our experiments show that smaller models, while efficient and equally performant on clean data, are markedly less robust.
We confirm that physically realizable signal-space perturbations, designed to be feasible in real wireless channels, significantly reduce attack success.
arXiv Detail & Related papers (2025-11-25T16:24:29Z)
- MindFlow: A Network Traffic Anomaly Detection Model Based on MindSpore [7.564738687560689]
This study proposes MindFlow, a multi-dimensional dynamic traffic prediction and anomaly detection system.
The proposed model achieves 99% on key metrics such as accuracy, precision, recall, and F1 score.
arXiv Detail & Related papers (2025-04-24T15:48:02Z)
- Beyond Pruning Criteria: The Dominant Role of Fine-Tuning and Adaptive Ratios in Neural Network Robustness [7.742297876120561]
Deep neural networks (DNNs) excel in tasks like image recognition and natural language processing.
Traditional pruning methods compromise the network's ability to withstand subtle perturbations.
This paper challenges the conventional emphasis on weight importance scoring as the primary determinant of a pruned network's performance.
arXiv Detail & Related papers (2024-10-19T18:35:52Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [71.14237199051276]
We consider the classical distribution-agnostic framework and algorithms minimising empirical risks.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z)
- Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications [1.9671123873378717]
We show that the uncertainty of deep neural networks is not only important for interpretability and transparency, but also crucial for further advancing their performance.
We will generalize the definition of the uncertainty of deep neural networks to any number or vector that is associated with an input or an input-label pair, and catalog existing methods for "mining" such uncertainty from a deep model.
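One simple instance of such a per-input uncertainty score (a generic choice for illustration, not one specific to this survey) is the entropy of the softmax output:

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy of the softmax distribution: one simple scalar 'uncertainty'
    associated with an input, in the broad sense used by the survey."""
    z = logits - logits.max()                 # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())

print(predictive_entropy(np.array([2.0, 0.1, -1.3])))   # low entropy: confident
print(predictive_entropy(np.array([0.2, 0.1, 0.15])))   # high entropy: uncertain
```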
arXiv Detail & Related papers (2023-02-02T22:02:33Z)
- Global quantitative robustness of regression feed-forward neural networks [0.0]
We adapt the notion of the regression breakdown point to regression neural networks.
We compare the performance, measured by the out-of-sample loss, against a proxy for the breakdown rate.
The results indeed motivate to use robust loss functions for neural network training.
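A standard example of such a robust loss (our choice for illustration; the paper's exact losses may differ) is the Huber loss, which grows only linearly for large residuals so single outliers cannot dominate training:

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so outlying
    residuals have bounded influence on the training objective."""
    r = np.abs(residuals)
    quadratic = 0.5 * r**2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear).mean()

print(huber_loss(np.array([0.1, -0.2, 5.0])))  # the outlier 5.0 is damped
```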
arXiv Detail & Related papers (2022-11-18T09:57:53Z)
- Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic Communication [85.06664206117088]
6G networks must consider the semantics and end-user effectiveness of data transmission.
NeSy AI is proposed as a pillar for learning causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure which generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z)
- On the uncertainty principle of neural networks [36.098205818550554]
We show that neural networks are subject to an uncertainty relation, which manifests as a fundamental limitation in their ability to simultaneously achieve high accuracy and robustness against adversarial attacks.
Our findings reveal that the complementarity principle, a cornerstone of quantum physics, applies to neural networks, imposing fundamental limits on their capabilities in simultaneous learning of conjugate features.
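Such accuracy-robustness trade-offs are typically probed with adversarial perturbations; below is a minimal sketch of the standard FGSM attack in PyTorch (a generic probe, not this paper's construction):

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: a one-step perturbation of x along the
    sign of the loss gradient, a standard probe of adversarial robustness."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Clamp to [0, 1], assuming image-like inputs.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Usage: x_adv = fgsm_example(model, images, labels), then compare the
# model's accuracy on x_adv against its accuracy on the clean images.
```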
arXiv Detail & Related papers (2022-05-03T13:48:12Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that dimensionality and quasi-orthogonality of neural networks' feature space may jointly serve as discriminants of a network's performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
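One way to quantify quasi-orthogonality (a simplification for illustration, not the paper's exact measure) is the mean absolute pairwise cosine similarity of feature vectors:

```python
import numpy as np

def mean_abs_cosine(features):
    """Mean absolute off-diagonal cosine similarity of row vectors:
    near 0 for a quasi-orthogonal feature set, near 1 for collinear ones."""
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    G = X @ X.T                          # Gram matrix of cosine similarities
    off_diagonal = G[~np.eye(len(G), dtype=bool)]
    return float(np.abs(off_diagonal).mean())

rng = np.random.default_rng(0)
# High-dimensional random vectors are nearly orthogonal with high probability.
print(mean_abs_cosine(rng.normal(size=(100, 512))))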
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
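A Toeplitz matrix is determined by its first column and row, so a Toeplitz-parameterised dense layer needs only O(n) parameters instead of O(n^2); a minimal sketch (illustrative, not the thesis's exact construction):

```python
import numpy as np
from scipy.linalg import toeplitz

n = 6
rng = np.random.default_rng(0)
col, row = rng.normal(size=n), rng.normal(size=n)
row[0] = col[0]            # entry (0, 0) is shared by the first row and column
W = toeplitz(col, row)     # n x n matrix from only 2n - 1 free parameters

x = rng.normal(size=n)
y = W @ x                  # acts like a dense layer, with far fewer parameters
print(W.shape, y.shape)
```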
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
Under minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Tiny Adversarial Multi-Objective Oneshot Neural Architecture Search [35.362883630015354]
Most neural network models deployed on mobile devices are tiny. However, tiny neural networks are commonly very vulnerable to attacks.
Our work focuses on how to improve the robustness of tiny neural networks without seriously degrading clean accuracy under mobile-level resources.
arXiv Detail & Related papers (2021-02-28T00:54:09Z)