Achieving Domain-Independent Certified Robustness via Knowledge Continuity
- URL: http://arxiv.org/abs/2411.01644v1
- Date: Sun, 03 Nov 2024 17:37:59 GMT
- Title: Achieving Domain-Independent Certified Robustness via Knowledge Continuity
- Authors: Alan Sun, Chiyu Ma, Kenneth Ge, Soroush Vosoughi
- Abstract summary: We present knowledge continuity, a novel definition inspired by Lipschitz continuity.
Our proposed definition yields certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network.
We show that knowledge continuity can be used to localize vulnerable components of a neural network.
- Score: 21.993471256103085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present knowledge continuity, a novel definition inspired by Lipschitz continuity which aims to certify the robustness of neural networks across input domains (such as continuous and discrete domains in vision and language, respectively). Most existing approaches that seek to certify robustness, especially Lipschitz continuity, lie within the continuous domain with norm and distribution-dependent guarantees. In contrast, our proposed definition yields certification guarantees that depend only on the loss function and the intermediate learned metric spaces of the neural network. These bounds are independent of domain modality, norms, and distribution. We further demonstrate that the expressiveness of a model class is not at odds with its knowledge continuity. This implies that achieving robustness by maximizing knowledge continuity should not theoretically hinder inferential performance. Finally, to complement our theoretical results, we present several applications of knowledge continuity such as regularization, a certification algorithm, and show that knowledge continuity can be used to localize vulnerable components of a neural network.
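The abstract's key idea can be illustrated with a small numerical sketch: measure how much the loss changes relative to distance in an *intermediate* representation space, rather than in the input space. Because the distance is taken in the hidden metric space, the quantity is well-defined even when inputs are discrete (e.g. tokens). Everything below is hypothetical scaffolding, not the paper's actual algorithm: the network, the `knowledge_continuity_estimate` helper, and the max-ratio form (the paper's definition is expectation-based) are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: h(x) is the intermediate representation,
# and the loss is squared error on a scalar output.
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(8)
W2 = rng.normal(size=(1, 8)); b2 = np.zeros(1)

def hidden(x):
    """Intermediate representation: the learned metric space lives here."""
    return np.tanh(W1 @ x + b1)

def loss(x, y):
    """Squared error of the network output against target y."""
    out = (W2 @ hidden(x) + b2)[0]
    return (out - y) ** 2

def knowledge_continuity_estimate(xs, ys, n_pairs=500):
    """Empirical stand-in for knowledge continuity: the largest ratio of
    loss change to distance in the hidden metric space over sampled pairs.
    Note: distances are taken between hidden(x), never between raw inputs,
    so no input-domain norm or distribution is ever needed."""
    worst = 0.0
    for _ in range(n_pairs):
        i, j = rng.integers(0, len(xs), size=2)
        if i == j:
            continue
        d = np.linalg.norm(hidden(xs[i]) - hidden(xs[j]))
        if d < 1e-9:
            continue
        ratio = abs(loss(xs[i], ys[i]) - loss(xs[j], ys[j])) / d
        worst = max(worst, ratio)
    return worst

xs = rng.normal(size=(64, 4))
ys = rng.normal(size=64)
k = knowledge_continuity_estimate(xs, ys)
print(f"estimated knowledge-continuity constant: {k:.3f}")
```

A small value of this ratio would suggest the loss is stable under movements in the learned representation space, which is the kind of stability the paper's certification guarantees formalize; a regularizer could penalize large ratios during training.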
Related papers
- Principles of Lipschitz continuity in neural networks [0.304585143845864]
This thesis seeks to advance a principled understanding of Lipschitz continuity in neural networks. It examines how Lipschitz continuity modulates the behavior of neural networks with respect to features in the input data.
arXiv Detail & Related papers (2026-02-03T23:30:08Z) - Certified Approximate Reachability (CARe): Formal Error Bounds on Deep Learning of Reachable Sets [45.67587657709892]
We introduce an epsilon-approximate Hamilton-Jacobi Partial Differential Equation (HJ-PDE), which establishes a relationship between training loss and accuracy of the true reachable set.
To the best of our knowledge, Certified Approximate Reachability (CARe) is the first approach to provide soundness guarantees on learned reachable sets of continuous dynamical systems.
arXiv Detail & Related papers (2025-03-31T10:02:57Z) - Neural Continuous-Time Supermartingale Certificates [7.527234046228324]
We introduce for the first time a neural-certificate framework for continuous-time dynamical systems.
Inspired by the success of training neural Lyapunov certificates for deterministic continuous-time systems, we propose a framework that bridges the gap between continuous-time and probabilistic neural certification.
arXiv Detail & Related papers (2024-12-23T09:51:54Z) - Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
arXiv Detail & Related papers (2024-08-12T15:02:26Z) - Learning-Based Verification of Stochastic Dynamical Systems with Neural Network Policies [7.9898826915621965]
We use a verification procedure that trains another neural network, which acts as a certificate proving that the policy satisfies the task.
For reach-avoid tasks, it suffices to show that this certificate network is a reach-avoid supermartingale (RASM).
arXiv Detail & Related papers (2024-06-02T18:19:19Z) - DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE.
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
arXiv Detail & Related papers (2023-07-05T08:11:40Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - CARE: Certifiably Robust Learning with Reasoning via Variational Inference [26.210129662748862]
We propose a certifiably robust learning with reasoning pipeline (CARE).
CARE achieves significantly higher certified robustness compared with the state-of-the-art baselines.
We additionally conduct ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of different knowledge integration strategies.
arXiv Detail & Related papers (2022-09-12T07:15:52Z) - Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z) - Coarse-Grained Smoothness for RL in Metric Spaces [13.837098609529257]
A common approach is to assume Lipschitz continuity of the Q-function.
We show that, unfortunately, this property fails to hold in many typical domains.
We propose a new coarse-grained smoothness definition that generalizes the notion of Lipschitz continuity.
arXiv Detail & Related papers (2021-10-23T18:53:56Z) - On the Regularity of Attention [11.703070372807293]
We propose a new mathematical framework that uses measure theory and integral operators to model attention.
We show that this framework is consistent with the usual definition, and that it captures the essential properties of attention.
We also discuss the effects regularity can have on NLP models, and applications to invertible and infinitely-deep networks.
arXiv Detail & Related papers (2021-02-10T18:40:11Z) - Uncertainty Quantification for Inferring Hawkes Networks [13.283258096829146]
We develop a statistical inference framework to learn causal relationships between nodes from networked data.
We provide uncertainty quantification for the maximum likelihood estimate of the network Hawkes process.
arXiv Detail & Related papers (2020-06-12T23:08:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed entries) and is not responsible for any consequences of its use.