Extending Neural Network Verification to a Larger Family of Piece-wise
Linear Activation Functions
- URL: http://arxiv.org/abs/2311.10780v1
- Date: Thu, 16 Nov 2023 11:01:39 GMT
- Title: Extending Neural Network Verification to a Larger Family of Piece-wise
Linear Activation Functions
- Authors: László Antal (RWTH Aachen University), Hana Masara (RWTH Aachen
University), Erika Ábrahám (RWTH Aachen University)
- Abstract summary: We extend an available neural network verification technique to support a wider class of piece-wise linear activation functions.
We also extend the algorithms, which in their original form provide exact and over-approximative results, respectively, for bounded input sets represented as star sets, so that they also handle unbounded input sets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we extend an available neural network verification technique
to support a wider class of piece-wise linear activation functions.
Furthermore, we extend the algorithms, which in their original form provide
exact and over-approximative results, respectively, for bounded input sets
represented as star sets, so that they also handle unbounded input sets. We implemented
our algorithms and demonstrated their effectiveness in some case studies.
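As a rough illustration of how activation functions beyond ReLU can be handled, the sketch below propagates interval bounds through a small network with an arbitrary continuous piece-wise linear activation, using the fact that such a function attains its extrema on an interval at the endpoints or at a breakpoint. It is only a coarse over-approximation with illustrative helper names (pwl_output_bounds, interval_bounds, hard_act), not the paper's star-set-based exact or over-approximative algorithm.

```python
import numpy as np

def pwl_output_bounds(f, breakpoints, lo, up):
    # A continuous piece-wise linear function attains its extrema on [lo, up]
    # at the interval endpoints or at a breakpoint, so finitely many
    # evaluations give exact per-neuron output bounds.
    pts = [lo, up] + [b for b in breakpoints if lo < b < up]
    vals = [float(f(p)) for p in pts]
    return min(vals), max(vals)

def interval_bounds(weights, biases, act, act_breakpoints, l, u):
    # Coarse interval over-approximation of an MLP with an arbitrary
    # piece-wise linear hidden activation (no joint constraints between
    # neurons, unlike an exact star-set computation).
    for k, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        pre_l = W_pos @ l + W_neg @ u + b      # affine layer, interval arithmetic
        pre_u = W_pos @ u + W_neg @ l + b
        if k < len(weights) - 1:               # hidden layers get the activation
            bounds = [pwl_output_bounds(act, act_breakpoints, lo, up)
                      for lo, up in zip(pre_l, pre_u)]
            l = np.array([lo for lo, _ in bounds])
            u = np.array([up for _, up in bounds])
        else:
            l, u = pre_l, pre_u
    return l, u

# Toy network with a "hard sigmoid"-like activation (breakpoints at -1 and 1).
hard_act = lambda x: np.clip(0.5 * (x + 1.0), 0.0, 1.0)
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.0, -0.5])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
print(interval_bounds([W1, W2], [b1, b2], hard_act, [-1.0, 1.0],
                      l=np.array([-1.0, -1.0]), u=np.array([1.0, 1.0])))
```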
Related papers
- Provable Bounds on the Hessian of Neural Networks: Derivative-Preserving Reachability Analysis [6.9060054915724]
We propose a novel reachability analysis method tailored for neural networks with differentiable activations.
A key aspect of our method is a loop transformation on the activation functions to exploit their monotonicity effectively.
The resulting end-to-end abstraction locally preserves the derivative information, yielding accurate bounds on small input sets.
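Here, a loop transformation (in the robust-control sense) recenters an activation's slope range around zero: if the slopes lie in [m_lo, m_hi], subtracting the mid-slope line leaves a residual whose slopes lie in a symmetric sector. The minimal numerical check below uses tanh; it illustrates the general idea only and is not necessarily the exact construction used in the paper.

```python
import numpy as np

# tanh has derivative in [0, 1]; subtracting the mid-slope line 0.5*x gives a
# residual with slopes in the symmetric sector [-0.5, 0.5].
m_lo, m_hi = 0.0, 1.0
m_mid, r = (m_lo + m_hi) / 2, (m_hi - m_lo) / 2

phi = lambda x: np.tanh(x) - m_mid * x          # loop-transformed nonlinearity

x = np.linspace(-5.0, 5.0, 10001)
slopes = np.diff(phi(x)) / np.diff(x)           # finite-difference slopes
print(slopes.min(), slopes.max(), "within", (-r, r))
```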
arXiv Detail & Related papers (2024-06-06T20:02:49Z)
- Enhancing Neural Subset Selection: Integrating Background Information into Set Representations [53.15923939406772]
We show that when the target value is conditioned on both the input set and subset, it is essential to incorporate an invariant sufficient statistic of the superset into the subset of interest.
This ensures that the output value remains invariant to permutations of the subset and its corresponding superset, enabling identification of the specific superset from which the subset originated.
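One standard way to obtain such an invariant statistic is DeepSets-style sum pooling of a shared per-element embedding, which is unchanged under any permutation of the set's elements. The sketch below is a hypothetical illustration of conditioning a subset representation on a permutation-invariant summary of its superset; it is not the paper's architecture, and the names (invariant_statistic, score_subset) are made up for the example.

```python
import numpy as np

def invariant_statistic(X, emb_dim=8, seed=0):
    # Permutation-invariant summary of a set X (n x d): sum-pool a shared
    # per-element embedding (DeepSets-style), so any row permutation of X
    # yields the same vector.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], emb_dim))   # shared embedding weights
    return np.tanh(X @ W).sum(axis=0)

def score_subset(subset, superset):
    # Illustrative representation that conditions on both the subset and an
    # invariant statistic of its superset (hypothetical, not the paper's model).
    return np.concatenate([invariant_statistic(subset),
                           invariant_statistic(superset)])

S = np.random.default_rng(1).normal(size=(5, 3))      # superset of 5 elements
sub = S[[0, 2]]                                       # a subset of interest
perm = S[np.random.default_rng(2).permutation(5)]     # permuted superset
print(np.allclose(score_subset(sub, S), score_subset(sub, perm)))  # True
```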
arXiv Detail & Related papers (2024-02-05T16:09:35Z)
- Provable Preimage Under-Approximation for Neural Networks (Full Version) [27.519993407376862]
We propose an efficient anytime algorithm for generating symbolic under-approximations of the preimage of any polyhedron output set for neural networks.
Empirically, we validate the efficacy of our method across a range of domains, including a high-dimensional MNIST classification task.
We also present a sound and complete algorithm that exploits our disjoint union of polytopes representation to provide formal guarantees.
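For intuition, an under-approximation stored as a union of H-polytopes P_i = {x : A_i x <= b_i} supports a cheap, soundness-preserving membership check: any point inside any P_i is certified to satisfy the output property. The sketch below only illustrates that representation with toy boxes (in_union and box are illustrative names); it is not the paper's anytime refinement algorithm.

```python
import numpy as np

def in_union(x, polytopes, tol=1e-9):
    # Membership test for a union of H-polytopes [(A_1, b_1), (A_2, b_2), ...].
    return any(np.all(A @ x <= b + tol) for A, b in polytopes)

# Two axis-aligned boxes as a toy preimage under-approximation in R^2.
box = lambda lo, hi: (np.vstack([np.eye(2), -np.eye(2)]),
                      np.concatenate([hi, -lo]))
under_approx = [box(np.array([0.0, 0.0]), np.array([1.0, 1.0])),
                box(np.array([2.0, 0.0]), np.array([3.0, 0.5]))]

print(in_union(np.array([0.5, 0.5]), under_approx))   # True: inside the first box
print(in_union(np.array([1.5, 0.5]), under_approx))   # False: in the gap between boxes
```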
arXiv Detail & Related papers (2023-05-05T16:55:27Z)
- Unification of popular artificial neural network activation functions [0.0]
We present a unified representation of the most popular neural network activation functions.
Adopting Mittag-Leffler functions of fractional calculus, we propose a flexible and compact functional form.
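For background, the Mittag-Leffler function E_alpha(x) = sum_k x^k / Gamma(alpha*k + 1) generalizes the exponential (E_1 = exp), which is what lets a single parametric form interpolate between familiar activation shapes. The truncated-series sketch below, including the sigmoid-like curve built from it, only illustrates this building block and is not the paper's actual parametrization.

```python
import math

def mittag_leffler(x, alpha, terms=60):
    # Truncated series E_alpha(x) = sum_k x**k / Gamma(alpha*k + 1);
    # for alpha = 1 this reduces to exp(x). Converges well for moderate |x|.
    return sum(x**k / math.gamma(alpha * k + 1) for k in range(terms))

print(mittag_leffler(1.0, alpha=1.0), math.exp(1.0))   # both ~2.71828

# A sigmoid-like curve built from E_alpha (illustrative, not the paper's form).
sigma = lambda x, a=0.8: 1.0 / (1.0 + mittag_leffler(-x, alpha=a))
print(sigma(0.0), sigma(4.0), sigma(-4.0))             # ~0.5, close to 1, close to 0
```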
arXiv Detail & Related papers (2023-02-21T21:20:59Z)
- Neural Network Verification as Piecewise Linear Optimization: Formulations for the Composition of Staircase Functions [2.088583843514496]
We present a technique for neural network verification using mixed-integer programming (MIP) formulations.
We derive a strong formulation for each neuron in a network using piecewise linear activation functions.
We also derive a separation procedure that runs in super-linear time in the input dimension.
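For context, the standard big-M mixed-integer encoding of a single ReLU neuron y = max(0, x) with known pre-activation bounds l <= x <= u (l < 0 < u) is shown below; the stronger formulations derived in the paper for compositions of staircase functions improve on baseline encodings of this kind.

```latex
% Standard big-M MIP encoding of y = max(0, x) for one neuron with
% pre-activation bounds l <= x <= u, l < 0 < u (background only).
\begin{aligned}
  & y \ge x, \qquad y \ge 0, \\
  & y \le x - l\,(1 - z), \qquad y \le u\,z, \\
  & z \in \{0, 1\}, \qquad l \le x \le u .
\end{aligned}
```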
arXiv Detail & Related papers (2022-11-27T03:25:48Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
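A minimal sinusoidal layer, y = sin(omega0 * (W x + b)), makes the bandwidth knob concrete: omega0 scales all input frequencies, and in the kernel view a larger omega0 roughly corresponds to a wider-bandwidth low-pass kernel. The sketch below uses a SIREN-style uniform initialization purely for illustration; it is not the paper's proposed initialization scheme.

```python
import numpy as np

class SineLayer:
    # Minimal sinusoidal layer y = sin(omega0 * (W x + b)). The frequency
    # scale omega0 is the single knob tied to the kernel's bandwidth.
    def __init__(self, d_in, d_out, omega0=6.0, seed=0):
        rng = np.random.default_rng(seed)
        # SIREN-style uniform init in [-1/d_in, 1/d_in] (illustrative choice).
        self.W = rng.uniform(-1.0 / d_in, 1.0 / d_in, size=(d_out, d_in))
        self.b = rng.uniform(-1.0 / d_in, 1.0 / d_in, size=d_out)
        self.omega0 = omega0

    def __call__(self, x):
        return np.sin(self.omega0 * (self.W @ x + self.b))

layer = SineLayer(d_in=2, d_out=16, omega0=6.0)
print(layer(np.array([0.3, -0.7])).shape)   # (16,)
```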
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Efficient Neural Network Analysis with Sum-of-Infeasibilities [64.31536828511021]
Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on networks whose activation functions induce extensive branching.
An extension to a canonical case-analysis-based complete search procedure can be achieved by replacing the convex procedure executed at each search state with DeepSoI.
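The underlying convex-optimization idea is to replace a feasibility query over constraints g_i(x) <= 0 by minimizing the total violation: the query is satisfiable exactly when the minimum is zero, and the cost doubles as a search heuristic. The toy sketch below shows that generic cost only; DeepSoI defines it over a network's activation constraints and integrates it into the search procedure.

```python
import numpy as np

def sum_of_infeasibilities(x, constraints):
    # Total violation of constraints g_i(x) <= 0. The feasibility query is
    # satisfiable iff min_x SoI(x) == 0; smaller values suggest more promising
    # regions during search (generic idea, not DeepSoI's exact cost).
    return sum(max(0.0, g(x)) for g in constraints)

# Toy query: is there an x with x0 + x1 <= 1 and x0 - x1 <= -2 ?
constraints = [lambda x: x[0] + x[1] - 1.0,
               lambda x: x[0] - x[1] + 2.0]
print(sum_of_infeasibilities(np.array([-2.0, 1.0]), constraints))  # 0.0: feasible point
print(sum_of_infeasibilities(np.array([1.0, 1.0]), constraints))   # 3.0: total violation
```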
arXiv Detail & Related papers (2022-03-19T15:05:09Z)
- Activation function design for deep networks: linearity and effective initialisation [10.108857371774977]
We study how to avoid two problems at initialisation identified in prior works.
We prove that both these problems can be avoided by choosing an activation function possessing a sufficiently large linear region around the origin.
arXiv Detail & Related papers (2021-05-17T11:30:46Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an iterative algorithm for the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
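A common way to use activation phases for simplification, sketched below for ReLU under assumed pre-computed pre-activation bounds: neurons whose pre-activation is provably non-negative (or non-positive) can be fixed to the identity (or zero), and only the remaining unstable neurons need case splitting across workers. This illustrates the general idea, not necessarily the paper's exact pre-processing step.

```python
import numpy as np

def classify_relu_phases(pre_lb, pre_ub):
    # Classify ReLU neurons from pre-activation bounds: 'active' (lb >= 0) and
    # 'inactive' (ub <= 0) neurons can be replaced by identity / zero, shrinking
    # the verification problem; only 'unstable' neurons need case splitting.
    phases = np.full(pre_lb.shape, "unstable", dtype=object)
    phases[pre_lb >= 0] = "active"
    phases[pre_ub <= 0] = "inactive"
    return phases

lb = np.array([0.2, -3.0, -1.0])
ub = np.array([1.5, -0.1,  2.0])
print(classify_relu_phases(lb, ub))   # ['active' 'inactive' 'unstable']
```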
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.