Efficient Neural Network Analysis with Sum-of-Infeasibilities
- URL: http://arxiv.org/abs/2203.11201v1
- Date: Sat, 19 Mar 2022 15:05:09 GMT
- Title: Efficient Neural Network Analysis with Sum-of-Infeasibilities
- Authors: Haoze Wu, Aleksandar Zeljić, Guy Katz, Clark Barrett
- Abstract summary: Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on neural networks with piecewise-linear activation functions.
A canonical case-analysis-based complete search procedure can be extended by replacing the convex procedure executed at each search state with DeepSoI, a stochastic procedure that minimizes the Sum-of-Infeasibilities (SoI) cost.
- Score: 64.31536828511021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by sum-of-infeasibilities methods in convex optimization, we propose
a novel procedure for analyzing verification queries on neural networks with
piecewise-linear activation functions. Given a convex relaxation which
over-approximates the non-convex activation functions, we encode the violations
of activation functions as a cost function and optimize it with respect to the
convex relaxation. The cost function, referred to as the Sum-of-Infeasibilities
(SoI), is designed so that its minimum is zero and achieved only if all the
activation functions are satisfied. We propose a stochastic procedure, DeepSoI,
to efficiently minimize the SoI. An extension to a canonical
case-analysis-based complete search procedure can be achieved by replacing the
convex procedure executed at each search state with DeepSoI. Extending the
complete search with DeepSoI achieves multiple simultaneous goals: 1) it guides
the search towards a counter-example; 2) it enables more informed branching
decisions; and 3) it creates additional opportunities for bound derivation. An
extensive evaluation across different benchmarks and solvers demonstrates the
benefit of the proposed techniques. In particular, we demonstrate that SoI
significantly improves the performance of an existing complete search
procedure. Moreover, the SoI-based implementation outperforms other
state-of-the-art complete verifiers. We also show that our technique can
efficiently improve upon the perturbation bound derived by a recent adversarial
attack algorithm.
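To make the SoI construction concrete, here is a minimal sketch for a single ReLU layer: each ReLU y = max(0, p) is assigned a phase, the per-neuron cost is y - p in the active phase and y in the inactive phase, and the total cost is non-negative and zero exactly when every ReLU is satisfied. The weights, bounds, and helpers (`soi_cost`, `crude_relaxation_min`, `deepsoi_like_search`) are hypothetical, and random sampling over the input box stands in for the convex optimization over the relaxation that the paper performs; treat this as an illustration of the idea, not the authors' DeepSoI implementation.

```python
# Illustrative sketch of the Sum-of-Infeasibilities (SoI) idea for a tiny ReLU
# network. NOTE: all weights, bounds, and helper names are hypothetical, and
# the convex solve over the relaxation is crudely replaced by random sampling.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network: p = W1 @ x + b1, h = ReLU(p).
W1 = np.array([[1.0, -1.0],
               [0.5,  1.0]])
b1 = np.array([0.0, -0.5])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # input box


def soi_cost(x, phases):
    """Sum-of-Infeasibilities under a phase assignment (True = active).

    For a ReLU with pre-activation p and relaxed output y (y >= 0, y >= p):
      active phase   -> cost y - p, zero only if p >= 0 and y = p
      inactive phase -> cost y,     zero only if p <= 0 and y = 0
    Minimizing the relaxed y for a fixed input collapses the two cases to the
    expressions below, so the sum is zero exactly when every ReLU is satisfied.
    """
    p = W1 @ x + b1
    active_cost = np.maximum(-p, 0.0)   # distance of p from the active region
    inactive_cost = np.maximum(p, 0.0)  # distance of p from the inactive region
    return float(np.sum(np.where(phases, active_cost, inactive_cost)))


def crude_relaxation_min(phases, samples=2000):
    """Stand-in for the convex solve: sample the input box, keep the best point."""
    xs = rng.uniform(lo, hi, size=(samples, lo.size))
    costs = np.array([soi_cost(x, phases) for x in xs])
    best = int(np.argmin(costs))
    return xs[best], costs[best]


def deepsoi_like_search(steps=50):
    """Stochastic phase-flipping loop, loosely in the spirit of DeepSoI."""
    phases = rng.random(b1.size) < 0.5        # random initial phase assignment
    x, cost = crude_relaxation_min(phases)
    for _ in range(steps):
        if cost == 0.0:                       # all ReLUs satisfied at x
            break
        proposal = phases.copy()
        proposal[rng.integers(b1.size)] ^= True       # flip one ReLU's phase
        x_new, cost_new = crude_relaxation_min(proposal)
        if cost_new <= cost or rng.random() < 0.1:    # accept improvements, plus noise
            phases, x, cost = proposal, x_new, cost_new
    return x, phases, cost


if __name__ == "__main__":
    x, phases, cost = deepsoi_like_search()
    print("phases:", phases, "SoI cost:", cost, "at x =", x)
```

In the paper's setting, a zero-cost minimizer that is feasible for the original network corresponds to a counter-example, and the evolving phase assignments and costs also inform branching decisions within the complete search, per the goals listed in the abstract.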
Related papers
- An Enhanced Zeroth-Order Stochastic Frank-Wolfe Framework for Constrained Finite-Sum Optimization [15.652261277429968]
We propose an enhanced zeroth-order stochastic Frank-Wolfe framework to address constrained finite-sum optimization problems.
Our method introduces a novel double variance reduction framework that effectively reduces the approximation error induced by zeroth-order oracles.
arXiv Detail & Related papers (2025-01-13T10:53:19Z)
- Indirect Query Bayesian Optimization with Integrated Feedback [17.66813850517961]
We develop a new class of Bayesian optimization problems where integrated feedback is given via a conditional expectation of the unknown function $f$ to be optimized.
The goal is to find the global optimum of $f$ by adaptively querying and observing in the space transformed by the conditional distribution.
This is motivated by real-world applications where one cannot access direct feedback due to privacy, hardware or computational constraints.
arXiv Detail & Related papers (2024-12-18T07:20:33Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Faster Stochastic Variance Reduction Methods for Compositional MiniMax Optimization [50.10952609321302]
Compositional minimax optimization is a pivotal challenge across various machine learning domains.
Current methods of compositional minimax optimization are plagued by sub-optimal complexities or heavy reliance on sizable batch sizes.
This paper introduces a novel method, called Nested STOchastic Recursive Momentum (NSTORM), which can achieve the optimal sample complexity of $O(\kappa^3/\epsilon^3)$.
arXiv Detail & Related papers (2023-08-18T14:57:21Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Gleo-Det: Deep Convolution Feature-Guided Detector with Local Entropy Optimization for Salient Points [5.955667705173262]
We propose to achieve a fine constraint based on the repeatability requirement and a coarse constraint guided by deep convolution features.
With the guidance of convolution features, we define the cost function from both positive and negative sides.
arXiv Detail & Related papers (2022-04-27T12:40:21Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
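The last related paper above describes a randomized approximation of projected gradient descent that uses only noisy function evaluations. Below is a minimal sketch of one standard variant of that idea, a two-point zeroth-order gradient estimate combined with a projection step; the test function, step-size and smoothing schedules, and helper names (`project_to_ball`, `zeroth_order_pgd`) are illustrative assumptions, not that paper's exact algorithm or constants.

```python
# Sketch of zeroth-order projected gradient descent with a two-point gradient
# estimate. Hypothetical constants and schedules, for illustration only.
import numpy as np

rng = np.random.default_rng(1)


def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def zeroth_order_pgd(f, x0, steps=500, noise=1e-3):
    """Projected gradient descent driven only by noisy function evaluations."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    for t in range(1, steps + 1):
        h = 1.0 / np.sqrt(t)          # shrinking smoothing parameter
        eta = 1.0 / t                 # step size suited to strong convexity
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)        # random direction on the unit sphere
        # Two noisy queries give a one-dimensional finite-difference estimate.
        fp = f(x + h * u) + noise * rng.standard_normal()
        fm = f(x - h * u) + noise * rng.standard_normal()
        grad_est = d * (fp - fm) / (2.0 * h) * u
        x = project_to_ball(x - eta * grad_est)
    return x


if __name__ == "__main__":
    quad = lambda z: np.sum((z - 0.3) ** 2)   # strongly convex test function
    print(zeroth_order_pgd(quad, x0=np.zeros(5)))
```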
This list is automatically generated from the titles and abstracts of the papers on this site.