LEVIS: Large Exact Verifiable Input Spaces for Neural Networks
- URL: http://arxiv.org/abs/2408.08824v1
- Date: Fri, 16 Aug 2024 16:15:57 GMT
- Title: LEVIS: Large Exact Verifiable Input Spaces for Neural Networks
- Authors: Mohamad Fares El Hajj Chehade, Brian Wesley Bell, Russell Bent, Hao Zhu, Wenting Li
- Abstract summary: The robustness of neural networks is paramount in safety-critical applications.
We introduce a novel framework, $\texttt{LEVIS}$, comprising $\texttt{LEVIS}$-$\alpha$ and $\texttt{LEVIS}$-$\beta$.
We offer a theoretical analysis elucidating the properties of the verifiable balls acquired through $\texttt{LEVIS}$-$\alpha$ and $\texttt{LEVIS}$-$\beta$.
- Score: 8.673606921201442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The robustness of neural networks is paramount in safety-critical applications. While most current robustness verification methods assess the worst-case output under the assumption that the input space is known, identifying a verifiable input space $\mathcal{C}$, where no adversarial examples exist, is crucial for effective model selection, robustness evaluation, and the development of reliable control strategies. To address this challenge, we introduce a novel framework, $\texttt{LEVIS}$, comprising $\texttt{LEVIS}$-$\alpha$ and $\texttt{LEVIS}$-$\beta$. $\texttt{LEVIS}$-$\alpha$ locates the largest possible verifiable ball within the central region of $\mathcal{C}$ that intersects at least two boundaries. In contrast, $\texttt{LEVIS}$-$\beta$ integrates multiple verifiable balls to encapsulate the entirety of the verifiable space comprehensively. Our contributions are threefold: (1) We propose $\texttt{LEVIS}$ equipped with three pioneering techniques that identify the maximum verifiable ball and the nearest adversarial point along collinear or orthogonal directions. (2) We offer a theoretical analysis elucidating the properties of the verifiable balls acquired through $\texttt{LEVIS}$-$\alpha$ and $\texttt{LEVIS}$-$\beta$. (3) We validate our methodology across diverse applications, including electrical power flow regression and image classification, showcasing performance enhancements and visualizations of the searching characteristics.
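The core object in the abstract, the largest verifiable ball around a point, can be sketched with a simple bisection on the radius. This is a minimal illustration of the idea, not the paper's method: `is_verifiable` is a hypothetical stand-in for a neural-network verifier that certifies the absence of adversarial examples inside a ball, and the 1-D "safe interval" is a toy in place of $\mathcal{C}$.

```python
# Minimal sketch of the "largest verifiable ball" idea: bisect on the radius
# around a candidate center until the verification oracle fails. The oracle
# below is a toy stand-in for a real neural-network verifier.

def is_verifiable(center, radius):
    """Toy oracle: the 'safe' region here is the interval [-1, 1]."""
    return center - radius >= -1.0 and center + radius <= 1.0

def largest_verifiable_radius(center, r_max=10.0, tol=1e-6):
    """Bisection: grow the certified radius until verification fails."""
    lo, hi = 0.0, r_max
    if not is_verifiable(center, lo):
        return 0.0  # the center itself is not verifiable
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_verifiable(center, mid):
            lo = mid  # ball of radius mid is certified; try a larger one
        else:
            hi = mid  # verification fails; shrink the ball
    return lo

print(round(largest_verifiable_radius(0.25), 4))  # -> 0.75
```

With the safe interval $[-1, 1]$ and center $0.25$, the nearest boundary is at distance $0.75$, which the bisection recovers; LEVIS-$\beta$ would then cover the rest of the space with further balls.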
Related papers
- Inertial Confinement Fusion Forecasting via Large Language Models [48.76222320245404]
In this study, we introduce $\textbf{LPI-LLM}$, a novel integration of Large Language Models (LLMs) with classical reservoir computing paradigms.
We propose the $\textit{LLM-anchored Reservoir}$, augmented with a $\textit{Fusion-specific Prompt}$, enabling accurate forecasting of $\texttt{LPI}$-generated-hot electron dynamics during implosion.
We also present $\textbf{LPI4AI}$, the first $\texttt{LPI}$ benchmark based
arXiv Detail & Related papers (2024-07-15T05:46:44Z) - Uncertainty of Joint Neural Contextual Bandit [0.41436032949434404]
This paper focuses on a joint neural contextual bandit solution which serves all recommending items in one model.
The tuning of the parameter $\alpha$ is typically complex in practice.
We provide both theoretical analysis and experimental findings regarding the uncertainty $\sigma$ of the joint neural contextual bandit model.
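The role of $\alpha$ in such bandits can be illustrated with a standard UCB-style scoring rule: each item's score is its predicted reward plus $\alpha$ times the model's uncertainty $\sigma$. This is a hedged sketch of that generic tradeoff, not the paper's implementation; all names and numbers are illustrative.

```python
import numpy as np

# Illustrative UCB-style scoring for a contextual bandit: small alpha
# exploits the best predicted reward, large alpha favors uncertain items.
mu = np.array([0.50, 0.48, 0.30])      # predicted rewards per item
sigma = np.array([0.01, 0.10, 0.40])   # per-item uncertainty estimates

def pick_item(alpha):
    """Choose the item maximizing the optimistic score mu + alpha * sigma."""
    return int(np.argmax(mu + alpha * sigma))

print(pick_item(0.0))   # pure exploitation -> item 0
print(pick_item(1.0))   # exploration bonus flips the choice to item 2
```

The flip between the two calls is exactly why tuning $\alpha$ is delicate: small changes move the argmax across items with very different uncertainty profiles.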
arXiv Detail & Related papers (2024-06-04T17:38:24Z) - Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412]
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z) - Efficient Solution of Point-Line Absolute Pose [52.775981113238046]
We revisit certain problems of pose estimation based on 3D--2D correspondences between features which may be points or lines.
We show experimentally that the resulting solvers are numerically stable and fast.
arXiv Detail & Related papers (2024-04-25T12:09:16Z) - Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with
General Utilities [12.104551746465932]
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints.
Our algorithm converges to a first-order stationary point (FOSP) at the rate of $\mathcal{O}\left(T^{-2/3}\right)$.
In the sample-based setting, we demonstrate that, with high probability, our algorithm requires $\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)$ samples to achieve an $\epsilon$-FOSP.
arXiv Detail & Related papers (2023-05-27T20:08:35Z) - Exploring Active 3D Object Detection from a Generalization Perspective [58.597942380989245]
Uncertainty-based active learning policies fail to balance the trade-off between point cloud informativeness and box-level annotation costs.
We propose $\textsc{Crb}$, which hierarchically filters out the point clouds of redundant 3D bounding box labels.
Experiments show that the proposed approach outperforms existing active learning strategies.
arXiv Detail & Related papers (2023-01-23T02:43:03Z) - Graph Neural Networks for Multimodal Single-Cell Data Integration [32.8390339109358]
We present a general Graph Neural Network framework $\textit{scMoGNN}$ to tackle three tasks.
$\textit{scMoGNN}$ demonstrates superior results in all three tasks compared with the state-of-the-art and conventional approaches.
arXiv Detail & Related papers (2022-03-03T17:59:02Z) - Certifiably Robust Interpretation via Renyi Differential Privacy [77.04377192920741]
We study the problem of interpretation robustness from a new perspective of Rényi differential privacy (RDP).
First, it can offer provable and certifiable top-$k$ robustness.
Second, our proposed method offers $\sim 10\%$ better experimental robustness than existing approaches.
Third, our method can provide a smooth tradeoff between robustness and computational efficiency.
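The smoothing intuition behind certifiable interpretation robustness can be sketched by averaging a saliency map over Gaussian perturbations of the input, which stabilizes the top-$k$ feature ranking. This is a generic noise-averaging sketch under a toy model, not the paper's RDP mechanism; all names are illustrative.

```python
import numpy as np

# Sketch of smoothing an interpretation: average the saliency map over
# Gaussian input perturbations so the top-k feature ranking is stable.
rng = np.random.default_rng(3)

def saliency(x):
    """Toy saliency: gradient of f(x) = sum(w * x^2), i.e. 2 * w * x."""
    w = np.array([3.0, 1.0, 0.2])
    return 2.0 * w * x

def smoothed_saliency(x, sigma=0.1, n=5_000):
    """Monte Carlo average of the saliency under Gaussian input noise."""
    noise = rng.normal(0.0, sigma, size=(n, x.size))
    return np.mean([saliency(x + eps) for eps in noise], axis=0)

x = np.array([1.0, 1.0, 1.0])
print(int(np.argmax(np.abs(smoothed_saliency(x)))))  # feature 0 dominates -> 0
```

Because the toy saliency is linear, the noise averages out and the ranking is unchanged; the RDP analysis in the paper is what turns this stability into a formal top-$k$ certificate.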
arXiv Detail & Related papers (2021-07-04T06:58:01Z) - Provable Robust Classification via Learned Smoothed Densities [1.599072005190786]
We formulate the problem of robust classification in terms of $\widehat{x}(Y)$, the $\textit{Bayes estimator}$ of $X$ given the noisy measurements.
We show that with a learned smoothed energy function and a linear classifier we can achieve provable $\ell$ robust accuracies that are competitive with empirical defenses.
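The Bayes estimator $\widehat{x}(Y)$ has a closed form in the simplest tractable case, which makes a useful sanity check: for $X \sim \mathcal{N}(0, \tau^2)$ observed as $Y = X + \mathcal{N}(0, \sigma^2)$, the posterior mean is the linear shrinkage $\tau^2/(\tau^2 + \sigma^2)\, y$. The sketch below verifies this by Monte Carlo; learned smoothed densities generalize the same estimator to data distributions without a closed form. The numbers are illustrative only.

```python
import numpy as np

# Bayes estimator of X given noisy Y in the Gaussian-Gaussian model:
# closed-form posterior mean vs. a Monte Carlo estimate of E[X | Y = y].
rng = np.random.default_rng(1)
tau, sigma, y = 1.0, 0.5, 0.8

# Closed-form posterior mean: tau^2 / (tau^2 + sigma^2) * y.
closed_form = tau**2 / (tau**2 + sigma**2) * y

# Monte Carlo: weight prior samples by the Gaussian likelihood of y.
x = rng.normal(0.0, tau, size=1_000_000)
w = np.exp(-0.5 * ((y - x) / sigma) ** 2)
mc = float(np.sum(w * x) / np.sum(w))

print(round(closed_form, 3))  # -> 0.64
print(round(mc, 2))           # close to 0.64
```

The agreement between the two estimates is what a learned smoothed energy function must reproduce implicitly when no closed form exists.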
arXiv Detail & Related papers (2020-05-09T19:52:32Z) - Naive Exploration is Optimal for Online LQR [49.681825576239355]
We show that the optimal regret scales as $\widetilde{\Theta}\left(\sqrt{d_{\mathbf{u}}^2 d_{\mathbf{x}} T}\right)$, where $T$ is the number of time steps, $d_{\mathbf{u}}$ is the dimension of the input space, and $d_{\mathbf{x}}$ is the dimension of the system state.
Our lower bounds rule out the possibility of a $\mathrm{poly}(\log T)$-regret algorithm, which had been
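"Naive exploration" here refers to playing a certainty-equivalent controller plus injected Gaussian noise. The sketch below shows only that input rule in its simplest form; the gain `K_hat` and the noise-decay schedule are illustrative assumptions, not the paper's tuned constants.

```python
import numpy as np

# Naive exploration for online LQR, sketched: apply the certainty-equivalent
# feedback -K_hat @ x plus Gaussian exploration noise that decays with t.
# The t**-0.25 decay schedule is illustrative, not the paper's exact choice.
rng = np.random.default_rng(2)

def naive_exploration_input(K_hat, x, t):
    """u_t = -K_hat x_t + eta_t, with shrinking exploration noise eta_t."""
    eta = rng.normal(0.0, (t + 1) ** -0.25, size=x.shape)
    return -K_hat @ x + eta

K_hat = np.array([[0.5]])  # illustrative 1-D controller gain
x = np.array([1.0])        # current state
u = naive_exploration_input(K_hat, x, t=100)
print(u.shape)  # -> (1,)
```

The paper's point is that this simple rule already achieves the optimal $\sqrt{T}$ regret scaling, so no cleverer exploration scheme can do fundamentally better.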
arXiv Detail & Related papers (2020-01-27T03:44:54Z)