Evaluating robustness of support vector machines with the Lagrangian
dual approach
- URL: http://arxiv.org/abs/2306.02639v1
- Date: Mon, 5 Jun 2023 07:15:54 GMT
- Title: Evaluating robustness of support vector machines with the Lagrangian
dual approach
- Authors: Yuting Liu, Hong Gu, Pan Qin
- Abstract summary: We propose a method to improve the verification performance for support vector machines (SVMs) with nonlinear kernels.
We evaluate the adversarial robustness of SVMs with linear and nonlinear kernels on the MNIST and Fashion-MNIST datasets.
The experimental results show that the percentage of provable robustness obtained by our method on the test set is better than that of the state-of-the-art.
- Score: 6.868150350359336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples bring a considerable security threat to support vector
machines (SVMs), especially those used in safety-critical applications. Thus,
robustness verification is an essential issue for SVMs, which can provide
provable robustness against various kinds of adversarial attacks. The evaluation
results obtained through robustness verification can provide a safety
guarantee for the use of SVMs. Existing verification methods often do not
perform well in verifying SVMs with nonlinear kernels. To this end, we propose
a method to improve the verification performance for SVMs with nonlinear
kernels. We first formalize the adversarial robustness evaluation of SVMs as an
optimization problem. Then a lower bound of the original problem is obtained by
solving the Lagrangian dual problem of the original problem. Finally, the
adversarial robustness of SVMs is evaluated with respect to the lower bound. We
evaluate the adversarial robustness of SVMs with linear and nonlinear kernels
on the MNIST and Fashion-MNIST datasets. The experimental results show that the
percentage of provable robustness obtained by our method on the test set is
better than that of the state-of-the-art.
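The core mechanism of the method is weak duality: any value of the Lagrangian dual function is a certified lower bound on the primal minimum. The following is an illustrative sketch of that principle on a toy problem, not the paper's actual SVM formulation; the problem instance and all names are invented for demonstration:

```python
# Toy problem: minimize f(x) = x^2 subject to x >= 1 (primal optimum p* = 1,
# attained at x = 1). Weak duality says every dual value lower-bounds p*,
# which is the principle used to bound SVM robustness via the dual.

def dual_value(lam):
    # Lagrangian: L(x, lam) = x^2 + lam * (1 - x); minimized over x at x = lam / 2,
    # giving the dual function q(lam) = lam - lam^2 / 4.
    x = lam / 2.0
    return x ** 2 + lam * (1.0 - x)

p_star = 1.0  # primal optimum

# Weak duality: every dual value is a valid lower bound on p*.
for lam in [0.0, 0.5, 1.0, 2.0, 3.5]:
    assert dual_value(lam) <= p_star + 1e-12

# Here strong duality also holds: the best dual bound (at lam = 2) is tight.
best = max(dual_value(l / 100.0) for l in range(0, 501))
print(best)  # -> 1.0, attained at lam = 2
```

In the paper's setting the primal objective encodes the worst-case margin under perturbation, so a dual lower bound above zero certifies robustness of a sample.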
Related papers
- A Safe Screening Rule with Bi-level Optimization of $\nu$ Support Vector
Machine [15.096652880354199]
We propose a safe screening rule with bi-level optimization for $\nu$-SVM.
Our SRBO-$\nu$-SVM is strictly deduced by integrating the Karush-Kuhn-Tucker conditions.
We also develop an efficient dual coordinate descent method (DCDM) to further improve computational speed.
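As a rough illustration of what a dual coordinate descent method for SVMs looks like, here is a generic textbook sketch for the standard linear SVM dual with hinge loss; it is not the paper's SRBO-specific solver, and all names are ours:

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=50):
    # Dual coordinate descent for the L1-loss linear SVM dual:
    #   min_alpha 1/2 ||w||^2 - sum(alpha), with 0 <= alpha_i <= C
    # where w = sum_i alpha_i * y_i * x_i is maintained incrementally.
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = (X ** 2).sum(axis=1)  # diagonal of Q, Q_ij = y_i y_j x_i . x_j
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * (w @ X[i]) - 1.0            # partial gradient wrt alpha_i
            new = min(max(alpha[i] - g / Qii[i], 0.0), C)  # clipped 1-D step
            w += (new - alpha[i]) * y[i] * X[i]    # keep w in sync with alpha
            alpha[i] = new
    return w
```

For example, on the separable points `[[2,0],[3,1],[-2,0],[-3,-1]]` with labels `[1,1,-1,-1]`, the returned `w` classifies all four points correctly via `sign(X @ w)`.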
arXiv Detail & Related papers (2024-03-04T06:55:57Z) - Abstract Interpretation-Based Feature Importance for SVMs [8.879921160392737]
We propose a symbolic representation for support vector machines (SVMs) by means of abstract interpretation.
We derive a novel feature importance measure, called abstract feature importance (AFI), that does not depend in any way on a given dataset or on the accuracy of the SVM.
Our experimental results show that, independently of the accuracy of the SVM, our AFI measure correlates much more strongly with the stability of the SVM to feature perturbations than feature importance measures widely available in machine learning software.
arXiv Detail & Related papers (2022-10-22T13:57:44Z) - Log Barriers for Safe Black-box Optimization with Application to Safe
Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing violation in policy tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z) - Handling Imbalanced Classification Problems With Support Vector Machines
via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms to deal with binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z) - Max-Margin Contrastive Learning [120.32963353348674]
We present max-margin contrastive learning (MMCL) for unsupervised representation learning.
Our approach selects negatives as the sparse support vectors obtained via a quadratic optimization problem.
We validate our approach on standard vision benchmark datasets, demonstrating better performance in unsupervised representation learning.
arXiv Detail & Related papers (2021-12-21T18:56:54Z) - Training very large scale nonlinear SVMs using Alternating Direction
Method of Multipliers coupled with the Hierarchically Semi-Separable kernel
approximations [0.0]
Nonlinear Support Vector Machines (SVMs) produce significantly higher classification quality than linear ones.
However, their computational complexity is prohibitive for large-scale datasets.
arXiv Detail & Related papers (2021-08-09T16:52:04Z) - Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
Support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from this trade-off, connecting the proposed SVM procedure to the existing kernel balancing methods.
arXiv Detail & Related papers (2021-02-23T20:22:56Z) - Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z) - Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN
Approach [27.503734504441365]
Adversarial machine learning has attracted a great amount of attention in recent years.
In this paper, we consider defending SVM against poisoning attacks.
We study two commonly used strategies for defending: designing robust SVM algorithms and data sanitization.
arXiv Detail & Related papers (2020-06-14T01:19:38Z) - SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.