An Efficient Method for Sample Adversarial Perturbations against
Nonlinear Support Vector Machines
- URL: http://arxiv.org/abs/2206.05664v1
- Date: Sun, 12 Jun 2022 05:21:51 GMT
- Authors: Wen Su, Qingna Li
- Abstract summary: We investigate sample adversarial perturbations for nonlinear support vector machines (SVMs).
Due to the implicit form of the nonlinear functions mapping data to the feature space, it is difficult to obtain the explicit form of the adversarial perturbations.
By exploring the special property of nonlinear SVMs, we transform the optimization problem of attacking nonlinear SVMs into a nonlinear KKT system.
- Score: 8.000799046379749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial perturbations have drawn great attention in various machine
learning models. In this paper, we investigate sample adversarial
perturbations for nonlinear support vector machines (SVMs). Due to the implicit
form of the nonlinear functions mapping data to the feature space, it is
difficult to obtain the explicit form of the adversarial perturbations. By
exploring the special property of nonlinear SVMs, we transform the optimization
problem of attacking nonlinear SVMs into a nonlinear KKT system. Such a system
can be solved by various numerical methods. Numerical results show that our
method is efficient in computing adversarial perturbations.
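As a hedged illustration of this formulation (our generic reading, not the authors' exact system or notation): the minimal-norm attack min_delta 0.5*||delta||^2, subject to the decision value of the perturbed sample crossing zero, has KKT conditions that form a small nonlinear system which an off-the-shelf root-finder can solve. A minimal sketch, assuming scikit-learn's SVC, SciPy's fsolve, and an RBF kernel:

```python
# Hedged sketch: minimal-norm adversarial perturbation for an RBF-kernel SVM,
# found by solving the KKT system of  min 0.5*||delta||^2  s.t.  s*f(x0+delta) <= 0
# with a generic root-finder. Not the paper's exact method.
import numpy as np
from scipy.optimize import fsolve
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)
gamma = 2.0
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X, y)
sv, coef, b = clf.support_vectors_, clf.dual_coef_.ravel(), clf.intercept_[0]

def f(x):
    # decision function f(x) = sum_i alpha_i y_i k(x_i, x) + b
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return coef @ k + b

def grad_f(x):
    # gradient of the RBF decision function at x
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return (coef * k) @ (2.0 * gamma * (sv - x))

x0 = X[0]
s = np.sign(f(x0))  # side of the boundary the sample starts on

def kkt(z):
    # Stationarity: delta + lam * s * grad_f(x0 + delta) = 0
    # Active constraint: s * f(x0 + delta) = 0 (sample lands on the boundary)
    delta, lam = z[:-1], z[-1]
    return np.append(delta + lam * s * grad_f(x0 + delta), s * f(x0 + delta))

z0 = np.append(np.zeros(X.shape[1]), 1.0)  # starting point matters for fsolve
delta = fsolve(kkt, z0)[:-1]
print("||delta|| =", np.linalg.norm(delta), " f(x0+delta) =", f(x0 + delta))
```

fsolve is only one choice here; as the abstract notes, various numerical methods can be applied to the resulting system, and convergence depends on the starting point.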
Related papers
- Robust optimization for adversarial learning with finite sample complexity guarantees [1.8434042562191815]
In this paper we focus on linear and nonlinear classification problems and propose a novel adversarial training method for robust classifiers.
We view robustness through a data-driven lens, and derive finite sample complexity bounds for both linear and nonlinear classifiers in binary and multi-class scenarios.
Our algorithm minimizes a worst-case surrogate loss using Linear Programming (LP) for linear models and Second-Order Cone Programming (SOCP) for nonlinear ones.
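For intuition, a minimal sketch of the special case where the adversary's inner maximization has a textbook closed form: a linear classifier with hinge loss under an l_inf-bounded perturbation (the paper's LP/SOCP machinery handles more general settings; names and values here are illustrative):

```python
# Hedged sketch: worst-case hinge loss for a linear classifier under an
# l_inf-bounded perturbation; the inner max has a closed form via the dual
# (l_1) norm. The paper's LP/SOCP treatment is more general than this.
import numpy as np

def worst_case_hinge(w, b, X, y, eps):
    # max_{||d||_inf <= eps} hinge(y*(w@(x+d)+b)) = hinge(y*(w@x+b) - eps*||w||_1)
    margins = y * (X @ w + b) - eps * np.linalg.norm(w, 1)
    return np.maximum(0.0, 1.0 - margins)

w, b = np.array([1.0, -2.0]), 0.5
X, y = np.array([[0.3, -0.1], [-1.0, 0.4]]), np.array([1, -1])
print(worst_case_hinge(w, b, X, y, eps=0.1))
```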
arXiv Detail & Related papers (2024-03-22T13:49:53Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
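For reference, a minimal sketch of the classical Gaussian special case of penalized maximum likelihood graph learning, the graphical lasso as shipped in scikit-learn; the paper's actual contribution, elliptical models under low-rank Riemannian constraints, is not implemented here:

```python
# Hedged sketch: penalized Gaussian MLE graph learning (graphical lasso),
# the classical special case that the paper generalizes to elliptical
# models with low-rank structural constraints.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
X[:, 1] += 0.8 * X[:, 0]  # induce one dependency
model = GraphicalLasso(alpha=0.05).fit(X)
# Nonzero off-diagonal entries of the precision matrix define graph edges.
print(np.round(model.precision_, 2))
```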
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Training very large scale nonlinear SVMs using Alternating Direction Method of Multipliers coupled with the Hierarchically Semi-Separable kernel approximations [0.0]
Nonlinear Support Vector Machines (SVMs) produce significantly higher classification quality than linear ones, but their computational complexity is prohibitive for large-scale datasets.
arXiv Detail & Related papers (2021-08-09T16:52:04Z)
- Nonlinear Least Squares for Large-Scale Machine Learning using Stochastic Jacobian Estimates [0.0]
We exploit the property that the number of model parameters typically exceeds the number of data points in one batch to compute search directions.
We develop two algorithms that estimate Jacobian matrices and perform well when compared to state-of-the-art methods.
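A minimal sketch of one standard way to exploit parameters outnumbering batch residuals, a generic Gauss-Newton/Levenberg-Marquardt identity (not necessarily the paper's two algorithms):

```python
# Hedged sketch: a Gauss-Newton/LM direction computed in the batch dimension.
# With J of shape (m, n), m residuals << n parameters, the identity
#   (J.T @ J + lam*I)^{-1} @ J.T @ r  ==  J.T @ (J @ J.T + lam*I)^{-1} @ r
# lets us factor an m x m matrix instead of an n x n one.
import numpy as np

def gauss_newton_step(J, r, lam=1e-3):
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)

rng = np.random.default_rng(0)
J = rng.standard_normal((32, 10_000))  # batch of 32 residuals, 10k parameters
r = rng.standard_normal(32)
d = gauss_newton_step(J, r)
print(d.shape)  # (10000,) search direction at m x m factorization cost
```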
arXiv Detail & Related papers (2021-07-12T17:29:08Z)
- Hessian Eigenspectra of More Realistic Nonlinear Models [73.31363313577941]
We give a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models.
Our analysis takes a step forward to identify the origin of many striking features observed in more complex machine learning models.
arXiv Detail & Related papers (2021-03-02T06:59:52Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
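A minimal sketch of the objective as we read it, reconstruction error plus a smoothed $l_{2,p}$ penalty on the rows of a projection matrix, minimized here by crude, untuned gradient descent; the paper's dedicated solver and convergence analysis are not reproduced:

```python
# Hedged sketch: unsupervised feature selection by minimizing
#   ||X - X W W^T||_F^2 / n + lam * sum_i (||w_i||^2 + eps)^(p/2)
# and scoring each feature by the norm of its row of W.
import numpy as np

def select_features(X, k=2, lam=0.5, p=0.5, lr=5e-3, iters=1000, eps=1e-4):
    n, d = X.shape
    W = np.linalg.svd(X, full_matrices=False)[2][:k].T * 0.1  # small warm start
    for _ in range(iters):
        E = X - X @ W @ W.T                           # reconstruction residual
        g_rec = -2.0 * (X.T @ E @ W + E.T @ X @ W) / n
        row_sq = np.sum(W ** 2, axis=1, keepdims=True) + eps
        g_reg = p * row_sq ** (p / 2.0 - 1.0) * W     # grad of smoothed penalty
        W -= lr * (g_rec + lam * g_reg)
    return np.linalg.norm(W, axis=1)                  # feature scores

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
X[:, 3] = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(100)  # redundant feature
print(np.round(select_features(X), 3))
```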
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Linear embedding of nonlinear dynamical systems and prospects for efficient quantum algorithms [74.17312533172291]
We describe a method for mapping any finite nonlinear dynamical system to an infinite linear dynamical system (embedding).
We then explore an approach for approximating the resulting infinite linear system with finite linear systems (truncation).
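A classic worked instance of such an embedding, assuming Carleman linearization is the intended construction: for $\dot{x} = -x + x^2$, the monomials $y_k = x^k$ satisfy the infinite linear system $\dot{y}_k = -k\,y_k + k\,y_{k+1}$, truncated here at order $N$:

```python
# Hedged sketch: Carleman embedding of xdot = -x + x^2 via monomials y_k = x^k,
# which obey ydot_k = -k*y_k + k*y_{k+1}; truncate at order N and compare
# against the known closed-form solution of this particular ODE.
import numpy as np
from scipy.linalg import expm

N, x0, t = 8, 0.3, 1.0
A = np.diag(-np.arange(1, N + 1).astype(float))          # -k on the diagonal
A += np.diag(np.arange(1, N).astype(float), k=1)          # +k on the superdiagonal
y0 = x0 ** np.arange(1, N + 1)                            # initial monomials
x_carleman = (expm(A * t) @ y0)[0]                        # first coordinate is x(t)
x_exact = x0 * np.exp(-t) / (x0 * np.exp(-t) + 1 - x0)    # closed-form solution
print(x_carleman, x_exact)  # close for small |x0| and modest t
```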
arXiv Detail & Related papers (2020-12-12T00:01:10Z)
- Sparse Quantized Spectral Clustering [85.77233010209368]
We exploit tools from random matrix theory to make precise statements about how the eigenspectrum of a matrix changes under such nonlinear transformations.
We show that very little change occurs in the informative eigenstructure even under drastic sparsification/quantization.
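A toy check of this claim (our own setup, not the paper's random-matrix analysis): sparsify and 1-bit quantize a Gram matrix, then compare its leading eigenvector to the dense one:

```python
# Hedged toy check: the leading eigenvector of a Gram matrix survives heavy
# sparsification plus 1-bit quantization. Data and thresholds are illustrative.
import numpy as np
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=400, centers=[[-3, -3], [3, 3]], random_state=0)
G = X @ X.T / X.shape[1]                      # dense Gram ("linear kernel")
rng = np.random.default_rng(0)
keep = np.triu(rng.random(G.shape) < 0.05, 1)
keep = keep | keep.T                          # symmetric ~5% sparsification
Gq = np.sign(G) * keep                        # 1-bit quantization of kept entries

def top_eigvec(M):
    return np.linalg.eigh(M)[1][:, -1]        # eigenvector of largest eigenvalue

v, vq = top_eigvec(G), top_eigvec(Gq)
print("|cos(v, vq)| =", abs(v @ vq))          # typically stays near 1
```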
arXiv Detail & Related papers (2020-10-03T15:58:07Z)
- The role of optimization geometry in single neuron learning [12.891722496444036]
Recent experiments have demonstrated that the choice of optimization geometry can impact generalization performance when learning expressive neural networks.
We show how the interplay between the optimization geometry and the feature geometry shapes out-of-sample performance.
arXiv Detail & Related papers (2020-06-15T17:39:44Z)