Newton Method-based Subspace Support Vector Data Description
- URL: http://arxiv.org/abs/2309.13960v1
- Date: Mon, 25 Sep 2023 08:49:41 GMT
- Title: Newton Method-based Subspace Support Vector Data Description
- Authors: Fahad Sohrab, Firas Laakom, Moncef Gabbouj
- Abstract summary: We present an adaptation of Newton's method for the optimization of Subspace Support Vector Data Description (S-SVDD).
We leverage Newton's method to enhance data mapping and data description for an improved optimization of subspace learning-based one-class classification.
The paper discusses the limitations of gradient descent and the advantages of using Newton's method in subspace learning for one-class classification tasks.
- Score: 16.772385337198834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present an adaptation of Newton's method for the
optimization of Subspace Support Vector Data Description (S-SVDD). The
objective of S-SVDD is to map the original data to a subspace optimized for
one-class classification, and the iterative optimization process of data
mapping and description in S-SVDD relies on gradient descent. However, gradient
descent only utilizes first-order information, which may lead to suboptimal
results. To address this limitation, we leverage Newton's method to enhance
data mapping and data description for an improved optimization of subspace
learning-based one-class classification. By incorporating second-order
(curvature) information, Newton's method offers a more efficient strategy for
subspace learning in one-class classification than gradient-based
optimization. The paper discusses the limitations of gradient descent and the
advantages of using Newton's method in subspace learning for one-class
classification tasks. We provide both linear and nonlinear formulations of
Newton's method-based optimization for S-SVDD. In our experiments, we explored
both the minimization and maximization strategies of the objective. The results
demonstrate that the proposed optimization strategy outperforms the
gradient-based S-SVDD in most cases.
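The central contrast in the abstract, a first-order gradient step versus a second-order Newton step, can be made concrete with a short sketch. The following is a minimal illustration and not the authors' S-SVDD implementation: the random data, the fixed center, and the quadratic stand-in loss (mean squared distance of projected samples from a center, loosely mimicking the data-description term) are all assumptions chosen so that both update rules have closed forms.

```python
# Minimal sketch contrasting a gradient-descent update with a Newton update
# on a toy stand-in objective. NOT the authors' S-SVDD code: the data, the
# fixed center, and the loss are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # toy one-class training data (n=100, d=5)
w = rng.normal(size=5)          # projection vector defining a 1-D subspace
center = 0.5                    # assumed fixed center of the description

def loss(w):
    # mean squared distance of projected samples from the center
    z = X @ w
    return np.mean((z - center) ** 2)

def gradient(w):
    # first-order information: grad = (2/n) * X^T (Xw - c)
    z = X @ w
    return 2.0 * (X.T @ (z - center)) / len(X)

def hessian(w):
    # second-order information: for this quadratic stand-in the Hessian
    # is constant, H = (2/n) * X^T X
    return 2.0 * (X.T @ X) / len(X)

eta = 0.1
w_gd = w - eta * gradient(w)                          # gradient-descent step
w_nt = w - np.linalg.solve(hessian(w), gradient(w))   # Newton step

print(f"initial loss:        {loss(w):.4f}")
print(f"after gradient step: {loss(w_gd):.4f}")
print(f"after Newton step:   {loss(w_nt):.4f}")
```

Because the stand-in loss is quadratic, the Newton step lands on its minimizer in a single update while the gradient step only moves a fraction of the way; in the actual S-SVDD objective the Hessian varies with the learned mapping, so this advantage holds locally rather than globally.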
Related papers
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and the classifier in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z) - Class Gradient Projection For Continual Learning [99.105266615448]
Catastrophic forgetting is one of the most critical challenges in Continual Learning (CL)
We propose Class Gradient Projection (CGP), which calculates the gradient subspace from individual classes rather than tasks.
arXiv Detail & Related papers (2023-11-25T02:45:56Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential-rate), ab initio (hyperparameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly with $n$.
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - An alternative to SVM Method for Data Classification [0.0]
Support vector machine (SVM) is a popular kernel method for data classification.
The method suffers from some weaknesses, including processing time and the risk of failure of the optimization process in high-dimensional cases.
In this paper, an alternative method is proposed that achieves similar performance while noticeably improving on the aforementioned shortcomings.
arXiv Detail & Related papers (2023-08-20T14:09:01Z) - Penalizing Gradient Norm for Efficiently Improving Generalization in
Deep Learning [13.937644559223548]
How to train deep neural networks (DNNs) to generalize well is a central concern in deep learning.
We propose an effective method to improve model generalization by penalizing the gradient norm of the loss function during optimization.
arXiv Detail & Related papers (2022-02-08T02:03:45Z) - Why Approximate Matrix Square Root Outperforms Accurate SVD in Global
Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass and Padé approximants in backward propagation to compute the gradients.
The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performance on both large-scale and fine-grained datasets.
arXiv Detail & Related papers (2021-05-06T08:03:45Z) - Graph-Embedded Subspace Support Vector Data Description [98.78559179013295]
We propose a novel subspace learning framework for one-class classification.
The proposed framework formulates the problem in the form of graph embedding.
We demonstrate improved performance against the baselines and the recently proposed subspace learning methods for one-class classification.
arXiv Detail & Related papers (2021-04-29T14:30:48Z) - Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is observed to converge much faster than the recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.