An Autotuning-based Optimization Framework for Mixed-kernel SVM Classifications in Smart Pixel Datasets and Heterojunction Transistors
- URL: http://arxiv.org/abs/2406.18445v2
- Date: Thu, 26 Sep 2024 18:50:11 GMT
- Title: An Autotuning-based Optimization Framework for Mixed-kernel SVM Classifications in Smart Pixel Datasets and Heterojunction Transistors
- Authors: Xingfu Wu, Tupendra Oli, Justin H. Qian, Valerie Taylor, Mark C. Hersam, Vinod K. Sangwan
- Abstract summary: Support Vector Machine (SVM) is a state-of-the-art classification method widely used in science and engineering.
We propose an autotuning-based optimization framework to quantify the ranges of hyperparameters in SVMs and identify their optimal choices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Support Vector Machine (SVM) is a state-of-the-art classification method widely used in science and engineering due to its high accuracy, its ability to deal with high-dimensional data, and its flexibility in modeling diverse sources of data. In this paper, we propose an autotuning-based optimization framework to quantify the ranges of hyperparameters in SVMs and identify their optimal choices, and we apply the framework to two SVMs with a mixed kernel between Sigmoid and Gaussian kernels for smart pixel datasets in high energy physics (HEP) and mixed-kernel heterojunction transistors (MKH). Our experimental results show that the optimal selection of hyperparameters in the SVMs and the kernels varies greatly across applications and datasets, and that identifying these optimal choices is critical for high classification accuracy of the mixed-kernel SVMs. Uninformed choices of the hyperparameters C and coef0 in the mixed-kernel SVMs result in severely low accuracy, whereas the proposed framework effectively quantifies the proper ranges for the hyperparameters and identifies their optimal choices, achieving the highest accuracy of 94.6% for the HEP application and the highest average accuracy of 97.2% with far less tuning time for the MKH application.
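To make the tuning problem concrete, the sketch below builds a mixed Sigmoid + Gaussian kernel as a custom kernel callable and sweeps C, coef0, and gamma by cross-validation. This is a minimal illustration under stated assumptions, not the paper's framework: the additive mixing rule, the scikit-learn implementation, the synthetic dataset, and the parameter grids are all assumptions of this sketch.

```python
# Minimal sketch (not the authors' framework): a mixed Sigmoid + Gaussian kernel
# SVM whose hyperparameters C, coef0, and gamma are swept by cross-validation.
# The additive mixing rule, the synthetic dataset, and the parameter grids are
# illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def mixed_kernel(gamma, coef0):
    """Return a kernel callable combining a Sigmoid term and a Gaussian (RBF) term."""
    def k(X, Y):
        sigmoid = np.tanh(gamma * (X @ Y.T) + coef0)
        sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        gaussian = np.exp(-gamma * np.maximum(sq, 0.0))
        return sigmoid + gaussian          # assumed additive mix of the two kernels
    return k


X, y = make_classification(n_samples=400, n_features=20, random_state=0)

best_acc, best_params = -np.inf, None
for C in (0.1, 1.0, 10.0):                 # placeholder range for C
    for coef0 in (-1.0, 0.0, 1.0):         # placeholder range for coef0
        for gamma in (0.01, 0.1, 1.0):     # placeholder range for gamma
            clf = SVC(C=C, kernel=mixed_kernel(gamma, coef0))
            acc = cross_val_score(clf, X, y, cv=5).mean()
            if acc > best_acc:
                best_acc, best_params = acc, {"C": C, "coef0": coef0, "gamma": gamma}

print(f"best CV accuracy {best_acc:.3f} with {best_params}")
```

The paper's framework replaces this exhaustive sweep with an autotuner that searches quantified hyperparameter ranges far more efficiently; the nested loops above are only meant to show where C, coef0, and gamma enter the model.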
Related papers
- MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence [97.93517982908007]
In cross-domain few-shot classification, the nearest centroid classifier (NCC) aims to learn representations to construct a metric space where few-shot classification can be performed.
In this paper, we find that there exist high similarities between NCC-learned representations of two samples from different classes.
We propose a bi-level optimization framework, Maximizing Optimized Kernel Dependence (MOKD), to learn a set of class-specific representations that match the cluster structures indicated by labeled data.
arXiv Detail & Related papers (2024-05-29T05:59:52Z)
- Sparsity-Aware Distributed Learning for Gaussian Processes with Linear Multiple Kernel [22.23550794664218]
This paper presents a novel GP linear multiple kernel (LMK) and a generic sparsity-aware distributed learning framework.
The framework incorporates a quantized alternating direction method of multipliers (ADMM) for collaborative learning among multiple agents.
Experiments on diverse datasets demonstrate the superior prediction performance and efficiency of our proposed methods.
arXiv Detail & Related papers (2023-09-15T07:05:33Z)
- Separability and Scatteredness (S&S) Ratio-Based Efficient SVM Regularization Parameter, Kernel, and Kernel Parameter Selection [10.66048003460524]
Support Vector Machine (SVM) is a robust machine learning algorithm with broad applications in classification, regression, and outlier detection.
This work shows that the SVM performance can be modeled as a function of separability and scatteredness (S&S) of the data.
arXiv Detail & Related papers (2023-05-17T13:51:43Z)
- Support Vector Machine for Determining Euler Angles in an Inertial Navigation System [55.41644538483948]
The paper discusses the improvement of the accuracy of an inertial navigation system created on the basis of MEMS sensors using machine learning (ML) methods.
The proposed ML-based algorithm has demonstrated its ability to classify correctly in the presence of noise typical of MEMS sensors.
arXiv Detail & Related papers (2022-12-07T10:01:11Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to require fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Feature subset selection for kernel SVM classification via mixed-integer optimization [0.7734726150561088]
We study the mixed-integer optimization (MIO) approach to feature subset selection in nonlinear kernel support vector machines (SVMs) for binary classification.
First proposed for linear regression in the 1970s, this approach has recently moved into the spotlight with advances in optimization algorithms and computer hardware.
We propose a mixed-integer linear optimization (MILO) formulation based on kernel-target alignment for feature subset selection; this MILO problem can be solved to optimality using optimization software (a brief sketch of the kernel-target alignment score appears after this list).
arXiv Detail & Related papers (2022-05-28T04:01:40Z)
- Handling Imbalanced Classification Problems With Support Vector Machines via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms to deal with binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z)
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3x-30x.
arXiv Detail & Related papers (2022-03-15T19:25:01Z)
- AML-SVM: Adaptive Multilevel Learning with Support Vector Machines [0.0]
This paper proposes an adaptive multilevel learning framework for the nonlinear SVM.
It improves the classification quality across the refinement process, and leverages multi-threaded parallel processing for better performance.
arXiv Detail & Related papers (2020-11-05T00:17:02Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimization algorithm, stocBiO, with a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Radial basis function kernel optimization for Support Vector Machine classifiers [4.888981184420116]
OKSVM is an algorithm that automatically learns the RBF kernel hyperparameter and adjusts the SVM weights simultaneously.
We analyze the performance of our approach with respect to the classical SVM for classification on synthetic and real data.
arXiv Detail & Related papers (2020-07-16T10:09:15Z)
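As noted in the feature-subset-selection entry above, kernel-target alignment measures how well a Gram matrix matches the label structure. The sketch below implements the standard alignment score only; it is an illustrative helper, not the MILO formulation from that paper, and the function name and NumPy usage are assumptions of this sketch.

```python
# Minimal sketch of the standard kernel-target alignment score
# A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F); illustrative only.
import numpy as np


def kernel_target_alignment(K: np.ndarray, y: np.ndarray) -> float:
    """Alignment between a Gram matrix K and binary labels y in {-1, +1}."""
    Y = np.outer(y, y)                                   # target matrix yy^T
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))


# Example: alignment of an RBF Gram matrix with random labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = np.where(rng.random(50) > 0.5, 1.0, -1.0)
sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
K = np.exp(-0.5 * np.maximum(sq, 0.0))
print(f"alignment = {kernel_target_alignment(K, y):.3f}")
```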