GBSVM: Granular-ball Support Vector Machine
- URL: http://arxiv.org/abs/2210.03120v2
- Date: Sun, 11 Feb 2024 16:02:18 GMT
- Title: GBSVM: Granular-ball Support Vector Machine
- Authors: Shuyin Xia, Xiaoyu Lian, Guoyin Wang, Xinbo Gao, Jiancu Chen, Xiaoli Peng
- Abstract summary: GBSVM is a significant attempt to construct a classifier using the coarse-to-fine granularity of a granular-ball as input, rather than a single data point.
This paper fixes the errors in the original GBSVM model and derives its dual model.
The experimental results on the UCI benchmark datasets demonstrate that GBSVM has good robustness and efficiency.
- Score: 46.60182022640765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: GBSVM (Granular-ball Support Vector Machine) is a significant attempt to
construct a classifier using the coarse-to-fine granularity of a granular-ball
as input, rather than a single data point. It is the first classifier whose
input contains no points. However, the existing model has some errors, and its
dual model has not been derived, so the current algorithm can be neither
implemented nor applied. To address these problems, this paper fixes the
errors in the original GBSVM model and derives its dual model. Furthermore, a
particle swarm optimization algorithm is designed to solve the dual model. A
sequential minimal optimization algorithm is also carefully designed to solve
the dual model; its solution is faster and more stable than the particle swarm
optimization based version. Experimental results on the UCI benchmark datasets
demonstrate that GBSVM has good robustness and efficiency. All code has been
released in the open-source library at
http://www.cquptshuyinxia.com/GBSVM.html or https://github.com/syxiaa/GBSVM.
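To make the granular-ball idea concrete, here is a minimal Python sketch, not the paper's model: each class is split into k-means clusters, each cluster becomes a ball (center plus radius), and an off-the-shelf SVM is trained on the ball centers alone, so no raw data point reaches the classifier. The clustering choice and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def granular_balls(X, y, n_balls=4, seed=0):
    """Split each class into k-means clusters; each cluster becomes a 'ball'
    described by its center, radius (max distance to center), and label."""
    centers, radii, labels = [], [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        km = KMeans(n_clusters=min(n_balls, len(Xc)), n_init=10,
                    random_state=seed).fit(Xc)
        for j, c in enumerate(km.cluster_centers_):
            pts = Xc[km.labels_ == j]
            centers.append(c)
            radii.append(np.linalg.norm(pts - c, axis=1).max())
            labels.append(cls)
    return np.array(centers), np.array(radii), np.array(labels)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

C, R, L = granular_balls(X, y)
clf = SVC(kernel="linear").fit(C, L)   # the classifier never sees raw points
print(len(C), clf.score(C, L))
```

With two well-separated classes, eight ball centers are enough for the linear SVM to separate the data; the radii would enter the actual GBSVM constraints, which this sketch omits.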
Related papers
- Granular-Balls based Fuzzy Twin Support Vector Machine for Classification [12.738411525651667]
We introduce the granular-ball twin support vector machine (GBTWSVM) classifier, which integrates granular-ball computing (GBC) with the twin support vector machine (TWSVM).
We design the membership and non-membership functions of granular-balls using Pythagorean fuzzy sets to differentiate the contributions of granular-balls in various regions.
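The defining property of a Pythagorean fuzzy membership/non-membership pair is mu^2 + nu^2 <= 1. The toy sketch below shows one way such a pair could be assigned to a sample relative to a granular-ball; the functional forms are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def pythagorean_pair(dist, radius):
    """Toy membership (mu) / non-membership (nu) pair for a sample at
    distance `dist` from a granular-ball of the given radius. Only the
    Pythagorean fuzzy constraint mu**2 + nu**2 <= 1 is taken from the
    paper; these particular formulas are assumptions."""
    mu = np.exp(-(dist / radius) ** 2)                     # nearer -> higher membership
    nu = 0.9 * np.sqrt(np.clip(1.0 - mu ** 2, 0.0, 1.0))  # leave some hesitancy
    return mu, nu

mu, nu = pythagorean_pair(dist=1.0, radius=2.0)
print(round(mu, 3), round(nu, 3), mu ** 2 + nu ** 2 <= 1.0)
```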
arXiv Detail & Related papers (2024-08-01T16:43:21Z) - A Safe Screening Rule with Bi-level Optimization of $\nu$ Support Vector Machine [15.096652880354199]
We propose a safe screening rule with bi-level optimization for $\nu$-SVM.
Our SRBO-$\nu$-SVM is rigorously derived by integrating the Karush-Kuhn-Tucker (KKT) conditions.
We also develop an efficient dual coordinate descent method (DCDM) to further improve computational speed.
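Dual coordinate descent for a linear SVM can be sketched as follows. This is the generic L1-loss algorithm in the spirit of DCDM, without the paper's safe-screening rule or bi-level components; maintaining the primal vector w makes each coordinate update O(d).

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=20):
    """Dual coordinate descent for the L1-loss linear SVM dual:
        min_a 0.5 a^T Qbar a - sum(a),  0 <= a_i <= C,
    with Qbar_ij = y_i y_j x_i.x_j. Maintaining w = sum_i a_i y_i x_i
    keeps each coordinate update O(d)."""
    n, d = X.shape
    a, w = np.zeros(n), np.zeros(d)
    Qii = np.einsum("ij,ij->i", X, X)          # diagonal of Qbar
    for _ in range(epochs):
        for i in range(n):
            G = y[i] * (w @ X[i]) - 1.0        # gradient of the dual in a_i
            a_new = min(max(a[i] - G / Qii[i], 0.0), C)
            w += (a_new - a[i]) * y[i] * X[i]
            a[i] = a_new
    return w, a

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.5, 0.5, (50, 2)), rng.normal(1.5, 0.5, (50, 2))])
y = np.array([-1.0] * 50 + [1.0] * 50)
w, a = dcd_linear_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(acc)
```

No bias term is used, so the toy data is chosen to be separable through the origin.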
arXiv Detail & Related papers (2024-03-04T06:55:57Z) - SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
The main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, for single-batch inference.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to ultra-low precisions of up to 3-bit.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format.
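The Dense-and-Sparse idea can be illustrated in a few lines: keep the largest-magnitude weights exactly in a sparse part and quantize only the remainder. SqueezeLLM itself pairs the decomposition with sensitivity-based non-uniform grids, so the uniform quantizer below is a simplification.

```python
import numpy as np

def dense_and_sparse(W, outlier_pct=1.0, bits=3):
    """Keep the top outlier_pct% largest-magnitude weights exactly (the
    sparse part) and uniformly quantize the rest to 2**bits - 1 levels
    (the dense part). Illustrative only: SqueezeLLM uses sensitivity-based
    *non-uniform* quantization grids for the dense part."""
    thresh = np.percentile(np.abs(W), 100 - outlier_pct)
    mask = np.abs(W) >= thresh                    # outlier positions
    dense = np.where(mask, 0.0, W)                # values to be quantized
    lo, hi = dense.min(), dense.max()
    levels = 2 ** bits - 1
    q = np.round((dense - lo) / (hi - lo) * levels)
    dense_q = lo + q / levels * (hi - lo)
    return np.where(mask, W, dense_q)             # outliers stored exactly

rng = np.random.default_rng(0)
W = rng.normal(0, 1, (64, 64))
W[0, 0] = 25.0                                    # an outlier weight
W_hat = dense_and_sparse(W)
print(W_hat[0, 0], np.max(np.abs(W_hat - W)))
```

The outlier survives losslessly while the remaining weights incur only the small uniform quantization error.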
arXiv Detail & Related papers (2023-06-13T08:57:54Z) - Sampling binary sparse coding QUBO models using a spiking neuromorphic processor [3.0586855806896045]
We consider the problem of computing a binary representation of an image.
We aim to find a minimal set of basis vectors that, when added together, best reconstructs the given input.
This yields a so-called Quadratic Unconstrained Binary Optimization (QUBO) problem.
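The reduction to QUBO follows from expanding ||x - Db||^2 for binary b and folding the linear term into the diagonal (since b_i^2 = b_i). Below is a small sketch with a brute-force minimizer standing in for the neuromorphic sampler; the tiny dictionary is made up for illustration.

```python
import numpy as np
from itertools import product

def sparse_coding_qubo(D, x):
    """QUBO matrix for min_b ||x - D b||^2 over b in {0,1}^k. Expanding the
    square and using b_i**2 = b_i folds the linear term into the diagonal,
    so the objective equals b^T Q b + x^T x."""
    Q = D.T @ D
    Q[np.diag_indices_from(Q)] -= 2.0 * (D.T @ x)
    return Q

def brute_force_min(Q, k):
    """Exhaustive minimizer, fine for tiny k; this is the step a spiking
    neuromorphic (or quantum annealing) sampler would replace."""
    return np.array(min(product([0, 1], repeat=k),
                        key=lambda b: np.array(b) @ Q @ np.array(b)))

D = np.array([[1.0, 0.0, 0.9],
              [0.0, 1.0, 0.9]])      # three basis vectors in 2-D
x = np.array([1.0, 1.0])             # equals basis 0 + basis 1 exactly
Q = sparse_coding_qubo(D, x)
b = brute_force_min(Q, 3)
print(b, np.linalg.norm(x - D @ b))
```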
arXiv Detail & Related papers (2023-06-02T22:47:18Z) - Monarch: Expressive Structured Matrices for Efficient and Accurate Training [64.6871423399431]
Large neural networks excel in many domains, but they are expensive to train and fine-tune.
A popular approach to reduce their compute or memory requirements is to replace dense weight matrices with structured ones.
We propose a class of matrices (Monarch) that is hardware-efficient.
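A Monarch-style matrix is (up to permutations) a product of two block-diagonal factors, which cuts storage and matrix-vector cost from O(n^2) to O(nb) for block size b. Here is a sketch using a reshape-transpose shuffle as the permutation; the exact parametrization in the paper may differ.

```python
import numpy as np

def block_diag_matvec(blocks, x):
    """Apply a block-diagonal matrix stored as an (nblocks, b, b) array."""
    nb, b, _ = blocks.shape
    return np.einsum("kij,kj->ki", blocks, x.reshape(nb, b)).reshape(-1)

def monarch_matvec(L, R, x, n, b):
    """Sketch of a Monarch-style product M = P^T L P R with block-diagonal
    L, R and P the reshape-transpose ('perfect shuffle') permutation.
    Storage and matvec cost drop from O(n^2) to O(n*b)."""
    perm = np.arange(n).reshape(n // b, b).T.reshape(-1)
    y = block_diag_matvec(R, x)      # R
    y = y[perm]                      # P
    y = block_diag_matvec(L, y)      # L
    inv = np.empty(n, dtype=int)
    inv[perm] = np.arange(n)
    return y[inv]                    # P^T

n, b = 16, 4
rng = np.random.default_rng(0)
L = rng.normal(size=(n // b, b, b))  # two block-diagonal factors
R = rng.normal(size=(n // b, b, b))
x = rng.normal(size=n)
y = monarch_matvec(L, R, x, n, b)
print(y.shape)
```

Only the blocks are ever stored, yet the product is equivalent to multiplying by one dense n-by-n structured matrix.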
arXiv Detail & Related papers (2022-04-01T17:37:29Z) - Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Model Coefficients [18.52747917850984]
Kernel approximation is widely used to scale up kernel SVM training and prediction.
Memory and computation costs of kernel approximation models are still too high if we want to deploy them on memory-limited devices.
We propose a novel memory and computation-efficient kernel SVM model by using both binary embedding and ternary model coefficients.
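The binary-embedding half of the idea can be sketched with the classic sign-of-random-projection map, whose Hamming distance tracks angular distance; the paper's construction may differ in detail.

```python
import numpy as np

def binary_embed(X, R):
    """Sign of a Gaussian random projection: each row of X becomes a +/-1
    code whose Hamming distance approximates angular distance (the classic
    sign-random-projection sketch; the paper's embedding may differ)."""
    return np.sign(X @ R.T).astype(np.int8)

rng = np.random.default_rng(0)
R = rng.normal(size=(256, 8))            # 256-bit codes for 8-D inputs
u = rng.normal(size=8)
v_close = u + 0.1 * rng.normal(size=8)   # small perturbation of u
v_far = rng.normal(size=8)               # unrelated direction

bu, bc, bf = (binary_embed(t[None, :], R)[0] for t in (u, v_close, v_far))
ham = lambda p, q: np.mean(p != q)       # normalized Hamming distance
print(ham(bu, bc), ham(bu, bf))
```

Nearby inputs collide on most bits while unrelated inputs disagree on roughly half, which is what lets a downstream model operate on the cheap binary codes.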
arXiv Detail & Related papers (2020-10-06T09:41:54Z) - MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical Models [96.1052289276254]
This work introduces a new MAP-solver, based on the popular Dual Block-Coordinate Ascent principle.
Surprisingly, a small change to the low-performing solver yields MPLP++, a new solver that outperforms all existing solvers by a large margin.
arXiv Detail & Related papers (2020-04-16T16:20:53Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation.
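Matrix power (e.g. square-root) normalization is often computed with the Newton-Schulz iteration rather than an explicit eigendecomposition. The sketch below shows that standalone step only, not MOMN's joint multi-objective formulation.

```python
import numpy as np

def sqrt_newton_schulz(A, iters=20):
    """Newton-Schulz iteration for the square root of an SPD matrix: a
    GPU-friendly, differentiable alternative to eigendecomposition that is
    widely used for matrix power normalization."""
    n = A.shape[0]
    s = np.linalg.norm(A)              # Frobenius norm; scaling ensures convergence
    Y, Z = A / s, np.eye(n)
    for _ in range(iters):
        T = 0.5 * (3.0 * np.eye(n) - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return Y * np.sqrt(s)              # Y converges to sqrt(A / s)

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
A = B @ B.T + 0.5 * np.eye(5)          # an SPD 'bilinear feature' matrix
S = sqrt_newton_schulz(A)
print(np.linalg.norm(S @ S - A))
```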
arXiv Detail & Related papers (2020-03-30T08:40:35Z) - On Coresets for Support Vector Machines [61.928187390362176]
A coreset is a small, representative subset of the original data points.
We show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings.
arXiv Detail & Related papers (2020-02-15T23:25:12Z)
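The weighted-subset interface that lets an off-the-shelf SVM solver consume a coreset can be sketched with a uniform-sampling baseline. The paper's coresets use sensitivity-based importance sampling with provable guarantees; uniform sampling here is only a stand-in to show the mechanics.

```python
import numpy as np
from sklearn.svm import SVC

def uniform_coreset(X, y, m, seed=0):
    """Uniform-sampling baseline: m points, each weighted n/m, standing in
    for the full set. (Illustrative only; a true coreset would sample by
    per-point sensitivity and set weights accordingly.)"""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx], y[idx], np.full(m, len(X) / m)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y = np.array([-1] * 500 + [1] * 500)

Xc, yc, w = uniform_coreset(X, y, m=50)
clf = SVC(kernel="linear").fit(Xc, yc, sample_weight=w)  # solver sees 50 points
print(clf.score(X, y))
```

Any solver that accepts per-sample weights can consume such a subset, which is exactly what makes the approach compatible with streaming and distributed settings.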
This list is automatically generated from the titles and abstracts of the papers in this site.