Enhanced Feature Based Granular Ball Twin Support Vector Machine
- URL: http://arxiv.org/abs/2410.05786v1
- Date: Tue, 8 Oct 2024 08:10:43 GMT
- Title: Enhanced Feature Based Granular Ball Twin Support Vector Machine
- Authors: A. Quadir, M. Sajid, Mushir Akhtar, M. Tanveer, P. N. Suganthan
- Abstract summary: We propose the enhanced feature based granular ball twin support vector machine (EF-GBTSVM).
The proposed model employs the coarse granularity of granular balls (GBs) as input rather than individual data samples.
We undertake a thorough evaluation of the proposed EF-GBTSVM model on benchmark UCI and KEEL datasets.
- Score: 0.5492530316344587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose the enhanced feature based granular ball twin support vector machine (EF-GBTSVM). EF-GBTSVM employs the coarse granularity of granular balls (GBs) as input rather than individual data samples. The GBs are mapped to the feature space of the hidden layer using random projection followed by a non-linear activation function. Concatenating the original features of the GB centers with the hidden features derived from them yields an enhanced feature space, commonly referred to as the random vector functional link (RVFL) space, which encapsulates nuanced feature information about the GBs. We then employ the twin support vector machine (TSVM) in the RVFL space for classification. TSVM generates two non-parallel hyperplanes in the enhanced feature space, which improves the generalization performance of the proposed EF-GBTSVM model. Moreover, the coarser granularity of the GBs makes the proposed EF-GBTSVM model robust to resampling and less susceptible to noise and outliers. We undertake a thorough evaluation of the proposed EF-GBTSVM model on benchmark UCI and KEEL datasets, covering scenarios both with and without label noise, and experiments on NDC datasets further demonstrate the model's ability to handle large datasets. Experimental results, supported by thorough statistical analyses, show that the proposed EF-GBTSVM model significantly outperforms the baseline models in terms of generalization, scalability, and robustness. The source code for the proposed EF-GBTSVM model, along with additional results and further details, can be accessed at https://github.com/mtanveer1/EF-GBTSVM.
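As a rough, non-authoritative illustration of the feature-enhancement step described in the abstract, the Python sketch below builds an RVFL-style enhanced representation from granular-ball centers: a fixed random projection, a non-linear activation (sigmoid is assumed here), and concatenation with the original center features. Granular-ball generation and the TSVM solver are omitted, and all function names, dimensions, and weight distributions are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def enhanced_gb_features(centers, hidden_dim=128, seed=0):
    """Map granular-ball centers into an RVFL-style enhanced feature space.

    centers : (n_balls, n_features) array of granular-ball centers.
    Returns an (n_balls, n_features + hidden_dim) array obtained by
    concatenating the original centers with randomly projected,
    non-linearly activated hidden features.
    """
    rng = np.random.default_rng(seed)
    n_features = centers.shape[1]

    # Random projection to the hidden layer (weights and biases are fixed, not trained).
    W = rng.uniform(-1.0, 1.0, size=(n_features, hidden_dim))
    b = rng.uniform(-1.0, 1.0, size=hidden_dim)

    # Non-linear activation (sigmoid chosen here purely for illustration).
    H = 1.0 / (1.0 + np.exp(-(centers @ W + b)))

    # Enhanced (RVFL) space: original features concatenated with hidden features.
    return np.concatenate([centers, H], axis=1)

# Toy usage: 10 granular-ball centers in 5 dimensions.
Z = enhanced_gb_features(np.random.randn(10, 5), hidden_dim=16)
print(Z.shape)  # (10, 21)
```

A TSVM would then be trained on these enhanced features, solving two small quadratic programs to obtain the pair of non-parallel hyperplanes; that step is not shown here.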
Related papers
- Enhancing Robustness and Efficiency of Least Square Twin SVM via Granular Computing [0.2999888908665658]
In the domain of machine learning, least square twin support vector machine (LSTSVM) stands out as one of the state-of-the-art models.
However, LSTSVM is sensitive to noise and outliers, overlooks the structural risk minimization principle, is unstable under resampling, and relies on matrix inversions that hinder its use on large datasets.
We propose the robust granular ball LSTSVM (GBLSTSVM), which is trained using granular balls instead of original data points.
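For readers unfamiliar with granular-ball computing, the following minimal sketch shows one common way granular balls are generated: the data are recursively split (2-means is assumed here) until each ball is sufficiently pure, and each ball is then summarized by its center, radius, and majority label. The splitting rule, purity threshold, radius definition, and helper names are illustrative assumptions, not the exact procedure of GBLSTSVM.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_granular_balls(X, y, purity_threshold=0.95, min_samples=4):
    """Recursively split (X, y) into granular balls of (center, radius, majority label)."""
    balls = []

    def split(Xs, ys):
        majority = np.bincount(ys).argmax()
        purity = np.mean(ys == majority)
        if purity >= purity_threshold or len(Xs) <= min_samples:
            center = Xs.mean(axis=0)
            radius = np.linalg.norm(Xs - center, axis=1).mean()  # mean distance to center
            balls.append((center, radius, majority))
            return
        # Split an impure ball in two and recurse; 2-means is a common choice.
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(Xs)
        if len(np.unique(labels)) < 2:  # degenerate split: keep the ball as is
            center = Xs.mean(axis=0)
            balls.append((center, np.linalg.norm(Xs - center, axis=1).mean(), majority))
            return
        for k in (0, 1):
            split(Xs[labels == k], ys[labels == k])

    split(np.asarray(X), np.asarray(y, dtype=int))
    return balls

# Toy usage: two noisy 2-D blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(len(generate_granular_balls(X, y)))
```

Training then proceeds on the ball centers (often weighted by ball size) rather than on every individual sample, which is what gives these granular-ball models their robustness to label noise and resampling.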
arXiv Detail & Related papers (2024-10-22T18:13:01Z) - Bridge the Points: Graph-based Few-shot Segment Anything Semantically [79.1519244940518]
Recent advancements in pre-training techniques have enhanced the capabilities of vision foundation models.
Recent studies extend the Segment Anything Model (SAM) to few-shot semantic segmentation (FSS).
We propose a simple yet effective approach based on graph analysis.
arXiv Detail & Related papers (2024-10-09T15:02:28Z) - Granular Ball Twin Support Vector Machine [0.0]
Twin support vector machine (TSVM) is an emerging machine learning model with versatile applicability in classification and regression endeavors.
TSVM confronts formidable obstacles to its efficiency and applicability on large-scale datasets.
We propose the granular ball twin support vector machine (GBTSVM) and a novel large-scale granular ball twin support vector machine (LS-GBTSVM).
We conduct a comprehensive evaluation of GBTSVM and LS-GBTSVM models on benchmark datasets from UCI, KEEL, and NDC datasets.
arXiv Detail & Related papers (2024-10-07T06:20:36Z) - GB-RVFL: Fusion of Randomized Neural Network and Granular Ball Computing [0.0]
The random vector functional link (RVFL) network is a prominent classification model with strong generalization ability.
We propose the granular ball RVFL (GB-RVFL) model, which uses granular balls (GBs) as inputs instead of training samples.
The proposed GB-RVFL and its graph-embedding variant, GE-GB-RVFL, are evaluated on KEEL, UCI, NDC and biomedical datasets.
arXiv Detail & Related papers (2024-09-25T08:33:01Z) - GRVFL-MV: Graph Random Vector Functional Link Based on Multi-View Learning [0.2999888908665658]
A novel graph random vector functional link based on multi-view learning (GRVFL-MV) model is proposed.
The proposed model is trained on multiple views, incorporating the concept of multi-view learning (MVL).
It also incorporates the geometrical properties of all the views using the graph embedding (GE) framework.
arXiv Detail & Related papers (2024-09-07T07:18:08Z) - ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z) - Granular-Balls based Fuzzy Twin Support Vector Machine for Classification [12.738411525651667]
We introduce the granular-ball twin support vector machine (GBTWSVM) classifier, which integrates granular-ball computing (GBC) with the twin support vector machine (TWSVM).
We design the membership and non-membership functions of granular-balls using Pythagorean fuzzy sets to differentiate the contributions of granular-balls in various regions.
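Purely as a hypothetical illustration of the idea in this entry: Pythagorean fuzzy sets only require that membership and non-membership scores satisfy mu^2 + nu^2 <= 1, and one simple distance-based construction for a granular ball might look like the sketch below. The concrete functions defined in GBTWSVM are not reproduced here; every name and formula in this snippet is an assumption.

```python
import numpy as np

def pythagorean_fuzzy_scores(center, own_class_center, other_class_center):
    """Hypothetical membership/non-membership for one granular ball.

    Balls closer to their own class center receive higher membership (mu),
    balls closer to the opposite class receive higher non-membership (nu),
    and nu is capped so that mu**2 + nu**2 <= 1 (the Pythagorean constraint).
    """
    d_own = np.linalg.norm(center - own_class_center)
    d_other = np.linalg.norm(center - other_class_center)
    total = d_own + d_other + 1e-12
    mu = d_other / total                       # closer to own class -> higher membership
    nu_cap = np.sqrt(max(0.0, 1.0 - mu ** 2))  # enforce mu^2 + nu^2 <= 1
    nu = min(d_own / total, nu_cap)
    return mu, nu

print(pythagorean_fuzzy_scores(np.array([0.2, 0.1]),
                               np.array([0.0, 0.0]),
                               np.array([3.0, 3.0])))
```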
arXiv Detail & Related papers (2024-08-01T16:43:21Z) - VST++: Efficient and Stronger Visual Saliency Transformer [74.26078624363274]
We develop an efficient and stronger VST++ model to explore global long-range dependencies.
We evaluate our model across various transformer-based backbones on RGB, RGB-D, and RGB-T SOD benchmark datasets.
arXiv Detail & Related papers (2023-10-18T05:44:49Z) - DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation [50.08080424613603]
Long-range correlation is essential for accurate monocular depth estimation.
We propose to leverage the Transformer to model this global context with an effective attention mechanism.
Our proposed model, termed DepthFormer, surpasses state-of-the-art monocular depth estimation methods with prominent margins.
arXiv Detail & Related papers (2022-03-27T05:03:56Z) - Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
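As a generic sketch of feature-space adversarial augmentation (not this paper's specific method): for a linear classifier with logistic loss acting on an intermediate embedding, one can perturb the embedding along the sign of the loss gradient, FGSM-style. The classifier, loss, and step size below are assumptions chosen only to make the idea concrete.

```python
import numpy as np

def adversarial_feature_augmentation(f, y, w, eps=0.1):
    """FGSM-style perturbation applied to an intermediate feature embedding.

    f : (d,) feature vector from some intermediate layer
    y : label in {-1, +1}
    w : (d,) weights of a linear classifier acting on f
    Returns the adversarially augmented feature f + eps * sign(grad_f loss).
    """
    margin = y * np.dot(w, f)
    # Gradient of the logistic loss log(1 + exp(-margin)) with respect to f.
    grad_f = -y * w / (1.0 + np.exp(margin))
    return f + eps * np.sign(grad_f)

rng = np.random.default_rng(0)
f_aug = adversarial_feature_augmentation(rng.normal(size=8), +1, rng.normal(size=8))
print(f_aug.shape)  # (8,)
```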
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
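For context on why a Cauchy-Schwarz objective can be computed analytically for GMMs: the CS divergence involves only integrals of products of densities, and the product of two Gaussians integrates in closed form. These are standard identities, stated here as a reminder rather than quoted from the paper:

```latex
D_{\mathrm{CS}}(p \,\|\, q)
  = -\log \frac{\int p(x)\,q(x)\,dx}{\sqrt{\int p(x)^{2}\,dx \,\int q(x)^{2}\,dx}},
\qquad
\int \mathcal{N}(x;\mu_{1},\Sigma_{1})\,\mathcal{N}(x;\mu_{2},\Sigma_{2})\,dx
  = \mathcal{N}(\mu_{1};\,\mu_{2},\,\Sigma_{1}+\Sigma_{2}).
```

Since a GMM density is a finite sum of Gaussians, every term in these integrals reduces to such a pairwise Gaussian overlap, so the whole divergence admits a closed form.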
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.