GB-RVFL: Fusion of Randomized Neural Network and Granular Ball Computing
- URL: http://arxiv.org/abs/2409.16735v1
- Date: Wed, 25 Sep 2024 08:33:01 GMT
- Title: GB-RVFL: Fusion of Randomized Neural Network and Granular Ball Computing
- Authors: M. Sajid, A. Quadir, M. Tanveer
- Abstract summary: The random vector functional link (RVFL) network is a prominent classification model with strong generalization ability.
We propose the granular ball RVFL (GB-RVFL) model, which uses granular balls (GBs) as inputs instead of individual training samples.
The proposed GB-RVFL and GE-GB-RVFL models are evaluated on KEEL, UCI, NDC and biomedical datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The random vector functional link (RVFL) network is a prominent classification model with strong generalization ability. However, RVFL treats all samples uniformly, ignoring whether they are pure or noisy, and its scalability is limited by the need to invert the entire training matrix. To address these issues, we propose the granular ball RVFL (GB-RVFL) model, which uses granular balls (GBs) as inputs instead of individual training samples. This approach enhances scalability by requiring only the inverse of the GB center matrix and improves robustness against noise and outliers through the coarse granularity of GBs. Furthermore, RVFL overlooks the dataset's geometric structure. To address this, we propose the graph embedding GB-RVFL (GE-GB-RVFL) model, which fuses granular computing and graph embedding (GE) to preserve the topological structure of GBs. The proposed GB-RVFL and GE-GB-RVFL models are evaluated on KEEL, UCI, NDC and biomedical datasets, demonstrating superior performance compared to baseline models.
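As a rough illustration of where the scalability gain comes from, here is a minimal NumPy sketch of standard closed-form RVFL training; GB-RVFL would feed it the small granular-ball center matrix (with per-ball targets) in place of the full training set. Function and variable names are ours, not the paper's.

```python
import numpy as np

def rvfl_fit(X, Y, n_hidden=100, C=1.0, seed=0):
    """Closed-form RVFL training: random hidden layer + ridge solution.

    A minimal sketch of the standard RVFL; GB-RVFL replaces X (all n
    training samples) with the much smaller matrix of k granular-ball
    centers, so only that small matrix enters the linear solve.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden features
    D = np.hstack([X, H])                           # direct link: inputs + hidden features
    # Ridge-regularized least squares for the output weights (Y one-hot).
    beta = np.linalg.solve(D.T @ D + np.eye(D.shape[1]) / C, D.T @ Y)
    return W, b, beta
```

Training on k ball centers instead of n samples shrinks D from n rows to k rows, which is the scalability gain the abstract describes; prediction is the usual argmax over D @ beta.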
Related papers
- Enhancing Robustness and Efficiency of Least Square Twin SVM via Granular Computing [0.2999888908665658]
In the domain of machine learning, least square twin support vector machine (LSTSVM) stands out as one of the state-of-the-art models.
LSTSVM is sensitive to noise and outliers, requires matrix inversions, overlooks the structural risk minimization (SRM) principle, and is unstable under resampling.
We propose the robust granular ball LSTSVM (GBLSTSVM), which is trained using granular balls instead of original data points.
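For context, granular balls are typically generated by recursive 2-means splitting driven by label purity; the sketch below shows that generic procedure (the exact splitting and stopping rules in the GBLSTSVM paper may differ, and the names are illustrative).

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_granular_balls(X, y, purity_threshold=0.95, min_size=4):
    """Recursive 2-means splitting until every ball is label-pure enough.

    A sketch of the usual purity-driven GB generation; a hedged stand-in,
    not the paper's exact algorithm.
    """
    balls, queue = [], [(X, y)]
    while queue:
        Xb, yb = queue.pop()
        labels, counts = np.unique(yb, return_counts=True)
        split = None
        if counts.max() / len(yb) < purity_threshold and len(yb) > min_size:
            split = KMeans(n_clusters=2, n_init=10).fit_predict(Xb)
            if len(np.unique(split)) < 2:          # degenerate split: stop here
                split = None
        if split is None:                          # keep this ball
            center = Xb.mean(axis=0)
            radius = np.linalg.norm(Xb - center, axis=1).mean()
            balls.append((center, radius, labels[counts.argmax()]))
        else:                                      # split impure ball and recurse
            queue.extend((Xb[split == k], yb[split == k]) for k in (0, 1))
    return balls
```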
arXiv Detail & Related papers (2024-10-22T18:13:01Z)
- Enhanced Feature Based Granular Ball Twin Support Vector Machine [0.5492530316344587]
We propose the enhanced feature based granular ball twin support vector machine (EF-GBTSVM).
The proposed model employs the coarse granularity of granular balls (GBs) as input rather than individual data samples.
We undertake a thorough evaluation of the proposed EF-GBTSVM model on benchmark UCI and KEEL datasets.
arXiv Detail & Related papers (2024-10-08T08:10:43Z)
- GRVFL-MV: Graph Random Vector Functional Link Based on Multi-View Learning [0.2999888908665658]
A novel graph random vector functional link based on multi-view learning (GRVFL-MV) model is proposed.
The proposed model is trained on multiple views, incorporating the concept of multi-view learning (MVL).
It also incorporates the geometrical properties of all the views using the graph embedding (GE) framework.
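One common way to fold graph embedding into an RVFL-style closed form is a graph-Laplacian penalty on the projected features; the single-view sketch below is a simplified stand-in (the paper's intrinsic/penalty-graph formulation across multiple views is richer), with our own illustrative names.

```python
import numpy as np

def ge_rvfl_output_weights(D, Y, L, C=1.0, theta=0.1):
    """Ridge RVFL solution with an added graph-embedding penalty.

    D : (n, m) stacked feature matrix (inputs + random hidden features)
    Y : (n, c) one-hot targets
    L : (n, n) graph Laplacian encoding the data's geometry
    Minimizes ||D b - Y||^2 + (1/C)||b||^2 + theta * tr(b.T D.T L D b),
    a simplified stand-in for the GE framework used in the paper.
    """
    m = D.shape[1]
    A = D.T @ D + np.eye(m) / C + theta * (D.T @ L @ D)
    return np.linalg.solve(A, D.T @ Y)   # setting the gradient to zero gives this system
```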
arXiv Detail & Related papers (2024-09-07T07:18:08Z)
- A Non-negative VAE: the Generalized Gamma Belief Network [49.970917207211556]
The gamma belief network (GBN) has demonstrated its potential for uncovering multi-layer interpretable latent representations in text data.
We introduce the generalized gamma belief network (Generalized GBN) in this paper, which extends the original linear generative model to a more expressive non-linear generative model.
We also propose an upward-downward Weibull inference network to approximate the posterior distribution of the latent variables.
arXiv Detail & Related papers (2024-08-06T18:18:37Z)
- Wave-RVFL: A Randomized Neural Network Based on Wave Loss Function [0.0]
We propose the Wave-RVFL, an RVFL model incorporating the wave loss function.
The Wave-RVFL exhibits robustness against noise and outliers by preventing over-penalization of deviations.
Empirical results affirm the superior performance and robustness of the Wave-RVFL compared to baseline models.
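The wave loss itself is defined in the paper; to illustrate why a bounded, smooth loss resists outliers, here is a correntropy-style stand-in compared with the squared loss (explicitly not the paper's wave loss).

```python
import numpy as np

def squared_loss(u):
    return u ** 2                        # unbounded: large residuals dominate training

def bounded_loss(u, sigma=1.0):
    """Correntropy-style bounded loss, shown only as a stand-in for the
    robustness idea behind the wave loss: large residuals saturate instead
    of growing quadratically (see the paper for the actual definition)."""
    return 1.0 - np.exp(-(u ** 2) / (2 * sigma ** 2))

u = np.array([0.1, 1.0, 10.0, 100.0])   # residuals of increasing size
print(squared_loss(u))                   # grows without bound
print(bounded_loss(u))                   # saturates near 1 for outliers
```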
arXiv Detail & Related papers (2024-08-05T20:46:54Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
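The summary does not give the pruning rule; as a generic baseline, global magnitude pruning (zero the smallest weights so devices train and transmit a smaller model) is sketched below as an assumption, not the paper's scheme.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights across all layers.

    A generic magnitude-pruning baseline, shown only to illustrate how
    pruning shrinks what devices must train and transmit; the paper's
    HFL scheme adapts pruning to the wireless setting.
    """
    flat = np.concatenate([w.ravel() for w in weights])
    threshold = np.quantile(np.abs(flat), sparsity)    # global magnitude cut-off
    return [np.where(np.abs(w) >= threshold, w, 0.0) for w in weights]

layers = [np.random.randn(64, 32), np.random.randn(32, 10)]
pruned = magnitude_prune(layers, sparsity=0.5)  # ~50% zeros; sparse encoding roughly halves traffic
```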
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Graph Federated Learning for CIoT Devices in Smart Home Applications [23.216140264163535]
We propose a novel Graph Signal Processing (GSP)-inspired aggregation rule based on graph filtering, dubbed "G-Fedfilt".
The proposed aggregator enables a structured flow of information based on the graph's topology.
It yields up to 2.41% higher accuracy than FedAvg when testing the generalization of the models.
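G-Fedfilt's exact filter is specified in the paper; the sketch below shows only the general GSP pattern it builds on, aggregating client models through a low-pass filter of the client graph's Laplacian, with our own illustrative names.

```python
import numpy as np

def graph_filter_aggregate(client_params, A, tau=1.0):
    """Aggregate per-client parameter vectors with a low-pass graph filter.

    client_params : (n_clients, n_params) stacked flat model weights
    A             : (n_clients, n_clients) adjacency of the client graph
    A GSP-style sketch, not the paper's exact rule: smooth parameters along
    the graph with h(L) = (I + tau * L)^{-1}.
    """
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
    H = np.linalg.inv(np.eye(len(A)) + tau * L)    # low-pass graph filter
    # Rows of H sum to 1 because L @ 1 = 0, so each client receives a
    # topology-weighted average; uniform FedAvg is the large-tau limit
    # on a connected graph.
    return H @ client_params
```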
arXiv Detail & Related papers (2022-12-29T17:57:19Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Bitwidth Heterogeneous Federated Learning with Progressive Weight Dequantization [58.31288475660333]
We introduce a pragmatic federated learning scenario, Bitwidth Heterogeneous Federated Learning (BHFL).
BHFL brings a new challenge: aggregating model parameters with different bitwidths could result in severe performance degradation.
We propose the ProWD framework, which places a trainable weight dequantizer at the central server that progressively reconstructs the low-bitwidth weights into higher-bitwidth weights and, finally, into full-precision weights.
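As a hypothetical, minimal picture of a trainable dequantizer, the sketch below refines low-bitwidth weights with a tiny residual MLP at the server; ProWD's actual architecture and progressive training are more involved, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 16                                             # hidden units of the tiny dequantizer
W1, b1 = rng.normal(0, 0.1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.1, (H, 1)), np.zeros(1)

def dequantize_step(w_low):
    """One progressive dequantization step: a tiny per-weight residual MLP
    maps low-bitwidth weight values toward a higher-precision reconstruction.
    A hypothetical stand-in for ProWD's trainable dequantizer."""
    x = w_low.reshape(-1, 1)                       # each weight is a scalar input
    h = np.tanh(x @ W1 + b1)                       # shared nonlinear feature
    return (x + h @ W2 + b2).reshape(w_low.shape)  # residual refinement

w4bit = np.round(rng.normal(size=(32, 32)) * 7) / 7  # toy coarsely-quantized weights
w_refined = dequantize_step(w4bit)  # the server would chain such steps toward full precision
```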
arXiv Detail & Related papers (2022-02-23T12:07:02Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for Gaussian mixture models (GMMs).
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
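The closed form exists because products of Gaussian densities integrate analytically; below is a 1-D, single-Gaussian sketch of the Cauchy-Schwarz divergence (our derivation, shown for intuition; the paper handles full GMMs).

```python
import numpy as np

def gauss_overlap(mu1, s1, mu2, s2):
    """Closed-form integral of a product of two 1-D Gaussian densities:
    int N(x; mu1, s1^2) N(x; mu2, s2^2) dx = N(mu1 - mu2; 0, s1^2 + s2^2)."""
    v = s1 ** 2 + s2 ** 2
    return np.exp(-(mu1 - mu2) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def cs_divergence(mu1, s1, mu2, s2):
    """D_CS(p, q) = -log( <p,q> / sqrt(<p,p> <q,q>) ) for 1-D Gaussians;
    the single-component case of the analytic GMM form the paper uses."""
    pq = gauss_overlap(mu1, s1, mu2, s2)
    pp = gauss_overlap(mu1, s1, mu1, s1)
    qq = gauss_overlap(mu2, s2, mu2, s2)
    return -np.log(pq / np.sqrt(pp * qq))

print(cs_divergence(0.0, 1.0, 0.0, 1.0))  # identical Gaussians -> 0
print(cs_divergence(0.0, 1.0, 3.0, 1.0))  # grows with mean separation
```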
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Bayesian Federated Learning over Wireless Networks [87.37301441859925]
Federated learning is a privacy-preserving and distributed training method using heterogeneous data sets stored at local devices.
This paper presents an efficient modified Bayesian federated learning (BFL) algorithm called scalableBFL (SBFL).
arXiv Detail & Related papers (2020-12-31T07:32:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.