Wave-RVFL: A Randomized Neural Network Based on Wave Loss Function
- URL: http://arxiv.org/abs/2408.02824v2
- Date: Sat, 5 Oct 2024 18:00:17 GMT
- Title: Wave-RVFL: A Randomized Neural Network Based on Wave Loss Function
- Authors: M. Sajid, A. Quadir, M. Tanveer
- Abstract summary: We propose the Wave-RVFL, an RVFL model incorporating the wave loss function.
The Wave-RVFL exhibits robustness against noise and outliers by preventing over-penalization of deviations.
Empirical results affirm the superior performance and robustness of the Wave-RVFL compared to baseline models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The random vector functional link (RVFL) network is well-regarded for its strong generalization capabilities in the field of machine learning. However, its inherent dependencies on the square loss function make it susceptible to noise and outliers. Furthermore, the calculation of RVFL's unknown parameters necessitates matrix inversion of the entire training sample, which constrains its scalability. To address these challenges, we propose the Wave-RVFL, an RVFL model incorporating the wave loss function. We formulate and solve the proposed optimization problem of the Wave-RVFL using the adaptive moment estimation (Adam) algorithm in a way that successfully eliminates the requirement for matrix inversion and significantly enhances scalability. The Wave-RVFL exhibits robustness against noise and outliers by preventing over-penalization of deviations, thereby maintaining a balanced approach to managing noise and outliers. The proposed Wave-RVFL model is evaluated on multiple UCI datasets, both with and without the addition of noise and outliers, across various domains and sizes. Empirical results affirm the superior performance and robustness of the Wave-RVFL compared to baseline models, establishing it as a highly effective and scalable classification solution. The source codes and the Supplementary Material are available at https://github.com/mtanveer1/Wave-RVFL.
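As a concrete illustration of the training scheme described in the abstract, the sketch below fits an RVFL-style classifier with a wave loss via Adam, so no matrix inversion is needed. The loss form, the margin-style deviation u = 1 - y f(x), and all hyperparameter values are assumptions for illustration, not the authors' exact formulation; the repository linked above has the reference implementation.

```python
import numpy as np

def wave_loss_grad(u, lam=1.0, a=1.0):
    """Derivative of the assumed wave loss
    L(u) = (1/lam) * (1 - 1 / (1 + lam * u**2 * exp(a*u)))."""
    g = lam * u**2 * np.exp(a * u)
    dg = lam * np.exp(a * u) * (2.0 * u + a * u**2)
    return dg / (lam * (1.0 + g) ** 2)

def train_wave_rvfl(X, y, n_hidden=100, C=1.0, lam=1.0, a=1.0,
                    lr=1e-2, epochs=500, seed=0):
    """RVFL with fixed random hidden weights; only the output weights beta
    are learned, here by Adam on  0.5*||beta||^2 + C * sum(wave_loss(u))
    with u_i = 1 - y_i * f(x_i)  (labels y in {-1, +1})."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.uniform(-1.0, 1.0, (d, n_hidden))  # random input weights (never trained)
    bh = rng.uniform(-1.0, 1.0, n_hidden)      # random hidden biases (never trained)
    D = np.hstack([X, np.tanh(X @ W + bh), np.ones((n, 1))])  # direct links + enhancements + bias
    beta = np.zeros(D.shape[1])
    m = np.zeros_like(beta)
    v = np.zeros_like(beta)
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, epochs + 1):
        u = np.clip(1.0 - y * (D @ beta), -30.0, 30.0)  # clip for exp() stability
        grad = beta + C * (D.T @ (wave_loss_grad(u, lam, a) * -y))  # du/dbeta = -y * D
        m = b1 * m + (1.0 - b1) * grad                  # Adam first moment
        v = b2 * v + (1.0 - b2) * grad**2               # Adam second moment
        beta -= lr * (m / (1.0 - b1**t)) / (np.sqrt(v / (1.0 - b2**t)) + eps)
    return W, bh, beta

def predict_wave_rvfl(X, W, bh, beta):
    D = np.hstack([X, np.tanh(X @ W + bh), np.ones((len(X), 1))])
    return np.sign(D @ beta)
```

Note that only beta is optimized; the random hidden layer stays frozen, which is what makes the per-step cost a few matrix-vector products rather than an n-by-n inversion.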
Related papers
- Advancing RVFL networks: Robust classification with the HawkEye loss function
We propose incorporating the HawkEye loss (H-loss) function into the random vector functional link (RVFL) framework.
The H-loss function features desirable mathematical properties, including smoothness and boundedness, while simultaneously incorporating an insensitive zone.
The proposed H-RVFL model's effectiveness is validated through experiments on 40 datasets from the UCI and KEEL repositories.
arXiv Detail & Related papers (2024-10-01T08:48:05Z)
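The H-loss formula itself is not reproduced in this summary, so the toy function below is a hypothetical stand-in that merely exhibits the three advertised properties (differentiability, boundedness, and an insensitive zone); it is not the actual HawkEye loss.

```python
import numpy as np

# Hypothetical stand-in (NOT the HawkEye formula): differentiable, bounded
# above by lam, and exactly zero inside the insensitive zone |u| <= eps.
def bounded_insensitive_loss(u, lam=1.0, eps=0.5):
    excess = np.maximum(np.abs(u) - eps, 0.0)  # 0 inside the insensitive zone
    return lam * (1.0 - np.exp(-excess**2))    # saturates at lam for large |u|

u = np.linspace(-5, 5, 11)
print(bounded_insensitive_loss(u))  # flat at 0 near u=0, approaches lam=1 in the tails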
- Advancing Supervised Learning with the Wave Loss Function: A Robust and Smooth Approach
We present a novel contribution to supervised machine learning: an asymmetric loss function named wave loss.
We incorporate the proposed wave loss function into the least-squares setting of support vector machines (SVM) and twin support vector machines (TSVM).
To empirically showcase the effectiveness of the proposed Wave-SVM and Wave-TSVM, we evaluate them on benchmark UCI and KEEL datasets.
arXiv Detail & Related papers (2024-04-28T07:32:00Z)
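Assuming the wave loss takes the same form as in the Wave-RVFL sketch above (an assumption for illustration, not a quoted formula), its robustness properties can be read off directly:

```latex
\[
  \mathcal{L}_{\text{wave}}(u) \;=\; \frac{1}{\lambda}\left(1 - \frac{1}{1 + \lambda\, u^{2} e^{a u}}\right),
  \qquad \lambda > 0 .
\]
% Boundedness: 0 <= L_wave(u) < 1/lambda for every u, so a single outlier
% contributes at most 1/lambda to the objective (no over-penalization).
% Asymmetry (a > 0): as u -> +infinity the loss tends to 1/lambda,
% while as u -> -infinity it tends to 0.
```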
- Adaptive LPD Radar Waveform Design with Generative Deep Learning
We propose a novel, learning-based method for adaptively generating low probability of detection (LPD) radar waveforms.
Our method can generate LPD waveforms that reduce detectability by up to 90% while simultaneously offering improved ambiguity function (sensing) characteristics.
arXiv Detail & Related papers (2024-03-18T21:07:57Z)
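As context for the "ambiguity function (sensing) characteristics" mentioned above, here is a generic discrete ambiguity-surface computation for a chirp pulse; it illustrates the metric only and is unrelated to the paper's generative model.

```python
import numpy as np

def ambiguity(s):
    """Discrete ambiguity surface |A[delay, doppler]| of waveform s.
    For each circular delay k, correlate s with its shifted conjugate
    and take an FFT across time to sweep the Doppler bins."""
    N = len(s)
    A = np.empty((N, N), dtype=complex)
    for k in range(N):
        prod = s * np.conj(np.roll(s, k))  # s[n] * conj(s[n-k]), circular shift
        A[k] = np.fft.fft(prod)            # Doppler sweep at this delay
    return np.abs(A)

# Example: linear FM (chirp) pulse, a classic low-sidelobe waveform
N = 128
n = np.arange(N)
chirp = np.exp(1j * np.pi * (n**2) / N)
surf = ambiguity(chirp)
print(surf.shape, surf[0, 0])  # peak at zero delay/Doppler equals sum |s|^2 = N
```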
- Joint Power Control and Data Size Selection for Over-the-Air Computation Aided Federated Learning
Federated learning (FL) has emerged as an appealing machine learning approach to deal with massive raw data generated at multiple mobile devices.
We propose to jointly optimize the signal amplification factors at the base station and the mobile devices.
Our proposed method greatly reduces the mean-squared error (MSE) and helps improve FL performance.
arXiv Detail & Related papers (2023-08-17T16:01:02Z)
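For intuition about where the MSE in over-the-air computation comes from, the sketch below simulates a textbook channel-inversion power-control scheme; this is an illustrative stand-in, not the paper's joint optimization of base-station and device amplification factors.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 5                          # devices, model dimension
x = rng.normal(size=(K, d))           # local updates to be averaged
h = rng.rayleigh(scale=1.0, size=K)   # channel magnitudes (illustrative model)

# Channel inversion toward a common target amplitude eta: each device
# pre-scales by b_k = eta / h_k so all signals align at the server.
eta = 0.8 * h.min()                   # cap by the weakest channel (power-budget proxy)
b = eta / h

noise = rng.normal(scale=0.1, size=d)
y = (h * b) @ x + noise               # superimposed over-the-air sum + receiver noise
estimate = y / (K * eta)              # server de-scales to recover the average
mse = np.mean((estimate - x.mean(axis=0))**2)
print(f"aggregation MSE: {mse:.6f}")
```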
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
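Schematically, the joint design this entry describes can be written as a constrained minimization over the CRE variables; the notation below is a paraphrase of the abstract, not the paper's exact formulation.

```latex
\[
  \min_{\mathbf{s},\,\mathbf{r},\,E}\; F(\mathbf{w};\mathbf{s},\mathbf{r},E)
  \quad \text{s.t.} \quad
  \frac{1}{T}\sum_{t=1}^{T} e_t(\mathbf{s},\mathbf{r},E) \le \bar{e},
  \qquad \ell_t(\mathbf{s},\mathbf{r},E) \le \bar{\ell}\quad \forall t,
\]
% where s is the client schedule, r the resource allocation, and E the number
% of local training epochs (the "CRE" variables); F is the global training
% loss, and e_t, ell_t are per-round energy and latency measured against the
% long-term energy budget e-bar and the per-round latency cap ell-bar.
```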
- Adaptive Federated Pruning in Hierarchical Wireless Networks
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
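Generic magnitude pruning is the simplest way to see where a roughly 50 percent communication saving can come from; the sketch below illustrates the mechanism only and is not the paper's adaptive scheme.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights before upload.
    If the pruned tensor is sent in sparse form, payload shrinks roughly
    in proportion to the sparsity level."""
    k = int(sparsity * w.size)
    thresh = np.partition(np.abs(w).ravel(), k)[k]  # k-th smallest magnitude
    return np.where(np.abs(w) >= thresh, w, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))
w_pruned = magnitude_prune(w, sparsity=0.5)
kept = np.count_nonzero(w_pruned) / w.size
print(f"weights kept: {kept:.2%}")  # ~50%, i.e. about half the payload
```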
- Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer
Unsupervised Domain Adaptation (UDA) can effectively address domain gap issues in real-world image Super-Resolution (SR).
We propose a SOurce-free Domain Adaptation framework for image SR (SODA-SR) to address this issue, i.e., adapt a source-trained model to a target domain with only unlabeled target data.
arXiv Detail & Related papers (2023-03-31T03:14:44Z)
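To give a flavor of wavelet-domain augmentation, the sketch below (requires the PyWavelets package) keeps the low-frequency band intact and jitters only the high-frequency sub-bands; it is a generic illustration, not SODA-SR's wavelet augmentation transformer.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_perturb(img, strength=0.1, seed=0):
    """Perturb only the detail (high-frequency) sub-bands of a 2-D image,
    leaving the approximation (content) band untouched."""
    rng = np.random.default_rng(seed)
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")   # one-level Haar decomposition
    jitter = lambda c: c * (1.0 + strength * rng.normal(size=c.shape))
    return pywt.idwt2((cA, (jitter(cH), jitter(cV), jitter(cD))), "haar")

img = np.random.default_rng(1).random((64, 64))
aug = wavelet_perturb(img)
print(img.shape, aug.shape, float(np.abs(img - aug).mean()))
```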
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning
This paper presents a new differential privacy (DP) perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the noise-amplitude series to prevent FL from premature convergence caused by excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated through comparison with the state-of-the-art Gaussian noise mechanism, which uses a persistent noise amplitude.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
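The sketch below shows the general shape of a time-varying Gaussian mechanism: clipped updates plus noise whose amplitude decays over rounds. The geometric schedule is a placeholder; the paper derives its amplitude series analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 20, 8                    # rounds, model dimension
clip = 1.0                      # L2 clipping bound (sensitivity)

# Illustrative decaying amplitude: heavy perturbation early (when the model
# is coarse anyway), lighter later so convergence is not stalled.
sigma = 2.0 * (0.85 ** np.arange(T))

for t in range(T):
    update = rng.normal(size=d)                           # stand-in model update
    update *= min(1.0, clip / np.linalg.norm(update))     # bound the sensitivity
    noisy = update + rng.normal(scale=sigma[t], size=d)   # Gaussian mechanism
    if t % 5 == 0:
        print(f"round {t:2d}: sigma={sigma[t]:.3f}, perturbed norm={np.linalg.norm(noisy):.3f}")
```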
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
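The payload/precision trade-off that the bitwidth scheme exploits can be seen with plain uniform quantization; here the bitwidth is fixed by hand, whereas the paper's RL agent selects it per round.

```python
import numpy as np

def quantize(w, bits):
    """Uniform b-bit quantization of a weight vector onto its [min, max] range."""
    levels = 2**bits - 1
    lo, hi = w.min(), w.max()
    q = np.round((w - lo) / (hi - lo) * levels)  # integer codes in [0, levels]
    return lo + q * (hi - lo) / levels           # dequantized values

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)
for bits in (2, 4, 8):
    err = np.mean((w - quantize(w, bits))**2)
    print(f"{bits}-bit: payload x{bits/32:.3f} of float32, MSE={err:.2e}")
```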
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in wireless networks.
We consider the case of deep neural network (DNN) models, which can be trained using PARTEL by introducing auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
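A toy parameter-server loop with block-partitioned updates shows the core PARTEL idea: each device owns one block of the parameter vector and updates only that block. The linear model, learning rate, and partition are illustrative; PARTEL's auxiliary-variable construction for DNNs and its wireless resource allocation are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K = 200, 12, 3                   # samples, parameters, devices
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Partition the parameter vector into K contiguous blocks, one per device.
blocks = np.array_split(np.arange(d), K)
w = np.zeros(d)

for it in range(200):
    resid = X @ w - y                  # broadcast by the parameter server
    for blk in blocks:                 # each device updates only its own block
        w[blk] -= 0.1 * X[:, blk].T @ resid / n
print("parameter error:", np.linalg.norm(w - w_true))
```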