RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning
- URL: http://arxiv.org/abs/2309.02250v1
- Date: Tue, 5 Sep 2023 13:59:50 GMT
- Title: RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning
- Authors: Mushir Akhtar, M. Tanveer, and Mohd. Arshad
- Abstract summary: We propose a novel robust, bounded, sparse, and smooth (RoBoSS) loss function for supervised learning.
We introduce a new robust algorithm, named $\mathcal{L}_{rbss}$-SVM, that generalizes well to unseen data.
We evaluate the proposed $\mathcal{L}_{rbss}$-SVM on $88$ real-world UCI and KEEL datasets from diverse domains.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the domain of machine learning algorithms, the significance of the loss
function is paramount, especially in supervised learning tasks. It serves as a
fundamental pillar that profoundly influences the behavior and efficacy of
supervised learning algorithms. Traditional loss functions, while widely used,
often struggle to handle noisy and high-dimensional data, impede model
interpretability, and lead to slow convergence during training. In this paper,
we address the aforementioned constraints by proposing a novel robust, bounded,
sparse, and smooth (RoBoSS) loss function for supervised learning. Further, we
incorporate the RoBoSS loss function within the framework of support vector
machine (SVM) and introduce a new robust algorithm named
$\mathcal{L}_{rbss}$-SVM. For the theoretical analysis, the
classification-calibrated property and generalization ability are also
presented. These investigations are crucial for gaining deeper insights into
the performance of the RoBoSS loss function in classification tasks and its
potential to generalize well to unseen data. To empirically demonstrate the
effectiveness of the proposed $\mathcal{L}_{rbss}$-SVM, we evaluate it on $88$
real-world UCI and KEEL datasets from diverse domains. Additionally, to
exemplify the effectiveness of the proposed $\mathcal{L}_{rbss}$-SVM within the
biomedical realm, we evaluate it on two medical datasets: the
electroencephalogram (EEG) signal dataset and the breast cancer (BreaKHis)
dataset. The numerical results substantiate the superiority of the proposed
$\mathcal{L}_{rbss}$-SVM model, both in terms of its remarkable generalization
performance and its efficiency in training time.
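
To make the four advertised properties concrete, here is a minimal NumPy sketch of a margin-based loss with the stated profile: zero on confidently correct points (sparsity), smooth everywhere, and saturating at a finite value for large violations (boundedness, hence robustness to outliers). The functional form and the parameter names `lam` and `a` are illustrative assumptions; the exact RoBoSS definition is given in the paper.

```python
import numpy as np

def roboss_like_loss(u, lam=1.0, a=1.0):
    """Hypothetical bounded, smooth, sparse margin loss.

    u = 1 - y * f(x): positive only for margin violations. Assumed form
    lam * (1 - (1 + a*u) * exp(-a*u)) for u > 0, else 0; it is zero with
    zero slope at u = 0 (smooth and sparse) and saturates at lam as u
    grows (bounded, so label noise has limited influence on training).
    """
    u = np.asarray(u, dtype=float)
    return np.where(u > 0, lam * (1.0 - (1.0 + a * u) * np.exp(-a * u)), 0.0)

# Correct side of the margin contributes nothing; large violations cap at lam.
print(roboss_like_loss(np.array([-1.0, 0.0, 0.5, 2.0, 50.0])))
```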
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning (arXiv, 2024-10-11)
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
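
For reference, the InfoNCE-style contrastive loss that this analysis targets has a standard generic form; the sketch below is that generic form in NumPy, not the paper's own code.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Generic InfoNCE contrastive loss over N positive pairs.

    z1, z2: L2-normalized embeddings of shape (N, d); row i of z1 and z2
    is a positive pair, and all other rows act as negatives.
    """
    logits = (z1 @ z2.T) / tau                       # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)        # normalize embeddings
print(info_nce(z, z))                                # identical views -> low loss
```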
- Tractable and Provably Efficient Distributional Reinforcement Learning with General Value Function Approximation (arXiv, 2024-07-31)
We present a regret analysis for distributional reinforcement learning with general value function approximation.
Our theoretical results show that approximating the infinite-dimensional return distribution with a finite number of moment functionals is the only way to learn its statistical information without bias.
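
As a toy illustration of that idea (not the paper's algorithm), the snippet below summarizes an empirical return distribution by a finite set of moment functionals:

```python
import numpy as np

# Replace an infinite-dimensional return distribution with finitely many
# statistics: here, the first four raw moments of sampled returns.
returns = np.random.default_rng(0).normal(loc=1.0, scale=0.5, size=10_000)
moments = [np.mean(returns ** k) for k in range(1, 5)]
print(moments)
```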
- Advancing Supervised Learning with the Wave Loss Function: A Robust and Smooth Approach (arXiv, 2024-04-28)
We present a novel contribution to the realm of supervised machine learning: an asymmetric loss function named wave loss.
We incorporate the proposed wave loss function into the least squares setting of support vector machines (SVM) and twin support vector machines (TSVM).
To empirically showcase the effectiveness of the proposed Wave-SVM and Wave-TSVM, we evaluate them on benchmark UCI and KEEL datasets.
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels (arXiv, 2023-10-09)
We propose a novel equation discovery method based on kernel learning and Bayesian spike-and-slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
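
To illustrate the kernel-regression ingredient only, here is plain kernel ridge regression fit to sparse, noisy samples; KBASS itself uses Bayesian spike-and-slab priors with EP-EM inference, which this sketch does not implement.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 15).reshape(-1, 1)      # only 15 noisy samples
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(15)

# RBF kernel ridge regression: a smooth estimate despite sparse, noisy data.
model = KernelRidge(kernel="rbf", gamma=0.5, alpha=0.1).fit(x, y)
x_test = np.linspace(0, 2 * np.pi, 5).reshape(-1, 1)
print(model.predict(x_test))
```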
- Graph Embedded Intuitionistic Fuzzy Random Vector Functional Link Neural Network for Class Imbalance Learning (arXiv, 2023-07-15)
We propose a graph embedded intuitionistic fuzzy RVFL for class imbalance learning (GE-IFRVFL-CIL) model incorporating a weighting mechanism to handle imbalanced datasets.
The proposed GE-IFRVFL-CIL model offers a promising solution to address the class imbalance issue, mitigates the detrimental effect of noise and outliers, and preserves the inherent geometrical structures of the dataset.
- Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP (arXiv, 2023-06-21)
We study representation learning in partially observable Markov decision processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to the broader class of $\gamma$-observable POMDPs.
- Xtreme Margin: A Tunable Loss Function for Binary Classification Problems (arXiv, 2022-10-31)
We provide an overview of a novel loss function, the Xtreme Margin loss function.
Unlike the binary cross-entropy and hinge loss functions, this loss function gives researchers and practitioners flexibility in their training process.
- Learning with Multiclass AUC: Theory and Algorithms (arXiv, 2021-07-28)
Area under the ROC curve (AUC) is a well-known ranking metric for problems such as imbalanced learning and recommender systems.
In this paper, we make an early attempt at learning multiclass scoring functions by directly optimizing multiclass AUC metrics.
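
For context, the standard one-vs-rest, macro-averaged multiclass AUC that such methods target can be computed as follows (evaluation only; this is not the paper's optimization algorithm):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy data: 3 classes, class-probability scores per sample (rows sum to 1).
y_true = np.array([0, 1, 2, 2, 1, 0])
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.1, 0.3, 0.6],
                   [0.3, 0.5, 0.2],
                   [0.6, 0.3, 0.1]])

# One-vs-rest AUC per class, macro-averaged across classes.
print(roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))
```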
- $\sigma^2$R Loss: a Weighted Loss by Multiplicative Factors using Sigmoidal Functions (arXiv, 2020-09-18)
We introduce a new loss function called the squared reduction loss ($\sigma^2$R loss), which is regulated by a sigmoid function to inflate/deflate the error per instance.
Our loss has a clear intuition and geometric interpretation, and we demonstrate its effectiveness through experiments.
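
A minimal sketch of the described mechanism, assuming a sigmoid multiplicative weight on per-instance squared error; the exact $\sigma^2$R formulation (and the parameters `k` and `t` below) should be taken from the paper.

```python
import numpy as np

def sigma2r_like_loss(err, k=5.0, t=0.5):
    """Hypothetical sigmoid-weighted squared error.

    Each instance's squared error is multiplied by a sigmoid factor in
    (0, 2): errors above the assumed threshold t are inflated (factor > 1),
    errors below it are deflated (factor < 1). k and t are illustrative
    parameters, not the paper's.
    """
    err = np.asarray(err, dtype=float)
    weight = 2.0 / (1.0 + np.exp(-k * (np.abs(err) - t)))
    return weight * err ** 2

print(sigma2r_like_loss(np.array([0.1, 0.5, 1.0, 2.0])))
```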
- Estimating Structural Target Functions using Machine Learning and Influence Functions (arXiv, 2020-08-14)
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
- Influence Functions in Deep Learning Are Fragile (arXiv, 2020-06-25)
Influence functions approximate the effect of individual training samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important for obtaining high-quality influence estimates.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.