HawkEye: Advancing Robust Regression with Bounded, Smooth, and
Insensitive Loss Function
- URL: http://arxiv.org/abs/2401.16785v2
- Date: Thu, 15 Feb 2024 12:46:41 GMT
- Title: HawkEye: Advancing Robust Regression with Bounded, Smooth, and
Insensitive Loss Function
- Authors: Mushir Akhtar, M. Tanveer, and Mohd. Arshad
- Abstract summary: We introduce a novel symmetric loss function named the HawkEye loss function.
It is the first loss function in SVR literature to be bounded, smooth, and simultaneously possess an insensitive zone.
- This yields a new fast and robust model termed HE-LSSVR.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Support vector regression (SVR) has garnered significant popularity over the
past two decades owing to its wide range of applications across various fields.
Despite its versatility, SVR encounters challenges when confronted with
outliers and noise, primarily due to the use of the $\varepsilon$-insensitive
loss function. To address this limitation, SVR with bounded loss functions has
emerged as an appealing alternative, offering enhanced generalization
performance and robustness. Notably, recent developments focus on designing
bounded loss functions with smooth characteristics, facilitating the adoption
of gradient-based optimization algorithms. However, it's crucial to highlight
that these bounded and smooth loss functions do not possess an insensitive
zone. In this paper, we address the aforementioned constraints by introducing a
novel symmetric loss function named the HawkEye loss function. It is worth
noting that the HawkEye loss function stands out as the first loss function in
SVR literature to be bounded, smooth, and simultaneously possess an insensitive
zone. Leveraging this breakthrough, we integrate the HawkEye loss function into
the least squares framework of SVR and yield a new fast and robust model termed
HE-LSSVR. The optimization problem inherent to HE-LSSVR is addressed by
harnessing the adaptive moment estimation (Adam) algorithm, known for its
adaptive learning rate and efficacy in handling large-scale problems. To our
knowledge, this is the first time Adam has been employed to solve an SVR
problem. To empirically validate the proposed HE-LSSVR model, we evaluate it on
UCI, synthetic, and time series datasets. The experimental outcomes
unequivocally reveal the superiority of the HE-LSSVR model both in terms of its
remarkable generalization performance and its efficiency in training time.
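To make the abstract's ingredients concrete, here is a minimal sketch of a loss that is bounded, smooth, and has an insensitive zone, minimized with Adam in a least-squares-style linear regression. This is an illustrative surrogate, not the paper's exact HawkEye formula; the parameters `eps`, `lam`, and `sigma`, and the synthetic data, are assumptions for illustration only.

```python
import numpy as np

def insensitive_bounded_loss(u, eps=0.1, lam=1.0, sigma=1.0):
    """Illustrative surrogate: zero inside the insensitive zone |u| <= eps,
    smooth everywhere, bounded above by lam (NOT the exact HawkEye loss)."""
    d = np.maximum(np.abs(u) - eps, 0.0)        # distance outside the insensitive zone
    return lam * (1.0 - np.exp(-d**2 / sigma))  # saturates at lam for large residuals

def grad_loss(u, eps=0.1, lam=1.0, sigma=1.0):
    """Derivative of the surrogate loss w.r.t. the residual u."""
    d = np.maximum(np.abs(u) - eps, 0.0)
    return lam * np.exp(-d**2 / sigma) * (2.0 * d / sigma) * np.sign(u)

def fit_adam(X, y, lr=0.05, steps=2000, beta1=0.9, beta2=0.999, eps_adam=1e-8):
    """Fit linear weights w by minimizing the mean surrogate loss with Adam."""
    w = np.zeros(X.shape[1])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        u = X @ w - y                            # residuals
        g = X.T @ grad_loss(u) / len(y)          # gradient of the mean loss w.r.t. w
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g**2       # second-moment estimate
        m_hat = m / (1 - beta1**t)               # bias correction
        v_hat = v / (1 - beta2**t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps_adam)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.05 * rng.normal(size=200)
y[:5] += 50.0                                    # inject gross outliers
w = fit_adam(X, y)
print(w)  # bounded loss caps each outlier's contribution, so the fit stays near w_true
```

Because the loss saturates at `lam`, the outliers' gradients vanish and they are effectively ignored, which is the robustness mechanism the abstract attributes to bounded losses; the flat zone `|u| <= eps` additionally ignores small noise, mirroring the insensitive zone of the $\varepsilon$-insensitive loss.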
Related papers
- Advancing RVFL networks: Robust classification with the HawkEye loss function [0.0]
We propose the incorporation of the HawkEye loss (H-loss) function into the random vector functional link (RVFL) network framework.
H-loss function features nice mathematical properties, including smoothness and boundedness, while simultaneously incorporating an insensitive zone.
The proposed H-RVFL model's effectiveness is validated through experiments on $40$ datasets from UCI and KEEL repositories.
arXiv Detail & Related papers (2024-10-01T08:48:05Z) - LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We present a robust online optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number $k$ of rounds, unknown to the learner.
arXiv Detail & Related papers (2024-08-12T17:08:31Z) - Advancing Supervised Learning with the Wave Loss Function: A Robust and Smooth Approach [0.0]
We present a novel contribution to the realm of supervised machine learning: an asymmetric loss function named wave loss.
We incorporate the proposed wave loss function into the least squares setting of support vector machines (SVM) and twin support vector machines (TSVM).
To empirically showcase the effectiveness of the proposed Wave-SVM and Wave-TSVM, we evaluate them on benchmark UCI and KEEL datasets.
arXiv Detail & Related papers (2024-04-28T07:32:00Z) - Posterior Sampling with Delayed Feedback for Reinforcement Learning with
Linear Function Approximation [62.969796245827006]
Delayed-PSVI is an optimistic value-based algorithm that explores the value function space via noise perturbation with posterior sampling.
We show our algorithm achieves $\widetilde{O}(\sqrt{d^3 H^3 T} + d^2 H^2 E[\tau])$ worst-case regret in the presence of unknown delays.
We incorporate a gradient-based approximate sampling scheme via Langevin dynamics for Delayed-LPSVI.
arXiv Detail & Related papers (2023-10-29T06:12:43Z) - On User-Level Private Convex Optimization [59.75368670035683]
We introduce a new mechanism for stochastic convex optimization (SCO) with user-level differential privacy guarantees.
Our mechanism does not require any smoothness assumptions on the loss.
Our bounds are the first where the minimum number of users needed for user-level privacy has no dependence on the dimension.
arXiv Detail & Related papers (2023-05-08T17:47:28Z) - META-STORM: Generalized Fully-Adaptive Variance Reduced SGD for
Unbounded Functions [23.746620619512573]
Recent work overcomes the effect of having to compute gradients of "megabatches"
The proposed methods are validated on deep learning tasks, where they achieve competitive performance.
arXiv Detail & Related papers (2022-09-29T15:12:54Z) - STIP: A SpatioTemporal Information-Preserving and Perception-Augmented
Model for High-Resolution Video Prediction [78.129039340528]
We propose a SpatioTemporal Information-Preserving and Perception-Augmented Model (STIP) to solve the above two problems.
The proposed model aims to preserve the spatiotemporal information of videos during feature extraction and state transitions.
Experimental results show that the proposed STIP can predict videos with more satisfactory visual quality compared with a variety of state-of-the-art methods.
arXiv Detail & Related papers (2022-06-09T09:49:04Z) - Reducing Predictive Feature Suppression in Resource-Constrained
Contrastive Image-Caption Retrieval [65.33981533521207]
We introduce an approach to reduce predictive feature suppression for resource-constrained ICR methods: latent target decoding (LTD).
LTD reconstructs the input caption in a latent space of a general-purpose sentence encoder, which prevents the image and caption encoder from suppressing predictive features.
Our experiments show that, unlike reconstructing the input caption in the input space, LTD reduces predictive feature suppression, measured by obtaining higher recall@k, r-precision, and nDCG scores.
arXiv Detail & Related papers (2022-04-28T09:55:28Z) - HDNet: High-resolution Dual-domain Learning for Spectral Compressive
Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
arXiv Detail & Related papers (2022-03-04T06:37:45Z) - Quantum-Assisted Support Vector Regression for Detecting Facial
Landmarks [0.0]
We devise algorithms, namely simulated and quantum-classical hybrid, for training two SVR models.
We compare their empirical performances against the SVR implementation of Python's scikit-learn package.
Our work is a proof-of-concept example for applying quantum-assisted SVR to a supervised learning task.
arXiv Detail & Related papers (2021-11-17T18:57:10Z) - Non-asymptotic estimates for TUSLA algorithm for non-convex learning
with applications to neural networks with ReLU activation function [3.5044892799305956]
We provide a non-asymptotic analysis for the tamed un-adjusted Langevin algorithm (TUSLA) introduced in Lovas et al.
In particular, we establish non-asymptotic error bounds for the TUSLA algorithm in the Wasserstein-1 and Wasserstein-2 distances.
We show that the TUSLA algorithm converges rapidly to the optimal solution.
arXiv Detail & Related papers (2021-07-19T07:13:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.