Nonconvex Extension of Generalized Huber Loss for Robust Learning and
Pseudo-Mode Statistics
- URL: http://arxiv.org/abs/2202.11141v1
- Date: Tue, 22 Feb 2022 19:32:02 GMT
- Title: Nonconvex Extension of Generalized Huber Loss for Robust Learning and
Pseudo-Mode Statistics
- Authors: Kaan Gokcesu, Hakan Gokcesu
- Abstract summary: We show that using the log-exp transform together with the logistic function, we can create a loss which combines the desirable properties of strictly convex losses with robust loss functions.
We show that a linear convergence algorithm can be utilized to find a minimizer, and provide a derivative-free algorithm with an exponential convergence rate.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an extended generalization of the pseudo Huber loss formulation.
We show that using the log-exp transform together with the logistic function,
we can create a loss which combines the desirable properties of the strictly
convex losses with robust loss functions. With this formulation, we show that a
linear convergence algorithm can be utilized to find a minimizer. We further
discuss the creation of a quasi-convex composite loss and provide a
derivative-free exponential convergence rate algorithm.
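The exact construction in the paper is not reproduced here. As a rough sketch of the ingredients the abstract names (the functions pseudo_huber and squashed, the parameters delta and scale, and the toy location-estimation task are all assumptions for illustration), the snippet below shows (i) a pseudo-Huber-style loss that behaves quadratically near zero and linearly in the tails, (ii) a logistic squashing that bounds the loss so extreme outliers cannot dominate, and (iii) a derivative-free golden-section search whose bracket shrinks geometrically, which is the sense in which a derivative-free method can attain an exponential convergence rate on a unimodal (quasi-convex) objective.

```python
import numpy as np

# Minimal sketch (not the paper's exact formulation): a pseudo-Huber loss that
# is ~quadratic near 0 and ~linear in the tails, plus a logistic squashing of
# it that caps the penalty for extreme residuals (a bounded, nonconvex extension).
def pseudo_huber(r, delta=1.0):
    return delta**2 * (np.sqrt(1.0 + (r / delta)**2) - 1.0)

def squashed(r, delta=1.0, scale=4.0):
    # logistic squashing: bounded above, so a single huge residual
    # cannot dominate the total loss
    return scale * (1.0 / (1.0 + np.exp(-pseudo_huber(r, delta) / scale)) - 0.5)

def golden_section_min(f, lo, hi, tol=1e-8):
    # Derivative-free search for a unimodal (convex or quasi-convex) f; the
    # bracket shrinks by a constant factor per step, i.e. the error decays
    # at an exponential (geometric) rate.
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 200), [500.0, 800.0]])  # with outliers

# location estimate under the (convex, hence unimodal) pseudo-Huber loss
theta = golden_section_min(lambda t: np.sum(pseudo_huber(data - t)), data.min(), data.max())
print("mean        :", data.mean())   # pulled toward the outliers
print("pseudo-Huber:", theta)         # stays near the bulk of the data

# the squashed loss barely distinguishes moderate from extreme outliers
print("squashed loss at residuals 2, 20, 200:", squashed(np.array([2.0, 20.0, 200.0])))
```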
Related papers
- Enabling Tensor Decomposition for Time-Series Classification via A Simple Pseudo-Laplacian Contrast [26.28414569796961]
We propose a novel Pseudo-Laplacian Contrast (PLC) tensor decomposition framework.
It integrates data augmentation and a cross-view Laplacian to enable the extraction of class-aware representations.
Experiments on various datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-23T16:48:13Z)
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal from magnitude-only measurements, some of which are corrupted by outliers.
Our proposed oracle avoids the need for computationally expensive spectral methods, relying instead on simple gradient steps that remain robust to the outliers.
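The paper's alternating minimization oracle is not reproduced here; the sketch below only illustrates a generic amplitude-based gradient step for phase retrieval with a crude residual-trimming heuristic for outliers (the measurement model, trimming rule, and step size are all assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)                       # magnitude-only measurements
y[rng.choice(m, 10, replace=False)] += 20.0  # a few gross outliers

def amplitude_grad_step(x, A, y, lr=0.1, trim=0.9):
    r = np.abs(A @ x) - y                    # per-measurement residuals
    # crude robustness: ignore the largest residuals (outlier trimming)
    keep = np.abs(r) <= np.quantile(np.abs(r), trim)
    g = (A[keep].T @ (r[keep] * np.sign(A[keep] @ x))) / keep.sum()
    return x - lr * g

x = rng.standard_normal(n)                   # random (non-spectral) initialization
for _ in range(2000):
    x = amplitude_grad_step(x, A, y)

# recovery is only possible up to a global sign flip
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)) / np.linalg.norm(x_true)
print("relative error:", err)
```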
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We consider a setting in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number k of rounds, unknown to the learner.
We present a robust online optimization framework that remains oblivious to these outliers.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity [54.145730036889496]
This paper deals with federated learning (FL) in the presence of malicious Byzantine attacks and heterogeneous data.
A novel Robust Average Gradient Algorithm (RAGA) is proposed, which leverages robust aggregation of the clients' updates.
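RAGA itself is not reproduced here; the sketch below only illustrates the general idea of robust aggregation in FL, using a coordinate-wise median (an assumed, generic robust aggregator) in place of the plain average, so that a minority of Byzantine updates cannot arbitrarily shift the aggregate.

```python
import numpy as np

def robust_aggregate(updates):
    """Coordinate-wise median of client updates (a generic robust aggregator,
    not the specific rule used by RAGA)."""
    return np.median(np.stack(updates, axis=0), axis=0)

# toy example: 8 honest clients send similar gradients, 2 Byzantine clients
# send arbitrary values trying to poison the global model
rng = np.random.default_rng(1)
honest = [np.ones(5) + 0.1 * rng.standard_normal(5) for _ in range(8)]
byzantine = [np.full(5, 1e6), np.full(5, -1e6)]

print("plain mean :", np.mean(np.stack(honest + byzantine), axis=0))  # wrecked by attackers
print("robust agg :", robust_aggregate(honest + byzantine))           # stays close to 1.0
```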
arXiv Detail & Related papers (2024-03-20T08:15:08Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape, and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
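A rough sketch of the linear-interpolation idea on a toy problem (the bilinear game, inner optimizer, and interpolation weight below are assumptions, not the paper's setting): the iterate is pulled only part of the way toward the point reached by several inner update steps, which damps the oscillations that plain gradient descent-ascent exhibits.

```python
import numpy as np

# Toy nonmonotone instability: simultaneous gradient descent-ascent (GDA)
# on the bilinear game min_x max_y x*y spirals outward.
def gda_step(x, y, lr=0.5):
    return x - lr * y, y + lr * x

def interpolated_gda(x, y, lr=0.5, k=5, lam=0.5, iters=200):
    # linear interpolation toward the point reached after k inner GDA steps
    for _ in range(iters):
        xf, yf = x, y
        for _ in range(k):
            xf, yf = gda_step(xf, yf, lr)
        x, y = x + lam * (xf - x), y + lam * (yf - y)
    return x, y

x, y = 1.0, 1.0
for _ in range(200):
    x, y = gda_step(x, y)
print("plain GDA        :", (x, y))                      # diverges
print("interpolated GDA :", interpolated_gda(1.0, 1.0))  # contracts toward (0, 0)
```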
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Convex Bounds on the Softmax Function with Applications to Robustness Verification [69.09991317119679]
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
This paper provides convex lower bounds and concave upper bounds on the softmax function, which are compatible with convex optimization formulations for characterizing neural networks and other ML models.
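The paper's convex/concave functional bounds are not reproduced here; the sketch below only shows the elementary interval bounds they improve upon, valid whenever the logits are known to lie in a box (the box limits lo and hi are assumed to come from some upstream bound-propagation step).

```python
import numpy as np

def softmax_interval_bounds(lo, hi):
    """Elementary bounds on softmax outputs given logit intervals lo <= z <= hi.
    softmax_i(z) = 1 / (1 + sum_{j != i} exp(z_j - z_i)) is increasing in z_i
    and decreasing in z_j (j != i), which yields the bounds below.
    (These are looser than the convex/concave bounds derived in the paper.)"""
    n = len(lo)
    lower, upper = np.empty(n), np.empty(n)
    for i in range(n):
        others = np.arange(n) != i
        lower[i] = 1.0 / (1.0 + np.sum(np.exp(hi[others] - lo[i])))
        upper[i] = 1.0 / (1.0 + np.sum(np.exp(lo[others] - hi[i])))
    return lower, upper

lo = np.array([1.0, 0.0, -1.0])
hi = np.array([2.0, 0.5, -0.5])
lb, ub = softmax_interval_bounds(lo, hi)
print("lower:", lb)
print("upper:", ub)
```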
arXiv Detail & Related papers (2023-03-03T05:07:02Z)
- Functional Output Regression with Infimal Convolution: Exploring the Huber and $\epsilon$-insensitive Losses [1.7835960292396256]
We propose a flexible framework capable of handling various forms of outliers and sparsity in the FOR family.
We derive computationally tractable algorithms relying on duality to tackle the resulting tasks.
The efficiency of the approach is demonstrated and contrasted with the classical squared loss setting on both synthetic and real-world benchmarks.
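For reference, a minimal sketch of the two scalar losses named in the title, applied to residuals (the infimal-convolution and functional-output machinery of the paper is not reproduced; delta and eps are generic parameters):

```python
import numpy as np

def huber(r, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond (robust to large residuals)."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def eps_insensitive(r, eps=0.1):
    """Zero inside the eps-tube, linear outside (encourages sparse residuals)."""
    return np.maximum(np.abs(r) - eps, 0.0)

r = np.linspace(-3, 3, 7)
print("residuals      :", r)
print("Huber          :", huber(r))
print("eps-insensitive:", eps_insensitive(r))
```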
arXiv Detail & Related papers (2022-06-16T14:45:53Z)
- Generalized Huber Loss for Robust Learning and its Efficient Minimization for a Robust Statistics [0.0]
We show that with a suitable choice of function, we can achieve a loss which combines the desirable properties of both the absolute and the quadratic loss.
We provide an algorithm to find the minimizer of such loss functions and show that finding this centralizing metric is not much harder than computing the traditional mean and median.
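A minimal sketch of the classical idea this summary refers to: the Huber M-estimate of location interpolates between the mean (quadratic loss) and the median (absolute loss), and can be computed by a simple iteratively reweighted scheme (the generalized loss and algorithm of the paper itself are not reproduced here).

```python
import numpy as np

def huber_location(x, delta=1.0, iters=100):
    """Huber M-estimate of location via iteratively reweighted least squares:
    weights are 1 for small residuals (quadratic regime) and delta/|r| for
    large residuals (absolute regime), so outliers are down-weighted."""
    theta = np.median(x)                 # robust starting point
    for _ in range(iters):
        r = x - theta
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        theta = np.sum(w * x) / np.sum(w)
    return theta

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(5.0, 1.0, 200), [100.0, 120.0, -90.0]])
print("mean  :", x.mean())               # dragged by the outliers
print("median:", np.median(x))
print("Huber :", huber_location(x))      # efficient near the bulk, robust to the outliers
```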
arXiv Detail & Related papers (2021-08-28T11:18:14Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to the square loss, given draws from some unknown distribution.
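A minimal sketch of the realizable version of this problem, fitting y ≈ max(0, w·x) by (sub)gradient descent on the square loss (the approximation schemes analyzed in the paper are not reproduced; the data model, initialization, and step size below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.maximum(X @ w_true, 0.0) + 0.05 * rng.standard_normal(n)  # noisy ReLU labels

def relu_regression(X, y, lr=0.1, iters=1000, seed=1):
    # small random init: at w = 0 every ReLU is inactive and the gradient vanishes
    w = 0.01 * np.random.default_rng(seed).standard_normal(X.shape[1])
    n = len(y)
    for _ in range(iters):
        pred = np.maximum(X @ w, 0.0)
        # (sub)gradient of the empirical square loss 0.5 * mean((pred - y)**2)
        grad = X.T @ ((pred - y) * (X @ w > 0)) / n
        w -= lr * grad
    return w

w_hat = relu_regression(X, y)
print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```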
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
- The Implicit Bias of Gradient Descent on Separable Data [44.98410310356165]
We show the predictor converges to the direction of the max-margin (hard margin SVM) solution.
This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero.
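A small numerical illustration of this phenomenon (a generic logistic-regression setup, not the paper's experiments): gradient descent on the logistic loss over separable data keeps increasing the weight norm, while the normalized direction w/||w|| stabilizes toward the maximum-margin separator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 2
# linearly separable data: labels set by a ground-truth direction, then the
# classes are pushed apart along that direction to create a positive margin
w_star = np.array([1.0, 2.0]) / np.sqrt(5.0)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)
X += 0.3 * y[:, None] * w_star

def logistic_gd(X, y, lr=0.5, iters=10000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        m = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(m)))) / len(y)  # gradient of mean log(1+exp(-m))
        w -= lr * grad
    return w

for iters in (100, 1000, 10000):
    w = logistic_gd(X, y, iters=iters)
    print(iters, "norm:", round(np.linalg.norm(w), 2),
          "direction:", np.round(w / np.linalg.norm(w), 3))
# the norm keeps growing while the printed direction converges
```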
arXiv Detail & Related papers (2017-10-27T21:47:58Z)