Functional Output Regression with Infimal Convolution: Exploring the
Huber and $\epsilon$-insensitive Losses
- URL: http://arxiv.org/abs/2206.08220v1
- Date: Thu, 16 Jun 2022 14:45:53 GMT
- Title: Functional Output Regression with Infimal Convolution: Exploring the
Huber and $\epsilon$-insensitive Losses
- Authors: Alex Lambert, Dimitri Bouche, Zoltán Szabó, Florence d'Alché-Buc
- Abstract summary: We propose a flexible framework capable of handling various forms of outliers and sparsity in the FOR family.
We derive computationally tractable algorithms relying on duality to tackle the resulting tasks.
The efficiency of the approach is demonstrated and contrasted with the classical squared loss setting on both synthetic and real-world benchmarks.
- Score: 1.7835960292396256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The focus of the paper is functional output regression (FOR) with convoluted
losses. While most existing works consider the square loss setting, we leverage
extensions of the Huber and the $\epsilon$-insensitive loss (induced by infimal
convolution) and propose a flexible framework capable of handling various forms
of outliers and sparsity in the FOR family. We derive computationally tractable
algorithms relying on duality to tackle the resulting tasks in the context of
vector-valued reproducing kernel Hilbert spaces. The efficiency of the approach
is demonstrated and contrasted with the classical squared loss setting on both
synthetic and real-world benchmarks.
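The construction referred to in the abstract is the standard infimal-convolution one: the Huber loss arises as the infimal convolution of the squared loss with a scaled norm, and the $\epsilon$-insensitive loss as the infimal convolution of a base loss with the indicator of the $\epsilon$-ball. The sketch below is not taken from the paper; the function names, the grid-based minimization and the test values are illustrative assumptions. It numerically checks the scalar versions of these identities in Python; in the functional setting of the paper the absolute value is replaced by the norm of the output Hilbert space.

    import numpy as np

    # Closed-form scalar Huber loss with threshold kappa.
    def huber(t, kappa):
        a = np.abs(t)
        return np.where(a <= kappa, 0.5 * t ** 2, kappa * a - 0.5 * kappa ** 2)

    # Huber as an infimal convolution of the squared loss with a scaled norm:
    #   huber_kappa(t) = inf_h { 0.5 * (t - h)^2 + kappa * |h| },
    # approximated here by minimizing over a fine grid of h values.
    def huber_infconv(t, kappa, grid):
        return np.min(0.5 * (t - grid) ** 2 + kappa * np.abs(grid))

    # epsilon-insensitive loss as an infimal convolution of the absolute loss
    # with the indicator of the ball of radius eps:
    #   inf_{|h| <= eps} |t - h| = max(|t| - eps, 0).
    def eps_insensitive_infconv(t, eps, grid):
        feasible = grid[np.abs(grid) <= eps]
        return np.min(np.abs(t - feasible))

    grid = np.linspace(-10.0, 10.0, 200001)  # grid step of 1e-4
    for t in [-3.0, -0.4, 0.0, 1.2, 5.0]:
        assert abs(huber(t, 1.0) - huber_infconv(t, 1.0, grid)) < 1e-3
        assert abs(max(abs(t) - 0.5, 0.0) - eps_insensitive_infconv(t, 0.5, grid)) < 1e-3
    print("infimal-convolution forms match the closed-form losses")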
Related papers
- Tensor Decomposition with Unaligned Observations [4.970364068620608]
The mode with unaligned observations is represented using functions in a reproducing kernel Hilbert space (RKHS).
We introduce a versatile loss function that effectively accounts for various types of data, including binary, integer-valued, and positive-valued types.
A sketching algorithm is also introduced to further improve efficiency when using the $\ell$ loss function.
arXiv Detail & Related papers (2024-10-17T21:39:18Z) - EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification [1.3778851745408134]
We propose a novel ensemble method, namely EnsLoss, to combine loss functions within the empirical risk minimization (ERM) framework.
We first transform the CC conditions of losses into loss-derivatives, thereby bypassing the need for explicit loss functions.
We theoretically establish the statistical consistency of our approach and provide insights into its benefits.
arXiv Detail & Related papers (2024-09-02T02:40:42Z) - LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We study a robust online convex optimization framework where an adversary can introduce outliers by corrupting loss functions in an arbitrary number of rounds k, unknown to the learner.
arXiv Detail & Related papers (2024-08-12T17:08:31Z) - Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity [54.145730036889496]
This paper deals with federated learning (FL) in the presence of malicious Byzantine attacks and data heterogeneity.
A novel Robust Average Gradient Algorithm (RAGA) is proposed, which leverages robust aggregation.
arXiv Detail & Related papers (2024-03-20T08:15:08Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - General Loss Functions Lead to (Approximate) Interpolation in High
Dimensions [6.738946307589741]
We provide a unified framework to approximately characterize the implicit bias of gradient descent in closed form.
Specifically, we show that the implicit bias is approximately (but not exactly) the minimum-norm interpolation in high dimensions.
Our framework also recovers existing exact equivalence results for exponentially-tailed losses across binary and multiclass settings.
arXiv Detail & Related papers (2023-03-13T21:23:12Z) - Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at bridging partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z) - Nonconvex Extension of Generalized Huber Loss for Robust Learning and
Pseudo-Mode Statistics [0.0]
We show that using the log-exp transform together with the logistic function, we can create a loss that combines the benefits of both.
We also show a robust generalization that can be minimized with exponential convergence.
arXiv Detail & Related papers (2022-02-22T19:32:02Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best fitting ReLU with respect to square loss given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z) - Nonlinear Functional Output Regression: a Dictionary Approach [1.8160945635344528]
We introduce projection learning (PL), a novel dictionary-based approach that learns to predict a function that is expanded on a dictionary.
PL minimizes an empirical risk based on a functional loss.
PL enjoys a particularly attractive trade-off between computational cost and performance.
arXiv Detail & Related papers (2020-03-03T10:31:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.