Score-oriented loss (SOL) functions
- URL: http://arxiv.org/abs/2103.15522v1
- Date: Mon, 29 Mar 2021 11:45:53 GMT
- Title: Score-oriented loss (SOL) functions
- Authors: Francesco Marchetti and Sabrina Guastavino and Michele Piana and
Cristina Campi
- Abstract summary: This paper introduces a class of loss functions that are defined on probabilistic confusion matrices.
The performance of these loss functions is validated during the training phase of two experimental forecasting problems.
- Score: 1.433758865948252
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Loss function engineering and the assessment of forecasting
performance are two crucial and intertwined aspects of supervised machine
learning. This paper focuses on binary classification and introduces a class
of loss functions that are defined on probabilistic confusion matrices and
that allow an automatic, a priori maximization of skill scores. The
performance of these loss functions is validated during the training phase of
two experimental forecasting problems, showing that the probability
distribution function associated with the confusion matrices significantly
impacts the outcome of the score maximization process.
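To make the idea concrete, below is a minimal Python sketch of a score-oriented loss, under two assumptions not fixed by the abstract: the random decision threshold has a sigmoid cumulative distribution function (centered at mu with scale delta), and the target skill score is the true skill statistic (TSS). All names (probabilistic_confusion_matrix, sol_tss, mu, delta) are illustrative, not taken from the paper.

    # Sketch: expected confusion-matrix entries under a random threshold tau,
    # and a loss equal to 1 minus the resulting differentiable skill score.
    # The sigmoid CDF and the TSS target are assumptions for illustration.
    import torch

    def probabilistic_confusion_matrix(p, y, mu=0.5, delta=0.1):
        """Expected TP/FN/FP/TN when the threshold tau has CDF
        F(t) = sigmoid((t - mu) / delta).

        p : predicted probabilities in (0, 1), shape (n,)
        y : binary labels in {0, 1}, shape (n,)
        For sample i, P(predicted positive) = P(tau < p_i) = F(p_i),
        so each hard count is replaced by its expected value.
        """
        y = y.float()
        f = torch.sigmoid((p - mu) / delta)    # F(p_i) = P(tau < p_i)
        tp = torch.sum(y * f)                  # expected true positives
        fn = torch.sum(y * (1.0 - f))          # expected false negatives
        fp = torch.sum((1.0 - y) * f)          # expected false positives
        tn = torch.sum((1.0 - y) * (1.0 - f))  # expected true negatives
        return tp, fn, fp, tn

    def sol_tss(p, y, eps=1e-8):
        """Score-oriented loss: 1 - expected TSS, differentiable in p,
        so it can be minimized directly by gradient descent."""
        tp, fn, fp, tn = probabilistic_confusion_matrix(p, y)
        tss = tp / (tp + fn + eps) - fp / (fp + tn + eps)
        return 1.0 - tss

Minimizing such a loss targets the skill score a priori during training, rather than minimizing cross-entropy and tuning a decision threshold afterwards; consistent with the abstract, the chosen threshold distribution (here the sigmoid CDF) directly shapes the outcome of the score maximization.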
Related papers
- Two-Stage Nuisance Function Estimation for Causal Mediation Analysis [8.288031125057524]
We propose a two-stage estimation strategy that estimates the nuisance functions based on the role they play in the structure of the bias of the influence function-based estimator of the mediation functional.
We provide an analysis of the proposed method, as well as sufficient conditions for consistency and normality of the estimator of the parameter of interest.
arXiv Detail & Related papers (2024-03-31T16:38:48Z)
- RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning [0.0]
We propose a novel robust, bounded, sparse, and smooth (RoBoSS) loss function for supervised learning.
We introduce a new robust algorithm named $\mathcal{L}_{rbss}$-SVM to generalize well to unseen data.
We evaluate the proposed $\mathcal{L}_{rbss}$-SVM on $88$ real-world UCI and KEEL datasets from diverse domains.
arXiv Detail & Related papers (2023-09-05T13:59:50Z)
- A survey and taxonomy of loss functions in machine learning [51.35995529962554]
We present a comprehensive overview of the most widely used loss functions across key applications, including regression, classification, generative modeling, ranking, and energy-based modeling.
We introduce 43 distinct loss functions, structured within an intuitive taxonomy that clarifies their theoretical foundations, properties, and optimal application contexts.
arXiv Detail & Related papers (2023-01-13T14:38:24Z)
- Xtreme Margin: A Tunable Loss Function for Binary Classification Problems [0.0]
We provide an overview of a novel loss function, the Xtreme Margin loss.
Unlike binary cross-entropy and hinge loss, this loss function offers researchers and practitioners flexibility in the training process.
arXiv Detail & Related papers (2022-10-31T22:39:32Z)
- Rectified Max-Value Entropy Search for Bayesian Optimization [54.26984662139516]
We develop a rectified max-value entropy search (RMES) acquisition function based on the notion of mutual information.
RMES shows a consistent improvement over MES in several synthetic function benchmarks and real-world optimization problems.
arXiv Detail & Related papers (2022-02-28T08:11:02Z)
- On Codomain Separability and Label Inference from (Noisy) Loss Functions [11.780563744330038]
We introduce the notion of codomain separability to study the necessary and sufficient conditions under which label inference is possible from any (noisy) loss function values.
We show that for many commonly used loss functions, including multiclass cross-entropy with common activation functions and some Bregman divergence-based losses, it is possible to design label inference attacks for arbitrary noise levels.
arXiv Detail & Related papers (2021-07-07T05:29:53Z)
- General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and the high dimensionality induces vulnerabilities caused by the same separability.
arXiv Detail & Related papers (2020-10-11T13:12:41Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Estimating Structural Target Functions using Machine Learning and Influence Functions [103.47897241856603]
We propose a new framework for statistical machine learning of target functions arising as identifiable functionals from statistical models.
This framework is problem- and model-agnostic and can be used to estimate a broad variety of target parameters of interest in applied statistics.
We put particular focus on so-called coarsening at random/doubly robust problems with partially unobserved information.
arXiv Detail & Related papers (2020-08-14T16:48:29Z)
- Mixability of Integral Losses: a Key to Efficient Online Aggregation of Functional and Probabilistic Forecasts [72.32459441619388]
We adapt basic mixable (and exponentially concave) loss functions to compare functional predictions and prove that these adaptations are also mixable (exp-concave).
As an application of our main result, we prove that various loss functions used for probabilistic forecasting are mixable (exp-concave).
arXiv Detail & Related papers (2019-12-15T14:25:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.