ASRL: A robust loss function with potential for development
- URL: http://arxiv.org/abs/2504.06935v1
- Date: Wed, 09 Apr 2025 14:40:46 GMT
- Title: ASRL: A robust loss function with potential for development
- Authors: Chenyu Hui, Anran Zhang, Xintong Li
- Abstract summary: We propose a partition-wise robust loss function based on previous robust loss functions. Its key characteristics are high robustness and a wide range of applicability.
- Score: 4.292888620805875
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this article, we propose a partition-wise robust loss function built on previous robust loss functions. Through its partition-wise design and adaptive parameter adjustment, it achieves high robustness and a wide range of applicability. We verify its advantages and development potential by applying it to regression tasks on five datasets that differ in dimensionality, sample size, and field, and by comparing it against other loss functions. The results of multiple experiments demonstrate the advantages of our loss function.
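For intuition about how a partition-wise robust loss with adaptive parameter adjustment can behave, here is a minimal sketch in Python. It is an illustrative assumption, not the paper's exact ASRL: the quadratic/logarithmic partitions, the boundary parameter `delta`, and the quantile-based adaptation rule are placeholders chosen only to show the general idea of treating small and large residuals in separate partitions.

```python
import numpy as np

def partitionwise_robust_loss(residuals, delta=1.0):
    """Illustrative partition-wise robust loss (not the paper's exact ASRL).

    Residuals with |r| <= delta fall in a quadratic partition (MSE-like,
    smooth near the optimum); larger residuals fall in a logarithmic
    partition whose growth is much slower, so outliers have bounded
    influence on the fit.  Value and slope are matched at |r| = delta,
    keeping the loss continuous and once-differentiable.
    """
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2
    # Logarithmic piece, shifted so it meets the quadratic piece at r = delta.
    robust = 0.5 * delta ** 2 + delta ** 2 * np.log(np.maximum(r, 1e-12) / delta)
    return np.where(r <= delta, quadratic, robust)

def adapt_delta(residuals, q=0.8):
    """Toy 'adaptive parameter adjustment': place the partition boundary at a
    quantile of the current absolute residuals, so the quadratic region tracks
    the bulk of the data while the heavy tail is handled robustly."""
    return float(np.quantile(np.abs(residuals), q))

# Example: the outlier contributes far less than it would under squared error.
residuals = np.array([0.1, -0.3, 0.2, 8.0])   # last entry is an outlier
delta = adapt_delta(residuals)
print(partitionwise_robust_loss(residuals, delta).mean())
```

In a regression loop, `delta` would typically be re-estimated each epoch from the current residuals, which is one simple way to realise the adaptive behaviour the abstract describes.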
Related papers
- A Versatile Influence Function for Data Attribution with Non-Decomposable Loss [3.1615846013409925]
We propose a Versatile Influence Function (VIF) that can be straightforwardly applied to machine learning models trained with any non-decomposable loss. VIF represents a significant advancement in data attribution, enabling efficient influence-function-based attribution across a wide range of machine learning paradigms.
arXiv Detail & Related papers (2024-12-02T09:59:01Z) - The Central Role of the Loss Function in Reinforcement Learning [46.72524235085568]
We demonstrate how different regression loss functions affect the sample efficiency and adaptivity of value-based decision making algorithms.
Across multiple settings, we prove that algorithms using the binary cross-entropy loss achieve first-order bounds scaling with the optimal policy's cost.
We hope that this paper serves as a guide analyzing decision making algorithms with varying loss functions, and can inspire the reader to seek out better loss functions to improve any decision making algorithm.
arXiv Detail & Related papers (2024-09-19T14:10:38Z) - LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We present a robust online optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number of rounds k, unknown to the learner.
arXiv Detail & Related papers (2024-08-12T17:08:31Z) - Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that training speed and final accuracy of neural networks can significantly depend on the loss function used to train neural networks.
Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed.
arXiv Detail & Related papers (2023-03-17T12:52:06Z) - A survey and taxonomy of loss functions in machine learning [51.35995529962554]
We present a comprehensive overview of the most widely used loss functions across key applications, including regression, classification, generative modeling, ranking, and energy-based modeling.
We introduce 43 distinct loss functions, structured within an intuitive taxonomy that clarifies their theoretical foundations, properties, and optimal application contexts.
arXiv Detail & Related papers (2023-01-13T14:38:24Z) - Evaluating the Impact of Loss Function Variation in Deep Learning for
Classification [0.0]
The loss function is arguably among the most important hyperparameters of a neural network.
We consider deep neural networks in a supervised classification setting and analyze the impact that the choice of loss function has on the training result.
While certain loss functions perform suboptimally, our work empirically shows that under-represented losses can significantly outperform the state-of-the-art choices.
arXiv Detail & Related papers (2022-10-28T09:10:10Z) - Hybridised Loss Functions for Improved Neural Network Generalisation [0.0]
Loss functions play an important role in the training of artificial neural networks (ANNs).
It has been shown that the cross entropy and sum squared error loss functions result in different training dynamics.
A hybrid of the cross entropy and sum squared error loss functions could combine the advantages of the two functions while limiting their disadvantages (a minimal illustrative sketch of such a combination follows this list).
arXiv Detail & Related papers (2022-04-26T11:52:11Z) - Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
arXiv Detail & Related papers (2020-10-15T17:59:08Z) - An Equivalence between Loss Functions and Non-Uniform Sampling in
Experience Replay [72.23433407017558]
We show that any loss function evaluated with non-uniformly sampled data can be transformed into another uniformly sampled loss function.
Surprisingly, we find in some environments PER can be replaced entirely by this new loss function without impact to empirical performance.
arXiv Detail & Related papers (2020-07-12T17:45:24Z) - Mixability of Integral Losses: a Key to Efficient Online Aggregation of Functional and Probabilistic Forecasts [72.32459441619388]
We adapt basic mixable (and exponentially concave) loss functions to compare functional predictions and prove that these adaptations are also mixable (exp-concave). As an application of our main result, we prove that various loss functions used for probabilistic forecasting are mixable (exp-concave).
arXiv Detail & Related papers (2019-12-15T14:25:33Z)