Loss Functions and Metrics in Deep Learning
- URL: http://arxiv.org/abs/2307.02694v4
- Date: Sat, 12 Oct 2024 14:06:55 GMT
- Title: Loss Functions and Metrics in Deep Learning
- Authors: Juan Terven, Diana M. Cordova-Esparza, Alfonso Ramirez-Pedraza, Edgar A. Chavez-Urbiola, Julio A. Romero-Gonzalez
- Abstract summary: We provide a comprehensive overview of the most common loss functions and metrics used across many different types of deep learning tasks.
We introduce the formula for each loss and metric, discuss their strengths and limitations, and describe how these methods can be applied to various problems within deep learning.
- Abstract: When training or evaluating deep learning models, two essential parts are picking the proper loss function and deciding on performance metrics. In this paper, we provide a comprehensive overview of the most common loss functions and metrics used across many different types of deep learning tasks, from general tasks such as regression and classification to more specific tasks in Computer Vision and Natural Language Processing. We introduce the formula for each loss and metric, discuss their strengths and limitations, and describe how these methods can be applied to various problems within deep learning. This work can serve as a reference for researchers and practitioners in the field, helping them make informed decisions when selecting the most appropriate loss function and performance metrics for their deep learning projects.
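To make the distinction between a loss (optimized during training) and a metric (reported for evaluation) concrete, the following minimal NumPy sketch implements two of the most common losses covered by overviews of this kind, mean squared error for regression and categorical cross-entropy for classification, together with the accuracy metric. The function names and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error: the standard regression loss."""
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy_loss(y_true_onehot, y_prob, eps=1e-12):
    """Categorical cross-entropy for classification, given predicted probabilities."""
    y_prob = np.clip(y_prob, eps, 1.0)
    return -np.mean(np.sum(y_true_onehot * np.log(y_prob), axis=1))

def accuracy(y_true_labels, y_prob):
    """Accuracy metric: fraction of samples whose top-scoring class is correct."""
    return np.mean(np.argmax(y_prob, axis=1) == y_true_labels)

# Toy data: 3 samples, 2 classes
y_true_labels = np.array([0, 1, 1])
y_true_onehot = np.eye(2)[y_true_labels]
y_prob = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])

print(mse_loss(np.array([1.0, 2.0]), np.array([1.1, 1.9])))  # 0.01
print(cross_entropy_loss(y_true_onehot, y_prob))             # ~0.415
print(accuracy(y_true_labels, y_prob))                       # ~0.667
```

Note that cross-entropy is differentiable and can drive gradient-based training, whereas accuracy is non-differentiable and is typically reported only at evaluation time, which is why the two roles are kept separate.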
Related papers
- Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning [12.581217671500887]
We propose a new meta-learning framework for task and model-agnostic loss function learning via a hybrid search approach.
Results show that the learned loss functions bring improved convergence, sample efficiency, and inference performance on tabulated, computer vision, and natural language processing problems.
arXiv Detail & Related papers (2024-03-01T02:20:04Z) - Generalization Performance of Transfer Learning: Overparameterized and
Underparameterized Regimes [61.22448274621503]
In real-world applications, tasks often exhibit partial similarity, where certain aspects are similar while others are different or irrelevant.
Our study explores various types of transfer learning, encompassing two options for parameter transfer.
We provide practical guidelines for determining the number of features in the common and task-specific parts for improved generalization performance.
arXiv Detail & Related papers (2023-06-08T03:08:40Z) - A survey and taxonomy of loss functions in machine learning [51.35995529962554]
We present a comprehensive overview of the most widely used loss functions across key applications, including regression, classification, generative modeling, ranking, and energy-based modeling.
We introduce 43 distinct loss functions, structured within an intuitive taxonomy that clarifies their theoretical foundations, properties, and optimal application contexts (a generic sketch of one ranking loss from this family appears after this list).
arXiv Detail & Related papers (2023-01-13T14:38:24Z) - Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning [50.59295648948287]
In few-shot learning scenarios, the challenge is to generalize and perform well on new unseen examples.
We introduce a new meta-learning framework with a loss function that adapts to each task.
Our proposed framework, named Meta-Learning with Task-Adaptive Loss Function (MeTAL), demonstrates the effectiveness and the flexibility across various domains.
arXiv Detail & Related papers (2021-10-08T06:07:21Z) - AutoLoss-Zero: Searching Loss Functions from Scratch for Generic Tasks [78.27036391638802]
AutoLoss-Zero is the first framework for searching loss functions from scratch for generic tasks.
A loss-rejection protocol and a gradient-equivalence-check strategy are developed to improve search efficiency.
Experiments on various computer vision tasks demonstrate that our searched loss functions are on par with or superior to existing loss functions.
arXiv Detail & Related papers (2021-03-25T17:59:09Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains helps improve the learning performance on every other task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
arXiv Detail & Related papers (2020-10-15T17:59:08Z) - An analysis on the use of autoencoders for representation learning:
fundamentals, learning task case studies, explainability and challenges [11.329636084818778]
In many machine learning tasks, learning a good representation of the data can be the key to building a well-performing solution.
We present a series of learning tasks: data embedding for visualization, image denoising, semantic hashing, detection of abnormal behaviors and instance generation.
A solution is proposed for each task, employing autoencoders as the only learning method.
arXiv Detail & Related papers (2020-05-21T08:41:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.