Online Loss Function Learning
- URL: http://arxiv.org/abs/2301.13247v2
- Date: Wed, 17 Jan 2024 10:36:43 GMT
- Title: Online Loss Function Learning
- Authors: Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang
- Abstract summary: Loss function learning aims to automate the task of designing a loss function for a machine learning model.
We propose a new loss function learning technique for adaptively updating the loss function online after each update to the base model parameters.
- Score: 13.744076477599707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Loss function learning is a new meta-learning paradigm that aims to automate
the essential task of designing a loss function for a machine learning model.
Existing techniques for loss function learning have shown promising results,
often improving a model's training dynamics and final inference performance.
However, a significant limitation of these techniques is that the loss
functions are meta-learned in an offline fashion, where the meta-objective only
considers the very first few steps of training, which is a significantly
shorter time horizon than the one typically used for training deep neural
networks. This causes significant bias towards loss functions that perform well
at the very start of training but perform poorly at the end of training. To
address this issue, we propose a new loss function learning technique for
adaptively updating the loss function online after each update to the base
model parameters. The experimental results show that our proposed method
consistently outperforms the cross-entropy loss and offline loss function
learning techniques on a diverse range of neural network architectures and
datasets.
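To make the idea concrete, the sketch below implements the kind of online loop described above: after every update to the base model's parameters, the loss function's own parameters are meta-updated by backpropagating a held-out task objective through that update. The `MetaLoss` network, the inner learning rate, and the use of a mean-squared-error meta-objective on a held-out batch are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical parameterized loss: a small network that maps (prediction, target)
# pairs to a non-negative loss value; its parameters phi are meta-learned online.
class MetaLoss(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, pred, target):
        feats = torch.stack([pred, target], dim=-1)      # (batch, 2)
        return F.softplus(self.net(feats)).mean()        # scalar learned loss

model = nn.Linear(10, 1)                                 # stand-in base model
meta_loss = MetaLoss()
meta_opt = torch.optim.Adam(meta_loss.parameters(), lr=1e-3)
inner_lr = 1e-2                                          # base-model step size (assumed)
task_loss = nn.MSELoss()                                 # meta-objective (assumed)

for step in range(1000):
    x, y = torch.randn(32, 10), torch.randn(32)          # stand-in training batch
    xv, yv = torch.randn(32, 10), torch.randn(32)        # stand-in held-out batch

    # 1) Inner step: update the base model with the learned loss, keeping the
    #    graph so gradients can flow back into the loss parameters phi.
    pred = model(x).squeeze(-1)
    inner_obj = meta_loss(pred, y)
    grads = torch.autograd.grad(inner_obj, tuple(model.parameters()), create_graph=True)
    new_params = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

    # 2) Meta step (online): score the updated base model with the task loss
    #    and backpropagate through the inner update to adjust phi.
    pred_v = F.linear(xv, new_params[0], new_params[1]).squeeze(-1)
    meta_opt.zero_grad()
    task_loss(pred_v, yv).backward()
    meta_opt.step()

    # 3) Commit the inner update, so the loss is re-adapted after every base update.
    with torch.no_grad():
        for p, q in zip(model.parameters(), new_params):
            p.copy_(q.detach())
```

The key contrast with offline loss function learning is step 3: the learned loss keeps adapting over the full training horizon instead of being frozen after a short meta-training phase.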
Related papers
- Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms [80.37846867546517]
We show how to improve the training of neural networks with hard-to-optimize custom objectives.
We exploit their second-order information via their empirical Fisher and Hessian matrices.
We apply Newton Losses to eight differentiable algorithms, achieving significant improvements for the less-optimized ones.
arXiv Detail & Related papers (2024-10-24T18:02:11Z) - Dynamical loss functions shape landscape topography and improve learning in artificial neural networks [0.9208007322096533]
We show how to transform cross-entropy and mean squared error into dynamical loss functions.
These dynamical losses significantly improve validation accuracy for networks of varying sizes.
arXiv Detail & Related papers (2024-10-14T16:27:03Z) - Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning [12.581217671500887]
We propose a new meta-learning framework for task- and model-agnostic loss function learning via a hybrid search approach.
Results show that the learned loss functions bring improved convergence, sample efficiency, and inference performance on tabular, computer vision, and natural language processing problems.
arXiv Detail & Related papers (2024-03-01T02:20:04Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that the training speed and final accuracy of neural networks can depend significantly on the loss function used to train them.
Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed.
arXiv Detail & Related papers (2023-03-17T12:52:06Z) - Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning [12.581217671500887]
We propose a new meta-learning framework for learning model-agnostic loss functions via a hybrid neuro-symbolic search approach.
Results show that the meta-learned loss functions discovered by the newly proposed method outperform both the cross-entropy loss and state-of-the-art loss function learning methods.
arXiv Detail & Related papers (2022-09-19T10:29:01Z) - Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It provides an in-built memory retention mechanism that lets the model remember knowledge about objects seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - A Mathematical Analysis of Learning Loss for Active Learning in Regression [2.792030485253753]
This paper develops a foundation for Learning Loss, which enables us to propose a novel modification we call LearningLoss++.
We show that gradients are crucial in interpreting how Learning Loss works, with rigorous analysis and comparison of the gradients between Learning Loss and LearningLoss++.
We also propose a convolutional architecture that combines features at different scales to predict the loss.
We show that LearningLoss++ is better at identifying scenarios where the model is likely to perform poorly, which, when used for model refinement, translates into reliable performance in the open world.
arXiv Detail & Related papers (2021-04-19T13:54:20Z) - Searching for Robustness: Loss Learning for Noisy Classification Tasks [81.70914107917551]
We parameterize a flexible family of loss functions using Taylor polynomial expansions and apply evolutionary strategies to search for noise-robust losses in this space (a minimal sketch of this parameterization appears after this list).
The resulting white-box loss provides a simple and fast "plug-and-play" module that enables effective noise-robust learning in diverse downstream tasks.
arXiv Detail & Related papers (2021-02-27T15:27:22Z) - Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z)
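Following up on the Taylor-polynomial parameterization referenced in the "Searching for Robustness" entry above, here is a minimal sketch of searching such a loss space with an evolution strategy. The three-coefficient loss, the (1+1)-ES mutation scheme, and the synthetic noisy-label task are illustrative assumptions rather than details from that paper.

```python
import torch

# Hypothetical Taylor-polynomial loss in the predicted probability p of the true
# class: L(p) = sum_k theta_k * (1 - p)^k. The coefficients theta are the search
# space; each candidate is scored by clean validation accuracy after training a
# small model on noisy labels with that loss.
def taylor_loss(theta, logits, targets):
    p_true = torch.softmax(logits, dim=-1).gather(1, targets[:, None]).squeeze(1)
    gap = 1.0 - p_true
    powers = torch.stack([gap ** (k + 1) for k in range(len(theta))], dim=0)
    return (theta[:, None] * powers).sum(dim=0).mean()

def score(theta, train, val, steps=100):
    (x, y), (xv, yv) = train, val
    w = torch.zeros(x.shape[1], 2, requires_grad=True)     # tiny linear classifier
    opt = torch.optim.SGD([w], lr=0.5)
    for _ in range(steps):
        opt.zero_grad()
        taylor_loss(theta, x @ w, y).backward()
        opt.step()
    with torch.no_grad():
        return ((xv @ w).argmax(dim=1) == yv).float().mean().item()

torch.manual_seed(0)
x = torch.randn(400, 5)
y = (x[:, 0] > 0).long()                                    # clean labels
y_noisy = torch.where(torch.rand(400) < 0.2, 1 - y, y)      # 20% symmetric label noise
train, val = (x[:300], y_noisy[:300]), (x[300:], y[300:])

theta = torch.randn(3)                                       # three Taylor coefficients
best = score(theta, train, val)
for _ in range(30):                                          # toy (1+1)-ES with Gaussian mutation
    cand = theta + 0.3 * torch.randn(3)
    acc = score(cand, train, val)
    if acc >= best:
        theta, best = cand, acc
print("validation accuracy:", round(best, 3), "coefficients:", [round(v, 3) for v in theta.tolist()])
```

Scoring each candidate by clean validation accuracy after training on noisy labels is what pushes the search toward loss shapes that are robust to label noise.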