AutoLoss: Automated Loss Function Search in Recommendations
- URL: http://arxiv.org/abs/2106.06713v1
- Date: Sat, 12 Jun 2021 08:15:00 GMT
- Title: AutoLoss: Automated Loss Function Search in Recommendations
- Authors: Xiangyu Zhao, Haochen Liu, Wenqi Fan, Hui Liu, Jiliang Tang, Chong Wang
- Abstract summary: We propose an AutoLoss framework that can automatically and adaptively search for the appropriate loss function from a set of candidates.
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
- Score: 34.27873944762912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing an effective loss function plays a crucial role in training deep recommender systems. Most existing works leverage a predefined and fixed loss function that can lead to suboptimal recommendation quality and training efficiency. Some recent efforts rely on exhaustively or manually searched weights to fuse a group of candidate loss functions, which is exceptionally costly in computation and time; they also neglect the varied convergence behaviors of different data examples. In this work, we propose AutoLoss, a framework that can automatically and adaptively search for the appropriate loss function from a set of candidates. Specifically, we develop a novel controller network that dynamically adjusts the loss probabilities in a differentiable manner. Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors. This design improves the model's generalizability and transferability across deep recommender systems and datasets. We evaluate the proposed framework on two benchmark datasets; the results show that AutoLoss outperforms representative baselines. Further experiments deepen our understanding of AutoLoss, including its transferability, components, and training efficiency.
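To make the mechanism concrete, below is a minimal sketch of a per-example loss controller: a small network maps each example's convergence signals to a differentiable probability distribution over candidate losses, which then weights the fused training loss. The feature choice, layer sizes, candidate set, and the Gumbel-Softmax relaxation are illustrative assumptions, not necessarily the paper's exact design.

```python
# Minimal sketch of a per-example loss controller (assumed architecture).
# Candidate losses are fused with probabilities produced by a small MLP
# from each example's prediction/error signals.
import torch
import torch.nn.functional as F

def candidate_losses(pred, target):
    # Two illustrative candidates for a CTR-style task: BCE and MSE.
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    mse = F.mse_loss(pred, target, reduction="none")
    return torch.stack([bce, mse], dim=-1)         # (batch, n_candidates)

class LossController(torch.nn.Module):
    def __init__(self, n_candidates=2, hidden=16):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_candidates),
        )

    def forward(self, pred, target, tau=1.0):
        # Per-example convergence signals: prediction and current error.
        feats = torch.stack([pred, (pred - target).abs()], dim=-1)
        logits = self.net(feats.detach())          # controller sees signals only
        # Gumbel-Softmax keeps the per-example selection differentiable.
        return F.gumbel_softmax(logits, tau=tau, hard=False)

# Usage: fuse candidates with the controller's per-example probabilities.
pred = torch.rand(8, requires_grad=True)
target = torch.randint(0, 2, (8,)).float()
probs = LossController()(pred, target)             # (8, 2)
loss = (candidate_losses(pred, target) * probs).sum(-1).mean()
loss.backward()
```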
Related papers
- Adaptive Real-Time Multi-Loss Function Optimization Using Dynamic Memory Fusion Framework: A Case Study on Breast Cancer Segmentation [0.0]
We propose a novel framework, dynamic memory fusion, for adaptively weighting multiple loss functions in real time.
Experiments on breast ultrasound datasets demonstrate that the framework improves segmentation performance across various metrics.
arXiv Detail & Related papers (2024-10-10T11:23:04Z)
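The summary above gives few specifics, but memory-driven adaptive loss weighting can be sketched as follows; the window statistic and renormalization rule are assumptions, not the framework's actual fusion rule.

```python
# Rough sketch of memory-driven multi-loss weighting (all details assumed).
# Weights are renormalized each step from a moving window of past values,
# so loss terms that stop improving receive relatively more weight.
from collections import deque

class DynamicLossWeighter:
    def __init__(self, n_losses, window=50):
        self.history = [deque(maxlen=window) for _ in range(n_losses)]

    def weights(self, current_losses):
        rates = []
        for h, value in zip(self.history, current_losses):
            h.append(value)
            avg = sum(h) / len(h)
            rates.append(value / (avg + 1e-8))     # ~1 if stalled, <1 if improving
        total = sum(rates)
        return [r / total for r in rates]

weighter = DynamicLossWeighter(n_losses=2)
for step in range(3):
    losses = [1.0 / (step + 1), 0.5]               # stand-in loss values
    w = weighter.weights(losses)
    fused = sum(wi * li for wi, li in zip(w, losses))
    print(step, [round(x, 3) for x in w], round(fused, 3))
```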
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision mechanism that accelerates inference by dynamically assigning resources to each data instance.
Our method benefits from less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z) - Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that the training speed and final accuracy of neural networks can depend significantly on the loss function used for training.
Two new classification loss functions are proposed that significantly improve performance on a wide variety of benchmark tasks.
arXiv Detail & Related papers (2023-03-17T12:52:06Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme that progressively allocates more collocation points to regions where the model makes larger errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
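The adaptive collocation idea above lends itself to a short sketch: score candidate points by the current residual and draw new collocation points in proportion to it. The residual placeholder and the sampling rule are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch of residual-driven collocation (placeholder residual; assumed
# sampling rule). More points are drawn where the model errs the most.
import numpy as np

def pde_residual(model, x):
    # Placeholder: |model(x)| stands in for the true PDE residual at x.
    return np.abs(model(x))

def adaptive_collocation(model, n_new, domain=(0.0, 1.0),
                         n_candidates=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    candidates = rng.uniform(*domain, size=n_candidates)
    errs = pde_residual(model, candidates)
    probs = errs / errs.sum()                  # higher residual, higher chance
    idx = rng.choice(n_candidates, size=n_new, replace=False, p=probs)
    return candidates[idx]

# Toy "residual" peaking near x = 0.8; sampled points cluster there.
toy = lambda x: np.exp(-50 * (x - 0.8) ** 2)
print(np.round(np.sort(adaptive_collocation(toy, n_new=20)), 3))
```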
- AutoLossGen: Automatic Loss Function Generation for Recommender Systems [40.21831408797939]
In recommender systems, the choice of loss function is critical, since a good loss can significantly improve model performance.
A large fraction of previous work focuses on handcrafted loss functions, which require significant expertise and human effort.
We propose an automatic loss function generation framework, AutoLossGen, which generates loss functions constructed directly from basic mathematical operators.
arXiv Detail & Related papers (2022-04-27T19:49:48Z)
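A toy rendering of AutoLossGen's operator-composition idea: candidate losses are expression trees over the prediction p and label y built from primitive operators. The primitive set and the random sampling below are assumptions; the paper generates and screens candidates with a learned procedure.

```python
# Toy composition of candidate losses from basic operators (assumed
# primitives; the paper's generation procedure is learned, not random).
import random, math

UNARY = {"neg": lambda a: -a,
         "log": lambda a: math.log(max(a, 1e-8)),
         "sq": lambda a: a * a}
BINARY = {"add": lambda a, b: a + b,
          "sub": lambda a, b: a - b,
          "mul": lambda a, b: a * b}

def random_expr(depth=2):
    # Leaves are the prediction p or the label y.
    if depth == 0:
        return random.choice(["p", "y"])
    if random.random() < 0.5:
        return (random.choice(list(UNARY)), random_expr(depth - 1))
    return (random.choice(list(BINARY)),
            random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, p, y):
    if expr == "p": return p
    if expr == "y": return y
    op, *args = expr
    fn = UNARY.get(op) or BINARY.get(op)
    return fn(*(evaluate(a, p, y) for a in args))

# Sample a few candidate losses and score them on one (p, y) pair.
random.seed(1)
for _ in range(3):
    expr = random_expr()
    print(expr, "->", round(evaluate(expr, p=0.9, y=1.0), 4))
```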
- AutoBalance: Optimized Loss Functions for Imbalanced Data [38.64606886588534]
We propose AutoBalance, a bi-level optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives.
Specifically, a lower-level problem trains the model weights, and an upper-level problem tunes the loss function by monitoring and optimizing the desired objective over the validation data.
Our loss design enables personalized treatment for classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes.
arXiv Detail & Related papers (2022-01-04T15:53:23Z)
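AutoBalance's bi-level pattern can be sketched with a toy linear model and a single differentiable inner SGD step; the parameterization, learning rates, and update schedule below are illustrative assumptions, not the paper's exact algorithm.

```python
# Bi-level sketch: the lower level fits model weights under a parametric
# loss; the upper level backpropagates validation loss through that step.
import torch
import torch.nn.functional as F

def weighted_ce(logits, target, class_w):
    # Cross-entropy with differentiable per-class weights.
    logp = F.log_softmax(logits, dim=1)
    per_ex = -logp[torch.arange(len(target)), target]
    w = class_w[target]
    return (w * per_ex).sum() / w.sum()

torch.manual_seed(0)
Xtr, ytr = torch.randn(64, 4), torch.randint(0, 2, (64,))
Xva, yva = torch.randn(32, 4), torch.randint(0, 2, (32,))

W = torch.zeros(4, 2, requires_grad=True)      # model weights (lower level)
theta = torch.zeros(2, requires_grad=True)     # loss parameters (upper level)
outer = torch.optim.SGD([theta], lr=0.5)

for step in range(50):
    # Lower level: one differentiable SGD step under the parametric loss.
    train_loss = weighted_ce(Xtr @ W, ytr, theta.softmax(0))
    (gW,) = torch.autograd.grad(train_loss, W, create_graph=True)
    W_new = W - 0.5 * gW                       # graph still reaches theta
    # Upper level: validation loss through W_new yields a hypergradient.
    outer.zero_grad()
    weighted_ce(Xva @ W_new, yva, torch.ones(2)).backward()
    outer.step()
    W = W_new.detach().requires_grad_(True)    # commit the inner step
```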
- Searching for Robustness: Loss Learning for Noisy Classification Tasks [81.70914107917551]
We parameterize a flexible family of loss functions using Taylor expansions and apply evolutionary strategies to search for noise-robust losses in this space.
The resulting white-box loss provides a simple and fast "plug-and-play" module that enables effective noise-robust learning in diverse downstream tasks.
arXiv Detail & Related papers (2021-02-27T15:27:22Z)
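A rough sketch of the Taylor-parameterized search above: the loss is a truncated polynomial in (p - y) whose coefficients are mutated by a simple (1+1) evolutionary strategy. The polynomial form, the stand-in fitness, and the ES variant are assumptions, not the paper's setup.

```python
# Taylor-parameterized loss searched with a (1+1) evolutionary strategy.
# The fitness is a stand-in; in practice it would measure noise-robust
# downstream accuracy of a model trained with the candidate loss.
import numpy as np

rng = np.random.default_rng(0)

def taylor_loss(c, p, y):
    d = p - y
    return sum(ck * d**k for k, ck in enumerate(c))

def fitness(c):
    # Stand-in: prefer losses close to squared error on a grid.
    p = np.linspace(0.01, 0.99, 50)
    return -np.mean((taylor_loss(c, p, 1.0) - (p - 1.0) ** 2) ** 2)

best = rng.normal(size=4)                      # 4 Taylor coefficients
best_fit = fitness(best)
for _ in range(200):
    cand = best + 0.1 * rng.normal(size=4)     # mutate
    f = fitness(cand)
    if f > best_fit:                           # keep the better candidate
        best, best_fit = cand, f
print(np.round(best, 3))                       # drifts toward (p - y)^2
```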
- Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, for speeding up the search process.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors.
arXiv Detail & Related papers (2021-02-09T08:34:52Z)
- Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
arXiv Detail & Related papers (2020-10-15T17:59:08Z)
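As a fixed (hand-written, not searched) example of what a differentiable metric surrogate looks like, a soft IoU replaces hard predictions with probabilities so the metric admits gradients; Auto Seg-Loss instead parameterizes and searches such surrogates per metric.

```python
# Hand-written soft IoU: a differentiable surrogate for the IoU metric.
import torch

def soft_iou_loss(probs, target, eps=1e-6):
    # probs: (N, H, W) foreground probabilities; target: binary mask.
    inter = (probs * target).sum(dim=(1, 2))
    union = (probs + target - probs * target).sum(dim=(1, 2))
    return 1.0 - ((inter + eps) / (union + eps)).mean()

logits = torch.randn(2, 8, 8, requires_grad=True)
target = torch.randint(0, 2, (2, 8, 8)).float()
loss = soft_iou_loss(logits.sigmoid(), target)
loss.backward()                                # usable as a training loss
```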
- A Unified Framework of Surrogate Loss by Refactoring and Interpolation [65.60014616444623]
We introduce UniLoss, a unified framework to generate surrogate losses for training deep networks with gradient descent.
We validate the effectiveness of UniLoss on three tasks and four datasets.
arXiv Detail & Related papers (2020-07-27T21:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.