AutoLossGen: Automatic Loss Function Generation for Recommender Systems
- URL: http://arxiv.org/abs/2204.13160v1
- Date: Wed, 27 Apr 2022 19:49:48 GMT
- Title: AutoLossGen: Automatic Loss Function Generation for Recommender Systems
- Authors: Zelong Li, Jianchao Ji, Yingqiang Ge, Yongfeng Zhang
- Abstract summary: In recommendation systems, the choice of loss function is critical since a good loss may significantly improve the model performance.
A large fraction of previous work focuses on handcrafted loss functions, which requires significant expertise and human effort.
We propose an automatic loss function generation framework, AutoLossGen, which generates loss functions constructed directly from basic mathematical operators.
- Score: 40.21831408797939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recommendation systems, the choice of loss function is critical since a
good loss may significantly improve the model performance. However, manually
designing a good loss is a big challenge due to the complexity of the problem.
A large fraction of previous work focuses on handcrafted loss functions, which
requires significant expertise and human effort. In this paper, inspired by the
recent development of automated machine learning, we propose an automatic loss
function generation framework, AutoLossGen, which is able to generate loss
functions constructed directly from basic mathematical operators without prior
knowledge of loss structure. More specifically, we develop a controller model
driven by reinforcement learning to generate loss functions, and develop an
iterative and alternating optimization schedule to update the parameters of
both the controller model and the recommender model. One challenge for
automatic loss generation in recommender systems is the extreme sparsity of
recommendation datasets, which leads to the sparse reward problem for loss
generation and search. To solve the problem, we further develop a reward
filtering mechanism for efficient and effective loss generation. Experimental
results show that our framework manages to create tailored loss functions for
different recommendation models and datasets, and the generated loss gives
better recommendation performance than commonly used baseline losses. Besides,
most of the generated losses are transferable, i.e., the loss generated based
on one model and dataset also works well for another model or dataset. Source
code of the work is available at https://github.com/rutgerswiselab/AutoLossGen.
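The abstract describes the mechanism only at a high level. The minimal sketch below illustrates the general idea, assuming many details: a REINFORCE-trained controller samples candidate losses composed from a small operator vocabulary, and a reward filter skips clearly bad candidates to cope with sparse rewards. The operator set, reward proxy, and all names here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch of RL-driven loss-function search in the spirit of
# AutoLossGen (NOT the authors' code). A categorical controller samples an
# operator that defines a loss l(p, y); REINFORCE updates the controller,
# and a reward filter drops degenerate candidates (sparse-reward fix).
import numpy as np

rng = np.random.default_rng(0)
OPS = ["neg_log", "square", "abs", "one_minus"]   # assumed operator vocabulary

def apply_op(op, p, y):
    eps = 1e-8
    if op == "neg_log":   return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    if op == "square":    return (p - y) ** 2
    if op == "abs":       return np.abs(p - y)
    if op == "one_minus": return 1.0 - (y * p + (1 - y) * (1 - p))

def reward(op_idx, p, y):
    """Toy reward proxy: candidates that score well-calibrated predictions
    low get high reward (a stand-in for recommender validation performance)."""
    return 1.0 / (1.0 + apply_op(OPS[op_idx], p, y).mean())

theta = np.zeros(len(OPS))                        # controller logits
p = rng.uniform(0.05, 0.95, size=256)             # toy predicted probabilities
y = (rng.uniform(size=256) < p).astype(float)     # toy labels correlated with p

baseline, lr = 0.0, 0.5
for step in range(200):
    probs = np.exp(theta) / np.exp(theta).sum()
    k = rng.choice(len(OPS), p=probs)
    r = reward(k, p, y)
    if r < baseline - 0.05:                       # reward filter: skip bad draws
        continue
    baseline = 0.9 * baseline + 0.1 * r
    grad = -probs; grad[k] += 1.0                 # REINFORCE: grad of log pi(k)
    theta += lr * (r - baseline) * grad
print("selected loss:", OPS[int(theta.argmax())])
```

In the paper's actual setting the reward comes from training the recommender model with the candidate loss, which is why the iterative, alternating schedule between controller and recommender updates is needed.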
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up under such challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
An adversary can introduce outliers by corrupting the loss functions in an arbitrary number k of rounds, unknown to the learner.
We present a robust online optimization framework for this setting.
arXiv Detail & Related papers (2024-08-12T17:08:31Z) - Next Generation Loss Function for Image Classification [0.0]
We experimentally challenge the well-known loss functions, including cross entropy (CE) loss, by utilizing the genetic programming (GP) approach.
One function, denoted the Next Generation Loss (NGL), clearly stood out, showing the same or better performance on all tested datasets.
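As a toy illustration of loss search by genetic programming (the operator set, mutation rule, and fitness proxy below are assumptions, not the paper's NGL), candidate losses can be represented as small expression trees that are mutated and selected across generations:

```python
# Minimal GP-style loss search sketch: candidates combine two primitive
# losses with a binary operator; each generation mutates the incumbent
# and keeps whichever evaluates as fitter on a toy proxy.
import numpy as np

rng = np.random.default_rng(2)
PRIMS = {
    "ce": lambda p, y: -(y*np.log(p+1e-8) + (1-y)*np.log(1-p+1e-8)),
    "l2": lambda p, y: (p - y)**2,
    "l1": lambda p, y: np.abs(p - y),
}
COMBINE = {"add": np.add, "max": np.maximum}

def sample():
    return (rng.choice(list(COMBINE)), rng.choice(list(PRIMS)), rng.choice(list(PRIMS)))

def fitness(ind, p, y):
    """Crude proxy: a sensible loss should score well-calibrated predictions low."""
    c, a, b = ind
    return -COMBINE[c](PRIMS[a](p, y), PRIMS[b](p, y)).mean()

p = rng.uniform(0.05, 0.95, 512); y = (rng.uniform(size=512) < p).astype(float)
best = sample()
for gen in range(50):
    # either a fresh individual or a point mutation of the incumbent
    child = sample() if rng.uniform() < 0.3 else (best[0], best[1], rng.choice(list(PRIMS)))
    if fitness(child, p, y) > fitness(best, p, y):
        best = child
print("best combination:", best)
```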
arXiv Detail & Related papers (2024-04-19T15:26:36Z) - Xtreme Margin: A Tunable Loss Function for Binary Classification Problems [0.0]
We provide an overview of a novel loss function, the Xtreme Margin loss.
Unlike the binary cross-entropy and hinge loss functions, this loss function gives researchers and practitioners flexibility in their training process.
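The summary gives no formula, so the following is a purely hypothetical tunable margin loss for binary labels, sketched only to make the "tunable" idea concrete: class-specific margin hyperparameters let a practitioner bias training toward one class.

```python
# Hypothetical tunable margin loss (illustrative only; not the Xtreme
# Margin formula). y in {0, 1}; m_pos/m_neg tune per-class margins.
import numpy as np

def tunable_margin_loss(score, y, m_pos=1.0, m_neg=1.0):
    signed = np.where(y == 1, score, -score)     # want signed score > margin
    margin = np.where(y == 1, m_pos, m_neg)
    return np.maximum(0.0, margin - signed)      # hinge on the tuned margin

scores = np.array([2.0, 0.2, -1.5, -0.1])
labels = np.array([1, 1, 0, 0])
print(tunable_margin_loss(scores, labels, m_pos=2.0, m_neg=0.5))
```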
arXiv Detail & Related papers (2022-10-31T22:39:32Z) - LegoNet: A Fast and Exact Unlearning Architecture [59.49058450583149]
Machine unlearning aims to erase the impact of specific training samples upon deleted requests from a trained model.
We present a novel network, LegoNet, which adopts the framework of "fixed encoder + multiple adapters".
We show that LegoNet accomplishes fast and exact unlearning while maintaining acceptable performance, outperforming existing unlearning baselines.
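A schematic sketch of the "fixed encoder + multiple adapters" pattern follows (the ridge adapters and shard-per-adapter routing are assumptions for illustration, not the LegoNet design): because each training sample touches only one adapter on top of a frozen encoder, deleting a sample only requires retraining that adapter, which makes unlearning both fast and exact.

```python
# Shard data across cheap adapters over a frozen encoder; unlearning a
# sample = retrain only the adapter whose shard contained it.
import numpy as np

rng = np.random.default_rng(3)
W_enc = rng.normal(size=(4, 8))                      # frozen, pretrained encoder
encode = lambda X: np.tanh(X @ W_enc)

def fit_adapter(X, y):
    """Ridge-regression adapter: closed form, so retraining is exact and cheap."""
    H = encode(X)
    return np.linalg.solve(H.T @ H + 1e-2 * np.eye(8), H.T @ y)

X = rng.normal(size=(100, 4)); y = rng.normal(size=100)
shards = np.array_split(np.arange(100), 4)           # one data shard per adapter
adapters = [fit_adapter(X[s], y[s]) for s in shards]

# Exact unlearning of sample 7: retrain only the shard that contained it.
sid = next(i for i, s in enumerate(shards) if 7 in s)
keep = shards[sid][shards[sid] != 7]
adapters[sid] = fit_adapter(X[keep], y[keep])
```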
arXiv Detail & Related papers (2022-10-28T09:53:05Z) - Training Over-parameterized Models with Non-decomposable Objectives [46.62273918807789]
We propose new cost-sensitive losses that extend the classical idea of logit adjustment to handle more general cost matrices.
Our losses are calibrated, and can be further improved with distilled labels from a teacher model.
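To make "extending logit adjustment to general cost matrices" concrete, here is a hedged sketch of one consistent form (illustrative; not necessarily the paper's exact loss): standard logit adjustment adds per-class log-prior offsets inside the softmax, and a nonnegative cost matrix C[y, j] replaces those with per-(label, prediction) weights. With C all ones this reduces to ordinary cross-entropy.

```python
# Cost-sensitive cross-entropy sketch:
#   loss = -log( exp(z_y) / sum_j C[y, j] * exp(z_j) )
import numpy as np

def cost_sensitive_ce(logits, y, C):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    weighted = (C[y] * np.exp(z)).sum(axis=1)        # sum_j C[y, j] * e^{z_j}
    return np.log(weighted) - z[np.arange(len(y)), y]

logits = np.array([[2.0, 0.5, -1.0], [0.1, 0.2, 0.3]])
y = np.array([0, 2])
C = np.array([[1.0, 5.0, 1.0],                       # mistaking class 0 for 1 is costly
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
print(cost_sensitive_ce(logits, y, C))
```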
arXiv Detail & Related papers (2021-07-09T19:29:33Z) - AutoLoss: Automated Loss Function Search in Recommendations [34.27873944762912]
We propose an AutoLoss framework that can automatically and adaptively search for the appropriate loss function from a set of candidates.
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
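An illustrative reading of that per-example controller follows (a minimal sketch under assumptions, not the AutoLoss implementation): a tiny learned gate maps each example's convergence-behavior feature to a softmax over candidate losses, so different examples can be trained under different losses.

```python
# Per-example soft selection among candidate losses via a learned gate.
import numpy as np

rng = np.random.default_rng(4)
CANDIDATES = [
    lambda p, y: (p - y) ** 2,                                    # MSE
    lambda p, y: -(y*np.log(p+1e-8) + (1-y)*np.log(1-p+1e-8)),    # BCE
]
W = rng.normal(scale=0.1, size=(1, len(CANDIDATES)))              # gate weights

def blended_loss(p, y):
    feats = np.abs(p - y).reshape(-1, 1)          # assumed convergence feature
    logits = feats @ W
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    losses = np.stack([f(p, y) for f in CANDIDATES], axis=1)
    return (probs * losses).sum(axis=1).mean()    # per-example soft selection

p = rng.uniform(0.05, 0.95, 8); y = rng.integers(0, 2, 8).astype(float)
print("blended loss:", blended_loss(p, y))
```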
arXiv Detail & Related papers (2021-06-12T08:15:00Z) - A Mathematical Analysis of Learning Loss for Active Learning in Regression [2.792030485253753]
This paper develops a foundation for Learning Loss which enables us to propose a novel modification we call LearningLoss++.
We show that gradients are crucial in interpreting how Learning Loss works, with rigorous analysis and comparison of the gradients between Learning Loss and LearningLoss++.
We also propose a convolutional architecture that combines features at different scales to predict the loss.
We show that LearningLoss++ outperforms Learning Loss in identifying scenarios where the model is likely to perform poorly, which, upon model refinement, translates into reliable performance in the open world.
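The multi-scale loss-prediction head can be sketched as follows (layer sizes and the pooling/fusion choices are assumptions, not the paper's architecture): features from several backbone scales are pooled, projected, concatenated, and mapped to a scalar predicted loss per input.

```python
# Loss-prediction head combining backbone features at different scales.
import numpy as np

rng = np.random.default_rng(5)

def predict_loss(feature_maps, heads, out_w):
    pooled = [f.mean(axis=(1, 2)) for f in feature_maps]   # global average pool
    fused = np.concatenate([p @ w for p, w in zip(pooled, heads)], axis=1)
    return np.maximum(fused, 0.0) @ out_w                  # ReLU + linear -> scalar

# Three backbone scales for a batch of 2 images (channels vary per scale).
feats = [rng.normal(size=(2, 32, 32, 64)),
         rng.normal(size=(2, 16, 16, 128)),
         rng.normal(size=(2, 8, 8, 256))]
heads = [rng.normal(scale=0.05, size=(c, 16)) for c in (64, 128, 256)]
out_w = rng.normal(scale=0.05, size=(48, 1))
print("predicted losses:", predict_loss(feats, heads, out_w).ravel())
```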
arXiv Detail & Related papers (2021-04-19T13:54:20Z) - Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, to speed up the search process.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors.
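The convergence-simulation idea can be illustrated roughly as follows (a guess at the general mechanism under stated assumptions, not CSE-Autoloss itself): before any expensive full training run, each candidate loss is trained for a few cheap steps on a proxy task and discarded unless its objective actually decreases.

```python
# Cheap convergence check that filters candidate losses before full training.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(256, 3)); w_true = np.array([1.0, -1.0, 0.5])
y = (X @ w_true > 0).astype(float)

def simulate(loss_grad, steps=20, lr=0.5):
    """Short proxy run; returns the error trajectory under the candidate."""
    w, traj = np.zeros(3), []
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        traj.append(float(np.mean((p - y) ** 2)))
        w -= lr * (loss_grad(p, y)[:, None] * X).mean(axis=0)
    return traj

candidates = {
    "bce_grad":  lambda p, y: p - y,         # gradient of BCE w.r.t. the logits
    "anti_grad": lambda p, y: y - p,         # a pathological candidate
}
for name, g in candidates.items():
    traj = simulate(g)
    ok = traj[-1] < 0.9 * traj[0]            # crude convergence criterion
    print(name, "kept" if ok else "rejected", round(traj[0], 3), "->", round(traj[-1], 3))
```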
arXiv Detail & Related papers (2021-02-09T08:34:52Z) - Adaptive Weighted Discriminator for Training Generative Adversarial Networks [11.68198403603969]
We introduce a new family of discriminator loss functions that adopts a weighted sum of real and fake parts.
Our method can potentially be applied to any discriminator model whose loss is a sum of real and fake parts.
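A minimal sketch of the weighted real/fake split follows; the adaptation rule shown (upweighting whichever part is currently larger) is a placeholder assumption, not the paper's weighting scheme.

```python
# Discriminator loss as a weighted sum of its real and fake parts.
import numpy as np

def weighted_d_loss(d_real, d_fake, w_real, w_fake):
    eps = 1e-8
    real_part = -np.log(d_real + eps).mean()          # standard GAN real term
    fake_part = -np.log(1.0 - d_fake + eps).mean()    # standard GAN fake term
    return w_real * real_part + w_fake * fake_part

# Placeholder adaptation: normalize weights by current part magnitudes.
d_real = np.array([0.9, 0.7, 0.95]); d_fake = np.array([0.2, 0.4, 0.1])
rp = -np.log(d_real).mean(); fp = -np.log(1 - d_fake).mean()
w_real, w_fake = rp / (rp + fp), fp / (rp + fp)
print("D loss:", weighted_d_loss(d_real, d_fake, w_real, w_fake))
```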
arXiv Detail & Related papers (2020-12-05T23:55:42Z)