Situating Recommender Systems in Practice: Towards Inductive Learning and Incremental Updates
- URL: http://arxiv.org/abs/2211.06365v1
- Date: Fri, 11 Nov 2022 17:29:35 GMT
- Title: Situating Recommender Systems in Practice: Towards Inductive Learning and Incremental Updates
- Authors: Tobias Schnabel, Mengting Wan, Longqi Yang
- Abstract summary: We formalize both concepts and contextualize recommender systems work from the last six years.
We then discuss why and how future work should move towards inductive learning and incremental updates for recommendation model design and evaluation.
- Score: 9.47821118140383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With information systems becoming larger scale, recommendation systems are a
topic of growing interest in machine learning research and industry. Even
though progress on improving model design has been rapid in research, we argue
that many advances fail to translate into practice because of two limiting
assumptions. First, most approaches focus on a transductive learning setting
which cannot handle unseen users or items; second, many existing methods are
developed for static settings that cannot incorporate new data as it becomes
available. We argue that these are largely impractical assumptions on
real-world platforms where new user interactions happen in real time. In this
survey paper, we formalize both concepts and contextualize recommender systems
work from the last six years. We then discuss why and how future work should
move towards inductive learning and incremental updates for recommendation
model design and evaluation. In addition, we present best practices and
fundamental open challenges for future research.
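
To make the first limiting assumption concrete, the sketch below contrasts a transductive scorer, which can only look up embedding-table rows for user and item IDs seen at training time, with an inductive scorer that computes embeddings from features and can therefore score entities that appear after training. This is an illustrative sketch only; the dimensions, random weights, and function names are assumptions, not code from the paper.

```python
# Illustrative sketch: random weights stand in for trained parameters.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, feat_dim = 100, 50, 8, 16

# Transductive: one embedding row per known user/item ID.
user_table = rng.normal(size=(n_users, dim))
item_table = rng.normal(size=(n_items, dim))

def score_transductive(user_id: int, item_id: int) -> float:
    # Raises IndexError for any ID that was not in the training data.
    return float(user_table[user_id] @ item_table[item_id])

# Inductive: embeddings are a learned function of features,
# so users/items that appear after training can still be scored.
W_user = rng.normal(size=(feat_dim, dim))
W_item = rng.normal(size=(feat_dim, dim))

def score_inductive(user_feats: np.ndarray, item_feats: np.ndarray) -> float:
    return float((user_feats @ W_user) @ (item_feats @ W_item))

new_user = rng.normal(size=feat_dim)  # signed up after the model was trained
new_item = rng.normal(size=feat_dim)  # added to the catalog today
print(score_inductive(new_user, new_item))  # works; score_transductive(100, 50) would not
```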
Related papers
- Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation [85.52251362906418]
This tutorial explores two primary approaches for integrating large language models (LLMs) into recommender systems.
It provides a comprehensive overview of generative large recommendation models, including their recent advancements, challenges, and potential research directions.
Key topics include data quality, scaling laws, user behavior mining, and efficiency in training and inference.
arXiv Detail & Related papers (2025-02-19T14:48:25Z) - A Survey on Recommendation Unlearning: Fundamentals, Taxonomy, Evaluation, and Open Questions [16.00188808166725]
Recommender systems have become increasingly influential in shaping user behavior and decision-making.
Widespread adoption of machine learning models in recommender systems has raised significant concerns regarding user privacy and security.
Traditional machine unlearning methods are ill-suited for recommendation unlearning due to the unique challenges posed by collaborative interactions and model parameters.
arXiv Detail & Related papers (2024-12-17T11:58:55Z) - Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems [92.89673285398521]
o1-like reasoning systems have demonstrated remarkable capabilities in solving complex reasoning tasks.
We introduce an "imitate, explore, and self-improve" framework to train the reasoning model.
Our approach achieves competitive performance compared to industry-level reasoning systems.
arXiv Detail & Related papers (2024-12-12T16:20:36Z) - Scaling New Frontiers: Insights into Large Recommendation Models [74.77410470984168]
Meta's generative recommendation model HSTU illustrates the scaling laws of recommendation systems by expanding parameters into the trillions.
We conduct comprehensive ablation studies to explore the origins of these scaling laws.
We offer insights into future directions for large recommendation models.
arXiv Detail & Related papers (2024-12-01T07:27:20Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [65.57123249246358]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z) - A Survey of Large-Scale Deep Learning Serving System Optimization: Challenges and Opportunities [24.38071862662089]
This survey aims to summarize and categorize the emerging challenges and optimization opportunities for large-scale deep learning serving systems.
Deep Learning (DL) models have achieved superior performance in many application domains, including vision, language, medical, commercial ads, entertainment, etc.
arXiv Detail & Related papers (2021-11-28T22:14:10Z) - Incremental Learning for Personalized Recommender Systems [8.020546404087922]
We present an incremental learning solution that provides both training efficiency and model quality.
The solution is deployed at LinkedIn and is directly applicable to industrial-scale recommender systems.
arXiv Detail & Related papers (2021-08-13T04:21:21Z)
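
The incremental-updates side of the argument follows the same warm-start pattern as the LinkedIn entry above: rather than retraining from scratch on the full interaction history, the deployed model is refreshed on each batch of newly logged interactions. The sketch below is a minimal illustration under assumed simplifications (a linear scorer, synthetic data, plain SGD); it is not the paper's or LinkedIn's actual system.

```python
# Minimal warm-start sketch: synthetic data and a linear scorer are assumptions.
import numpy as np

rng = np.random.default_rng(1)
dim = 8
w = np.zeros(dim)              # current production weights, reused across updates
true_w = rng.normal(size=dim)  # hidden preference vector generating the data

def incremental_update(w, X_new, y_new, lr=0.05, epochs=1):
    """One warm-start SGD pass over only the newly arrived interactions."""
    for _ in range(epochs):
        for x, y in zip(X_new, y_new):
            w = w - lr * ((x @ w) - y) * x  # squared-error gradient step
    return w

# Interaction batches arrive over time (e.g., hourly click logs).
for t in range(5):
    X_batch = rng.normal(size=(64, dim))
    y_batch = X_batch @ true_w + rng.normal(size=64) * 0.1
    w = incremental_update(w, X_batch, y_batch)  # no full retrain from scratch
    print(f"batch {t}: error vs. true weights = {np.linalg.norm(w - true_w):.3f}")
```

The key design choice the sketch highlights is that each update starts from the previous weights and touches only the new data, which keeps update cost proportional to the batch size rather than to the full history.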