Developing a Recommendation Benchmark for MLPerf Training and Inference
- URL: http://arxiv.org/abs/2003.07336v2
- Date: Tue, 14 Apr 2020 12:52:16 GMT
- Title: Developing a Recommendation Benchmark for MLPerf Training and Inference
- Authors: Carole-Jean Wu and Robin Burke and Ed H. Chi and Joseph Konstan and
Julian McAuley and Yves Raimond and Hao Zhang
- Abstract summary: We aim to define an industry-relevant recommendation benchmark for the MLPerf Training and Inference Suites.
The paper synthesizes the desirable modeling strategies for personalized recommendation systems.
We lay out desirable characteristics of recommendation model architectures and data sets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based recommendation models are used pervasively and broadly,
for example, to recommend movies, products, or other information most relevant
to users, in order to enhance the user experience. Among various application
domains which have received significant industry and academia research
attention, such as image classification, object detection, language and speech
translation, the performance of deep learning-based recommendation models is
less well explored, even though recommendation tasks unarguably represent
significant AI inference cycles at large-scale datacenter fleets. To advance
the state of understanding and enable machine learning system development and
optimization for the commerce domain, we aim to define an industry-relevant
recommendation benchmark for the MLPerf Training and Inference Suites. The paper
synthesizes the desirable modeling strategies for personalized recommendation
systems. We lay out desirable characteristics of recommendation model
architectures and data sets. We then summarize the discussions and advice from
the MLPerf Recommendation Advisory Board.
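To make the class of models under discussion concrete, the following is a minimal, hypothetical sketch of a DLRM-style deep recommendation model of the kind such a benchmark targets: sparse categorical features (user ID, item ID) are looked up in embedding tables, combined with dense features via pairwise dot-product interactions, and passed through a linear layer to produce a click probability. All names, dimensions, and weights here are illustrative assumptions, not part of the MLPerf specification.

```python
import random
from math import exp

EMBED_DIM = 4  # illustrative embedding width; real models use far larger tables

def make_embedding_table(num_categories, dim, seed=0):
    """Randomly initialized embedding table: one vector per category ID."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
            for _ in range(num_categories)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def predict(dense_features, sparse_ids, tables, weights):
    """Score one (user, item) example: P(click) in (0, 1)."""
    # Look up one embedding vector per sparse (categorical) feature.
    embeddings = [tables[i][sid] for i, sid in enumerate(sparse_ids)]
    # Pairwise dot-product interactions between embeddings,
    # in the style of factorization-machine feature crossing.
    interactions = [dot(embeddings[i], embeddings[j])
                    for i in range(len(embeddings))
                    for j in range(i + 1, len(embeddings))]
    # Concatenate dense features with interaction terms, then apply
    # a single linear layer followed by a sigmoid.
    features = list(dense_features) + interactions
    score = sum(w * f for w, f in zip(weights, features))
    return sigmoid(score)

# Two sparse features (e.g. user ID, item ID), three dense features.
tables = [make_embedding_table(10, EMBED_DIM, seed=i) for i in range(2)]
weights = [0.5] * 4  # 3 dense features + 1 pairwise interaction
p = predict([1.0, 0.5, 0.2], [3, 7], tables, weights)
```

Inference over such models is dominated by the embedding lookups, which is one reason the abstract stresses that recommendation workloads consume significant datacenter inference cycles despite receiving less benchmarking attention than vision or language tasks.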
Related papers
- Generative Large Recommendation Models: Emerging Trends in LLMs for Recommendation [85.52251362906418]
This tutorial explores two primary approaches for integrating large language models (LLMs)
It provides a comprehensive overview of generative large recommendation models, including their recent advancements, challenges, and potential research directions.
Key topics include data quality, scaling laws, user behavior mining, and efficiency in training and inference.
arXiv Detail & Related papers (2025-02-19T14:48:25Z) - Large Language Models Are Universal Recommendation Learners [27.16327640562273]
Large language models (LLMs) can function as universal recommendation learners.
We introduce a multimodal fusion module for item representation and a sequence-in-set-out approach for efficient candidate generation.
Our analysis reveals that recommendation outcomes are highly sensitive to text input.
arXiv Detail & Related papers (2025-02-05T09:56:52Z) - Scaling New Frontiers: Insights into Large Recommendation Models [74.77410470984168]
Meta's generative recommendation model HSTU illustrates the scaling laws of recommendation systems by scaling parameters into the trillions.
We conduct comprehensive ablation studies to explore the origins of these scaling laws.
We offer insights into future directions for large recommendation models.
arXiv Detail & Related papers (2024-12-01T07:27:20Z) - LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendation and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z) - LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z) - Emerging Synergies Between Large Language Models and Machine Learning in
Ecommerce Recommendations [19.405233437533713]
Large language models (LLMs) have superior capabilities in basic tasks of language understanding and generation.
We introduce a representative approach to learning user and item representations using LLM as a feature encoder.
We then reviewed the latest advances in LLMs techniques for collaborative filtering enhanced recommendation systems.
arXiv Detail & Related papers (2024-03-05T08:31:00Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - Recommendation as Instruction Following: A Large Language Model
Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.