Developing a Recommendation Benchmark for MLPerf Training and Inference
- URL: http://arxiv.org/abs/2003.07336v2
- Date: Tue, 14 Apr 2020 12:52:16 GMT
- Title: Developing a Recommendation Benchmark for MLPerf Training and Inference
- Authors: Carole-Jean Wu and Robin Burke and Ed H. Chi and Joseph Konstan and
Julian McAuley and Yves Raimond and Hao Zhang
- Abstract summary: We aim to define an industry-relevant recommendation benchmark for the MLPerf Training and Inference Suites.
The paper synthesizes the desirable modeling strategies for personalized recommendation systems.
We lay out desirable characteristics of recommendation model architectures and data sets.
- Score: 16.471395965484145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based recommendation models are used pervasively and broadly,
for example, to recommend movies, products, or other information most relevant
to users, in order to enhance the user experience. Among various application
domains which have received significant industry and academia research
attention, such as image classification, object detection, language and speech
translation, the performance of deep learning-based recommendation models is
less well explored, even though recommendation tasks unarguably represent
significant AI inference cycles at large-scale datacenter fleets. To advance
the state of understanding and enable machine learning system development and
optimization for the commerce domain, we aim to define an industry-relevant
recommendation benchmark for the MLPerf Training and Inference Suites. The paper
synthesizes the desirable modeling strategies for personalized recommendation
systems. We lay out desirable characteristics of recommendation model
architectures and data sets. We then summarize the discussions and advice from
the MLPerf Recommendation Advisory Board.
Related papers
- Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation [51.25461871988366]
We propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation.
The proposed algorithm enhances recommendation accuracy and provides timely recommendation services.
arXiv Detail & Related papers (2024-09-23T08:39:07Z)
- LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendations and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z)
- Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
Large Language Models (LLMs) pretrained on massive text corpora present a promising avenue for enhancing recommender systems.
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- Emerging Synergies Between Large Language Models and Machine Learning in Ecommerce Recommendations [19.405233437533713]
Large language models (LLMs) have superior capabilities in basic tasks of language understanding and generation.
We introduce a representative approach to learning user and item representations using LLM as a feature encoder.
We then review the latest advances in LLM techniques for collaborative-filtering-enhanced recommendation systems.
arXiv Detail & Related papers (2024-03-05T08:31:00Z)
- Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.